Assessing Learning Objects: The Importance of Values, Purpose and Design

by Diane J. Goldsmith, Connecticut Distance Learning Consortium

While recent debate has swirled around the internet as to whether learning objects are in fact “dead” (Wiley 2006; Norman 2006; Downes 2006), learning object repositories continue to grow and interest is still high. So it is important to consider how to assess such projects. This article is designed to demonstrate how to apply some principles of assessment to projects and programs engaged in the development of learning objects.

A learning object is generally defined as a resource that supports learning, is granular or self-contained, and can be reused. Further, most definitions, though not all, assert that learning objects must be digital and easily searchable (that is, tagged with metadata). The promise of learning objects is commonly described as that of “building it once and using it many times,” either within an institution or globally.
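
To make “easily searchable” concrete, here is a minimal sketch, in Python, of what a metadata record for a learning object might look like and how a repository could match it against a keyword. The field names are loosely modeled on Dublin Core and are purely illustrative assumptions; real repositories use richer, standardized schemas.

    # Illustrative only: a hypothetical metadata record for a learning object.
    # Field names loosely follow Dublin Core; real repositories use richer,
    # standardized schemas.
    learning_object = {
        "title": "Titration Simulation",
        "description": "Interactive acid-base titration lab",
        "subject": ["chemistry", "titration", "lab safety"],
        "format": "HTML5 simulation",
        "audience": "undergraduate",
        "rights": "CC BY-NC",
    }

    def matches(record: dict, keyword: str) -> bool:
        """Return True if the keyword appears in any metadata field."""
        keyword = keyword.lower()
        return any(keyword in str(value).lower() for value in record.values())

    # A repository can answer searches by scanning (or indexing) these tags.
    print(matches(learning_object, "chemistry"))  # True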

Carol Twigg, a well-known advocate for the use of technology in education, offered this criticism of learning objects in an interview published in EDUCAUSE Review. It is useful because it sheds light on some of the issues related to assessment.

“MERLOT claims to have 7,000 or so learning objects in a database. But if these learning objects haven’t been evaluated in terms of whether or not they increase student learning, you then just have 7,000 sort of mildly interesting things collected in a database.” Carol Twigg (Veronikas and Shaughnessy 2004)

I don’t believe Twigg’s comments about learning objects were directed at MERLOT specifically, but rather at any repository of learning objects. And while her critique contains some truth, it also has some problems. The first is the notion of “increasing” learning. Carol Twigg has been a strong proponent of holding online education to the same standard as on-ground education. Do online learning objects always have to be “better,” or can they be just as good, but serve some other purpose, such as saving money?

The Role of Values and Purpose in LO Assessment

This question emphasizes the fact that all assessment is values-driven. All assessment exists within a context–whether something is successful or not depends on whether you can show that it meets or exceeds your expectations in your particular context. If your only context is “increased learning” then that may be your only standard for assessment. But learning objects don’t exist in a vacuum. Most learning objects are deployed by faculty, often in a variety of situations, and therefore, have a variety of outcomes depending on how they are used. A simulation may be used to help students learn a series of steps (memorization), analyze the steps, or create a similar simulation. All of those would be reasonable outcomes not necessarily built into the object itself, but designed by the instructor or student using it.

There are other facets of learning objects that must be taken into consideration. Maybe they increase student learning but are very expensive to implement. Maybe they can help standardize content, but faculty are strongly opposed to such standardization. Maybe an object is wonderful, but it is difficult to use. When thinking about assessment, it is important to consider what values are important and in what context the object will be used.

Of course, we aren’t going to spend lots of time and money on learning objects unless they really do help students learn. So it makes sense to look at AAHE’s principles of assessment for student learning (Astin et al. 1992), many of which, I believe, can be adapted for learning objects.

  1. The assessment of student learning begins with educational values.
  2. Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time.
  3. Assessment works best when the programs it seeks to improve have clear, explicitly stated purposes.
  4. Assessment requires attention to outcomes but also and equally to the experiences that lead to those outcomes.
  5. Assessment works best when it is ongoing, not episodic.
  6. Assessment fosters wider improvement when representatives from across the educational community are involved.
  7. Assessment makes a difference when it begins with issues of use and illuminates questions that people really care about.
  8. Assessment is most likely to lead to improvement when it is part of a larger set of conditions that promote change.
  9. Through assessment, educators meet responsibilities to students and to the public.

Here again there is an emphasis on values; assessment is not possible without articulating them. Good assessment is multi-dimensional, and since learning objects are themselves multi-dimensional–student learning, how they are used, cost, whether they are easily re-usable, whether they are granular–it makes sense for your assessment to include as many of these aspects as are relevant and fit the goals you have set.

Developing learning objects requires the involvement of many different individuals–faculty and content experts, IT, learning developers, and of course, students. Therefore, all need to be involved in any process of assessment. The decisions about how learning objects will be deployed should shape your assessment strategy.

Assessment is ongoing. Assessment includes both outcomes and the experiences of getting to those outcomes. Here, issues like ease of use and cost, as well as the students’ and faculty’s experiences, are relevant. A one-time assessment isn’t as helpful as an assessment that begins when you start the development process and continues several years into implementation. Obviously, how important this is depends on the time and resources you are putting into development. Ultimately, assessment is about improvement. While a summative assessment is important, formative assessment throughout the process allows for change and improvement during development.

Questions to Ask in the Development of LO Assessment

A set of basic questions–who, why, how, when, and what–can provide a framework for your assessment process.

Ask first, who is interested in learning objects and this development process? Who are the stakeholders? Who is driving the use of learning objects? To whom will you have to prove you are successful? Are any of these folks “opponents”? What sort of evidence will you need to make them allies?

Secondly, why do they want learning objects, or a particular learning object? What values will they use to judge these objects? What are their expectations? What level of evidence do they need to be convinced that this project or object is “successful”? This important consideration depends on the learning object itself. If the learning object is a one-million-dollar chemistry lab simulation, then the values by which it is assessed to justify that expenditure–student learning, cost savings, increased safety–may be different from those applied to a repository created by faculty as part of their normal course development.

Next, which assessment methods are most appropriate? It is essential to use methods that your audience understands. A quick story illustrates this: an institutional researcher did a wonderful study of retention using a fairly sophisticated statistical analysis called “path analysis.” Two years later, he re-did the study, adding some variables, and used multiple regression for his analysis. As he explained, even though path analysis was probably a more robust method of understanding retention, his administrators–the people he had to convince to use his analysis to create policy–“didn’t trust it.” They understood, or at least had heard of, multiple regression. The lesson is: make sure you are assessing what the decision makers want assessed and that you are using methods they will accept as trustworthy. Keeping that lesson in mind, decide how you will collect the evidence you think is necessary. You have to factor in how much time and money you have for assessment. One neat thing about learning objects is that you may be able to build some types of assessment into the object or the repository itself–data on how often it is used, who uses it, and possibly some outcome measures. I offer examples of built-in assessment in my review of assessment types below.
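
As one hypothetical illustration of such built-in assessment, the sketch below (in Python) logs each access to a learning object and then summarizes usage by object and by user role. The CSV log, object identifiers, and field names are all assumptions for illustration; this is not a description of any particular repository’s tracking.

    # A minimal sketch of "built-in" assessment: log each access to a learning
    # object, then summarize usage by object and by user role. The log file and
    # field names are hypothetical.
    import csv
    from collections import Counter
    from datetime import datetime, timezone

    LOG_FILE = "usage_log.csv"

    def log_access(object_id: str, user_role: str) -> None:
        """Append one usage event (timestamp, object, role) to the log."""
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.now(timezone.utc).isoformat(), object_id, user_role]
            )

    def summarize() -> None:
        """Report how often each object is used and by whom."""
        by_object, by_role = Counter(), Counter()
        with open(LOG_FILE, newline="") as f:
            for _timestamp, object_id, user_role in csv.reader(f):
                by_object[object_id] += 1
                by_role[user_role] += 1
        print("Uses per object:", dict(by_object))
        print("Uses per role:  ", dict(by_role))

    # Example: record a few events, then produce a simple formative report.
    log_access("titration-sim", "student")
    log_access("titration-sim", "faculty")
    summarize()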

Consider when you will evaluate: at what stages, and how often? Again, this is dependent on resources, but as discussed above, it is important to build in formative evaluation so you can make improvements as you go.

And lastly, what are you going to do with what you find? Assessment should lead to improvements–in the object itself, in how it is deployed, in how it can be found, in how it can be re-used. In other words, there must be effective communication between those doing the assessing and those creating the learning objects so that improvements in the categories your stakeholders have identified can be implemented.

Models for Assessment of Learning Objects

There are some interesting models available for the assessment of learning objects. Depending on the assessment plan you have created from the questions above, and the criteria that are most salient to your stakeholders, the following models offer examples of the types of assessment that can be incorporated into that plan:

  • Despite Twigg’s criticism, MERLOT has built some significant qualitative and quantitative assessment tools into its repository. There is peer review, involving teams of subject experts who review objects for content. MERLOT provides a space for users to leave comments, essentially a place to collect assessments by those who have used the object. It provides a method of assessing usability by examining how others used the object within an assignment. And it counts how many people have “collected” the object as a method of assessing re-usability. With the exception of the peer review, these methods are built into the repository and require no additional data collection, only analysis.
  • The University of Wisconsin has built a repository of learning objects designed to support its goal of developing learning objects for each competency within its General Education courses. It has articulated two specific goals for this project: accelerating the development of online courses and minimizing cost by identifying and sharing best practices. Unfortunately, there is nothing at the site that helps assess whether these two goals are being met. What has been built in is a place for users, both faculty and students, to comment on each object, and in some cases an assessment of a specific outcome, although the learning objects have not necessarily been matched to outcomes.
  • If you are spending major dollars on creating a learning object, one reason may be to save costs. However, actually calculating those cost savings is not always easy. The Center for Academic Transformation has created an easy-to-understand, tested methodology for doing that type of assessment, and the tool at its web site provides instructions and examples from others who have used it. (A simple cost-comparison sketch, separate from that methodology, appears after this list.)
  • Another checklist you might want to adapt for your particular context can be found at AliveTek. It is mostly aligned with instructional design issues, but these too may be part of the assessment plan. This is a particularly good checklist to use and adapt when doing formative assessment of learning objects; developing a checklist and keeping it in mind throughout the development process can help ensure that important features are included.
  • Wesleyan University is building a learning object repository based on a clearly thought-out, multi-faceted, longitudinal plan that employs monitoring technology, surveys, traditional classroom assessments, and focus groups. The plan places a strong focus on assessing the impact of the objects on student learning. It uses technology to track student usage, providing information on which students use an object, in what ways, and how often. It can also provide reports and comparison data, noting where students are coming from and which software and systems they are using. Student surveys assess the usefulness of the objects in the learning process, as well as what students like and dislike about them. Faculty are asked to describe how they deployed the objects. Learning outcomes are assessed using traditional classroom assessments, such as exams, in conjunction with interviews of faculty and students to ascertain the factors that contributed to success.
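
Returning to the cost-savings point raised above: such comparisons usually come down to spreading a one-time development cost over an object’s useful life and enrollment. The sketch below shows that basic arithmetic with invented figures; it is not the Center for Academic Transformation’s methodology, only a hypothetical illustration.

    # A hypothetical cost-per-student comparison with invented numbers. This is
    # NOT the Center for Academic Transformation's methodology; it only shows
    # the kind of arithmetic such an assessment involves.
    def cost_per_student(development: float, delivery_per_year: float,
                         students_per_year: int, years_of_use: int) -> float:
        """Spread a one-time development cost over the object's useful life."""
        total = development + delivery_per_year * years_of_use
        return total / (students_per_year * years_of_use)

    # Traditional lab sections vs. a reusable simulation (all figures invented).
    traditional = cost_per_student(development=0, delivery_per_year=40_000,
                                   students_per_year=200, years_of_use=5)
    simulation = cost_per_student(development=100_000, delivery_per_year=5_000,
                                  students_per_year=200, years_of_use=5)
    print(f"Traditional: ${traditional:.2f} per student")  # $200.00
    print(f"Simulation:  ${simulation:.2f} per student")   # $125.00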

The programs above provide examples of robust assessment methods that focus on student learning, cost savings, sound instructional design, ease of use, and alignment to learning outcomes. Which of these make the most sense for an assessment plan depends on who your stakeholders are, what value learning objects hold for them, and the resources you have to conduct assessment activities (which will inform how and when you gather your data). A clearly thought-out and well-implemented assessment plan is one way to ensure that the promise of learning objects can be fulfilled.

This article is a revision of a presentation given at the NERCOMP SIG workshop on Learning Objects, Amherst, MA in October 2004.

References:

Astin, A., T. W. Banta, K. P. Cross, E. El-Khawas, P. T. Ewell, P. Hutchings, T. J. Marchese, K. M. McClenney, M. Mentkowski, M. A. Miller, E. T. Moran, and B. D. Wright. 1992. 9 Principles of Good Practice for Assessing Student Learning.
http://ultibase.rmit.edu.au/Articles/june97/ameri1.htm.

Downes, S. 2006. Learning Objects: Their Use, Their Potential, and Why They Are Not Dead Yet. http://redes.colombiaaprende.edu.co/seminario/files/StephenDownes-LearningObjects-TheirUse-TheirPotential-AndWhyTheyAreNotDeadYet.pdf.

Norman, D. 2006. Learning Objects: RIP or 1.0?
http://www.darcynorman.net/2006/01/09/learning-objects-rip-or-10

Wiley, D. 2006. RIP-ping on Learning Objects.
http://opencontent.org/blog/archives/230.

Veronikas, S. W., and M. F. Shaughnessy. 2004. “Teaching and Learning in a Hybrid World: An Interview with Carol Twigg,” EDUCAUSE Review, vol. 39, no. 4 (July/August): 50-62.
http://www.educause.edu/pub/er/erm04/erm0443.asp?bhcp=1