Can We Promote Experimentation and Innovation in Learning as well as Accountability? Interview with Terrel Rhodes

by Randy Bass, Georgetown University

Editor’s Note: What does the learning revolution inherent in the expansion of social and digital media have to do with the national conversation around assessment and accountability? Faculty often fear that “assessment” (especially mandated assessment) will have a reductive effect, either by reducing the rich complexity of teaching and learning to simplistic metrics, or by limiting what is assessed to lower-order skills that can easily be measured. Among those who experiment with new media technologies, the tension is exacerbated, as student learning gains in new digital environments seem increasingly expansive, holistic, and difficult to measure. How, then, might we find common ground between the impulse to get a more trenchant read on how effectively institutions foster learning and the cultivation of the innovative teaching that higher education so badly needs?

The VALUE project comes into the middle of this tension, as it proposes to create frameworks (or metarubrics) that provide flexible criteria for making valid judgments about student work resulting from a wide range of assessments and learning opportunities over time. In this interview, Terrel Rhodes, Director of the VALUE project and Vice President of the Association of American Colleges and Universities (AAC&U), describes the assumptions and goals behind the project. He especially addresses how electronic portfolios serve those goals as the locus of evaluation by educators, providing frameworks for judgments tailored to local contexts but calibrated to “Essential Learning Outcomes” with broad significance for student achievement. The aims and ambitions of the VALUE project have the potential to move us further down the road toward a more systematic engagement with the expansion of learning. –Randy Bass

Randy Bass: What is VALUE? What problem is it trying to solve?
Terrel Rhodes: In short, the VALUE project (Valid Assessment of Learning in Undergraduate Education) works to develop approaches to assessment based upon examples of work that students complete in their courses and save over time in an e-portfolio. The project collects and synthesizes best practices in assessing student work using rubrics developed by faculty members. One of the project’s core purposes is to identify commonalities in expectations for achievement across a variety of institutions.

The project really grew out of the national conversation begun with the Essential Learning Outcomes (ELOs) articulated as part of AAC&U’s ten-year LEAP (Liberal Education and America’s Promise) initiative and developed through campus-community conversations (AAC&U 2007). There are fourteen ELOs, ranging from perhaps more readily assessable skills, such as written communication or quantitative literacy, to broader abilities and dispositions, such as problem solving, critical thinking, and ethical reasoning. Also included among the ELOs are more abstract, but no less “essential,” learning goals such as civic engagement, intercultural knowledge, creative thinking, and integrative learning. (See a complete list and description of the Essential Learning Outcomes.)

What we were finding was that there was broad agreement about the value of these learning outcomes, but little clarity or precedent for how to be accountable to them. That is, how could a campus or a program use one or more of these Essential Learning Outcomes as a driver for changes and improvement in practice, or even as a measure of how well current curricula were achieving these goals? People were asking, “If we wanted to take these learning outcomes seriously, how would we do that? Where would we look? How would we have results that might be comparative and valid?”

We were responding to the growing consensus that to achieve a high-quality education for all students, valid assessment data are needed to guide planning, teaching, and improvement. That was one core assumption. And it was clear that colleges and universities were interested in fostering and assessing many essential learning outcomes beyond those addressed by currently available standardized tests, or, for that matter, beyond those captured by student performance in individual courses.

We also started from some other assumptions: that learning develops over time and should become more complex and sophisticated as students move through various pathways toward a degree; that good practice in assessment requires multiple assessments, over time; and that well-planned electronic portfolios provide excellent opportunities to collect meaningful data about student learning, from multiple assessments, across a broad range of learning outcomes. At the same time, the electronic portfolio process can help guide student learning and build self-assessment capabilities. Ultimately, we believe that e-portfolios, and the assessment of student work in them, can better inform programs and institutions about how effectively they are helping students achieve their expected goals.

Say more about what kind of learning is being assessed. What kind of student performance gets looked at in the e-portfolios?
The project builds on a philosophy of learning assessment that privileges multiple expert judgments of the quality of student work over reliance on standardized tests administered to samples of students outside of their required courses. The VALUE project builds on the work campus faculty and staff have done in developing assessment rubrics to evaluate achievement of a broad range of Essential Learning Outcomes and in articulating the expectations and criteria for student learning at beginning through advanced levels of performance. The project explores how rubrics can be applied to the actual work students have done, both in their required courses and in co-curricular activities.

The initial reaction to national accountability demands for indicators of student learning has resulted in calls to use tests that have some basic characteristics in common: they are in some way standardized; they result in a score or quantitative measurement that summarizes how well a group of students has performed; they test only samples of students at a given institution; they require additional costs for students or institutions to administer; they reflect a snapshot at one point in time; they provide an institutional rather than an individual score; and they lack high stakes for the students taking the exams.

It is ironic that, just as higher education research has finally developed a rich information base on effective practices that enhance learning and on the cognitive and neurobiological bases of knowing, and just as technological advances have greatly expanded our abilities to collect, preserve, and demonstrate complex, multi-faceted learning, we so willingly accept outmoded, snapshot, shorthand representations of the value of our educational outcomes and of our impact on student learning.

In contrast, the VALUE project responds to the need for multiple measures of multiple abilities and skills, many of which are not particularly well suited to snapshot standardized tests. The types of learning that employers and policy makers are calling for need to be demonstrated through the cumulative, progressive work students perform as they move through their educational pathways to graduation: rich, multifaceted representations of learning in curricular and co-curricular contexts, rather than artificial examinations divorced from applied contexts.

Why e-portfolios? How is the e-portfolio different from other kinds of assessments?
The evidence of learning collected in an e-portfolio creates a rich portrait of achievement for an individual and, with sampling and analysis from a collection of portfolios, can create a similar portrait of a program or an entire institution. Drawing directly from curriculum-embedded and co-curricular work, e-portfolios can represent multiple learning styles, modes of accomplishment, and the quality of work achieved by students.
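To make the sampling idea concrete, here is a minimal sketch, in Python, of how rubric scores drawn from a sample of portfolios might be rolled up into a program-level portrait. Everything in it (the portfolio records, the outcome names, the four-level score scale) is a hypothetical illustration, not part of any actual VALUE tooling.

```python
import random
from collections import defaultdict

# Hypothetical portfolio records: each carries rubric scores
# (1 = benchmark ... 4 = capstone) keyed by learning outcome.
portfolios = [
    {"student": f"s{i}",
     "scores": {"critical_thinking": random.randint(1, 4),
                "integrative_learning": random.randint(1, 4)}}
    for i in range(500)
]

# Draw a random sample of portfolios, then summarize each outcome
# at the program level as a distribution across performance levels.
sample = random.sample(portfolios, k=50)
distribution = defaultdict(lambda: defaultdict(int))
for portfolio in sample:
    for outcome, level in portfolio["scores"].items():
        distribution[outcome][level] += 1

for outcome, levels in distribution.items():
    print(outcome, dict(sorted(levels.items())))
```

The point of the sketch is simply that the same scored evidence serves two portraits at once: each individual record remains intact, while the sampled aggregate describes the program.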

Although it is not a direct objective of the project, VALUE promotes wider use of e-portfolios for assessment without impairing their developmental and progressive dimensions as spaces that students can own to represent themselves as learners and to make connections across their educational experience. We believe that e-portfolios can potentially foster, and provide evidence of, high levels of student learning across a vast range of experiences and across programs and institution-wide outcomes.

Because student work is gathered and disseminated through electronic portfolios, the same set of student performance information can be used at course, program, and institutional levels for assessment purposes, and faculty can collaborate on assessing and responding to student progress. Work from on and off campus, and from all the institutions a student may have attended, can be included in a single presentation of student accomplishment over time and space.

We also know, from twenty or more years of pioneering work with portfolios in higher education, that periodic reflections on learning by students are critical components of an education. Student reflections, along with self and peer assessments guided by rubrics, help students to judge their own work as an expert would. These reflections and self-assessments all become part of the collection of work that gets evaluated in light of the Essential Learning Outcomes.

What are these rubrics or metarubrics? What are they supposed to do? What can’t they do?
All teachers use criteria for achievement, even if only implicitly. Many educators at all levels have created and make use of explicit “rubrics,” or scoring guides, with statements of expected levels of achievement based on criteria vital to quality work in a chosen area. For VALUE, the criteria for the rubrics at the center of the project are determined in discussions among experts in the appropriate fields.
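As a concrete illustration of the idea, a rubric can be thought of as a grid of criteria crossed with performance levels. Below is a minimal sketch, in Python, of one way to represent such a scoring guide; the criterion names and level descriptors are invented for illustration and are not the actual VALUE language.

```python
# A rubric as a data structure: each criterion is described at four
# performance levels, from benchmark (1) to capstone (4).
# Criterion names and descriptors here are invented for illustration.
critical_thinking_rubric = {
    "explanation_of_issues": {
        1: "States the issue without clarification or description.",
        2: "States the issue but leaves some terms undefined.",
        3: "States, describes, and clarifies the issue.",
        4: "States the issue comprehensively, with all relevant context.",
    },
    "use_of_evidence": {
        1: "Takes source information at face value.",
        2: "Begins to interpret sources, with little questioning.",
        3: "Interprets sources while questioning some viewpoints.",
        4: "Questions sources thoroughly to build a comprehensive analysis.",
    },
}

# A rater's judgment of one piece of student work is then a chosen
# level for each criterion.
judgment = {"explanation_of_issues": 3, "use_of_evidence": 2}
for criterion, level in judgment.items():
    print(criterion, "->", critical_thinking_rubric[criterion][level])
```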

The VALUE project has collected rubrics from faculty and programs across the country designed to assess all of the Essential Learning Outcomes. Cross-institutional teams of faculty and staff have been assembled, each member bringing his or her own expertise to the process. They have examined the rubrics to identify and articulate the most commonly shared expectations, or criteria, for each outcome at progressively more sophisticated and complex levels of performance. This analysis has resulted in what we have been calling “metarubrics,” or shared learning expectations.

[Figure: Creative Thinking Metarubric]

[Figure: Critical Thinking Metarubric]

[Figure: Integrative Learning Metarubric]

The VALUE project is piloting the use of these rubrics by having faculty score actual student work collected in e-portfolios on twelve leadership campuses and additional partner campuses. (See a complete list of leadership campuses.)

Although e-portfolio assessment does not typically result in a simple number or score for students, programs, or institutions, it does result in shared judgments about the quality of student performance in terms of important learning outcomes. The use of rubrics is not new, nor are the methods for creating inter-rater reliability. The resulting e-portfolio scores and judgments are more detailed, more indicative of the types of learning expected, and more nuanced than simple numeric scores. The examples of work upon which the assessments are based are what students actually submitted in response to the assignments and requirements of the curriculum (and co-curriculum) that comprised their educational program. They therefore reflect students’ levels of motivation, focus, and investment in demonstrating their learning as exhibited on a day-to-day basis; in other words, the assessment data have face validity.
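Since the established methods for creating inter-rater reliability come up here, a minimal sketch may help. Assuming Python and a four-level rubric scale, the function below computes Cohen’s kappa, one common chance-corrected agreement statistic, over two raters’ scores of the same work samples; the scores themselves are invented for illustration.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' rubric scores."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical scores (performance levels 1-4) from two faculty
# raters scoring the same ten work samples with a shared rubric.
rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
print(f"kappa = {cohen_kappa(rater_a, rater_b):.2f}")  # ~0.71
```

Values near 1 indicate strong agreement beyond what chance would produce; calibration discussions among raters are typically used to raise this figure before scoring at scale.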

We hope that the VALUE project will be able to demonstrate several things: that faculty across the country share fundamental expectations about student learning on all of the Essential Learning Outcomes deemed critical for student success in the 21st century; that rubrics can articulate these shared expectations; that the shared rubrics can be used and modified locally to reflect campus culture within this national conversation; and that the actual work of students should be the basis for assessing student learning and can more appropriately represent an institution’s learning results.

Specifically, how do student learning and student work get assessed? What is the relationship between these “metarubrics” (at a national level) and what actually happens at the local level?
From the collection of rubrics for each outcome, we have engaged teams of faculty and staff to examine the rubrics and to identify the criteria or expectations for learning that appear across multiple institutions. In essence, we have asked the teams to articulate shared expectations and criteria for each outcome. The purpose of this exercise is to demonstrate to ourselves, and to those outside the academy, that faculty across the country and at different types of institutions do have shared criteria for what student learning should look like from beginning or novice levels through advanced understandings and applications.

The shared general criteria are too broad to be useful for assessing specific student work at a course level, but the local rubrics developed for assessing student work are mirrored in these metarubrics, which encapsulate the shared expectations of faculty and others for student performance. The local rubrics will use different terms and language, but the core criteria contained in the metarubric map onto these local rubrics, so that faculty and staff can use what they have developed for their purposes with their students and, at the same time, show how what they and their students are doing fits within the core expectations for learning that are shared nationally. We can reduce these shared expectations to numbers, but we don’t have to, and as a result we can engage in a much more robust conversation about which learning outcomes our students are mastering and how well.
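To illustrate the mapping, here is a minimal sketch, assuming Python, of how a campus might record the correspondence between its local rubric criteria and the shared metarubric criteria so that locally scored work can be read against the national expectations. All criterion names here are hypothetical.

```python
# Hypothetical correspondence between one campus's local rubric
# criteria and the shared metarubric criteria.
metarubric_map = {
    "thesis_and_argument": "explanation_of_issues",
    "sources_and_documentation": "use_of_evidence",
}

def to_shared(local_scores):
    """Translate locally scored work into the shared criteria so campus
    results can be compared within the national conversation."""
    return {metarubric_map[c]: level for c, level in local_scores.items()}

# A locally scored piece of work, expressed in the shared terms.
print(to_shared({"thesis_and_argument": 3, "sources_and_documentation": 2}))
```

The design point is that the translation runs one way, from local language to shared criteria, so campuses keep the rubrics that work for their students while still contributing comparable results.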

Various campuses have been taking the core criteria of the metarubrics and translating them into the language and context of their particular discipline or program when using the rubrics to assess their students’ work. Other campuses have been testing the metarubrics alongside their previously developed local rubrics and comparing the results when the two are used side by side to assess assignment products. We are right now in the process of gathering this feedback to modify the metarubrics and further refine their ability to represent shared expectations that can be used across a variety of campuses and programs.

Where are they being used and tested? What are some examples of what test campuses are doing?
The metarubrics are being tested by faculty on twelve leadership campuses that have histories of using rubrics and e-portfolios to assess student work. The twelve leadership campuses represent large and small, public and private, two- and four-year institutions, and different regions of the country. Each of these campuses uses student e-portfolios in one form or another to have students capture and present examples of the work they have done in response to assignments embedded in the curriculum and co-curriculum at their institutions.

We have relied upon the established processes on these campuses for testing the metarubrics. In many instances, campus faculty have used their local rubrics alongside the metarubrics to compare the two. No campus has piloted all of the rubrics, but all rubrics have been piloted among the campuses collectively. Based on the piloting of the metarubrics, the rubric teams have revised them. In total, there will be three iterations of piloting and redrafting for each metarubric during the VALUE project. Final drafts will be available in the summer of 2009.

In addition, almost sixty other campuses have requested permission to pilot test one or more of the rubrics with student work on their respective campuses (not all of these campuses are using e-portfolios of student work). On every campus, though, faculty members and student services colleagues are using the metarubrics to see how useful they are in assessing student work on the respective learning outcomes.

A lot of work with new media technologies involves student work that doesn’t fit traditional assessments. How might VALUE be useful for understanding new kinds of learning?
One of the things that we have learned through the research on student learning is that newer generations of students exhibit a variety of learning styles. As everyone knows, current students are much more technologically savvy than earlier generations; they use and expect to use the internet, audio and video sources, social networking modes, and so on. Many of our students do not perceive learning as a linear process attuned to traditional reading and writing; hyperlinking and networked learning are more commonly apparent in the classroom. Couple this with the fact that most student learning occurs outside of the classroom, and we have an environment in which we need to encompass a wider variety of modes for students to demonstrate their learning processes and achievements. By definition this forces us to encompass audio and video, Web 2.0, hard copy, and virtual learning.

The e-portfolio allows us to bring all of these modes of learning, and others, into the collection of evidence we use to assess student learning in the full complexity and variety of its existence. We have tried to encourage our rubric development teams to write rubrics that are not bound by a printed-page conception of learning but are applicable to, and encompassing of, other modes of performance.

Are there campuses using the VALUE rubrics to look at non-traditional kinds of learning?
Several campuses already have their students incorporate non-traditional modes of demonstrating their learning in their e-portfolios. Portland State University has students include videos of community-based work, performances, presentations to government boards, and interviews in their e-portfolios to demonstrate communication skills, civic engagement, working in teams, and so on. Alverno College has all of its students record oral presentations to show the growth and development of these abilities as they move through the curriculum. LaGuardia Community College has its students deeply engaged in visual representations of their learning, through art work, e-portfolio design, and the like, as a way to communicate their learning to family and communities outside the academy who are often not accustomed to the text-heavy traditions of higher education. Bowling Green State University, St. Olaf College, and the University of Michigan have students incorporate connections outside the classroom, whether in co-curricular activities or in community-based learning related to the curriculum.

Often we perceive a tension between the desire to assess student learning and the interest in experimentation with new approaches to learning. Assessment of recognizable outcomes and innovation often seem at odds. Might the work of the VALUE project help address that tension?

We certainly hope so. The development of the metarubrics and their pilot testing on campuses were designed to create a shared set of standards that could be used for assessing, or judging, more traditional demonstrations of learning as well as Web 2.0 work, live performances, and other types of learning. The outcomes for learning can be demonstrated in many ways. In the past, some have been too quick to conclude or declare that certain types of learning cannot be measured. The reality that we all face is that when we begin to evaluate learning, we are always grasping at and relying upon indicators of learning.

Learning of the essential outcomes does not occur in a vacuum or in the ether; it occurs through content and knowledge bases, and therefore will vary depending on the knowledge base on which it rests. Part of the reason we have different disciplines and interdisciplinary programs is that different knowledge sets and ways of knowing result in learning outcomes being demonstrated in different ways. But in the deconstruction of the demonstrated learning, we tend to find similarity in the core components or criteria of learning, e.g., for critical thinking.

Just as we learn from our research and from our colleagues, we also learn from our students. Innovation and creativity are part of what we all look for in our students’ learning; they tend to be the ultimate learning outcomes that we try to capture in many ways, e.g., capstone courses and projects, senior recitals, and e-portfolio graduation reflections on work. Having shared expectations or standards for learning outcomes is in no way in conflict with innovation. Our limitations are often due to a lack of knowledge of, and comfort in, using newer technologies to capture and represent the learning we seek in our students.

How could a campus make these viable? How would they be useful to start a conversation or provide a framework for discussion around student learning?

Our experience at AAC&U in working with faculty on campuses across the country is that faculty are typically eager to have permission to talk about and to focus on student learning. Once you get beyond complaints that teaching is not rewarded adequately, faculty embrace discussing learning and teaching. So there is no difficulty in getting faculty interested in talking about the subject. The biggest barrier is often a lack of awareness of the options for assessing learning and of what it would take for an individual faculty member to adapt what they know and are familiar with to a new environment or process.

Part of the purpose in selecting the VALUE leadership campuses was to identify a diverse set of institutions that are using e-portfolios and rubrics in different ways, to illustrate how faculty and institutions can see themselves beginning, expanding, or enhancing what they are doing to assess student learning. By broadening our work to include campuses that are not using e-portfolios, we also wanted to demonstrate how similar approaches can be undertaken without an investment in e-portfolios. Increasingly, that investment is becoming less and less of an obstacle for campuses, since there are free Web tools that students can use to construct e-portfolios.

Essentially, we are finding that campuses are recognizing that student learning is something that the entire campus community is engaged with; each person on the campus participates in the learning, but no one is responsible for all of the learning. By creating and articulating shared learning expectations, we are helping faculty and others on campus see how they can contribute to student learning for essential outcomes; we help students become better judges of their own learning progress; and we create the evidence we can use to communicate to other audiences exactly what it is that our students are learning and what they can do with that learning.

By experimenting with e-portfolios and Web technology, we expand our capacity to capture learning robustly and to give students opportunities to apply their learning in “real world” situations, which employers, civic leaders, and policymakers are calling for. E-portfolios also reflect the attendance patterns of the many students who attend multiple institutions (often at the same time) as they move through their educational careers. Their learning is shared in ways we often overlook: with different faculty and colleagues, at different institutions, perhaps in different states, and across different spans of time. The sharing of rubrics, of expectations for learning, perhaps most importantly allows our students to have a much clearer picture of what their learning should look like. They can use the rubrics to frame the demonstration of their learning in an e-portfolio when transferring among institutions, when applying for a job, or when applying to graduate school. The rubrics allow students to better assess their own strengths and weaknesses in areas of learning.

Having been a faculty member on several campuses for over twenty years, I know that using rubrics and e-portfolios does not have to create more work. It requires working differently, shifting my time and focus a bit, but it is richer and more rewarding than what I used to struggle with in trying to communicate my expectations for learning and how students could more readily succeed in meeting those expectations. There is a transparency and a power of communication that enrich the conversations both with students and with colleagues.
