NISO’s webinar on item-level (i.e. article) usage statistics

Categories: Midd Blogosphere

On September 15, I “attended” a webinar presented by the National Information Standards Organization (NISO) focused on improving the capacity to measure usage at the article level. The presenters contend that article-level usage information more accurately reflects scholarly impact than the current ‘gold standard,’ citation-based measures.

The first presenter, Paul Needham, described the progress of the PIRUS 2 project, whose mission is to establish global standards for gathering and reporting article-level usage information. (It is funded by the Joint Information Systems Committee (JISC) in the UK, and COUNTER (Counting Online Usage of NeTworked Electronic Resources), the group that established protocols for journal-level usage statistics, is a participant.) Needham reported that PIRUS has loaded data from six publishers covering 555,000 articles from 5,500 journals and just shy of 17 million download events. They expect there will be upwards of 3 billion article download events to ‘count’ each year once most publishers are participating. They are also working with institutional repositories. More information can be found at their website: www.cranfieldlibrary.cranfield.ac.uk/pirus2/

The second presenter, Johan Bollen, described the Metrics from Scholarly Usage of Resources (MESUR) project (http://www.mesur.org/MESUR.html). “The project’s major objective is enriching the toolkit used for the assessment of the impact of scholarly communication items, and hence of scholars, with metrics that derive from usage data.” Bollen contends that impact factors based on citation data do not honor the full scope of scholarly activities; furthermore, citation-based measures take more than two years to develop. In contrast, usage data is available in near real time and at a much larger scale. He described MESUR as a “scientific project to study science itself from real-time indicators.” The usage data he has been working with includes “clickstreams,” so the project can follow which articles are clicked on after a given article is downloaded. One of his articles about this research is available at http://arxiv.org/abs/0902.2183 and includes a graphic that shows how citation-based metrics do not reflect actual usage. It is fascinating, in an academic, scholarly kind of way.
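
To make the clickstream idea a bit more concrete, here is a minimal sketch of my own (not MESUR’s actual code, and the log fields and article names are entirely hypothetical): from a time-ordered usage log, it tallies which article tends to be viewed next after a given article within each session, which is the kind of transition data usage-based metrics can be built on.

    # Minimal illustration of the clickstream idea: count article-to-article
    # transitions within user sessions from a time-ordered usage log.
    from collections import defaultdict

    # Hypothetical log entries: (session_id, timestamp, article_id).
    log = [
        ("s1", 1, "articleA"), ("s1", 2, "articleB"), ("s1", 3, "articleC"),
        ("s2", 1, "articleA"), ("s2", 2, "articleB"),
    ]

    # Group events by session, keeping them in time order.
    sessions = defaultdict(list)
    for session_id, timestamp, article_id in sorted(log, key=lambda e: (e[0], e[1])):
        sessions[session_id].append(article_id)

    # Count which article follows which; the result is a small weighted
    # "what was clicked next" graph.
    transitions = defaultdict(int)
    for articles in sessions.values():
        for current, following in zip(articles, articles[1:]):
            transitions[(current, following)] += 1

    for (src, dst), count in sorted(transitions.items()):
        print(f"{src} -> {dst}: {count}")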

NITLE Camp 2010 Days 1 & 2

Categories: Midd Blogosphere

NITLE Camp 2010 was four days of in-depth discussion and learning about assessment activities and the pedagogy and support of mobile devices. For me, it was a fantastic introduction to these topics, and I have so much more to discuss than what you’ll see here (so find me and we can talk if you want to hear more!), but here are some highlights of what I learned:

Day 1: Assessment: Ideas for inquiry & student success

The focus here was on learning-centered / student-centered assessment (as opposed to teaching-centered). Ashley Finley, Director of Assessment for Learning at the Association of American Colleges & Universities (AAC&U), led this day’s workshop.

Assessment as a conversation

Consider the idea that both formative assessment (continual throughout the learning process) and summative assessment (at the end of learning) have a role to play within an overall assessment program, but that assessment is inherently continual: a conversation, if you will.

Planning for assessment

Create a plan using a logic model: build it from right to left, then implement it working left to right. That is, start by defining the goal or outcome, then define the evidence needed to demonstrate it, then define the resources needed to effect the change; implementation works through these steps in the opposite order. Make sure the plan involves clear steps to analyze and share the data with as broad an audience as possible, and a clear timeline for doing so.

Making assessment a campus-wide endeavor

Approach assessment as a holistic, integrated, campus-wide activity. Many departments are already involved in assessment work, so take stock of current assessment activities in other college departments (this involves conversations). Establishing a map of assessment already under way helps everyone identify redundancies AND places where collaboration might occur. Ask your institutional research, college advancement, alumni, student life, civic engagement, admissions, and (in Middlebury’s case) Commons offices what they are doing to assess student learning outcomes.

For example, say you work in LIS and talk to the campus Alumni office. Imagine that you find out about an annual survey sent to alumni five years out that asks them to reflect on the value of their college experience. LIS is interested in obtaining feedback about the effectiveness of its information literacy program and adds one question to this survey asking what technology skills they learned or found most useful (or wished they had learned about) while an undergraduate. This tactic doesn’t create yet another survey but piggybacks on a tool already being used. It also provides a way of measuring an outcome beyond the traditional four-year time period (continuing the conversation).

Implement and adjust

Make adjustments to the assessment program as needed while it is running. Once the assessment program has run its course, take some time to evaluate its effectiveness. Revise and amend the assessment program on a regular (yearly) basis!

Day 2: Assessing instructional technology community meeting

Examples of assessment activities at other colleges and universities

DePauw University, Carol Smith, Director: assessment as a way to inform institutional priorities in IT
Colgate University: Collaboration for enhanced learning
St. Lawrence: ECAR, HEDS, CIRP, MISO, etc., and “run, don’t walk, to your institutional research officer”
Colgate University: Institutional research, planning, assessment effectiveness survey review
AAC&U and MISO: Inter-institutional assessment; VALUE rubrics and MISO survey
Stonehill College: Information literacy assessment program
Centre College: Assessing student literacy through a new first-year course
Trinity University: Information literacy quality enhancement plan, “Expanding Horizons”

Meeting participants resolved to check in on the progress of assessment activities at their home institutions sometime in September.

Poster Session

In the evening on day 2, I attended a poster session presented by other camp participants. (A PDF of all the poster abstracts is available to view.) I think I gravitated towards the posters on topics for which I wasn’t attending workshops or meetings (Moodle, digital storytelling). Two highlights:

Woodle (Moodle at Wooster) findings

I particularly enjoyed hearing from Matt Gardzina, Director of Instructional Technology at the College of Wooster, about his school’s experiences with its learning management system (LMS), Moodle (nicknamed Woodle). As the poster abstract explains, and as he related in person, faculty at Wooster ended up not really using Woodle for much more than course readings and as a parking spot for their syllabi; they used Woodle elements like quizzes and forums far less. As a result, the instructional technologists at Wooster have started to downplay Woodle and to ramp up support for their blogging and wiki platforms as alternatives to the LMS. I mentioned the Middlebury Curricular Technology team’s recommendation to support a suite of tools as opposed to a single LMS, and he agreed that it was a good recommendation, especially given his findings at Wooster. (Kudos to the CT team on validation of their recommendation from a comparable institution! I bet Matt would be willing to discuss this further if you wanted to learn more about the specifics of the Wooster findings.)

Before and After: Augmenting Digital Story Projects

When we teach with technology, how can we ensure a balance between student technology fluency and the other student learning outcomes for the course? Brett Boessen, Associate Professor of Media Studies at Austin College, shared some good examples when he explained how he has begun integrating formative accompanying materials (like storyboards) and self-reflective elements (students’ author statements) into a digital storytelling assignment in one of his classes. He played some delightful (and quite good) examples of videos, ranging from screencasts to mashups, created by students in his course on Participatory Cultures. By embedding planning and reflective elements in the assignment requirements, Brett seems to have struck a good balance between successfully engaging students with their own process of creating and sharing a story and achieving technology fluency.

Agenda for 20 May Manager’s Meeting

Categories: Midd Blogosphere

1. Discuss assessment pilot projects (see https://docs.google.com/Doc?docid=0AYZPHKugPdOiZGNzdDV0aDNfMjVnNnpma2ZmMg&hl=en )

2. Discuss annual planning calendar (see http://sites.middlebury.edu/lis/2010/05/19/annual-planning-calendar/ )

3. Brainstorm ideas for how to orient new staff members to LIS (we’ll post the list of ideas to the blog after the meeting)