As part of our ongoing coverage of how Middlebury is engaging with scholarship in the digital age, we take a look at scholarly publishing and some of the questions academia faces in evaluating digital scholarship.
The field of digital scholarship is quickly emerging as one of academia’s great frontiers, with plenty of exciting, and occasionally disconcerting, questions. On Tuesday, Middlebury’s Digital Scholarship Working Group hosted a roundtable discussion tackling specific issues around the topic of “Transforming Scholarly Publishing: New Forms of Peer Review, Open Access, and Building Academic Communities.”
Speaking to a mixed audience of faculty and staff at the Axinn Center, the panel featured two Middlebury faculty members, professors Jason Mittell and Alison Byerly — both of whom have rising profiles in the digital humanities — and national experts Kathleen Fitzpatrick, director of scholarly communication for the Modern Language Association and previously a professor of English and media studies at Pomona College, and Katherine Rowe, professor of English at Bryn Mawr College and director of the Tri-Co Digital Humanities Center.
Fitzpatrick, whose earlier research focused on the relationship between traditional forms of publishing and communication and newer media forms, began thinking about scholarly publishing as one of those “modes” in which things are moving from print to digital. She particularly thought about what this meant for peer review, and whether the traditional forms of peer review for print publishing made sense for work produced digitally. With the blessing of her publisher, she put her ideas to the test with her book, Planned Obsolescence: Publishing, Technology, and the Future of the Academy (NYU Press, 2011), posting the entire manuscript to MediaCommons, an online network promoting new forms of publishing in media studies, as well as sending it to traditional reviewers through her publisher. Fitzpatrick says that the experiment generated some excellent discussion about the manuscript and provided valuable information about what works and doesn’t work in open peer review.
Rowe, a Shakespeare scholar, was also exploring what the great digital revolution meant to her field. “Like Kathleen, I’m interested in what you learn about the texts that you care about as your longstanding texts migrate across platforms,” she said. While serving on the board of the journal Shakespeare Quarterly, she was asked to guest edit a special edition themed around Shakespeare and new media. “I said, ‘It seems to me that the most important transformation new media is going to bring to the field of Shakespeare studies is in our modes of scholarly communication.’” In a radical departure from tradition, she collaborated with MediaCommons to develop a new open review process for the journal, which attracted nearly 350 comments from 41 scholars in humanities fields.
A big benefit to editors, Rowe noted, is that such highly visible commentary makes it easier for an editor to see the points of stress in an argument, the points of debate, and, of course, the points of agreement among reviewers. She says the process was highly successful for both participants and editors. “The most important takeaway, I think, from a big-picture perspective of this experiment is that it gave us a process that was assessable in its own right, archivable as a process, and therefore replicable. This is not a kind of product that humanist scholars generally produce. Now we have a lot of data about what the strengths and weaknesses of this process are.”
Mittell spoke from real-time experience. He is currently in the midst of an open review — placing a pre-print draft online and soliciting comments from both expert and lay readers — of the manuscript for his new book, Complex TV: The Poetics of Contemporary Television Storytelling. Mittell chose to serialize the release of his manuscript online by chapter rather than as a whole. He says that one of the things scholars have learned is that comments on a full manuscript can be robust at the beginning and middle of a book, but then taper off. Release by chapter, along with plenty of outreach through online outlets and social media, has helped produce a more even distribution of comments throughout his text.
The nature of open review comments can be helpful in ways the traditional model rarely affords, says Mittell. “Generally, the conversation has been quite good. It tends to be very granular, focused on the individual paragraph. Of course, that’s the kind of commentary you don’t get when you submit a full book manuscript. You very rarely see ‘In paragraph 17 in chapter 3 your argument loses track.’ That kind of granularity has been very helpful in my own revision process.”
Byerly, who served as provost at Middlebury for several years, offered a different angle on digital scholarship. She is particularly interested in the process of how to evaluate the quality of digital work in the context of promotion and tenure, and has presented on the topic at the MLA annual conference. “It’s interesting because I don’t have anything like the level of expertise of my colleagues here, but it shows that there’s a real desire to find people who can bridge the gap between the work being done by practitioners and the institutional structures that they inevitably have to get slotted into.”
Byerly sees potential for the faculty evaluation process to emulate the emerging open review process in publishing. She says much valuable feedback is lost in the traditional process because the detailed discussion in a closed review committee rarely makes it to the faculty member who could benefit most from hearing it. “I really see an analogy between the publishing industry and the way in which a lot of evaluation that the whole academy is founded on really needs to shift in ways that publishing is starting to take account of.”
The challenges of evaluating this type of scholarship, says Byerly, often come down to format — how the work is presented. “But also questions of what we actually look for in scholarship, in something that you call scholarship, and what constitutes a scholarly argument? Does it have to be a text-based argument, or does a database and a set of information presented visually constitute a kind of intellectual product, and argument in itself?” The frequently collaborative nature of digital work makes it especially challenging, says Byerly. If you’re reviewing someone who has worked on a digital archive with four other colleagues, for example, it can be difficult to assess that individual’s specific intellectual contribution.
One of the biggest challenges, says Byerly, is simply finding people with the right expertise to evaluate digital content. “You realize it’s a kind of shell game, where everyone is looking for some source of authority. The committees look to publishers, the publishers look to readers, readers look to other colleagues for somebody to say, ‘Is this worth looking at?’ So the idea of putting it in a space where there can be a mutual process of validation, where a variety of voices can enter into some kind of dialogue that produces some kind of judgment about whether a work is worth spending more time on seems to me a very productive way to move forward in a context where traditional sources of authority are very hard to find.”
Whether scholars find themselves on the producing end or as evaluators, it is “absolutely where the fields are headed,” said Byerly, “where most disciplines are ultimately going to end up in some way. What I think most of us, who, as scholars trained in different ways, probably have to figure out is where we ourselves fit in, either as colleagues who are in a position to be participating in these trends, or as department chairs who are sitting in judgment some day on junior colleagues who are doing work that presents itself in a way that’s different from what you’re traditionally accustomed to.”