Digital Scholarship and the Tenure and Promotion Process

by Kristine M. Bartanen

Kristine Bartanen is academic vice president and dean of the University of Puget Sound in Tacoma, Washington, a position she has held since 2004. She has served Puget Sound as director of forensics, professor and chair of the Communication and Theatre Arts department, associate academic dean, and vice president for student affairs and dean of students. Dr. Bartanen’s work has included particular attention to development of academic-residential programs on the campus, including residential first-year seminars; growth of the interdisciplinary curriculum, most recently in neuroscience; and support of civic scholarship, such as the Sound Policy Institute and the Race and Pedagogy Initiative.

Many liberal arts college faculty members are interested in digital resources and are increasing their use of them in teaching and scholarly work. Some have been developing digital teaching resources for nearly two decades, some have begun to publish scholarship in online journals and other digital venues, and some are doing ground-breaking work in open-source, collaborative scholarly projects. Others, particularly pre-tenure or pre-promotion faculty, are reluctant to venture into digital work out of concern for how that work will be acknowledged, valued, and rewarded in existing faculty tenure, promotion, and merit award systems. That reluctance lives in tension with the recognition that technology-enabled teaching and scholarship are advancing at other institutions – academic and non-academic alike – and that professional currency in the academy demands new or amended frameworks in the liberal arts college for evaluating digital work.

Digital scholarship is often considered difficult to evaluate because it crosses the customary boundaries of teaching, scholarship, and service more often than traditional scholarship does. This territory is a “mixed-use landscape” that liberal arts college faculty members often cross in their careers on teaching-intensive, residential campuses, where multiple forms of scholarship[1] are evident in evaluation portfolios, where mentoring of undergraduate research produces scholarship co-authored with students, and where community-based teaching and civic scholarship[2] transcend or elide those same traditional boundaries. My own experience – as a faculty member whose portfolio always included off-site teaching and administrative components[3]; as chair of a department with multiple programs in which laboratories for learning extended beyond the traditional classroom[4]; and as a dean who has supported academic-residential initiatives, civic scholarship, field school projects, and interdisciplinary work, and who has evaluated faculty members for more than a decade – suggests that faculty members on liberal arts campuses can succeed (indeed, in some cases may even lead the way for others) in demonstrating effective use of evaluation guidelines for digital scholarship.

There are challenges, however, of at least two kinds. Evaluation of digital scholarship demands new conceptions of traditional measures of peer review and scholarly authority. Faculty and administrators are challenged to articulate, for example, what standards of peer review are appropriate for digital scholarship; what process revisions are needed to make visible and to communicate the rigor of digitally published or born-digital work; how “open” components of digital work – such as collaborative authoring, ongoing iteration, absence of page limits or page costs, or markedly different time-to-publication than print monographs – intersect with assessments of quality; and what constitute credible markers of “impact” for digital scholarship. As Michael Jensen has observed:

For universities, the challenge will be ensuring that scholars who are making more and more of their material available online will be fairly judged in hiring and promotion decisions. It will mean being open to the widening context in which scholarship is published, and it will mean that faculty members will have to take the time to learn about — and give credit for — the new authority metrics, instead of relying on scholarly publishers to establish the importance of material for them.[5]

Further, evaluation of digital scholarship demands new conceptions of what constitutes scholarly work. Digital scholars are not only moving work to new venues; nor, as William Thomas III[6] argues, are they merely migrating analog data to digital form. Digital scholars, he summarizes, are engaged in “a humanistic scholarly endeavor, a process of encoding, editing, interpreting, and curating.” They are blurring traditional boundaries of teaching, scholarship, and service, and they are engaged in interdisciplinary, multidisciplinary, and transdisciplinary work. To support and appropriately reward such scholarship, evaluators need guidance on how to assess boundary-crossing work and on what standards account for the scholarly effort involved in learning or building new technological tools and archival resources, in analyzing and curating “big data,” and in using new tools and resources to extend scholarly arguments and create new knowledge.

The goal of this NITLE project is to provide access to resources that can assist liberal arts college faculties in crafting evaluation criteria and guidelines on their campuses that will appropriately support and encourage digital scholarship. While any given faculty, department, or individual faculty member might address this task independently, the existence of framework guidelines – particularly those endorsed by professional groups – will add credibility to new or revised evaluation practices, serve as potential benchmarks for best processes, and reduce redundant effort.

Definitions

While the roots of digital scholarship extend back several decades, definitions and descriptions of the work continue to evolve. Collaboratively authored definitions from Wikipedia provide a starting point:

Digital scholarship is the use of digital evidence, methods of inquiry, research, publication and preservation to achieve scholarly and research goals. Digital scholarship can encompass both scholarly communication and publication using digital media and research on digital media. . . . Digital scholarship may also include born-digital means of scholarly communication that are more traditional, like online journals and databases, e-mail correspondence and the digital or digitized collections of research and academic libraries.[7]

The Digital Humanities are an area of research, teaching, and creation concerned with the intersection of computing and the disciplines of the humanities. Developing from the fields of humanities computing, humanistic computing, and digital humanities praxis . . . digital humanities embrace a variety of topics, from curating online collections to data mining large cultural data sets. Digital humanities (often abbreviated DH) currently incorporate both digitized and born-digital materials and combine the methodologies from traditional humanities disciplines (such as history, philosophy, linguistics, literature, art, archaeology, music, and cultural studies) and social sciences with tools provided by computing (such as data visualisation, information retrieval, data mining, statistics, text mining) and digital publishing.[8]

Given that “DH” now encompasses areas of study beyond the humanities (and there is evidence that the sciences are wrestling with evolution of impact metrics in the digital age[9]), this paper uses “digital scholarship” as the more inclusive designation, even while recognizing the term is subject to debate.[10]

Need for Tenure and Promotion Guidelines

Literature that calls for, outlines, or discusses guidelines for evaluation of digital scholarship is robust, even as incorporation of such guidelines into campus tenure and promotion processes appears to remain inchoate. In advance of the 2014 MLA national meeting, Mark Sample reported that “digitally-inflected” sessions comprised nearly ten percent of approximately 800 convention slots,[11] roughly double the number in 2012. To gauge the degree to which faculty members, particularly at liberal arts colleges, are engaged in digital scholarship, this project included a review of two selected membership groups: NITLE members and Annapolis Group institutions. A web survey used institutional home page search boxes to identify “digital scholarship” (DS) or “digital humanities” (DH) endeavors, with activity “counted” if the institution (1) had a dedicated DS/DH website, (2) participated in a consortium-level DS/DH program, (3) listed at least one faculty member with DH as a research interest or area of expertise, (4) offered at least one course with “digital humanities” in the title, (5) hosted a THATCamp,[12] or (6) had been awarded a grant to fund DS/DH activities.
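For readers who wish to replicate or extend this kind of survey, the counting rule is simple to operationalize. The Python sketch below is illustrative only: the criterion labels and institution records are hypothetical placeholders, not data from the actual survey instrument.

```python
# Minimal sketch of the survey's counting rule: an institution "counts" as
# active in DS/DH if it satisfies at least one of the six criteria.
# All records below are hypothetical placeholders, not actual survey data.

CRITERIA = [
    "dedicated_ds_dh_website",   # (1) dedicated DS/DH website
    "consortium_ds_dh_program",  # (2) consortium-level DS/DH program
    "faculty_with_dh_interest",  # (3) faculty member listing DH expertise
    "dh_course_in_catalog",      # (4) course with "digital humanities" in title
    "hosted_thatcamp",           # (5) hosted a THATCamp
    "ds_dh_grant_awarded",       # (6) grant awarded to fund DS/DH activities
]

institutions = {
    "Example College A": {"dedicated_ds_dh_website", "ds_dh_grant_awarded"},
    "Example College B": {"hosted_thatcamp"},
    "Example College C": set(),  # no observed DS/DH activity
}

def is_active(observed):
    """An institution counts if it meets at least one of the six criteria."""
    return any(criterion in observed for criterion in CRITERIA)

active = sorted(name for name, obs in institutions.items() if is_active(obs))
print(f"{len(active)} of {len(institutions)} institutions show DS/DH activity:")
for name in active:
    print(" -", name)
```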

Results are contained in Table 1 and Table 2 and suggest that faculty involvement in digital scholarship is growing and will continue to grow in the years ahead. Yet a web search of the 27 institutions listed in Table 1 turned up no evaluation, promotion, or tenure policies specific to digital humanities or digital scholarship.[13] This finding aligns with the conclusion articulated by Sheila Cavanagh, in the Fall 2012 special issue of the Journal of Digital Humanities devoted to faculty evaluation: “Written guidelines for digital assessment rarely exist and many tenured faculty members remain unable, unwilling, or blind to the need to adapt current promotion criteria to digital scholarship.”[14] Eric Schnell concurs: “Review committees have a difficult time understanding the significance of digital scholarship, let alone knowing how to assess its impact. . . . As a result, emerging forms of digital scholarship are often not defined in criteria documents and therefore not fully valued [in] the faculty rewards system.”[15] Fortunately, resources are available to provide guidance both for scholars engaged in exciting and vibrant digital scholarship and for the colleagues called upon to evaluate them fairly and adequately.

Model Guidelines

The Modern Language Association has crafted evaluation guidelines for digital scholarship that have served as models for other associations and for colleges and departments.

The MLA Committee on Information Technology hosts a comprehensive evaluation wiki (http://wiki.mla.org/index.php/Evaluation_Wiki) that includes materials from periodic MLA evaluation workshops, a “Short Guide to Evaluation of Digital Work,” suggestions for candidates preparing an evaluation file, example guidelines from selected universities, and a list of resource links and a bibliography related to digital scholarship in the humanities. Guidelines from other associations are also linked there.

Institutional evaluation guidelines[16] are likewise linked on the MLA evaluation wiki.

The American Historical Association adapted the MLA guidelines and, upon the recommendation of its American Association for History and Computing, endorsed its own set of guidelines in January 2002.

NINES (Networked Infrastructure for Nineteenth-Century Electronic Scholarship), through summer workshops supported by the National Endowment for the Humanities and involving faculty members (2011) and chairs of literature departments (2012), has drafted guidelines to support literature scholars.

One of NINES’ primary functions is to provide scholarly peer review, and it has published two further documents that offer guidance toward that end.

Resources

In addition to the sample guidelines, whose key components are charted in Table 3, leaders in digital scholarship have provided presentations and articles that support appropriate evaluation of this work. The Fall 2012 issue of the Journal of Digital Humanities[18] was published as a composite, single location for “proposals, guidelines, and experiences . . . and a living bibliography that will grow as additional examples are published across the web.” Included there is a particularly clear and thorough statement by Todd Presner et al. of the UCLA Digital Humanities Program, “How to Evaluate Digital Scholarship,” which is reproduced with permission in Appendix 1 of this paper. Additional resources are contained in Appendices 3-10.

Suggestions

Successful evaluation is both formative and summative. Evaluation candidates look forward to helpful and detailed feedback; they also expect full and fair consideration for change-of-status recommendations for tenure and promotion. Successful reviews can be supported through three important steps:

  1. As appropriate for the particular campus, draft departmental or college-wide evaluation guidelines for digital scholarship. On my campus, a faculty code governs faculty evaluation across the campus; in addition, each department and program articulates its own evaluation guidelines, which are subject to review and approval by the faculty’s professional standards committee. We have used addenda to departmental guidelines in cases where unique or distinctive appointments or roles (of which digital scholarship could be an example) require specially defined guidelines. Creation of the new guidelines or an addendum (such as the example in Appendix 2) also brings departmental and/or campus colleagues into conversation, well in advance of the review, which helps to build mutual understanding of expectations.
  2. Ask senior colleagues to take responsibility for completing the guidelines and forwarding them for approval to the faculty’s professional standards committee, faculty senate, or full faculty. Collaborating with educational technology and digital collections librarians on the guidelines can help to ensure that system requirements are in place and also reinforces the collaborative nature of digital work. Again, discussion of guidelines or addenda in these broader forums raises awareness and understanding of digital work as a legitimate and valuable form of scholarship.
  3. Mentor evaluation candidates in the development and preparation of their portfolios. One form of mentorship is to encourage and make space for pre-evaluation digital scholars to share their work in campus seminars or workshops so that familiarity with their work is cultivated before the evaluation period. Another source of mentorship is the sample evaluation statements made available by digital scholars such as Cheryl Ball,[19] editor of Kairos: A Journal of Rhetoric, Technology and Pedagogy, and Katherine D. Harris,[20] who published her research statement in the Fall 2012 issue of the Journal of Digital Humanities. A senior campus colleague or another digital scholar on campus, perhaps in another department, could review the clarity of a candidate’s draft evaluation statement and how the candidate’s presentation of evidence intersects with campus guidelines; offer feedback on the wisdom of including in the file a reading such as the Presner et al. document appended here; and provide pre-evaluation feedback well in advance of the due dates for submission of evaluation materials. Mentors can also help to temper candidate or colleague expectations that – as captured in a recent Chronicle Vitae article – digital scholars must work twice as hard as “traditional” scholars do to meet evaluation standards.[21]

Conclusion

In the rapidly changing digital environment, this project is necessarily a work-in-progress designed to open conversation. Additional papers and articles, example documents, and approved evaluation guidelines will emerge to join the resource lists. These resources, I hope, will reduce faculty members’ reluctance to engage in digital scholarship and increase their understanding of the nature of digital work. Those two changes will make space for the “larger possibilities” envisioned by Edward Ayers in his recent Educause Review essay:

Digital scholarship can reframe issues of enduring interest with broad arrays of information, it can integrate vast scholarly literature into more useful forms, and it can significantly broaden our temporal or spatial comprehension. In short, digital scholarship needs to do things that simply cannot be done on paper. . . . How can we advance digital scholarship? By thinking of larger possibilities. . . . As we try to foster digital scholarship in the years ahead, we need to begin by understanding the cultural, economic, personal, and institutional world in which the new scholarship will live. That scholarly world defines a problem within a meaningful conversation, it arrays evidence to address the problem, it makes the clearest case for a solution to the problem, and it conveys that case to every relevant audience. Digital scholarship must assume all of those responsibilities if it is to take its place as academic scholarship and if it is to align the core purpose of higher education with the possibilities of our time.[22]

Appropriate and fair evaluation of digital scholarship in the promotion and tenure process is one small but important step toward such a vision.

Kristine M. Bartanen (acadvp@pugetsound.edu) is academic vice president and dean, University of Puget Sound, Tacoma, WA, and a 2013-2014 NITLE Fellow. Credit for the literature review and website survey components of this paper belongs to Elizabeth Knight, MLIS, Seattle archivist and librarian, who contributed thorough and efficient research support as well as perceptive insights essential to the project.


Table 1: Digital Scholarship Activity among Annapolis Group and NITLE member Institutions

Institution – Activity
Bowdoin College – Digital and Computational Studies Initiative: http://www.bowdoin.edu/digital-computational-studies/index.shtml
Bryn Mawr College – Member of Tri-College DH project, plus visible local activity
City University of New York – Digital Humanities Initiative: http://cunydhi.commons.gc.cuny.edu/; new faculty: http://www1.cuny.edu/mu/forum/2012/11/02/renowned-digital-humanities-expert-lev-manovich-joining-graduate-center-faculty
Bucknell University – Digital Scholarship Initiative: https://www.bucknell.edu/newsevents/current-news/2013/july/rethinking-bucknell-in-the-21st-century.html
Davidson College – Digital Studies Initiative: http://dsi.davidson.edu
Dickinson College – Digital Humanities Advisory Committee: http://blogs.dickinson.edu/digitalhumanities/
Emory University – Center for Digital Scholarship: http://digitalscholarship.emory.edu/
Hamilton College – Digital Humanities Initiative (DHi): http://www.dhinitiative.org/
Harvey Mudd College – Member of CCDH; hired UCLA Center for DH CIO, Joseph Vaughan
Haverford College – Member of Tri-College DH; Digital Scholarship Program: http://library.haverford.edu/services/digital-scholarship
Hope College – Mellon Scholars Program; Praxis Network member; home institution of William Pannapacker
Lafayette College – Mellon DH grant recipient
Macalester College – With Carleton and St. Olaf, received Mellon DH grant
Middlebury College – DISH: Digital Scholarship Hub: http://sites.middlebury.edu/futures/dish-digital-scholarship-hub/
Mt. Holyoke College – TEI Initiative and TAPAS involvement: https://www.mtholyoke.edu/courses/smoss/TEI_Initiative_MHC.pdf
Occidental College – Center for Digital Learning + Research: http://www.oxy.edu/center-digital-learning-research
Reed College – $800,000 Mellon and $150,000 Keck grants to advance student digital research
Richard Stockton College – Center for Digital Humanities: https://dh.stockton.edu/
St. Olaf College – Digital Humanities on the Hill project received $700k Mellon grant: http://wp.stolaf.edu/blog/st-olaf-receives-mellon-grant-for-digital-humanities-project/
Swarthmore College – Abundant DH activity
University of Chicago – Active DH English Department; annual Chicago Colloquium on DH and Computer Science; ARTFL project
University of Richmond – Digital Scholarship Lab: http://dsl.richmond.edu/
Virginia Tech – Center for Digital Research and Scholarship: http://www.cdrs.lib.vt.edu/
Washington and Lee University – DH Working Group: http://digitalhumanities.wlu.edu/
Wellesley College – Digital Scholarship Initiatives Program: http://www.wellesley.edu/lts/about/dsi/
Wheaton College (MA) – Digital History Project: http://wheatoncollege.edu/digital-history-project/; Digital Humanities: http://wheatoncollege.edu/technology/academic/technologies/digital-humanities-2/
Whittier College – DigLibArts Center: http://www.whittier.edu/academics/diglibarts; Center received $750k Mellon grant


Table 2: Digital Scholarship Activity among selected consortia

Consortium (total members) – Project
Associated Colleges of the Midwest (14) – Enhancing Midwest Knowledge Eco-system initiative (EMKE), including collaboration in the digital humanities: http://www.acm.edu/features/news/477; awarded a Mellon grant for liberal arts–research university collaboration
Claremont University Consortium (7) – Center for Digital Initiatives: http://libraries.claremont.edu/CDI/default.asp
Five College Consortium (5) – Five College Digital Humanities Program: https://www.fivecolleges.edu/dh
Five Colleges of Ohio Consortium (5) – Awarded $775,000 Mellon grant for digital collections and programs: http://www.wooster.edu/news/releases/2013/may/mellon-grant
Great Lakes Colleges Association (12) – Digital Liberal Arts Initiative: http://www.hope.edu/2013/10/01/william-pannapacker-direct-glca-digital-liberal-arts-initiative
New York Six Colleges Consortium (6) – New York Six Digital Humanities Consortium: http://www.dhinitiative.org/
Tri College Digital Humanities (3) – Tri College Digital Humanities: http://tdh.brynmawr.edu/


Table 3: Chart of key components to support guideline development

Six sets of organizational or institutional guidelines were compared:[23] MLA, AHA, Maine, Nebraska, NINES/NEH, and Presner et al. The count following each component indicates how many of the six sets include it.

  • Department: Delineate and communicate responsibilities at hire (2 of 6)
  • Department: Engage qualified peer reviewers (5 of 6)
  • Faculty member: Ask about evaluation and support (2 of 6)
  • Faculty member: Negotiate and document your role (2 of 6)
  • Faculty member: Document and explain your work (4 of 6)
  • Evaluators: Review work in the medium in which it was developed (5 of 6)
  • Evaluators: Recognize interdisciplinary dimensions of digital work (3 of 6)
  • Evaluators: Recognize the intrinsically collaborative nature of digital projects (3 of 6)
  • Evaluators: Recognize the iterative nature of projects and assess such work in the context of the appropriate stage of development (1 of 6)
  • Evaluators: Recognize multiple metrics/measures of impact and multiple dimensions of intellectual rigor (3 of 6)
  • Evaluators: Recognize that work crosses boundaries of teaching, research, and service (1 of 6)
  • Evaluators: Recognize experimentation and risk-taking, including negative results (1 of 6)
  • Evaluators: Avoid approximating equivalencies with print artifacts (1 of 6)
  • All: Stay informed about accessibility issues (2 of 6)
  • All: See also the checklists of questions included in these example guidelines (3 of 6)

Distributed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Appendix 1: “How to Evaluate Digital Scholarship”

From “A Short Guide to Digital Humanities.” In the cooperative and open access book by Peter Lunenfeld, Anne Burdick, Johanna Drucker, Todd Presner, and Jeffrey Schnapp, Digital_Humanities. Cambridge, Mass.: MIT Press, 2012; pp. 128-129. Open access edition (PDF e-book) at: https://mitpress.mit.edu/books/digitalhumanities-0.

The document is also available in the Fall 2012 Journal of Digital Humanities (http://www.journalofdigitalhumanities.org). Lead author Todd Presner affirmed in personal correspondence (3/13/14) that while the document has not been formally adopted by any institutional body at UCLA, it has informed discussion at all levels, particularly at the departmental level where it has been used in tenure and promotion cases. Presner and collaborators are affiliated with the UCLA Digital Humanities program (http://www.digitalhumanities.ucla.edu).

This text provides a set of guidelines for the evaluation of digital scholarship in the humanities, social sciences, arts, and related disciplines. The guidelines are aimed, foremost, at academic review committees, chairs, deans, and provosts who want to know how to assess and evaluate digital scholarship in the hiring, tenure, and promotion process.

The list is also intended to inform the development of institution-wide policies for supporting and evaluating scholarship and creative work that reflects traditional values while incorporating specific understandings of new platforms and formats.

Fundamentals for initial review

The work must be evaluated in the medium in which it was produced and published. If it is a website, that means viewing it in a browser with the appropriate plug-ins necessary for the site to work. If it is a virtual simulation model, that may mean going to a laboratory outfitted with the necessary software and projection systems to view the model. Work that is time-based—such as videos—will often be represented by stills, but reviewers also need to devote attention to clips in order to fully evaluate the work. The same can be said for interface development, since still images cannot fully demonstrate the interactive nature of interface research. Authors of digital works should provide a list of system requirements (both hardware and software, including compatible browsers, versions, and plug-ins) for viewing the work. It is incumbent upon academic personnel offices to verify that the appropriate technologies are available and installed on the systems that will be used by the reviewers before they evaluate the digital work.

Crediting

Digital projects are often collaborative in nature, involving teams of scholars who work together in different venues over various periods of time. Authors of digital works should provide a clear articulation of the role or roles that they have played in the genesis, development, and execution of the digital project. It is impractical—if not impossible—to separate out every micro-contribution made by team members since digital projects are often synergistic, iterative, experimental, and even dynamically generated through ongoing collaborations. Nevertheless, authors should indicate the roles that they played (and time commitments) at each phase of the project development. Who conceptualized the project and designed the initial specifications (functional and technical)? Who created the mock-ups? Who wrote the grant proposals or secured the funding that supported the project? What role did each contributor play in the development and execution of the project? Who authored the content? Who decided how that content would be accessed, displayed, and stored? What is the “public face” of the project and who represents it and how?

Intellectual rigor

Digital projects vary tremendously and may not “look” like traditional academic scholarship; at the same time, scholarly rigor must be assessed by examining how the work contributes to and advances the state of knowledge in a given field or fields. What is the nature of the new knowledge created? What is the methodology used to create this knowledge? It is important for review committees to recognize that new knowledge is not just new content but also new ways of organizing, classifying, and interacting with content. This means that part of the intellectual contribution of a digital project is the design of the interface, the database, and the code, all of which govern the form of the content. Digital scholars are not only in the position of doing original research but also of inventing new scholarly platforms. Five hundred years of print have so fully naturalized the “look” of knowledge that it may be difficult for reviewers to fully understand these new forms of documentation and the intellectual effort that goes into developing them. This is the dual burden—and the dual opportunity—for creativity in the digital domain.

Crossing research, teaching, and service

Digital projects almost always have multiple applications and uses that enhance research, teaching, and service. Digital research projects can make transformative contributions in the classroom and sometimes even have an impact on the public-at-large. This ripple effect should not be diminished. Review committees need to be attentive to colleagues who dismiss the research contributions of digital work by cavalierly characterizing it as a mere “tool” for teaching or service. Tools shape knowledge, and knowledge shapes tools. But it is also important that review committees focus on the research contributions of the digital work by asking questions such as the following: How is the work engaged with a problem specific to a scholarly discipline or group of disciplines? How does the work reframe that problem or contribute to a new way of understanding the problem? How does the work advance an argument through both the content and the way the content is presented? How is the design of the platform an argument? To answer this last question, review committees might ask for documentation describing the development process and design of the platform or software, such as database schemata, interface designs, modules of code (and explanations of what they do), as well as sample data types. If the project is, in fact, primarily for teaching, how has it transformed the learning environment? What contributions has it made to learning and how have these contributions been assessed?

Peer review

Digital projects should be peer-reviewed by scholars in fields who are able to assess the project’s contribution to knowledge and situate it within the relevant intellectual landscape. Peer review can happen formally through letters of solicitation but can also be assessed through online forums, citations, and discussions in scholarly venues, by grants received from foundations and other sources of funding, and through public presentations of the project at conferences and symposia. Has the project given rise to publications in peer-reviewed journals or won prizes by professional associations? How does it measure up to comparable projects in the field that use or develop similar technologies or similar kinds of data? Finally, grants received are often significant indicators of peer review. It is important that reviewers familiarize themselves with grant organizations across schools and disciplines, including the humanities, the social sciences, the arts, information studies and library sciences, and the natural sciences, since these are indicators of prestige and impact.

Impact

Digital projects can have an impact on numerous fields in the academy as well as across institutions and even the general public. They often cross the divide that arises among research, teaching, and service in innovative ways. Impact can be measured in many ways, including the following: support by granting agencies or foundations, number of viewers or contributors to a site and what they contribute, citations in both traditional literature and online (blogs, social media, links, and trackbacks), use or adoption of the project by other scholars and institutions, conferences and symposia featuring the project, and resonance in public and community outreach (such as museum exhibitions, public policy impact, adoption in curricula, and so forth).

Approximating equivalencies

Is a digital research project “equivalent” to a book published by a university press, an edited volume or a research article? These sorts of questions are often misguided since they are predicated on comparing fundamentally different knowledge artifacts and, perhaps more problematically, consider print publications as the norm and benchmark from which to measure all other work. Reviewers should be able to assess the significance of the digital work based on a number of factors: the quality and quantity of the research that contributed to the project; the length of time spent and the kind of intellectual investment of the creators and contributors; the range, depth, and forms of the content types and the ways in which this content is presented; and the nature of the authorship and publication process. Large-scale projects with major funding, multiple collaborators, and a wide-range of scholarly outputs may justifiably be given more weight in the review and promotion process than smaller-scale or short-term projects.

Development cycles, sustainability, and ethics

It is important that review committees recognize the iterative nature of digital projects, which may entail multiple reviews over several review cycles, as projects grow, change, and mature. Given that academic review cycles are generally several years apart (while digital advances occur more rapidly), reviewers should consider individual projects in their specific contexts. At what “stage” is the project in its current form? Is it considered “complete” by the creators, or will it continue in new iterations, perhaps through spin-off projects and further development? Has the project followed the best practices, as they have been established in the field, in terms of data collection and content production, the use of standards, and appropriate documentation? How will the project “live” and be accessible in the future, and what sort of infrastructure will be necessary to support it? Here, project specific needs and institutional obligations come together at the highest levels and should be discussed openly with deans and provosts, library and IT staff, and project leaders. Finally, digital projects may raise critical ethical issues about the nature and value of cultural preservation, public history, participatory culture and accessibility, digital diversity, and collection curation which should be thoughtfully considered by project leaders and review committees.

Experimentation and risk-taking

Digital projects in the humanities, social sciences, and arts share with experimental practices in the sciences a willingness to be open about iteration and negative results. As such, experimentation and trial-and-error are inherent parts of digital research and must be recognized. The processes of experimentation can be documented and can prove to be essential in the long-term development process of an idea or project. White papers, sets of best practices, new design environments, and publications can result from such projects, and these should be considered in the review process. Experimentation and risk-taking in scholarship represent the best of what the university, in all its many disciplines, has to offer society. To treat scholarship that takes on risk and the challenge of experimentation as an activity of secondary (or no) value for promotion and advancement can only serve to reduce innovation, reward mediocrity, and retard the development of research.

Appendix 2: Sample addendum for evaluating a digital scholar[24]

ADDENDUM TO THE [INSERT NAME] DEPARTMENT EVALUATION STANDARDS

FOR FACULTY MEMBERS ENGAGED IN DIGITAL SCHOLARSHIP

Approved by the [insert name] Department: ______ (date)

Approved by the Professional Standards Committee: _____ (date)

Preface

The faculty member engaged in digital scholarship should satisfy each evaluation criterion – teaching effectiveness, scholarship, and service to the department, college/university, and community – at the same level of quality expected of colleagues. Due to distinctive features of digital scholarship, however, the evidence of scholarship and markers of quality may distinguish the digital scholar from colleagues. Evaluation of the digital scholar offers unique opportunities and requires unique delineations because, by definition and practice, the digital scholar’s role may often challenge traditional academic categories and metrics for evaluation. The existence of these distinctions, however, does not diminish the standard of excellence expected of digital scholarship. The faculty member under review must provide evidence that her or his work as a digital scholar meets standards of quality prevalent among the communities of digital scholars in his or her disciplinary or interdisciplinary field.

This evaluation addendum is informed by digital scholarship evaluation guidelines endorsed by the [insert relevant professional association(s)].

Participation

Colleagues who participate in the review of a faculty member engaged in digital scholarship, whether at the department or college/university level, should be prepared to review digital work in the medium in which it was created and published. This means both that the faculty member being reviewed will be expected to provide accessible links to digital work and information on system requirements for viewing it, and that the review committee chair will be expected to work with the Office of the Dean to ensure that appropriate technologies are available and in working order for reviewers before they evaluate the digital work.

If external peer review of scholarship is required for the evaluation or, if not required, such peer review is desired by the faculty member being evaluated, those reviewers should similarly be expected to access digital work in its medium, with the appropriate systems in place. Furthermore, if external review is required, at least [#] of the standard number of such reviewers should be peers engaged in digital scholarship.

Definition of work

The faculty member engaged in digital scholarship may demonstrate excellence through such traditional vehicles as publication in print journals or monographs; presentations at conventions, conferences, workshops, or similar forums; publication of instructional materials; and reviews of books or other materials. The digital scholar may also demonstrate excellence by publishing in digital media, such as online journals and databases; conducting research or creating digital media, such as digitized collections of materials; curating online collections; or mining large cultural data sets. The faculty member engaged in digital scholarship should be expected to explain and document the work being pursued, to discuss its distinctive components and stage(s) of development, and to articulate both appropriate metrics for and evidence of the impact of the work.

Expectations

The reviewers of the digital scholar’s portfolio should expect to see:

  • work that is collaborative (often with colleagues who may be in staff roles in library or IT organizations);
  • work that may be interdisciplinary, multidisciplinary, or trans-disciplinary;
  • work that may involve experimentation and risk-taking; and
  • discussion by the faculty member and peer reviewers of these dimensions of the work.

The reviewers of digital scholarship should avoid:

  • attempting to equate digital products with traditional print products;
  • expecting the digital scholar to do more work than other colleagues; expect instead that the quantity of work may differ even as the quality of work is expected to be high; and
  • dismissing digital work that includes teaching about new forms of scholarship, or service to the development of new scholarly communication organizations, as not also scholarly in its contribution to the academy.

Questions and notice

Departmental or college/university-level questions about this addendum should be resolved prior to the initiation of the review process. The addendum itself should be completed and approved as early as possible in a digital scholar’s appointment, preferably by the end of the first year and no later than two years prior to a scheduled promotion or tenure review.

Appendix 3: Selected resources for evaluation guideline development

Braun, Catherine C. Cultivating Ecologies for Digital Media Work: The Case of English Studies. 2014. Southern Illinois University Press.

Note especially Chapter 3, “Scholarship through a New Lens: Digital Production and New Models of Evaluation,” pp. 91-131, which contains questions to guide departmental considerations of evaluation of digital scholarship.

Byerly, Alison. “Evaluating Digital Scholarship.” NITLE Seminar, October 10, 2012. http://www.slideshare.net/nitle/byerly-nitle-digital-scholarship

Slides 4-6 summarize that “Good evaluation depends on shared context: clear expectations, defined roles, recognized measures of success” and outline essential tasks for scholars and evaluators.

Fitzpatrick, Kathleen. “Beyond Metrics: Community Authorization and Open Peer Review.” In Debates in the Digital Humanities. 2012. University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates#text/7

Discusses ways both scholars engaged in digital work and evaluation committees can come to understand and articulate the values of open vs. traditional peer review.

Fitzpatrick, Kathleen. “Peer Review, Judgment, and Reading.” Profession. Modern Language Association of America. 2011. http://humanities.case.edu/digital/evaluatingDigital_6.pdf

Discusses traditional and new forms of review of scholarship in tenure and promotion evaluations, with emphasis on the responsibility of reading and on how to read open peer review of digital scholarship.

Flanders, Julia. “Time, Labor, and ‘Alternate Careers’ in Digital Humanities Knowledge Work.” In Debates in the Digital Humanities. 2012. University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates/text/26

Discusses the “para-academic” roles involved in digital humanities work (e.g., graduate assistant, consultant, freelancer, technology services staff, adjunct faculty member), how those roles intersect with and are valued relative to the traditional “faculty” paradigm, and future evolution likely in collaborative digital humanities work.

The Journal of American History. Website Reviews. http://www.journalofamericanhistory.org/submit/websitereviews.html

Outlines types of websites and provides general guidelines for review of them:

  • Content: Is the scholarship sound and current? What is the interpretation or point of view?
  • Form: Is it clear? Easy to navigate? Does it function effectively? Does it have a clear, effective, and original design? Does it have a coherent structure?
  • Audience/Use: Is it directed at a clear audience? Will it serve the needs of that audience?
  • New Media: Does it make effective use of new media and new technology? Does it do something that could not be done in other media—print, exhibition, film?

Lunenfeld, Peter, Anne Burdick, Johanna Drucker, Todd Presner and Jeffrey Schnapp. Digital_Humanities. Cambridge, Mass.: MIT Press, 2012. PDF eBook: http://mitpress.mit.edu/books/digitalhumanities-0

“… [a] report on the state of contemporary knowledge production… explores methodologies and techniques unfamiliar to traditional modes of humanistic inquiry – including geospatial analysis, data mining, corpus linguistics, visualization, and simulation… the authors argue that the digital humanities offers a revitalization of the liberal arts tradition…”

Williford, Christa and Charles Henry. 2012. One Culture: Computationally Intensive Research in the Humanities and Social Sciences. http://www.clir.org/pubs/reports/pub151

Based on eight international projects involving computational analysis of large data sets, this resource offers recommendations to enable universities “to adapt to, support, or sustain this emerging research over time.”

Appendix 4: Key Digital Humanities Organizations

Alliance of Digital Humanities Organizations:  http://adho.org/

American Academy of Arts and Sciences, Humanities Indicators: http://www.humanitiesindicators.org/

Association for Computers and the Humanities, hosted by Boston College: http://ach.org/

CenterNet: International Network of Digital Humanities Centers: http://digitalhumanities.org/centernet/

North America: http://digitalhumanities.org/centernet/centers

Council on Library and Information Resources, Digging into Data Challenge (NEH): http://www.diggingintodata.org/

June 2012 report: http://www.clir.org/pubs/reports/pub151

DevDH: http://devdh.org/

Recommended readings: http://devdh.org/recommended-readings/

Digital Humanities Now:  http://digitalhumanitiesnow.org/

A search for “evaluating digital scholarship” provides a chronological list of resources.

DH Commons:  http://dhcommons.org/

DHThis:  http://www.dhthis.org/

Harvard MetaLab: http://metalab.harvard.edu/

Institute for Computing in Humanities, Arts, and Social Science (iCHASS):  http://chass.illinois.edu/

Institutes for Advanced Topics in the Digital Humanities (NEH):  http://www.neh.gov/grants/odh/institutes-advanced-topics-in-the-digital-humanities

NEH Office of Digital Humanities:  http://www.neh.gov/divisions/odh

NITLE Digital Humanities Council:
http://blogs.nitle.org/2011/11/01/announcing-the-nitle-digital-humanities-council/

Open Scholar Foundation: http://www.force11.org/node/4383

Open Scholar platform: http://openscholar.harvard.edu/

Princeton Digital Humanities Initiative (2013): http://digitalhumanities.princeton.edu/

Appendix 5: ITHAKA and ITHAKA S+R resources on digital humanities

These sites do not appear to contain content focused specifically or primarily on faculty evaluation. Below are reports pertaining to digital humanities, faculty research support, or sustaining local digital scholarship.

NOTE: A final report will be issued in Spring 2014 and will be accompanied by a toolkit to support campus administrators in developing a digital strategy for supporting digital humanities projects at their institutions.

Appendix 6:  Selected Digital Scholarship Tools

  • Cohen, Daniel and Roy Rosenzweig. 2005. Digital History: A Guide to Gathering, Preserving and Presenting the Past on the Web. University of Pennsylvania Press.  http://chnm.gmu.edu/digitalhistory/

Appendix 7: Additional literature on faculty evaluation of digital scholarship

Andersen, D. L. (ed.) Digital Scholarship in the Tenure, Promotion and Review Process. 2003. http://www.mesharpe.com/mall/resultsa.asp?Title=Digital+Scholarship+in+the+Tenure%2C+Promotion%2C+and+Review+Process

“…examines the evolution of nontraditional scholarship, analyzes the various formats, and suggests guidelines for assessment on a scholarly level…” See Part III, Section 11, “The Development of Criteria for the Inclusion of Digital Publications in the Tenure Process: A Case Study of Washington State University Libraries,” by Ryan Johnson.

Clement, Tanya E. “Half Baked: The State of Evaluation in the Digital Humanities.” American Literary History. vol. 24, no. 4, October 2012, pp. 876–890. doi:10.1093/alh/ajs051. Review of: 

The American Literature Scholar in the Digital Age. 2011. Amy E. Earhart and Andy Jewell, eds. University of Michigan Press and University of Michigan Library.

and

Switching Codes: Thinking Through Digital Technology in the Humanities and the Arts. 2011. Thomas Bartscherer and Roderick Coover, eds. University of Chicago Press.

“… The American Literature Scholar in the Digital Age and Switching Codes are unique and valuable as collections whose historicizing illuminates current fissures in discussions about the ways that the academic community can evaluate digital scholarship in the humanities…”

Cheverie, Joan F., Jennifer Boettcher and John Buschman. 2009. “Digital Scholarship in the University Tenure and Promotion Process: A Report on the Sixth Scholarly Communication Symposium at Georgetown University Library.” Journal of Scholarly Publishing. April 2009. doi: 10.3138/jsp.40.3.219.

“… Four notable scholars who have done significant work in digital scholarly projects were invited to speak on the theme of ‘digital scholarship in the university tenure and promotion process’ at Georgetown University Library to explore the scholarship and the continuing problems with evaluating it – particularly for promotion and tenure…”

Liu, Alan. December 2011. “The state of the digital humanities: A report and a critique.” Arts and Humanities in Higher Education. DOI: 10.1177/1474022211427364

“… the scholarly field of the digital humanities has recently expanded and integrated its fundamental concepts, historical coverage, relationship to social experience, scale of projects, and range of interpretive approaches…[the field] has the potential not just to facilitate the work of the humanities but to represent the state of the humanities at large in its changing relation to higher education in the postindustrial state…”

MLA Profession. 2011. Evaluating Digital Scholarship (contains six articles, pp. 123-196, focused on the topic). http://www.mlajournals.org/toc/prof/2011/1

“There is a growing consensus that humanities disciplines must find ways not simply of evaluating but also of valuing digital scholarship as part of hiring, promotion, and tenure decisions. National scholarly organizations such as the Modern Language Association and the American Council of Learned Societies have called for departments and institutions to “recognize the legitimacy of scholarship produced in new media, whether by individuals or in collaboration, and create procedures for evaluating these forms of scholarship” (Report of the MLA Task Force).”

Purdy, James P. and Joyce R. Walker. 2010. “Valuing Digital Scholarship: Exploring the Changing Realities of Intellectual Work.” Profession. pp. 177-195. https://docs.google.com/viewer?url=http%3A%2F%2Fdmp.osu.edu%2Fdmac%2Freadings%2FPurdyWalker.pdf

“…many tenure guidelines . . . label research as either creative or scholarly,” counting only the scholarly…. This narrow binary is perhaps the most significant (and problematic) aspect of current attitudes regarding the value of digital scholarship… One way to evaluate scholarly production that avoids a simple print-digital binary opposition is to think … more about what it produces, participates in, or does. To develop a more robust, complex evaluation framework, we might ask these questions:… Such an approach to assessment would not look very much like the tenure-and-promotion activities now in place at most institutions.”

Richardson, James. 2013. “Establishing a New Paradigm: the Call to Reform the Tenure and Promotion Standards for Digital Media Faculty.” Journal of Interactive Technology and Pedagogy. Issue 3.
http://jitp.commons.gc.cuny.edu/establishing-a-new-paradigm-the-call-to-reform-the-tenure-and-promotion-standards-for-digital-media-faculty/

“… in the case of new and evolving fields of study, there are alternative criteria that would be better suited for the digital disciplines, and would serve as a more accurate assessment on the quality of faculty scholarship as they march towards tenure, promotion and reappointment….”

Schreibman, Susan and Ann M. Hanlon. 2010. “Determining Value for Digital Humanities Tools: A Report on a Survey of Tool Developers.” Digital Humanities Quarterly. Volume 4 Number 2. http://digitalhumanities.org:8080/dhq/vol/4/2/000083/000083.html

“… the authors conducted an online survey of developers of digital humanities tools in March 2008. The survey focused on their perceptions of their work, how they felt their tool development fit into a structure of academic rewards, and the value of tool development as a scholarly pursuit. Survey results indicate that tool development is indeed considered a scholarly activity by developers, but recognition of this work and rewards for it lag behind rewards for traditional scholarly pursuits…”

Zorich, Diane M. 2012. “Transitioning to a Digital World: Art History, Its Research Centers, and Digital Scholarship.” A Report to The Samuel H. Kress Foundation and The Roy Rosenzweig Center for History and New Media, George Mason University. http://www.academia.edu/2973600

Appendix 8: Selected faculty perspectives on evaluation of digital scholarship

Davidson, Cathy. 2009. “Respecting the Meaning of Tenure.”
http://www.hastac.org/node/2168

Davidson, Cathy. September 2012. “How Can a Digital Humanist Get Tenure?” http://www.hastac.org/blogs/cathy-davidson/2012/09/17/how-can-digital-humanist-get-tenure

Harley, Diane, et al. 2010. Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines. Center for Studies in Higher Education. http://escholarship.org/uc/cshe_fsc

Includes faculty responses from fields of archeology, astrophysics, biology, economics, history, music, and political science.

Jaschik, Scott. 2009. “Tenure in the Digital Era.” http://www.insidehighered.com/news/2009/05/26/digital

Kolowich, Steve. 2012. “The Promotion That Matters.”
http://www.insidehighered.com/news/2012/01/04/evaluating-digital-humanities-enthusiasm-may-outpace-best-practices

Koh, A. 2012. “The Challenges of Digital Scholarship: A Report on the MLA Preconference on Evaluating Digital Work for Promotion and Tenure.” http://chronicle.com/blogs/profhacker/the-challenges-of-digital-scholarship/38103

Lee, Valerie and Cynthia L. Selfe. 2008. “Our Capacious Caper: Exposing Print-Culture Bias in Departmental Tenure Documents.” ADE Bulletin, No. 145. http://susandelagrange.com/cccc/Lee&Selfe.pdf

Rhee, Jennifer. 2012. “More Ammo: Digital Scholarship and Activity in Tenure and Promotion.”
http://www.hastac.org/blogs/jrhee/2012/06/12/more-ammo-digital-scholarship-and-activity-tenure-and-promotion

Richardson, James. “Redefining the Standards of Tenure and Promotion for Multi-Media and Digital Arts Faculty.” http://www.academia.edu/1684630/Redefining_the_Standards_of_Tenure_and_Promotion_For_Multimedia_and_Digital_Arts_Faculty

Roy, Michael. 2013. “Is Linking Thinking? Addressing and Assessing Scholarship in the Digital Era.”
http://www.educause.edu/ero/article/linking-thinking-addressing-and-assessing-scholarship-digital-era

Schnell, Eric. 2013. “Digital Scholarship and the Faculty Reward System.” http://library.osu.edu/blogs/digitalscholarship/2013/04/29/digital-scholarship-and-the-faculty-rewards-system/#more-405

Schreibman, Susan, Laura Mandell, and Stephen Olsen. 2011. “Evaluating Digital Scholarship: A Case Study in the Field of Literature.” Digital Humanities 2011. http://dh2011abstracts.stanford.edu/xtf/view?docId=tei/ab-142.xml;query=;brand=default

Starkman, Ruth. 2013. “What Counts?” http://www.insidehighered.com/advice/2013/02/20/essay-issues-related-what-digital-scholarship-counts-tenure-and-promotion

Takats, Sean. 2013. “A Digital Humanities Tenure Case, Part 2: Letters and Committees.”
http://quintessenceofham.org/2013/02/07/a-digital-humanities-tenure-case-part-2-letters-and-committees/

Appendix 9: Literature on evaluation metrics in science

Boardman, P. Craig and Branco L. Ponomariov. 2007. “Reward Systems and NSF University Research Centers: The Impact of Tenure on University Scientists’ Valuation of Applied and Commercially Relevant Research.” The Journal of Higher Education. 78(1).
http://archive.cspo.org/rvm/publications/pubs_docs/78[1].1BoardmanPonomariov.pdf

Bollen, Johan, et al. 2009. “A principal component analysis of 39 scientific impact measures.”
Abstract: http://arxiv.org/abs/0902.2183v1
Full text: http://arxiv.org/pdf/0902.2183v1.pdf

“…We performed a principal component analysis of the rankings produced by 39 existing and proposed measures of scholarly impact that were calculated on the basis of both citation and usage log data. Our results indicate that the notion of scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some measures are more suitable than others. The commonly used citation Impact Factor is not positioned at the core of this construct, but at its periphery, and should thus be used with caution…”

Cacioppo, John T. 2013. “Metrics of Science.” Association for Psychological Science.
http://www.psychologicalscience.org/index.php/publications/observer/2008/january-08/metrics-of-science.html

Howard, Jennifer. 2013. “Rise of ‘Altmetrics’ Revives Questions About How to Measure Impact of Research.” Chronicle of Higher Education. http://chronicle.com/article/Rise-of-Altmetrics-Revives/139557/

Lozano, G.A., Larivière, V., & Gingras, Y. 2012. “The weakening relationship between the Impact Factor and papers’ citations in the digital age.” Journal of the American Society for Information Science and Technology. 63(11): 2140-2145. doi: 10.1002/asi.22731

NSF Workshop on “Scholarly Evaluation Metrics: Opportunities and Challenges.” 2009. http://informatics.indiana.edu/scholmet09/announcement.html

“… we have seen a rapid expansion of proposed metrics to evaluate scientific impact. This expansion has been driven by interdisciplinary work in web, network and social network science, e.g. citation PageRank, h-index, and various other social network metrics. Second, new data sets such as usage and query data, which represent aspects of scholarly dynamics other than citation, have been investigated as the basis for novel metrics. The COUNTER and MESUR projects are examples in this realm. And, third, an interest in applying Web reputation concepts in the realm of scholarly evaluation has emerged and is generally referred to a Webometrics….”

Piwowar, H. 2013. “Altmetrics: Value all research products.” Nature 493: 159. doi: 10.1038/493159a.

“Science Metrics.” 2010. Nature. http://www.nature.com/news/specials/metrics/index.html

Appendix 10:  Additional articles related to digital scholarship and faculty evaluation

Arbaugh, J. Ben. 2009. “Digital Scholarship in the Tenure, Promotion, and Review Process.” Academy of Management Learning and Education 8, no. 3: 460-462.

Borgman, Christine L. 2008. “Supporting the ‘Scholarship’ in E-Scholarship.” Educause Review 43, no. 6: 32-33. http://www.educause.edu/ero/article/supporting-%E2%80%9Cscholarship%E2%80%9D-e-scholarship

Buller, Jeffrey L. 2007. “Improving Documentation for Promotion and Tenure.” Academic Leader 23, no. 11: 7-8.

Burgess, Helen J. and Jeanne Hamming. 2011. “New Media in the Academy: Labor and the Production of Knowledge in Scholarly Multimedia.” Digital Humanities Quarterly 5(3). http://digitalhumanities.org:8080/dhq/vol/5/3/000102/000102.html

Collins, Ellen, Monica E. Bulger, and Eric T. Meyer. 2012. “Discipline matters: Technology use in the humanities.” Arts and Humanities in Higher Education 11: 76. doi: 10.1177/1474022211427421

Cross, Jeanne G. 2008. “Reviewing Digital Scholarship: The Need for Discipline-Based Peer Review.” Journal of Web Librarianship 2, no. 4: 549-566. doi: 10.1080/19322900802473936

Delagrange, Susan H. 2009. “When Revision Is Redesign: Key Questions for Digital Scholarship.” Kairos: A Journal of Rhetoric, Technology, and Pedagogy 14, no. 1.

Diamond, Robert M. and Bronwyn E. Adam. 2004. “Balancing Institutional, Disciplinary and Faculty Priorities with Public and Social Needs: Defining scholarship for the 21st century.” Arts and Humanities in Higher Education 3, no. 1: 29-40. doi: 10.1177/147402204039643

Filetti, Jean S. 2009. “Assessing service in faculty reviews: mentoring faculty and developing transparency.” Mentoring & Tutoring: Partnership in Learning 17, no. 4: 343-352.

Hardré, Patricia, and Michelle Cox. 2009. “Evaluating faculty work: expectations and standards of faculty performance in research universities.” Research Papers in Education 24, no. 4: 383-419.

Howard, Jennifer. 2010. “Hot Type: No Reviews of Digital Scholarship = No Respect.” Chronicle of Higher Education.
http://chronicle.com/article/Hot-Type-No-Reviews-of/65644/

Howard, Jennifer. 2008. “A New Field Study Identifies Eight Major Types of Digital Scholarship.” Chronicle of Higher Education 55, no. 13: A11.

NEH Grantees Experiment with New Kinds of Peer Review. 2010. http://www.neh.gov/divisions/odh/featured-project/neh-grantees-experiment-new-kinds-peer-review

O’Meara, Kerry Ann. 2005. “Encouraging multiple forms of scholarship in faculty reward systems: Does It Make a Difference?” Research in Higher Education 46, no. 5: 479-510.

“This article presents findings from a national study of Chief Academic Officers of 4-year institutions on the impact of policy efforts to encourage multiple forms of scholarship in faculty roles and rewards….CAOs at campuses that initiated reforms reported a greater congruence between faculty priorities and institutional mission, and greater improvement in attention to undergraduate learning over the last decade.”

Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future. 2011. Center for Studies in Higher Education.
Abstract: http://cshe.berkeley.edu/publications/publications.php?id=379
Full text: http://escholarship.org/uc/item/1xv148c8#page-1

“… This report includes (1) an overview of the state of peer review in the Academy at large, (2) a set of recommendations for moving forward, (3) a proposed research agenda to examine in depth the effects of academic status-seeking on the entire academic enterprise… The document explores, in particular, the tightly intertwined phenomena of peer review in publication and academic promotion, the values and associated costs to the Academy of the current system, experimental forms of peer review in various disciplinary areas, the effects of scholarly practices on the publishing system…”

Raben, Joseph. 2007. “Tenure, Promotion and Digital Publication.” Digital Humanities Quarterly 1, no. 1. http://www.digitalhumanities.org/dhq/vol/001/1/000006/000006.html

Zorich, D.M. 2008. A Survey of Digital Humanities Centers in the United States. Washington, D.C.: Council on Library and Information Resources. http://www.clir.org/pubs/reports/pub143/pub143.pdf

Discusses criteria for project selection by digital humanities centers (see Section 4.4.3) as well as means for measuring project success (see Section 4.4.4).

Appendix 11: Additional literature on digital scholarship

The Credibility of Electronic Publishing: A Report to the Humanities and Social Sciences Federation of Canada. 2001. Raymond Siemens, Project Coordinator.

Executive Summary: http://web.viu.ca/hssfc/Final/Summary.htm
Full text: http://web.viu.ca/hssfc/Final/Credibility.htm

Fitzpatrick, Kathleen. 2011. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: NYU Press. http://simon.ups.edu/record=b1807771~S0

Fitzpatrick, Kathleen. 2011. “The Humanities, Done Digitally.” Chronicle of Higher Education 57, no. 36: B26.

Heap, Tania, and Shailey Minocha. 2012. “An Empirically Grounded Framework to Guide Blogging for Digital Scholarship.” Research in Learning Technology 20, 176-188.

Kuhn, Virginia. 2013. “Embrace and Ambivalence.” Academe 99, no. 1: 8-13.

Littlejohn, A., H. Beetham, and L. McGill. 2012. “Learning at the digital frontier: a review of digital literacies in theory and practice.” Journal of Computer Assisted Learning 28, no. 6: 547-556.

Losoff, Barbara, and Harry E. Pence. 2010. “Digital Scholarship and Open Access.” Journal of Educational Technology Systems 38, no. 2: 95-101.

Magnan, Sally Sieloff. 2007. “Commentary: The Promise of Digital Scholarship in SLA Research and Language Pedagogy.” Language Learning & Technology 11, no. 3: 152-155.

Maron, Nancy L. and K. Kirby Smith. 2008. Current Models of Digital Scholarly Communication: Results of an Investigation Conducted by Ithaka for the Association of Research Libraries. Washington D.C.: Association of Research Libraries.

Pannapacker, William. 2013. “Stop Calling It ‘Digital Humanities’.” Chronicle of Higher Education 59, no. 24: A21-A22.

Pannapacker, William. 2012. “No DH, No Interview.” Chronicle of Higher Education. Vol. 58, Issue 42: A25-A26.

Pannapacker, William. 2011. “Big-Tent Digital Humanities: a View From the Edge, Part 2.” Chronicle of Higher Education 58, no. 5: A32.

Prescott, Andrew. 2012. “Consumers, creators or commentators?: Problems of audience and mission in the digital humanities.” Arts and Humanities in Higher Education 11: 61. doi: 10.1177/1474022211428215.

Scanlon, Eileen. 2012. “Digital Futures: Changes in Scholarship, Open Educational Resources and the Inevitability of Interdisciplinarity.” Arts and Humanities in Higher Education: An International Journal of Theory, Research and Practice 11, no. 1-2: 177-184.

Seaman, John T., and Margaret B. W. Graham. 2012. “Sustainability and the Scholarly Enterprise.” Journal of Scholarly Publishing 43, no. 3: 257-293.

Svensson, Patrik. 2012. “The digital humanities as a humanities project.” Arts and Humanities in Higher Education 11: 42; originally published online 1 December 2011. doi: 10.1177/1474022211427367.

Veletsianos, G. 2012. “Higher education scholars’ participation and practices on Twitter.” Journal of Computer Assisted Learning 28, no. 4: 336-349.

Veletsianos, George, and Royce Kimmons. 2012. “Networked Participatory Scholarship: Emergent techno-cultural pressures toward open and digital scholarship in online networks.” Computers & Education 58, no. 2: 766-774.

“Working Together or Apart: Promoting the Next Generation of Digital Scholarship.” CLIR Publication No. 145. 2009. Washington, D.C.: Council on Library and Information Resources and National Endowment for the Humanities. http://www.clir.org/pubs/reports/pub145/pub145.pdf

[1] For example, some know well the impact of Ernest Boyer’s 1990 Scholarship Reconsidered: Priorities of the Professoriate, and the ways in which his definitions of the scholarship of discovery, integration, application, and teaching have been debated and integrated into evaluation guidelines across the academy. For a concise summary, see http://en.wikipedia.org/wiki/Ernest_L._Boyer.

[2] See, for example, the Clearinghouse and National Review Board for the Scholarship of Engagement at http://schoe.coe.uga.edu/evaluation/evaluation_criteria.html.

[3] I served as Director of Forensics, leading and coaching the college’s intercollegiate speech and debate program. We wrote and used departmental guidelines based on resources for forensics educators provided by the American Forensic Association; contact the author for a copy of current guidelines.

[4] The department was then a combined Communication and Theatre Arts program. For an example of evaluation guidelines for the distinct types of teaching, research, and service involved in multiple positions in theatre, see Association for Theatre in Higher Education materials http://www.athe.org/?page=tp_guide&terms=%22evaluation+and+guidelines%22.

[5] Michael Jensen, “The New Metrics of Scholarly Authority,” The Chronicle Review – The Chronicle of Higher Education, June 15, 2007.

[6] William G. Thomas III, “The Promise of Digital Humanities and the Future of the Liberal Arts,” http://railroads.unl.edu/blog/?p=1094, posted October 26, 2013.

[7] http://en.wikipedia.org/wiki/Digital_scholarship.

[8] http://en.wikipedia.org/wiki/The_Digital_Humanities.

[9] See for example: John T. Cacioppo, “Metrics of Science,” Association for Psychological Science, 2013, http://www.psychologicalscience.org/index.php/publications/observer/2008/january-08/metrics-of-science.html; a 2009 National Science Foundation workshop on scholarly evaluation metrics, http://informatics.indiana.edu/scholmet09/announcement.html; and a 2010 special issue of the journal Nature, http://www.nature.com/news/specials/metrics/index.html.

[10] See, for example, Alison Byerly, “Everything Old is New Again: The Digital Past and the Humanistic Future,” Modern Language Association Conference, January 2012, and William Pannapacker, “Stop Calling It ‘Digital Humanities’,” Chronicle of Higher Education 59, no. 24: A21-22.

[11] Mark Sample, “Digital Humanities at MLA 2014,” at http://www.samplereality.com/2013/09/19/digital-humanities-at-MLA-2014/.

[12] THATCamp (The Humanities And Technology Camp) is a user-generated unconference for technologists and humanities professionals, including university and college faculty, librarians and archivists, and museum staff; see http://en.wikipedia.org/wiki/THATCamp or www.thatcamp.org.

[13] The author recognizes that faculty evaluation guidelines may not be visible beyond a campus intranet, and so both welcomes correction of these data and acknowledges the need for additional survey work.

[14] Sheila Cavanagh, “Living in a Digital World: Rethinking Peer Review, Collaboration, and Open Access,” Journal of Digital Humanities, Vol. 1, No. 4 (Fall 2012), www.journalofdigitalhumanities.org.

[15] Eric Schnell, “Digital Scholarship and the Faculty Rewards System,” www.library.osu.edu, April 29, 2013.

[16] Two additional sets of campus guidelines from a decade prior are: TLT Group. Sample guidelines: Duquesne University. 1996. http://www.tltgroup.org/resources/rduqten.html

and University of Virginia. Evaluating Digital Scholarship, Promotion & Tenure Cases. 2001. http://artsandsciences.virginia.edu/dean/facultyemployment/evaluating_digital_scholarship.html

[17] For a paper by University of Maine faculty members that “argues for redefining evaluation criteria for faculty working in new media research and makes specific recommendations for promotion and tenure committees in U.S. universities,” see: Ippolito, J., et al. 2009. “New Criteria for New Media.” Leonardo, Vol. 42, No. 1, pp. 71-75. http://www.mitpressjournals.org/doi/pdf/10.1162/leon.2009.42.1.71

[18] “Closing the Evaluation Gap,” Journal of Digital Humanities, Vol. 1, No. 4, Fall 2012. http://journalofdigitalhumanities.org/1-4/

[19] See http://ceball.com/research/tenure-letter/

[20] See http://journalofdigitalhumanities.org/1-4/explaining-digital-humanities-in-promotion-documents-by-katherine-harris/

[21] Sydni Dunn, “Digital Humanists: If You Want Tenure, Do Double the Work,” Chronicle of Higher Education, January 6, 2014. https://chroniclevitae.com/news/249-digital-humanists-if-you-want-tenure-do-double-the-work/

[22] Ayers, Edward L. “Does Digital Scholarship Have a Future?” Educause Review. July/August 2013, pp. 24-34. http://www.educause.edu/ero/article/does-digital-scholarship-have-future

[23] Several of the guidelines reference MLA as an important influence.

[24] Written by the author based on other addenda in use in campus evaluations; a model for digital scholarship has not yet been used or adopted at Puget Sound.