Cyberinfrastructure For Us All: An Introduction to Cyberinfrastructure and the Liberal Arts

This is going to be big. According to Arden Bement, Director of the National Science Foundation, the Cyberinfrastructure Revolution that is upon us “is expected to usher in a technological age that dwarfs everything we have yet experienced in its sheer scope and power.”[1]

With a trajectory shooting from the solitary performance of legendary room-size machines (with less computing power than today’s handhelds) to the complex interactions within a pulsing infrastructure of many-layered, parallel and intersecting networks, “computing” continues to develop exponentially. But in fact, as David Gelernter has put it, “the real topic in computing is the Cybersphere and the cyberstructures within it, not the computers we use as telescopes and tuners.”[2]

We are currently in the middle of the second big opportunity we’ve had to collectively take stock of our computing capabilities, assessing social, intellectual, economic, and industrial requirements, envisioning the future, and calling for coordinated planning across agencies and sectors. The early 1990s was the first such period. As the technical components of the Web came together in Geneva, Senator Al Gore’s High Performance Computing Act of 1991 led to the creation of the National Research and Education Network and proposals for a “National Information Infrastructure” (NII). These led in turn to funding structures that enabled the construction of hardware and software, of transmission lines and switches, and of a host of physical, connectible devices and interactive services that gave rise to the Internet we know today.

Just as the NII discussions had a galvanizing effect on building those earlier networks, the National Science Foundation’s 2003 report on Revolutionizing Science and Engineering Through Cyberinfrastructure is having a similar effect today. The product of a more sophisticated understanding of our civilization’s dependence on computer networking–a dense, multi-layered cyberinfrastructure that goes beyond switches and technical standards–the NSF report calls for a massive set of new investments (public and private), for leadership from many quarters, for changing professional practices, and for necessary institutional and organizational changes to match the opportunities provided by the tremendous recent advances in computing and networking. That report, often referred to as the Atkins report (justifiably named after Dan Atkins, the visionary chair of the NSF Blue-Ribbon Advisory Panel on Cyberinfrastructure), inspired no less than 27 related reports on cyberinfrastructure and its impacts on different sectors.[3]

These reports have essentially laid out the territory for how best to harness the power of distributed, computer-assisted collaborative production. They forcefully and formally call attention to the shift in economic and social production from a classic industrial base to a networked information base. Interestingly, this time around, the reports not only acknowledge but highlight humanistic values and the role of the arts, humanities and social sciences (“the liberal arts”) in a way that the documents of the National Information Infrastructure did not. At the heart of NSF’s mission to build the most advanced capacity for scientific and engineering research is the emphasis that it be “human-centered.”[4] This is an invitation for the liberal arts to contribute to the design and construction of cyberinfrastructure (CI).

Most significant of those 27 reports for the liberal arts community is Our Cultural Commonwealth, the 2006 report of the Commission on Cyberinfrastructure for the Humanities and Social Sciences, created by the American Council of Learned Societies (ACLS). The report underscores the value of designing an environment that cultivates the richness and diversity of human experience, cultures and languages, using the strengths of this community: “clarity of expression, the ability to uncover meaning, the experience of organizing knowledge and above all a consciousness of values.”[5] It reminds us that the founding legislation of the NEH asserts that, parallel to the core activities of the sciences, there needs to be a healthy capacity, provided by humanities disciplines, to achieve “a better understanding of the past, a better analysis of the present and a better view of the future.” As we come to understand the power of software tools to parse massive amounts of data, and the potential of collaborative expertise to wield those tools and articulate the results, we need to keep emphasizing the place of individual and collective creative imagination.

In the wake of these reports, as the term “cyberinfrastructure” gains currency, as initiatives are born and decisions made, this seemed a good moment for Academic Commons to capture a range of perspectives from scholars, scientists, information technologists and administrators on the challenges and opportunities CI presents for the liberal arts and liberal arts colleges. What difference will cyberinfrastructure make and how should we prepare?

How do we get there from here? Reviewing Our Cultural Commonwealth, art historian Gary Wells notes some key challenges. First, who is to pay for some of the necessary transformations, and how? Budget, especially for technology, has always been a big issue for a community which, in Wells’s words, “has had to make do with inadequate tools, incompatible standards, tiny budgets and uninterested leaders.” There’s a gap between what is possible and what is available to faculty right now. How do we effectively make the case for attention to CI among the other competing demands on a limited budget? How can the budget be expanded, especially when there are strong calls to make CI both a means for greater collaboration within and among academic disciplines and a route out to the general public? Who will lead this call to arms?

While institutional response and organizational change are called for, classics scholar and Georgetown University Provost James O’Donnell, a bold yet pragmatic voice for envisioning change, affirms that change will have to come from the faculty, who have been mostly quite complacent about the future of the Web. Humanists, for the most part, are changing their practices incrementally through the benefits of email and the Web, but the compelling vision that will inspire faculty to develop a new kind of scholarship is still missing, despite the individual accomplishments of a notable few.[6]

Cyberinfrastructure draws attention to another significant challenge to academic liberal arts culture: in a word, collaboration. While that culture is created through scholarly communication–journals, conferences, teaching, the activity of scholarly societies and the continuing evolution of “disciplines”–much of the daily activity of the humanities is rooted in the assumption that humanities research and publication are essentially individual rather than collaborative activities. Will CI bring a revolution in the degree of real and active collaboration in research and the presentation/publication of the results?

In confronting this thorny issue, Sayeed Choudhury and colleague Timothy Stinson step back and take a long-term view. Perhaps scientists were not always such good collaborators. Perhaps there’s a cycle to the culture and practice of disciplines as they evolve. With tongue slightly in cheek, looking backward as well as forward, they make a modest proposal for a new paradigm for humanities research.

Computer scientist Michael Lesk has had a long interest in bridging the Two Cultures and in building digital libraries. While at the NSF, he spearheaded the development of the Digital Libraries Initiative (1993-1999), which funded a number of advanced humanities projects.[7] Observing a new paradigm at work in the sciences, where direct observation is often replaced by consulting results posted in massive data repositories like the Sloan Digital Sky Survey, the Protein Data Bank or GenBank, he turns to the humanities and sees little progress beyond the digitizing of material. But while waiting for new creative uses of what digitized material there is, Lesk underscores the significant economic, legal, ethical and political problems that need resolution. Citing just one, he notes the great confusion that remains among all players about which economic models should apply: who pays for what, when and how?

But again, how do we begin? John Unsworth, chair of the ACLS Commission and now well-versed in defining and describing CI (you’ll enjoy his culinary analogies in his discussion with Kevin Guthrie), sees construction of a humanities cyberinfrastructure as necessarily incremental.[8] The first wave is the fundamental (but still difficult) task of building the digital library: bringing together representations of the full array of cultural heritage materials in as interoperable, usable and sustainable a digital form as possible. This is ‘content as infrastructure.’

Different disciplines are doing this with different degrees of success. Aided now by the operations of Google, the Open Content Alliance [see a profile in this issue], the Internet Archive and others, our libraries and archives have made available a wide panoply of materials in digital form: certainly the core texts of Western history and culture, and a considerable array of material from the West and other cultures in other media. The Getty’s Kenneth Hamma, however, argues here that, despite the images that are available in some form, many museums are holding a lot of cultural heritage material hostage. Even public domain work is still kept under digital lock and key by gatekeepers who worry about the fate of “their” images once they are released into the digital realm. Millions of well-documented images of objects held by museums (of art, history, natural history), made more easily accessible in very high-definition formats, would have a tremendous impact on all kinds of disciplines, not least on the traditional ‘canon’ of works central to Art History. Along these lines, museum director John Weber writes convincingly here of the potential offered by CI for campus museums such as his own to be radically more relevant and useful for curricula around the globe by transforming museum exhibitions into three-dimensional, interactive and visceral “texts” for study and response.

While even public domain material is proving elusive, material still under copyright is often a nightmare both to find and to use in digital form, as the traditional sense of “fair use” is under siege and many faculty clearly have a lot to learn about copyright law.[9] Elsewhere, John Unsworth has cited intellectual property as “the primary data-resource-constraint in the humanities” (paralleling privacy rights as the “primary data-resource-constraint in the social sciences”). Believing the solutions to be partly technical, Unsworth sees them as the “primary ‘cyberinfrastructure’ research agenda for the humanities and social sciences.”[10] Michael Lesk underscores this message in his essay in this issue, reporting that much of the cyberinfrastructure-related discussion in the humanities is not so much “about how to manage the data or what to do with it, but what you are allowed to do with it.” Some combination of technical, social and legal answers is surely called for here.

But all of this, as Lesk reiterates, is just the beginning. Only a comparative handful of scholars in a variety of fields have begun to build new knowledge and experiment with forms of new scholarship. Here, we are fortunate to have noted media scholar Janet Murray open up some other paths in her gripping account of what the process and products of a new cyberscholarship might look like.

Murray’s starting point is that a new medium requires new genres and new strategies for making meaning; she suggests some approaches that will become more practical as the Semantic Web, sometimes nicknamed Web 3.0,[11] arrives. When software can analyze everything online as if it were in the form of a database, we will have access to tremendously powerful tools that will enable us to conduct “computer-assisted research, not computer-generated meaning.” Such structure will help us “share focus across large archives and retrieve the information in usable chunks and meaningful patterns.” Just as the highly evolved technology of the book (with its segmentation and organization into chapters and sections, with titles, section heads, tables of contents and indices, etc.) allows us greater mastery of information than we had using oral memory, so better established conventions of “segmentation and organization in digital media could give us mastery over more information than we can focus on within the confines of linear media.” Overall, she stresses cyberinfrastructure’s potential as a “facilitator of a vast social process of meaning making” (a more developed collaborative process) rather than focusing on the usual data-mining approach.
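For readers curious about what analyzing online material “as if it were in the form of a database” might look like in practice, the following is a minimal, purely illustrative sketch in Python using the rdflib toolkit. The artwork records, the example.org namespace, and the choice of Dublin Core properties are invented for illustration; they are not drawn from Murray’s essay or from any project discussed in this issue.

# Minimal sketch: once cultural-heritage metadata is published as structured
# (RDF) data, software can query it like a database instead of skimming pages.
# The records below are hypothetical and exist only for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC

EX = Namespace("http://example.org/artworks/")  # hypothetical collection namespace

g = Graph()
g.add((EX.amphora1, DC.title, Literal("Red-figure amphora")))
g.add((EX.amphora1, DC.creator, Literal("Attributed to the Berlin Painter")))
g.add((EX.codex7, DC.title, Literal("Book of Hours fragment")))  # no creator recorded

# A SPARQL query retrieves every work that carries a creator attribution,
# returning "usable chunks and meaningful patterns" rather than raw pages.
results = g.query(
    """
    SELECT ?work ?title ?creator
    WHERE {
        ?work dc:title ?title ;
              dc:creator ?creator .
    }
    """,
    initNs={"dc": DC},
)

for row in results:
    print(f"{row.work}: {row.title} ({row.creator})")

The point of the sketch is simply that such a query asks about properties and relationships rather than keywords, which is the sense in which these tools support “computer-assisted research, not computer-generated meaning.”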

For a closer look at how one discipline might change with access to cyberinfrastructure, we asked three art historians (Guy Hedreen, Amelia Carr, and Dana Leibsohn) to discuss their expectations. How might their practice and their discipline evolve? Their roundtable discussion focuses initially on the critical importance of access to images (the “content infrastructure”) before turning to the responsibility for fostering new forms of production “more interesting than the book.” Ultimately, CI will be useless unless it not only revolutionizes image access and metadata management, but also helps us to think differently about vision and objects: “what kind of image work is the work that matters most?”

Zooming out again to get the big picture beyond any one discipline, I’d like to encourage all readers of this collection to read the recent, groundbreaking report out of a joint NSF/JISC Repositories Workshop on data-driven scholarship. The report, The Future of Scholarly Communication: Building the Infrastructure for Cyberscholarship, defines cyberscholarship (“new forms of research and scholarship that are qualitatively different from traditional ways of using academic publications and research data”), reviews the current state of the art, the content and tools still required, the palpable resistance to the changes necessary for it to take hold, and some of the international organizational issues. It even sketches out a roadmap for establishing an international infrastructure for cyberscholarship by 2015. Reviewing the report, Gregory Crane, one of the workshop participants, zeroes in on the core issue, the first requirement for launching sustainable cyberscholarship: getting a system of institutional repositories for scholarly production in place, working and actively being used by scholars. By the way, two of the papers in this Academic Commons collection (those by Choudhury and Murray) had their roots in position papers delivered at the NSF/JISC Repositories Workshop.

How all this goes down on the college campus is examined here by physicist Francis Starr, speaking from his experience in installing the latest in “cluster computing” at Wesleyan University. While hooking into a network is part of what cyberinfrastructure is about, so is developing one’s own local infrastructure as efficiently as possible. His main theme, though, is that human expertise (local and distributed) matters as much as installed hardware. This theme is carried further by Todd Kelley in his demonstration of the wisdom of using cyber services that outside organizations can provide. Kelley stresses the balance to be achieved among the human, organizational and technological components when implementing such services.

Finally, chemist Matthew Coté beautifully illustrates how cyberinfrastructure might become visible on a small liberal arts campus through the example of one small but powerful new building: the Bates College Imaging and Computing Center. Designed specifically to bring the arts and sciences together and to exemplify the potency of the liberal arts ideal (as codified by Bates’s recently adopted General Education Program), the building should prove to be one of the most creative and plugged-in, cyberinfrastructure-ready places on campus. Its almost iconic organization into lab, gallery/lounge and classroom links group research and learning, individual creativity and discovery, and the key role of open social space. Artists, humanists, and scientists are equally welcome in this space, where equipment is open to all (with training programs and nearby expertise to help in using it). As Professor Coté puts it, “Its array of equipment and instrumentation, and its extensive computer networking, make [the Imaging Center] the campus hub for collaborative and interdisciplinary projects, especially those that are computationally intensive, apply visualization techniques, or include graphical or image-based components.”

Where do we go from here? The focus of these pieces has been on institutions and disciplines. Cyberinfrastructure will bring significant changes to both, and their evolution is intertwined. Cyberinfrastructure is not a one-way street, however, but a massive intersection. Just as Web 2.0 has provided a more user-oriented network in which communities create value from multiple, individual contributions, so the future limned here by our guests is one that will depend not only on large supercomputing centers and government agencies but on the changing practices of multitudes of individuals, all of whom are at work designing this new environment.

NOTES

[1] Arden L. Bement, Jr., “Shaping the Cyberinfrastructure Revolution: Designing Cyberinfrastructure for Collaboration and Innovation.” First Monday 12, no. 6 (June 2007). http://firstmonday.org/issues/issue12_6/bement/index.html. Accessed September 26, 2007.

[2] David Gelernter, “The Second Coming–A Manifesto.” The Edge, 2000. http://www.edge.org/3rd_culture/gelernter/gelernter_p1.html. Accessed October 30, 2007.

[3] National Science Foundation Office of Cyberinfrastructure, Cyberinfrastructure Vision for 21st Century Discovery, Sec3:46 (2007): Appendix B, “Representative Reports and Workshops.” http://www.nsf.gov/od/oci/CI_Vision_March07.pdf. Accessed August 8, 2007.

[4] “The mission is for cyberinfrastructure to be human-centered, world-class, supportive of broadened participation in science and engineering, sustainable, and stable but extensible.” Cyberinfrastructure Vision, Sec3:2.

[5] Our Cultural Commonwealth, p. 3.

[6] See for example, Edward Ayers’s questioning article, “Doing Scholarship on the Web: 10 Years of Triumphs and a Disappointment,” Chronicle of Higher Education 50, no. 21 (January 30, 2004) B24-25.

[7] See Michael Lesk, “Perspectives on DLI-2 – Growing the Field.” D-Lib Magazine 5, no. 7/8 (July/August 1999). http://www.dlib.org/dlib/july99/07lesk.html. Accessed October 30, 2007.

[8] For a superb introduction to the issues, see John Unsworth’s address at the 2004 annual meeting of the Research Libraries Group: “Cyberinfrastructure for the Humanities and Social Sciences.” http://www3.isrl.uiuc.edu/~unsworth/Cyberinfrastructure.RLG.html. Accessed October 30, 2007.

[9] See, for example, Renee Hobbs, Peter Jaszi, Pat Aufderheide, The Cost of Copyright Confusion for Media Literacy. Center for Social Media, American University, 2007. http://www.centerforsocialmedia.org/files/pdf/Final_CSM_copyright_report.pdf. Accessed October 31, 2007.

[10] Unsworth, “Cyberinfrastructure for the Humanities and Social Sciences” (see note 8).

[11] The classic document here is Tim Berners-Lee, James Hendler and Ora Lassila, “The Semantic Web.” Scientific American (May 2001). http://www.sciam.com/article.cfm?articleID=00048144-10D2-1C70-84A9809EC588EF21. Accessed October 30, 2007. See Berners-Lee’s recent thoughts in Nigel Shadbolt, Tim Berners-Lee, Wendy Hall, “The Semantic Web Revisited,” IEEE Intelligent Systems 21, no. 3 (May/June 2006): 96-101. http://eprints.ecs.soton.ac.uk/12614/01/Semantic_Web_Revisted.pdf. Accessed October 30, 2007.