Cyberinfrastructure as Cognitive Scaffolding: The Role of Genre Creation in Knowledge Making

Information infrastructure is a network of cultural artifacts and practices.[1] A database is not merely a technical construct; it represents a set of values and it also shapes what we see and how we see it. Every time we name something and itemize its attributes, we make some things visible and others invisible. We sometimes think of infrastructure, like computer networks, as outside of culture. But pathways, whether made of stone, optical fiber, or radio waves, are built because of cultural connections. How they are built reflects the traditions and values as well as the technical skills of their creators. Infrastructure in turn shapes culture. Making some information hard to obtain creates a need for an expert class. Counting or not counting something changes the way it can be used. Increasingly it is the digital infrastructure that shapes our access to information, and we are just beginning to understand how the pathways and containers and practices we build in cyberspace shape knowledge itself.

The advent of the computer has made possible an event that has happened only a few times in human history: the creation of a new medium of representation. The name “computer” fails to adequately convey the power of this medium, since a machine that executes procedures and processes vast quantities of symbolic representation is not merely a bigger calculator. It is a symbol processor, a transmitter of meaningful cultural codes. The advent of the machinery of computing is similar to that of the movie camera or the TV broadcast. The technical substrate is necessary but not sufficient for the process of meaning-making, which also depends on the related cultural process of inventing the medium. Cyberinfrastructure is an evolving creation. It is both technical and cultural, constrained and empowered by human skills and traditions, and it possesses the same power to shape and expand the knowledge base that the print infrastructure has maintained for the past 500 years, and that the broadcast and moving image infrastructures have for the past 100 years.[2]

It is useful to think of a medium as having three characteristics: inscription, transmission, and symbolic representation. Inscription concerns the physical properties (the mark in the clay, the ink on the page, the current through the silicon); transmission concerns the logical codes (alphabet, ASCII, HTML). Representation is the trickiest part because it is a combination of logical codes like the alphabet and indeterminate cultural codes like words themselves. A logical code can be mechanically deciphered; it always means the same thing. But a cultural code is arbitrary, shifting, and context dependent. There is no reason why the particular sounds of the word “pottery” should refer to ceramics, or why “Pottery Barn Kids” should refer to a place with no ceramics, no barn, and usually no kids as well. Representation relies on the negotiation of conventions of interpretation, on symbolic systems that acquire meaning through logical systems and through familiarity with customs. These conventions tell us how to interpret a store name or a street address or a URL. They set up the framework for receiving new information as variations on familiar patterns. When these conventions coalesce into complex, stable, widely recognizable units, we have genres.

Genre creation is how we use a new inscription and transmission medium to get smarter. For example, the printing press allowed us to put words on a page in a standardized manner and to distribute the words in multiple portable copies. This is the technical substrate. But the scientific treatise did not appear until two centuries later, because it required the invention of new representational conventions such as the learned essay, the scientific diagram, and the specification of an experiment, and new social conventions such as the dispassionate tone of argumentation and the presentation of evidence based on careful observation.[3] The spread of reading also furthered habits of humanist introspection and fostered sustained, consistent storytelling, leading to the development of the genre conventions of the confessional autobiography and the psychologically detailed novel.

Just as the novel developed different expressive conventions from the prose narratives of earlier eras, scientific essays developed different expressive structures from the philosophical essays and practical craft diagrams that preceded them. In fact, all of our familiar representational genres from scholarly journal articles to TV sitcoms make sense to us because they draw upon centuries of evolving representational conventions, from footnotes to laugh tracks. Each of these conventions contributes to a familiar template that allows us to take in new information more efficiently because it comes to us as a variant on an intuitively recognized pattern. In terms of signal processing, genre conventions form the predictable part of the signal, allowing us to perceive the meaningful deviations as carrying the information. In cognitive terms we can think of representational genres as schemas–conceptual or perceptual frameworks that speed up our mental processing by allowing us to fit new stimuli into known patterns. In computational terms, we can think of genres as frames, recursive processing strategies for building up complexity by encapsulating pattern within pattern.

Genre creation links strategies of cognition with strategies of representation. For example, the titled chapter is a media convention that makes it easier to follow the sustained argument of a book by chunking a longer argument into memorable sections. The invention of the book involved standardizing the convention of naming chapters within the genre of persuasive or explanatory writing. Books increased our ability to focus our individual and shared attention, allowing us to sustain and follow an argument too long to state in oral form and to elaborate and examine an argument together over time and across distances. The proliferation of book-based discourse led to the growth of domains of systematic knowledge, represented by shelves of books in standardized arrangements that collocate works on the same topic. We can understand one another across time and place when we refer to a domain of investigation because we have the shelf full of books to refer to.[4]

The invention of a genre, therefore, is the elaboration of a cognitive scaffold for shared knowledge creation. When we build up conventions of representation like the labeling of scientific diagrams or the visual and auditory cues that indicate a flashback in a movie, we are extending the joint attentional scene that is the basis of all human culture: we are defining a more expressive symbolic system for synchronizing and sharing our thoughts.[5] The work of the designer in inventing and elaborating genre conventions allows us to focus our attention together; the invention of more coherent, expressive media genres goes hand in hand with the grasping and sharing of more complex ideas about the world.

Genre and Knowledge Creation in Digital Media
The digital medium is a capacious inscription technology with a wealth of formatting conventions and logical codes for reproducing many kinds of legacy documents, but its native genre conventions are still inadequate to allow us to focus our attention appropriately and to exploit the new procedural and participatory affordances of the medium.

An electronic spreadsheet is a good example of a legacy genre–the fixed paper spreadsheet–that has changed with translation into the digital medium, gaining processing power and manipulability so that it represents not just a single financial situation but a range of possibilities within a common set of constraints. In order to move the paper spreadsheet into electronic form, conventions had to be invented for writing formulas and titling columns and rows. Lack of competition has fossilized these conventions, which may be refined into greater ease of use if competing products appear. But the basic genre of the electronic spreadsheet is established and it scaffolds our understanding of budgeting by allowing us to change entries in a single cell and see the results propagate as other cells change. An electronic spreadsheet is therefore a good example of a new genre with cognitive benefits. It does not merely make it easier for us to add up columns of numbers; it offers us a different conceptualization of a budget. But inventing the electronic equivalent of a paper spreadsheet was mostly a matter of implementing mathematical functions, which are purely logical codes. It is easier to figure out than the electronic extension of the scientific journal article, the documentary film, or the novel, which are far more culturally dependent genres.
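To make that cognitive shift concrete, here is a minimal sketch in Python of the propagation the spreadsheet genre affords; the cell names and figures are hypothetical, invented for illustration rather than drawn from any actual product.

```python
# A toy model of spreadsheet-style propagation, for illustration only;
# the cell names and numbers are hypothetical.

class Sheet:
    def __init__(self):
        self.values = {}    # cell name -> literal number
        self.formulas = {}  # cell name -> function of the sheet

    def set_value(self, cell, number):
        self.values[cell] = number

    def set_formula(self, cell, func):
        self.formulas[cell] = func

    def get(self, cell):
        # Formulas are recomputed on demand, so changing any input cell
        # "propagates" to every cell that depends on it.
        if cell in self.formulas:
            return self.formulas[cell](self)
        return self.values.get(cell, 0)

budget = Sheet()
budget.set_value("rent", 1200)
budget.set_value("food", 400)
budget.set_formula("total", lambda s: s.get("rent") + s.get("food"))

print(budget.get("total"))     # 1600
budget.set_value("food", 450)  # change one cell...
print(budget.get("total"))     # ...and the dependent cell reflects it: 1650
```

Even this toy version exhibits the genre's cognitive benefit: the budget is no longer a single set of figures but a system of constraints that can be re-run at will.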

Other representational genres are changing more slowly by a process of experimentation. The World Wide Web is mostly an additive assembly of legacy media with few procedural simulations and explorable models, and many cluttered diagrams and PowerPoint slide shows. Much of the information we receive is in the form of scrolling lists, the oldest form of information organization in written culture. Too often these listings lack adequate filtering and ordering. At the same time the encyclopedic capacity of digital inscription raises our expectations, creating what I have called the “encyclopedic expectation” that everything we seek will be available on demand.[6]

Even at this early stage, however, the computer has already brought us a limited number of new symbolic genres. The most active genre design has been in the development of video games, which exploit the procedural and participatory power of the computer to create novel interaction patterns, new ways of acting upon digital entities and receiving feedback on the efficacy of one’s actions. Will Wright, the inventor of Sim City, the Sims, and other simulation games, has called computer games “prosthetics for the mind.” His simulation worlds are perhaps the most successful implementations of the affordances that Seymour Papert first pointed out in Mindstorms[7]: the ability of computers to create worlds in which we can ask “what if” questions, in which we can instantiate rule systems and invite exploratory learning. Sim City works as a resource allocation system in which we make decisions about zoning and power plants and watch a city grow according to the parameters we have chosen. Sim City is a toy but it uses some of the assumptions of professional urban planning simulations. Similar simulation systems are in use in scientific and social science contexts, and they are increasingly used to simulate emergent phenomena that could not be captured in any other way. These are specialized tools and they do not necessarily work across disciplines or related domains.

Tim Berners-Lee[8] is the foremost advocate of a more powerful procedural strategy for meaning-making. For Berners-Lee, much of human knowledge is awaiting translation into a logical code structure, a structure that is open to change but requires social compacts within and across disciplines to succeed. His vision of the semantic web would give shared resources on web pages the coherent form of databases and would allow multiple procedures to be applied to these standardized data. The semantic web is the most ambitious vision of the development of large data resources into new knowledge. But in his recent reappraisal of the idea, Berners-Lee laments the reluctance of knowledge communities to come together to establish interoperability in all but the most trivial exchanges and, more significantly, to do the difficult work of inventing common tagging vocabularies based on common knowledge representations (ontologies). To Berners-Lee, the hard sciences are a good example of domains in which there is a desire to establish common terminologies and knowledge representations. But even in the sciences there has been reluctance to devote resources to standardization and resistance to imposing conformity. It would seem even less likely that humanists could be persuaded to come up with common representations of concepts.
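To suggest in miniature what such standardized data might enable, here is a sketch of subject-predicate-object statements expressed as plain Python tuples; the predicate names stand in for a hypothetical shared vocabulary, and real semantic web data would of course be encoded in RDF against an agreed-upon ontology.

```python
# Illustrative sketch only: simple subject-predicate-object statements using
# a made-up shared vocabulary. Real semantic web data would be RDF governed
# by a negotiated ontology.

triples = [
    # Contributed by one archive...
    ("Casablanca", "directedBy", "Michael Curtiz"),
    ("Casablanca", "releasedIn", "1942"),
    # ...and by another; because both use the same predicate names,
    # the statements can be queried together.
    ("Citizen Kane", "directedBy", "Orson Welles"),
    ("Citizen Kane", "releasedIn", "1941"),
]

def ask(subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None matches anything."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(ask(predicate="directedBy"))  # every director statement, from any source
print(ask(subject="Casablanca"))    # everything asserted about one film
```

The hard part, as Berners-Lee acknowledges, is not the querying but the social work of agreeing on the predicate names.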

In fact, the thrust of humanist involvement in the digital arena could be characterized as the antithesis of the semantic web, as a “rhizome of indeterminacy,” epitomized by the title of a widely celebrated artistic work, Talan Memmott’s “Lexia to Perplexia,” whose indeterminate structure is the subject of an admiring monograph by N. Katherine Hayles, one of the leading scholar-critics of electronic literature. Memmott’s text is purposely perplexing in its subverting of common conventions of meaning-making. This lexia, or screen of text, is from the section called “The Process of Attachment”:

The inconstancy of location is transparent to the I-terminal
as its focus is at the screen rather than the origin of the
image. It is the illusory object at the screen that is of interest
to the human enactor of the process — the ideo.satisfractile
nature of the FACE, an inverted face like the inside of a
mask, from the inside out to the screen is this same
<HEAD>[FACE]<BODY>, <BODY> FACE </BODY>
rendered now as sup|posed other.
Cyborganization and its Dys|Content(s)
Sign.mud.Fraud [9]

Hayles finds significance in the disorienting process by which Memmott’s hypertext dissolves meaning:

To the extent the user enters the imaginative world of this environment and is structured by her interactions with it, she also becomes a simulation, an informational pattern circulating through the global network that counts as the computational version of human community.[10]

For Hayles, the genre creates the reader; the reader fuses with the cyberinfrastructure. Her delight in the frustrations of Memmott’s witty instantiation of perplexity, in the elusiveness of meaning within his text, contrasts markedly with Berners-Lee’s pursuit of a more coherent, inclusive logical code. With the HTML link as the crucial enabling technology, there are in fact many divergent web genres currently in the process of formation, and multiple expressive communities engaged in inventing them.

Humanists are likely to resist attempts at creating a common ontology as a resurrection of the totalizing ideologies and culturally imperialistic hegemonies that the work of the late twentieth century exposed and repudiated. In fact, the early embrace of hypertext by literary scholars was based in part upon its power to subvert the organizational power of the book. However, there has also been a consistent strand of celebration of the possibility of hypertextual and hypermedia environments to bring together large bodies of information and to create multivocal information structures.[11] In fact, the most promising area of overlap between the scientific pursuit of self-organizing data on the one hand and the humanist pursuit of procedurally generated ambiguity on the other is the mutual affirmation of gathering multiple points of view and multiple kinds of information in a common framework. For scientists, the process is seen as the accumulation of a common dataset. This is the next step in the process of shared witness to experimentation: the amassing of a common pool of information that is so well collected that its various discrete data points form meaningful patterns. Humanists are pursuing common archives as well in the interest of preserving cultural heritage, such as the collection of all surviving papyrus texts, or the recording of social history by the StoryCorps Project of the Library of Congress. In every discipline the encyclopedic nature of the digital medium is leading to a massive archiving effort, and the aggregation of these archives is motivating the creation of common means of access and common formats of contribution.

There is also a large community of practice growing up among lay users of electronic resources who are creating and navigating vast archives of media. Current strategies for sense-making of large data sources have had limited success but they point to the kinds of strategies that, over time, hold promise for creating a richer shared representation. Search engines still return much unnecessary information and miss key information; folksonomies provide uneven tagging of large resources. But to the extent that Google and Flickr and del.icio.us are useful to us, it is because they leverage the efforts of many distributed annotators. Google owes its success to a key insight that the syntax of links is itself semantic. By using anchor text–the words that are used as clickable links to other pages–as a collectively created index to web content, the inventors of the most successful search engine of the early twenty-first century captured a more reliable representation of what is most important about the linked pages than other technologists did by relying on full-text search of the pages themselves. Google can be thought of as an exemplar of the evolving genre of the search engine portal, and its presentation of listings with brief excerpts and its use of marginal advertising are important conventions of the genre.
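A deliberately simplified sketch, with hypothetical pages and links, suggests how the words others use to link to a page can serve as a collectively created index; it is not Google's actual algorithm, which also weighs the structure of the link graph itself.

```python
# Toy illustration of indexing pages by the anchor text others use to link
# to them. The pages and anchors below are hypothetical; this is not
# Google's ranking algorithm.

from collections import defaultdict

# (source page, anchor text, target page)
links = [
    ("blog.example/post1", "best pottery tutorials", "pottery.example"),
    ("forum.example/thread9", "pottery wheel basics", "pottery.example"),
    ("news.example/story3", "kids furniture store", "potterybarnkids.example"),
]

anchor_index = defaultdict(set)
for _source, anchor, target in links:
    for word in anchor.lower().split():
        anchor_index[word].add(target)

def search(query):
    """Pages whose incoming anchor text contains every word of the query."""
    words = query.lower().split()
    results = set(anchor_index.get(words[0], set()))
    for word in words[1:]:
        results &= anchor_index.get(word, set())
    return results

print(search("pottery"))  # pages that other authors describe as being about pottery
```

The index is built not from what a page says about itself but from what a distributed community of annotators says about it, which is the insight described above.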

Although we can now collect more data than anyone could possibly hope to examine, that does not mean that the answer to understanding the data lies in teaching the computer new algorithmic tricks so it can do the reading for us. Too much of what we need to understand can be grasped only through the focused attention of a reliable human collaborator. What we need is computer-assisted research, not computer-generated meaning. We need structures that will allow us to share focus across large archives and to retrieve information in usable chunks and meaningful patterns. Just as the segmentation and organization of printed books allows us to master more information than we could through oral memory, better conventions of segmentation and organization in digital media could give us mastery over more information than we can focus on within the confines of linear media.

I would suggest that the best way to create knowledge that exploits the vast new resources now migrating into digital form, or increasingly being “born digital,” may not be through “data-driven” automation, since much of human meaning will always be contextual. The meaning of any data set, no matter how vast or well-organized, cannot be logically inferred from the data alone because we have yet to find a way to encode the full experiential context of the data. The data itself may be treated as a logical code, but the kinds of information that have been captured, the selection of what to pay attention to and what to ignore, and the framing of questions to ask of the data, as well as the inevitable omission of other questions, all depend on values, assumptions, and the wider cultural context.

Numbers and other logical codes are always lagging behind our understanding of the world as expressed in the cultural code of language; and language is always lagging behind what we take in as experience. The computer alone can foreground unnoticed relationships among logical units. But it cannot replace the new conceptualizations that come from experience through language. Instead of mining for knowledge in digital archives, we should see the computer as a facilitator of a vast social process of meaning-making.

The new participatory web genres associated with Web 2.0, such as media sharing sites, online social networks, and contributory information resources, are steps in this direction. Wikipedia provides a useful example of a self-organizing information structure, dependent on coordinated distributed efforts rather than automated knowledge creation. Its limitation is that the coverage of topics and the quality of entries are uneven. But it owes its usefulness to the successful definition of the genre of a Wikipedia entry, including guidelines for tone and attitude as well as guidelines for structure. Most of all, Wikipedia is successful because it exploits pre-existing disciplinary taxonomies and media conventions. Other Web 2.0 genres such as media sharing sites are considerably less well organized than Wikipedia. Most rely on the voluntary collective elaboration of a tagging structure, often called a “folksonomy” to differentiate it from the top-down, authority-driven taxonomies of librarians and professional information architects. The best organized sites draw upon existing structures such as music genres, but without such preexisting structures folksonomy sites are full of arbitrary labels, inconsistencies, and redundancies.

New genres of knowledge creation will arise from a combination of all three kinds of efforts exemplified by Berners-Lee, Memmott, and Web 2.0: the top-down logical standards-maker, the self-consciously artistic outlier, and the sloppy but motivated mass users. They will also draw on the most sophisticated design practice and conversation currently taking place, which is coming out of the new genre and new discipline of Game Studies. One of the most important insights of Game Studies scholars concerns the procedural nature of the medium.[12] Computational environments are characterized by the execution of rule systems. Games have been such successful applications of digital media because they draw on a continuous, ancient tradition of rule-making. Games represent the world as a rule-based domain, where actions have predictable consequences. We have a collaborative technology for exploring rule systems in print, but are just beginning to develop a similar technology for exploring rule systems as executable environments.

One of the most promising examples of procedural genre creation is the Simile (Semantic Interoperability of Metadata and Information in unLike Environments) Project of the MIT Libraries, which builds widgets based on Berners-Lee’s semantic web technologies. For example, the project has created a timeline widget that takes in any kind of information and allows for presentation, browsing, and facet-driven sorting. Widgets like these should replace the browser and search engine interface to information, creating something similar to a library or a television channel as a standard format for containing and transmitting information. But the container would no longer be fixed and linear. It would be dynamic and procedural.
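The interaction pattern can be suggested with a minimal sketch: items carrying dates and facet values, offered for filtering by facet and ordering in time. The field names and entries below are hypothetical, and this is a sketch of the pattern rather than the SIMILE timeline widget's actual interface.

```python
# Sketch of facet-driven browsing over dated items. The items and field
# names are invented for illustration; this is not the SIMILE widget API.

from datetime import date

items = [
    {"title": "Shooting script revision", "date": date(1942, 5, 20), "kind": "script"},
    {"title": "Production memo on casting", "date": date(1942, 4, 2), "kind": "memo"},
    {"title": "Scene outtake", "date": date(1942, 6, 11), "kind": "footage"},
]

def facet_values(items, facet):
    """The available values for a facet, so an interface can offer them as filters."""
    return sorted({item[facet] for item in items})

def timeline(items, facet=None, value=None):
    """Items in chronological order, optionally narrowed to one facet value."""
    selected = [i for i in items if facet is None or i[facet] == value]
    return sorted(selected, key=lambda i: i["date"])

print(facet_values(items, "kind"))
for item in timeline(items, facet="kind", value="memo"):
    print(item["date"], item["title"])
```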

For a collectively created knowledge structure to work it will have to include ways of implementing collectively created rule sets as well as collectively created annotations. We need ways of creating simulations that interact with one another and also ways of sharing the task of annotating texts. Part of this process may be the creation of automated tools, but the tools can only implement shared understandings that arise among different communities. We cannot generate the understandings or the rules from the data alone, nor can we leave it to self-organizing open societies to create organization out of many discrete actions. We cannot impose ontologies upon large communities because they require too much investment in social organization and because they are particularly antithetical to the humanist frame of mind. And yet, we keep accumulating media and annotations and commentaries upon them. Are we destined to drown in our own knowledge creation, unable to know anything because we are paying attention to too much?

A modest, concrete beginning: Juxtapositions
So far I have argued that we need more complex representational patterns to take advantage of the potential of the computer for helping us to become smarter and to communicate with one another about more complex understandings of the world. Yet this effort has been stymied by the resistance to imposing common understandings on shared datasets, and the most widely advocated approach–the automated generation of meaning from large data sets–seems unlikely to be accepted by humanists. At the same time the humanist enjoyment of perplexity and the popular delight in posting and annotating is unlikely to produce the equivalent of the encyclopedia or the library shelf of collocated texts. If we assume that there will come a time when we will have more powerfully organized networked information structures that serve the multi-vocal, ambiguity-seeking needs of humanists as well as the conformity and clarity needs of the data-collecting disciplines, how do we go about inventing them?

I would suggest that we focus on the core task of juxtaposition. In the old book-based knowledge culture, we spoke of collocation, of putting like books together on the same shelves. The digital medium poses several challenges to this traditional strategy:

  • too many items to browse, and no common “shelving” system
  • no reliable catalog or index; we cannot get everything that is relevant and only that which is relevant
  • segmentation by book is no longer valid; we have knowledge in multiple formats and in segments of many sizes, and we want intellectual access to content at different levels of granularity and across media
  • the same book can be “shelved” in multiple places since it exists only as bits rather than as paper, raising the expectation that an item will be discoverable under every relevant category

We need to work toward creating new genres that accomplish what we currently accomplish through shelving according to well-developed library classification systems, but that will create these knowledge-based juxtapositions at the right scale and granularity for the giant multi-media archives of the twenty-first century.

Ted Nelson, the visionary technologist who is credited with coining the word “hypertext,” has long pointed to juxtaposition as a key underexploited affordance of digital environments. He finds the current World Wide Web inadequate largely because of its limited ability to allow a user to place one thing beside another, to compare versions side by side, or to bring together related instances of the same object.

Film art is one discipline that offers a particularly appropriate opportunity to shape scholarly discourse in a way that produces new knowledge by supporting juxtapositions that have not been apparent or representable before. A number of projects have explored this area, but copyright restrictions and the formidable legal defenses of the entertainment industry have prevented humanists from coming up with a genre for an electronic edition of a film. One solution is The Casablanca Digital Critical Edition Project[13], a prototype that locates shared resources on a web server available only to those with a legal copy of the film and that brings together a classic American film with the originating play script, shooting script, detailed production reports and memos, and an authoring environment for expert commentary. This prototype model would allow studios to control copyright and would give scholars access to semantically segmented sequences in the film with the same precision of reference we expect to have over print materials. It would also give them the ability to juxtapose auxiliary materials like scripts, memos, outtakes, and commentaries with precise moments in the film.

To do this effectively, however, scholars will need transparent interfaces with well-established conventions for creating and following such juxtapositions. Such interfaces and conventions are examples of the many design elements that go into formulating a new genre. If there are to be digital editions of films, then we will need design solutions at many levels: to protect copyright, to provide access to scholars and film buffs, to provide for many kinds of segmentation by authorized and private users, and to provide for multiple layers of commentary at varying degrees of formality and authority. If we could establish a common format for film study, then we could also start making connections between films.

The first explorations of hypermedia were focused on the simple linking of documents that retained their legacy formats of pages, movie clips, and separate images. The next design effort will focus on the creation of born-digital formats with segmentation and juxtaposition conventions that will lead to the formation of new genres. The Casablanca Digital Critical Edition Project is different from a print or DVD edition of a work of art because it is not just an artifact but an open-ended system, comprising search tools, authoring tools, and display interfaces. It is part of the collective process of re-imagining older knowledge genres like the variorum text, the production archive, the critical edition, and perhaps the scholarly journal. As a critical edition, it is meant to live within the wider information landscape of a complete digital archive of films.

A More Ambitious Approach: Parameterized Narrative Structures
Juxtaposition of semantically segmented multimedia resources is an extension of the structures of argumentation that have always been a crucial part of knowledge creation. Narrative, like argumentation, is a basic rhetorical structure and one that seems to be among the oldest elements of human cognition. As I have argued elsewhere, new narrative structures carry the promise of expressing knowledge about the world that was not expressible or not as easily expressed in linear format. The new digital medium offers the promise of allowing us to create causal chains of events (narratives) with explanatory power that exist not as a single version but as a set of possibilities. Digital formats, such as games and simulations, let us create a world as a set of parameterized possibilities and run through multiple versions of events by changing the parameters and replaying the scenario.

We use the cognitive structure of the replay simulation in many ways already: in scientific models of the earth as an ecosystem, in videogame entertainments with multiple “lives,” in movies like Back to the Future or Groundhog Day, in military training exercises, in stock market models, and in our everyday thinking about how to spend our money or what is happening in our social relationships. Thus far, games have provided the only open-ended means of exploring parameterized situations and so we tend to think of all such frameworks as games. But it is useful to emphasize the parameterized story as a separate genre, with many overlapping features of games, because the function of a story is to explore chains of causation.
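A minimal sketch, using an invented household-budget scenario, shows the shape of such a parameterized replay: the same causal chain run repeatedly under different parameter settings, so that each choice becomes visible as an alternative version of events.

```python
# Illustrative sketch of a parameterized replay. The scenario and the numbers
# are invented; a real simulation would model far richer causal chains.

def savings_scenario(months, income, rent, food, start=0):
    """Replay a simple budget month by month under the given parameters."""
    balance = start
    history = []
    for _ in range(months):
        balance += income - rent - food
        history.append(balance)
    return history

# Re-running the scenario with different rents makes the consequences of each
# choice visible as an alternative version of the same story.
for rent in (900, 1200, 1500):
    print(rent, savings_scenario(months=6, income=2500, rent=rent, food=450))
```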

Parameterized stories, like archives of disparate texts, require semantic segmentation and strategies of juxtaposition to allow us to focus our attention appropriately. These strategies can be present in linear media. For example, in Rosencrantz and Guildenstern Are Dead, the playwright Tom Stoppard gives us signposts, such as a snippet of a famous soliloquy, to let us know where we are in the canonical text of Shakespeare’s Hamlet while we watch the parallel story told from the viewpoint of Hamlet’s foolish school friends, whose unimportant offstage deaths in the original play gain poignancy and existentialist weight in the twentieth-century retelling.

Since digital environments are participatory, juxtaposition can be interactive; the viewer can choose which item to place next to another or which sequence to follow in viewing a narrative. It is the author’s job to present meaningful possibilities, to create opportunities for revealing juxtapositions that are initiated by the viewer.

The design challenges in this emerging, experimental form are similar to those underlying the more practical tasks of film study: how do we keep users aware of the context of each segment while also focusing them on the immediate juxtaposition? How do we segment temporal media so that we can create juxtapositions that help us to grasp something that was out of our cognitive reach in traditional media? How do we allow scholars to build upon one another’s reasoning by bringing relevant information together in the clearest juxtapositions? Since humanistic knowledge is concerned with contextualized, ambiguous verbal and visual artifacts more often than it is with logical datasets, we need our own genres of representation. They will be of use to other disciplines as well, however, since commentary on temporal media, argumentation by citation across media, close analysis of visual objects, and more complex narrative forms will serve analytical discourse in general. When we think of cyberinfrastructure we have to include these discourse and media analysis tools as well as the number crunchers, optical cables, and compression algorithms.

Because media serve to focus our common attention in productive ways, we must exploit all the affordances of this new medium of representation to improve the depth, breadth, and commonality of our focus. Inventing the devices that provide the technical underpinnings for a new medium is often ascribed to a single person like Gutenberg or a single moment like the display of the first film by the Lumière brothers in December 1895. But the invention of a genre, like the movie, is a process of collective discovery, usually without such clear moments of demarcation. The necessary ingredients for a humanities-friendly cyberinfrastructure will be clearer looking backwards than they are looking forwards. They will be hastened, however, by approaching the task of designing humanities projects as a collective project of genre-creation.

Notes
[1] G. C. Bowker and S. L. Star, Sorting Things Out: Classification and its Consequences (Cambridge: MIT Press, 1999).
[2] Inventing the Medium is the name of a manuscript textbook I am writing under contract to MIT Press. Some of the following argument derives from that work in progress.
[3] S. Shapin and S. Schaffer, Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (Princeton: Princeton University Press, 1985).
[4] E. Svenonius, The Intellectual Foundation of Information Organization (Cambridge: MIT Press, 2001).
[5] M. Tomasello, The Cultural Origins of Human Cognition (Cambridge: Harvard University Press, 2001); J. H. Murray, “Toward a Cultural Theory of Gaming: Digital Games and Co-evolution of Media, Mind and Culture,” Popular Communication 4, no. 3 (2006).
[6] J. H. Murray, Hamlet on the Holodeck: The Future of Narrative in Cyberspace (New York: Simon & Schuster/Free Press, 1997).
[7] S. Papert, Mindstorms: Children, Computers, and Powerful Ideas (New York: Basic Books, 1999).
[8] T. Berners-Lee, J. Hendler, et al., “The Semantic Web,” Scientific American (May 2001); T. Berners-Lee, N. Shadbolt, et al., “The Semantic Web Revisited,” IEEE Intelligent Systems 21, no. 3 (2006), 96-101.
[9] T. Memmott, Lexia to Perplexia (2000), http://tracearchive.ntu.ac.uk:80/newmedia/lexia/.
[10] N. K. Hayles, Writing Machines (Cambridge: MIT Press, 2002), 49.
[11] G. Landow, Hypertext 2.0. (Baltimore: Johns Hopkins University Press, 1997).
[12] E. J. Aarseth, Cybertext: Perspectives on Ergodic Literature (Baltimore: Johns Hopkins University Press, 1997); K. Salen and E. Zimmerman, Rules of Play: Game Design Fundamentals (Cambridge: MIT Press, 2003); T. Fullerton, C. Swain, et al., Game Design Workshop: Designing, Prototyping, and Playtesting Games (New York & Lawrence: CMP Books, 2004); I. Bogost, Unit Operations: An Approach to Videogame Criticism (Cambridge: MIT Press, 2006); I. Bogost, Persuasive Games: The Expressive Power of Videogames (Cambridge: MIT Press, 2007).
[13] J. Murray, “Here’s Looking at Casablanca,” Humanities 26, no. 5 (2005), 16-23, http://neh.gov/news/humanities/2005-09/casablanca.html. The Casablanca Digital Critical Edition Project is an NEH-funded collaboration, designed by the author and Nick DeMartino of the American Film Institute (AFI), between the AFI, Warner Home Video, and Georgia Tech.