Clio Wired: Week 8 Reflection

This week’s readings point out the important distinction between data and interface in digital projects. While the data is contained within a randomly accessible database, the author/editor/curator of a project presents this data in what Lev Manovich calls a “hyper-narrative” form. In other words, the website author, through various controls such as information architecture, design, or other cues, leads the user through a series of possible “narratives.” One iteration of this hyper-narrative experience may be called a linear narrative (in a sense different from the sole narrative presented in a work such as a monograph). Unlike when directly accessing the raw database, the user here retains some element of control, but the author of the interface ultimately guides the experience.

What struck me about this article was Manovich’s caution that true interactivity is not constituted merely by the user’s ability to access a site’s pages in various orders. I think it would be very useful to contemplate the true parameters of interactivity, especially in light of our grant projects. It occurs to me that interactivity can exist on several layers. From simplest to most complex, these might include the option to leave commentary, to add content (as in PhilaPlace, where users can add their own Philadelphia-related stories), to choose data sets or other information to be displayed in various ways, or the (much more involved) option to extract openly available data and create a totally new interface. Great examples of the fruits of this last type of “interactivity” are the “Irish in Australia: History Wall” and “Invisible Australians” projects, which use digitized data from institutional archives to create entirely new projects. What other types of interactivity have people thought about?

Dan Cohen strongly advocates (here and here) for this type of interactivity, or as he would probably call it, freeing content for reuse and reinterpretation. Although Cohen is clearly an advocate of digital history projects that guide the user through the site and have a specific message or thesis, he also believes that the data should be “freed” so that scholars can manipulate it for uses unanticipated by the author, or can create interpretations of their own. As Cohen makes clear, this open-source data model raises questions of credit for scholarly work; however, as a community, academics should be able to integrate data creation into the products (like monographs, articles, and, slowly but surely, digital history sites) for which scholars receive credit and acknowledgement.

I am also intrigued by Cohen’s idea that the separation of interface and data leads to a longer life for that data. In other words, even when the interface has gone by the wayside due to lack of upkeep, antiquated technology, or the advent of newer scholarly methodologies, the data can still persist. If the original creator of this data (presumably the author of the interface) is no longer acting as its steward, who will take that responsibility? Cohen thinks this might suggest new roles for libraries, which could become responsible repositories of data. This is certainly an intriguing suggestion, as the future that libraries face in light of new technology is a perennial debate in the library world. However, being a database repository does not necessarily shore up libraries’ brick-and-mortar existence (unless they are to transform into huge server farms). At any rate, it is certainly worth thinking about how the valuable data presented by digital history projects will be maintained once those sites are defunct. What might be other solutions to this data-maintenance problem?
