Clio Wired: Final Post

I think what struck me most this semester was the ability of digital tools to actually enhance, rather than just present, scholarship.  Tools like text mining and visualization, which help scholars produce work that wouldn’t have been physically possible in the past, are, for me, the most surprising aspect of digital humanities.  I was also frankly surprised by how much work has already been accomplished in the digital humanities, despite not always receiving recognition from traditional academia.  Although there has not been full recognition in terms of tenure, promotion, etc., digital humanists seem to be steamrolling ahead in their goals for open access, digital learning, and more.  Although much of what digital humanists are up against seems like the most entrenched aspects of society–traditional tenure programs, subscription-only access to academic journals, etc.–they appear undeterred, and because of that, are starting to gain a lot of attention.  I wonder if people’s skepticism about the educational possibilities of games echoes people’s reservations about the digital humanities in general many years ago.  Although I too am skeptical about games, this class made me think twice about being too shortsighted in terms of technology’s roles in society.

The other aspect of the class that was most enlightening to me was the ability of social media like blogs and Twitter to actually become part of scholarly conversation.  From the comments and responses on my own blog and Twitter feed, to the use of these media by scholars, it’s now clear to me that having an internet presence is not a distraction from academic work, but an enhancement.  I’m not quite sure that I will keep blogging personally, but I hope to be involved in a blog in some professional way in the future.  I think I will also continue to use Twitter, because it seems like a great place to share ideas or find out about opportunities or events.  In contrast, I would try to keep Facebook separate from anything professional, because it has so much personal information associated with it.

Woman Reading a Letter, Gabriel Metsu

Drop me a line!

Keep in touch with me @shiragmuARTH!


Clio Wired: Week 14 Reflection

What I thought of the readings:

This week’s readings introduced a new tool to the historical teaching/learning arsenal: games.  The topic of historical games ties in well with last week’s readings on teaching history in the classroom and elsewhere.  As a way to critically engage with historical learning, rather than simply memorizing and regurgitating names and dates, gaming offers a novel way for teachers/developers to get students/users to engage in critical thinking.  Of course (although this wasn’t really addressed in the readings), the idea of video games as a teaching tool might rub some scholars and teachers the wrong way.  The stereotype associated with video games is that they involve hours of mindless droning in front of a computer or TV screen, during which valuable time exercising, reading, or interacting with IRL human beings is sacrificed.  While I can’t say I’ve been totally disabused of this stereotype through my own interactions with games, gamers, and this week’s readings, I will say that I can appreciate gaming as a teaching/learning tool.

I really enjoyed the article about the game Pox and the City, which was the recipient of an NEH grant.  The authors describe how they worked through various ideas for teaching players about medical history–specifically the disease smallpox.  It was useful to read about the authors’ debate between creating a game that recreates a real historical event (the discovery of the smallpox vaccine) and a fictional scenario based on historical research.  The game creators came to the conclusion that games work best when they are open-ended; a game about a specific historical event would have a predetermined outcome and not many choices to make.  Instead, an immersive world in which users could act out various roles (doctor, ordinary citizen, and even the disease itself) turned out to be a much richer venue in which to experience the medical culture of the 19th century.

Although at first glance it would seem that students/players would only be able to learn a precise set of facts or concepts pre-loaded into the game, the fact that students have agency in experiencing the game-world could, according to the authors, lead students to novel ideas or theories about the past.  They would then be able to use primary sources provided by the game to support those ideas.  I think this seems like a great learning tool, though I also wonder how much guidance the game would offer in terms of learning to read primary sources carefully, a la Historical Thinking Matters.

Lastly, I liked how the article went into the art and graphics for the game and how they would influence its message and teaching efficacy.  For example, the authors decided that a third-person perspective and stylized or cartoony graphics would allow for maximum immersion in the game environment and concepts.  This reminds me of our discussion in other weeks about how the design of a digital tool should actively contribute to its thesis.

James Paul Gee’s “Good Video Games and Good Learning” does not necessarily advocate using games as a teaching tool, but rather as a model for teaching in the classroom.  He points to a long list of factors that characterize video games–such as players’ sense of agency, customization of difficulty, and problems ordered so that they become progressively harder–that could improve teaching and learning.  I bought many of the parallels and the efficacy of implementing many of these concepts in teaching.  However (and I realize this wasn’t in the scope of a short article), Gee did not really offer ways in which this could actually be accomplished.  I am wondering if any of you who have been teachers in former lives have any comments on this.

While most of the readings focused on video gaming, I am much more of a tabletop gamer myself (think Small World, Bohnanza, or Dominion, not Monopoly), so I was really looking for some insight into board games and learning.  I liked that Jeremy Antley gave a shout-out to board gaming in Going Beyond the Textual in History, and how he differentiated the modes of learning that take place in the two types of gaming.  Essentially, he says that while in video games the player constantly goes through periods of discovery about how the game works and how to proceed, in a board game the rules and mechanics are all laid out and should be understood ahead of time.  Then, players can move “straight to analysis and interpretation.”

Because I was interested in reading more about tabletop games for teaching history, I searched Play the Past for articles about board games.  To my delight I found an article by Trevor Owens about a game called “The New Science.”  Owens interviews the game’s creator, Dirk Knemeyer, about the goals and mechanics of the game.  The game is about the scientific revolution, and players take on the roles of various scientists from that era.  Each scientist has unique “powers,” which emphasize the differing personalities, skills, and beliefs of the historical figures.  Throughout the game you research, experiment, and either publish a discovery or hoard your knowledge.  The player has to balance gaining “prestige” points for publishing against getting ahead by hoarding knowledge.  Players also have to deal with societal forces such as the Church or the king.  Knemeyer is not a historian, but he has clearly put a lot of thought into creating an immersive theme which communicates interesting and educational information about the time period in question.  Knemeyer was able to get his game off the ground through a Kickstarter campaign (yay crowdsourcing!), and it should be on sale to the public next month.  Sign me up!

Trying it out myself:

Browsing through Playing History to find a game to play, I was mostly looking for a time period that I would be interested in.  I finally decided upon the BBC game “Muck and Brass,” which is about English towns during the Industrial Revolution.  The game isn’t really immersive in style; it doesn’t have many moving graphics or a world that you can explore.  Rather, you click through a series of screens that ask you to make various decisions, such as whether to replace the sewage system in your town or try to improve the air quality.  The decisions you make either deplete or increase your town’s funds, represented by coins.  They also improve or fail to improve the lives of your townspeople, whose misfortune is represented by coffins (ew?).  The game is quite short, but I still think I got a bit of knowledge out of it.  For example, my attempts to clean up the sewage system improved the people’s lives, but when I tried to clean up the air, the game told me that my progressive policies were ahead of their time.  Improving the air met too much resistance from factory owners, so I spent tons of money, but the coffins piled up anyway.  It would’ve been great if this game were a full-length, fully immersive game like Pox and the City.  That way the player could truly experience the poor living conditions the townspeople faced during the Industrial Revolution.

The bodies pile up if you don’t improve the sewage system in “Muck and Brass.”
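(Not the BBC’s actual code–just my toy caricature of the game’s decision loop, with invented costs and outcomes, to show how simple the underlying mechanic is.)

```python
# Toy reconstruction of the "Muck and Brass" decision loop -- not the BBC's
# code; the costs and outcomes here are invented for illustration.
funds, coffins = 100, 0

decisions = [
    # (name, cost, lives_saved) -- improving air meets factory-owner resistance
    ("replace sewage system", 40, 25),
    ("improve air quality", 50, 0),
]

for name, cost, lives_saved in decisions:
    if cost > funds:
        print(f"cannot afford to {name}")
        continue
    funds -= cost
    coffins += 0 if lives_saved else 10  # failed reforms pile up coffins
    print(f"{name}: funds={funds}, coffins={coffins}")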

If I were to plan a historical learning game, I would create a multi-player scenario around the art market in the 17th-century Netherlands.  Players could choose the roles of various historical painters, such as Vermeer, Rembrandt, or Jan Steen, and dealers such as Gerrit or Hendrick van Uylenburgh.  Each role would have different strengths and weaknesses.  For example, Vermeer works extremely slowly, but his paintings sell for a high price.  Steen works quickly, but is sometimes out of commission because he’s a drunk.  The points scheme would largely be money-based, with painters having to buy materials and sell their works to dealers, and dealers having to buy and sell paintings.  Both painters and dealers would have to contend with fluctuating costs of painting materials, the whims of the buying public, and disasters such as plague, harsh winters, or wars which tamp down commerce.  There would be an auction component to the game as well, using both Dutch and English auction structures, as historically appropriate.  The game would be year-based: every so often the end of the year would come, and whoever couldn’t pay their debts or rent would be penalized.  The game would last a finite number of years, to limit the length of each session.
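Since the auction component is the most mechanical part of the proposal, here is a minimal sketch of the two auction structures the game might use.  This is just my illustration–the bidder names and numbers are invented.

```python
# Toy sketch of the two auction structures the game might use.
# All bidder names and prices are invented for illustration.

def english_auction(bidders, opening_bid, increment=10):
    """Ascending-price auction: the price rises until one bidder remains.
    `bidders` maps a name to that player's maximum willingness to pay."""
    price = opening_bid
    active = {name for name, limit in bidders.items() if limit >= price}
    while len(active) > 1:
        price += increment
        active = {name for name in active if bidders[name] >= price}
    winner = active.pop() if active else None
    return winner, price

def dutch_auction(bidders, starting_price, decrement=10):
    """Descending-price auction (as in the Dutch flower markets):
    the clock ticks down and the first bidder to accept wins."""
    price = starting_price
    while price > 0:
        takers = [name for name, limit in bidders.items() if limit >= price]
        if takers:
            return takers[0], price  # first to call out wins
        price -= decrement
    return None, 0

bidders = {"van Uylenburgh": 120, "rival dealer": 90}
print(english_auction(bidders, opening_bid=50))   # ('van Uylenburgh', 100)
print(dutch_auction(bidders, starting_price=200)) # ('van Uylenburgh', 120)
```

Notice that the two structures reward different things: in the English auction the price is set by the runner-up dropping out, while in the Dutch auction the winner pays exactly their own limit–a difference players would feel immediately.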

Pigments similar to those used in the 17th century.

The game would have the benefit of teaching students about 17th-century Dutch life in general, and about the unique open art market which existed in the Netherlands at that time (in countries like Italy, art was mostly commissioned by wealthy patrons or the Church).  Students would learn about painting materials (all the pigments were hand ground, and some colors were made of wildly expensive materials), about the historical figures themselves, and about events and natural disasters that occurred at the time.  They would also learn how to navigate different auction systems and how to balance their character’s budget.  It would also be nice to populate the game with really great images of the artists’ paintings, adding to the player’s art historical knowledge.


Clio Wired: Week 13 Reflection

This week’s readings focus on innovative techniques for teaching history, emphasizing students’ (lack of) ability to critically evaluate primary and secondary sources.  Sam Wineburg’s piece “Thinking Like a Historian” sets the stage for the rest of the readings, exploring the reasons for that lack of interest and ability in performing historical study.  Often, this stems from prior emphasis on memorization of facts and dates rather than thinking.  Students can’t begin to imagine that doing history actually involves critical thinking, discovery, and uncertainty, because their only exposure to the field involves regurgitating bullet points.

Wineburg and Daisy Martin explore how the site Historical Thinking Matters guides students through modules on a certain event in history, while teaching them to critically interpret primary sources from various sides of the issue.  The site not only tells students about the importance of sourcing, contextualizing, close reading, using background knowledge, etc., it actually shows historians thinking out loud as they encounter a new document.  By showing rather than simply telling, the site allows students to understand how history is done.

When I went through the HTM module on the Scopes trial as part of the practicum, I felt that the site was very effective.  It definitely deepened my understanding of the various viewpoints involved in the trial.  I really liked how the first page of the module provided background for the event.  Then I was able to see an example of a historian working through a document, which I was then encouraged to do with the rest of the series of primary sources.  I liked that each primary source came with a brief introduction and questions which could be revealed after analyzing the source on your own.  I also liked how the overall question or thesis of the module asked the student to complicate or problematize the notion that the Scopes trial was simply a battle between creationists and evolutionists.  As the rest of the readings showed, one of the most difficult obstacles students face when learning history is understanding that ambiguity and uncertainty are often history’s results.  After spending years reading from what seem like authoritative textbooks, it is quite difficult for students to understand that history is not about finding the answer.

Mills Kelly certainly brought innovative history teaching to a new level with his course “Lying About the Past.”  I was fascinated by Kelly’s description of his course and the fallout it caused in the scholarly community.  It was shocking that a course which was able to teach students real research skills and the highly important ability to detect unreliable sources would end up being so vilified.  Although the students did produce a few hoaxes online, they were careful to reveal the hoax fairly quickly; in fact, the public revelation of the hoax was a chance for those who hadn’t taken the course to sharpen their critical thinking skills by learning to question what they read on the internet (or anywhere else).  That Kelly was banned from Wikipedia and treated like a criminal by many in the scholarly community actually serves to prove his point about jumping to conclusions without weighing all of the facts of the situation. I think if the people attacking Kelly with such vitriol had actually understood his goals and the success of his students, they would have moderated their views.  It’s a shame that someone who took the lead in truly innovating history teaching ended up being pilloried rather than emulated or praised, especially in light of the difficulties of getting students truly involved in critical history work.

My lesson for this week’s practicum is inspired by the case studies in “Ways of Seeing: Evidence and Learning in the History Classroom.”  I particularly latched onto Jaffee’s, Felten’s, and Weis’s explorations of how students’ ability to analyze primary sources seems to evaporate when they are faced with images.  As an art historian, I found this particularly worrisome, but also not especially surprising, as historians often ignore their counterparts in the art history field (there, I said it!).  Why these professors did not consult with their colleagues in art history was puzzling, to be honest.  While it’s true that art historians often have the same issues when trying to get students to analyze artworks, clearly they have more ironed-out techniques for getting students to think about images.  Even the introductory chapter in the survey textbook Gardner’s Art through the Ages gives an overview of how students should be prompted to think about art, with questions like “how old is it?”, “what is its style?”, “what is its subject?”, “who made it?”, and “who paid for it?”.  It also directs students to think about various types of evidence: documentary, visual, stylistic, and physical.

I have never made a lesson plan before, but my idea centers on images of leadership and power.  I would split students into groups and assign each group an image of a leader chosen from various time periods.  Examples could include the Egyptian pharaoh Menkaure, the Roman emperor Augustus, the Byzantine emperor Justinian, Louis XIV, the Medici pope Leo X, and George Washington.

I would give the students some background information on each leader and society.  I would then ask students to prepare a short PowerPoint or Prezi presentation, using the image as its main focus and using comparison images if necessary.  In the presentation they would have to answer how the image communicates the leader’s power, leadership style, type of government, etc.  The students would need to point to specific elements such as material, audience, style, location of the image (if known), accessories, dress, expression, other figures, etc.  They would also present further background information they had researched in order to substantiate their claims.  Further background might include textual primary sources or secondary sources.  The goal of the lesson would be to show how images of power are constructed, and would hopefully teach students to question the imagery they see in their everyday lives.


Clio Wired: Week 12 Reflection

What I thought of the readings:

Being a wanna-be art historian and librarian, I was admittedly more interested in the open access (rather than open source) portion of the readings this week.  However, some of the same issues clearly come up in both realms–the questions of ownership of “intellectual property”, free knowledge exchange, the overreach of copyright or patents, and the role of for-profit companies in IP law, etc.

I enjoyed Lawrence Lessig’s Free Culture, which argues that our once free culture is rapidly becoming a “permission culture.”  The exchange of ideas which used to be part and parcel of the way we communicate, share, and generate culture is threatened by powerful companies whose commercial interests run counter to this model.  Lessig asserts that because this cultural exchange is now more public, recordable, and effortless due to the advent of the internet, companies have severely ramped up their efforts to strengthen laws which protect so-called intellectual property.  He points out that copyright trolls like Disney–by pushing for ever longer and more stringent copyright protection–affect not only their own proprietary works, but all works that fall under copyright legislation–in essence, all cultural objects.  The original intent of copyright–to ensure that the creator could make a reasonable profit for a few years, and then, by design, open the work up for the public’s benefit–has essentially been thrashed by these companies.  Fair use, which is extremely ill-defined anyway, does not seem to be a sufficient defense.

Scary disclaimer about using a picture of M!ckey Mouse

Veering slightly (and delightfully) into hyperbole, Lessig compares the ever-more extremist copyright climate to the system of feudalism, in which a relatively small number of individuals or entities own all property, and which depends on maximum control and little freedom.  He implores government to resist the pull of large corporations and to preserve the tradition of free culture.  He advocates a “middle way” between “all rights reserved” and “no rights reserved” which gives creators freedom to distribute their works as they see fit.  Implied in Lessig’s title Free Culture is not just the adjective free as in “free speech, not free beer”, but also the verb “to free.”  Lessig wants us to unshackle our cultural objects and traditions.

Steering clear of the thorny territory of for-profit content, Elena Giglia and Peter Suber explore the meaning and implications of Open Access (OA) in the scholarly world.  OA literature is “digital, online, free of charge, and free of most copyright and licensing restrictions.”  Importantly, however, OA does not give a free pass to plagiarism–in this model, the author is always credited for his or her work.  Suber (whose book, ironically, is not yet totally open) argues that OA is basically a no-brainer for the scholarly arena.  He asserts that scholars are uniquely situated to benefit from open access, as their work model has never rested upon being paid for selling content; rather, they are paid a salary by universities or grant-funders to research, peer review, and publish in the normal course of their work.  Scholars benefit in their careers when their works have maximum impact and citations; the larger audience and heightened visibility facilitated by the OA model, then, greatly benefit them.

My one question about this model (which is perhaps answered in one of Suber’s chapters that is not currently open access) is whether it would affect researchers’ ability to use the highly effective search tools provided by databases.  If libraries no longer had to pay for access to journals through various databases, how would researchers comb through vast amounts of material without being able to search by keyword, subject, or any number of highly effective limiters?  How would they search many works at one time, rather than hunting through each title?  Even if every journal I ever wanted to read were freely open on the web, I know I wouldn’t want to be limited to Google or to painstakingly combing through tables of contents.  Perhaps someone would make a search engine for scholarly research that is more sophisticated than Google Scholar.
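To make concrete what I mean by such a search engine, here is a toy sketch of its core data structure–an inverted index mapping each term to the documents containing it.  The document IDs and texts are invented.

```python
# Toy sketch of the core of a scholarly search engine: an inverted index.
# The document IDs and texts are invented stand-ins for an OA corpus.
from collections import defaultdict

documents = {
    "art-bulletin-42": "genre painting in the dutch republic",
    "jhna-07": "vermeer and the art market of delft",
    "oa-monograph-3": "the economics of the seventeenth century art market",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def search(*terms):
    """Return the documents containing every query term (boolean AND)."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(search("art", "market"))  # {'jhna-07', 'oa-monograph-3'} (in some order)
```

A real system would add the limiters I mentioned–subject headings, date ranges, field-specific indexes–but they are all variations on this same structure.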

Although one would probably have to be an actual programmer to fully appreciate Karl Fogel’s book on avoiding failure in open source projects, I did appreciate his history of open source.  The anecdote about the disenchanted Richard Stallman creating GNU’s General Public License (GPL) explicitly to stick it to the man was quite entertaining.  The GPL asserts that code may be copied or modified without restriction, and that both copies and derivative works must be distributed under the license with no additional restrictions.  This license provides protection for free software and prevents the “enemy”–proprietary software–from benefiting from it.  I also appreciated Fogel’s exploration of the evolution of the term open source from the formerly used “free software.”  Fogel explains that free is a tricky word in English, having no Romance-language distinction between gratis and libre; programmers were always having to explain “think free as in freedom – think free speech, not free beer.”  More importantly, however, the term open source was easier to pitch to the corporate world, which didn’t associate it with free’s implication of theft, piracy, or not-for-profit.
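To make this concrete: applying the GPL conventionally means shipping the license text alongside the code and placing a notice at the top of each source file.  The standard notice recommended in the license looks roughly like this (the file name here is hypothetical):

```python
# hypothetical_module.py -- part of a hypothetical GPL-licensed project.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
```

It is this notice, traveling with every copy and derivative, that does the work Fogel describes: downstream users inherit the same freedoms and the same obligations.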

Trying it out for myself

The Creative Commons licenses are flexible and relatively easy to implement, though I did have to Google how to attach one to my WordPress blog.  You can create a ShareAlike CC license–similar in spirit to the GNU GPL–which requires anyone building on your work to release their version under the same terms, or you can simply block commercial uses.  Importantly, you can also specify whom to credit and how.  The fact that CC creates a tidy block of code which is easy to copy and paste is convenient, and I of course enjoy the little emblem it creates as well.


Clio Wired: Week 11 Reflection

This week’s readings on preservation of digital materials seem to speak more to the concerns of librarians and archivists than to humanities scholars themselves.  They reminded me a lot of the discussions I had while studying for my master’s in library science, during which I also focused on archival work.  What really struck me during those studies, and from the readings this week, is the sheer amount and ephemeral quality of born-digital materials.  Despite the fact that we mostly acknowledge the superior capabilities of digital formats for creating data, projects, etc., it is also true that we have not come up with a better medium than paper for long-term storage.  Not only does paper not need an appropriate “reader,” such as a CD drive, floppy disc drive, or VCR, it is also highly stable in most cases, and remains readable even after sustaining some damage.  Moreover, preservation of paper is mostly passive (keeping it out of the way of water, fire, acid, etc.), while preservation of digital materials requires constant recopying, either to the same type of media (CDs begin to deteriorate after about 10-15 years) or to a completely new type (if we aren’t going to keep a museum’s worth of old readers, we need to stop storing data on obsolete media types).  This requires tons of human-power, funding, time, planning, etc.  Of course, paper isn’t a cure-all either, especially for born-digital projects.  Obviously, no one is going to print out every single one of their thousands of emails for posterity, and many digital works aren’t simply text, so they cannot feasibly be stored in paper format.  In some ways, it feels like we’ve opened a Pandora’s box with the creation of such an overwhelming amount of born-digital material, but of course all we can do is adapt and try to intelligently create best practices as we go along.

The authors this week have obviously thought a great deal about these issues, and while they certainly don’t offer a cure-all, it is heartening that they have offered plans for a way forward.  I especially like the steps laid out by The NINCH Guide to Good Practice in the Digital Representation and Management of Cultural Heritage Materials (a small sketch of one step in code follows the list):

  • Identifying the data to be preserved
  • Adopting standards for file formats
  • Adopting standards for storage media
  • Storing data on and off site in environmentally secure locations
  • Migrating data
  • Refreshing data
  • Putting organizational policy and procedures in place
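The guide itself stays at the level of policy, but here is a minimal sketch of what the “refreshing data” step can involve in practice: recording a checksum for each file so that a copy on fresh media can be verified against the original.  The directory paths are hypothetical.

```python
# Toy fixity check: record SHA-256 checksums, then verify a refreshed copy.
# The directory paths are hypothetical.
import hashlib
from pathlib import Path

def checksum(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def manifest(directory):
    """Map each file's relative path to its checksum."""
    root = Path(directory)
    return {str(p.relative_to(root)): checksum(p)
            for p in root.rglob("*") if p.is_file()}

before = manifest("archive/master")    # original storage media
after = manifest("archive/refreshed")  # copy on new media
for name, digest in before.items():
    if after.get(name) != digest:
        print(f"FIXITY FAILURE: {name}")
```

Even a script this small captures why refreshing is labor-intensive at scale: every object has to be read, hashed, copied, and re-verified, on a schedule, forever.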

The ethical and technological issues raised by Matthew G. Kirschenbaum in “Digital Forensics and Born Digital Content in Cultural Heritage Collections”–in terms of mining a donor’s or subject’s computer to find historically pertinent information–show that librarians, archivists, and scholars will not only need the technological capabilities to engage in this activity, but will also need to seriously consider the ramifications of having access to data that may not have been intended for public view.  Of course, this is not necessarily a new problem; as we have seen in recent years with revelations about Thomas Jefferson’s dealings with slaves, for example, even manuscript or printed materials created during a person’s life do not necessarily leave the legacy he or she intended.  Issues of provenance or authenticity when it comes to born-digital data also have a basis in the techniques and policies for dealing with physical media; while techniques such as materials analysis or handwriting analysis may not be applicable, chain of ownership, word-usage analysis, and the like will still be valuable tools in the arsenal.  In fact, the text mining techniques that we have discussed in other weeks could become increasingly valuable for analyzing bodies of writing and determining their authenticity.
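The readings don’t spell out how such text-mining authentication would work, but one classic approach (my example, not Kirschenbaum’s) compares relative frequencies of common function words, which tend to be stable within an author’s writing.  A toy sketch, with invented text snippets:

```python
# Toy stylometric comparison: relative frequencies of function words.
# Real attribution work (e.g., Burrows's Delta) is far more careful;
# the texts here are invented stand-ins.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "with"]

def profile(text):
    words = text.lower().split()
    total = len(words)
    return [words.count(w) / total for w in FUNCTION_WORDS]

def distance(a, b):
    """Mean absolute difference between two frequency profiles."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

known = "the letter was sent with the goods and it arrived in the spring"
disputed = "the goods arrived in the autumn and the letter came with them"
print(distance(profile(known), profile(disputed)))  # small = similar style
```

The intuition is that forgers can imitate vocabulary and subject matter far more easily than they can imitate the unconscious rhythm of “the,” “of,” and “with.”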

As for the concerns about terminology for collections of online scholarship or documents discussed by Kenneth M. Price in “Edition, Project, Database, Archive, Thematic Research Collection: What’s in a Name?” and Kate Theimer in “The Problem with the Scholar as ‘Archivist,’ or Is There a Problem?” and “Archives in Context and as Context,” I think it is important both to acknowledge the correct usage of terms and to acknowledge that the nature of language is that words evolve and change meaning over time.  However, as a librarian, I also fully understand Theimer’s concern about the implicit disregard for her profession when the term “archive” is used very loosely.  Librarians and archivists both have a lot of trouble communicating their worth and professional status to the outside world–even to scholars.  It would be ideal if the scholarly community banded together with librarians and archivists to express the worth of our collective field in the face of ever-increasing budget cuts and the disparagement of cultural institutions and academia in society.


Clio Wired: Week 10 Reflection

What I thought of the readings:

This week’s readings echo many topics addressed in past weeks, such as the need to acknowledge collaborative work, the benefits of open access, and the need for academia to “count” digital history or public history work (as opposed to only the scholarly monograph) towards tenure and promotion.

Addressing the need for new modes of peer review, authorship, and publication, Kathleen Fitzpatrick’s Planned Obsolescence brings a sense of urgency to these concerns by tying them to existential threats to the humanities:  She links the “fundamentally conservative nature” of academia not just to the inability of younger or more technologically savvy scholars to get their work recognized, but to society-at-large’s dismissal of the university in general and the humanities in particular.  In other words, by resisting digital technologies and all the new modes that come with them (open access, open peer review, rethinking of intellectual property rights), academia is further isolating itself from public life, and thereby confirming the public’s misconception that scholarship (especially in the humanities) is not worthy of public interest, respect, or funding.  For Fitzpatrick, academia must take responsibility for communicating its worth to the public, and embracing new technologies and forms of communication is a fundamental step.  Moreover, Fitzpatrick emphasizes that academia does not have a choice between adapting to the new technological landscape and remaining in its conservative bubble:  Change is inevitable, and academia must react.

In publishing his born-digital article for the American Historical Review, William Thomas experienced both the fundamentally conservative nature of academia addressed by Fitzpatrick and the benefits of embracing new modes of review and publication.  It is interesting that the summary of the digital article which appeared in the print version of the journal was mistaken by some scholars for the “real” version of the work.  Among reviewers of the digital version, there seemed to be a fundamental misunderstanding of the difference between simply publishing text on the web and creating a dynamic digital history project.  Aside from (rightly) criticizing the gimmicky use of Flash and other convoluted navigation features, reviewers saw the digital project as having “no argument” due to its lack of linearity and its perceived abdication of authorial control.  For Thomas, these obstacles to having his work accepted by historians show that we need new conventions for “reading” in the digital medium.

In “Re-Visioning Historical Writing,” Dorn and Tanaka also address the need for new modes of historical reading and writing.  Dorn emphasizes that digital projects reveal history to be more than just a “polished argument about the past.”  Rather, history is a messy proposition involving many voices, contradictions, and narratives, perhaps best suited to a hypertextual, dynamic, ever-evolving presentation rather than the static, linear narrative presented in a monograph or journal article.  Tanaka also cautions against fixating on the “correct” interpretation of the past rather than a heterogeneity of interpretations.  He proposes that the evolving role of the historian will involve corralling a multitude of data in a skilled and reliable way, rather than simply mastering knowledge in a specific area of expertise and presenting that knowledge in an authoritative way.  (To me, this sounds a lot like the job description of librarians–professionals who constantly need to justify their worth to students, funders, and sometimes even scholars.)

Related to the above authors’ calls for openness and change in academia are the Working Group on Evaluating Public History Scholarship’s guidelines for fair and transparent evaluation of public history faculty.  Again, these guidelines show that change is already upon us, and academia must adapt in order to promote not only fairness to scholars but also continuing relevance to the outside world.

Practicum:

The act of commenting on Open Review actually brought up many of the issues addressed by the essay itself.  I found myself wondering whether my comment could actually be useful to the writers, who are subject specialists and have much higher academic credentials than I do.  I also wondered who would be responsible for reading my comment, and for what period of time it is actually useful to receive further commentary.  Reading the essay and making a brief comment on one paragraph did not feel to me like terribly helpful or legitimate peer review.  As the essay itself notes, different levels of engagement would be required for open review to be feasible, such as a certain number of reviewers committing to read the entire work as well as to make granular comments.  I do like the idea of opening up works to the scrutiny of any interested commenter, but I wonder if it could be difficult for authors and editors to cut through the noise to respond to truly useful recommendations.

I have to admit that I was a bit stumped when it came to developing my own guidelines for evaluating digital history scholarship, not least because I am really not familiar with the process of evaluating even traditional scholarship for tenure or promotion purposes.  Therefore, I did some Googling and found various examples of guidelines, such as those provided by the American Association for History & Computing, based on guidelines by the MLA.  Both sets of guidelines seem fairly comprehensive, focusing not only on the responsibilities of the reviewers, but also on the responsibilities of candidates in advocating for themselves.  Although candidates are advised to document and explain collaborative relationships, one aspect I thought these guidelines left out was the responsibility of reviewers to fully understand the collaborative nature of digital projects and to seek methods for fairly evaluating this work.  Also, while these guidelines are more general, there are a few specific actions on the part of reviewers I thought could be added:

  • Consider the audience for a digital project; it may not be directed toward scholars, but toward the public, undergraduates, etc.
  • Attempt to explore the digital project through various paths, as the full story of the project may be best communicated through various trials and revisits
  • Evaluate design as an aspect of the project’s argument, thesis, or purpose
  • Take user feedback into account if the project has been opened to the public
  • Understand that the project may be ongoing and evolving, rather than in a final or a static state

What do you think of these recommendations?


NEH Startup Grant Prezi

Here is the Prezi for my proposed site on Dutch genre painting and art historical methods.


Clio Wired: Week 8 Reflection

This week’s readings point out the important distinction between data and interface in digital projects.  While the data is contained within a randomly accessible database, the author/editor/curator of a project presents this data in what Lev Manovich calls a “hyper-narrative” form.  In other words, the website author, through various controls such as information architecture, design, or other cues, leads the user through a series of possible “narratives.”  One iteration of this hyper-narrative experience may be called a linear narrative (in a sense different from the sole narrative presented in a work such as a monograph).  Unlike in directly accessing the raw database, here the user does have some element of control, but the author of the interface ultimately guides the experience.

What struck me about this article was Manovich’s cautioning that true interactivity is not constituted by the user’s ability to access a site’s pages in various orders.  I think it would be very useful to contemplate the true parameters of interactivity, especially in light of our grant projects.  It occurs to me that interactivity can exist on several layers:  From simplest to most complex, this might include the options to leave commentary, to add content (as in Philaplace, where users can add their own Philadelphia-related stories), to choose data sets or other information to be displayed in various ways, or the (much more involved) option to extract openly available data and create a totally new interface.  Great examples of the fruits of this type of “interactivity” are the “Irish in Australia: History Wall” and “Invisible Australians” projects, which use digitized data from institutional archives to create totally new projects.  What other types of interactivity have people thought about?

Dan Cohen advocates (here and here) strongly for this type of interactivity, or as he would probably call it, freeing content for reuse and reinterpretation.  Although Cohen is clearly an advocate of digital history projects which guide the user through the site and have a specific message or thesis, he also believes that the data should be “freed” so that scholars can manipulate it for uses unanticipated by the author, or can create an interpretation of their own.  As Cohen makes clear, this open source data model brings up questions of credit for scholarly work; however, as a community, academics should be able to integrate data creation into the products (like monographs, articles, and slowly-but-surely digital history sites) for which scholars receive credit and acknowledgement.

I am also intrigued by Cohen’s idea that the separation of interface and data leads to a longer life for that data.  In other words, even when the interface has gone by the wayside due to lack of upkeep, antiquated technology, or the advent of newer scholarly methodologies, the data can still persist.  If the original creator of this data–presumably the author of the interface–is no longer acting as its steward, who will take the responsibility?  Cohen thinks that this might suggest new roles for libraries, which could become responsible repositories of data.  This is certainly an intriguing suggestion, as in the library world, the future of libraries in light of new technology is always a big debate.  However, being a data repository does not necessarily shore up libraries’ brick-and-mortar existence (unless they are to transform into huge server farms).  At any rate, it is certainly worth thinking about how the valuable data presented by digital history projects will be maintained once those sites are defunct.  What might be other solutions to this data-maintenance problem?
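As a thought experiment (mine, not Cohen’s), the data/interface separation is easy to see in miniature: the same records can outlive any one presentation of them.  Here the “data” is a small invented record set, and two throwaway “interfaces” render it differently:

```python
# One dataset, two disposable interfaces. The records are invented;
# the point is that either render function can die without harming the data.
records = [
    {"year": 1664, "title": "Woman Reading a Letter", "artist": "Metsu"},
    {"year": 1657, "title": "Girl Reading a Letter", "artist": "Vermeer"},
]

def render_timeline(data):
    """Interface #1: a chronological list."""
    for r in sorted(data, key=lambda r: r["year"]):
        print(f"{r['year']}: {r['title']} ({r['artist']})")

def render_by_artist(data):
    """Interface #2: grouped by artist."""
    artists = {}
    for r in data:
        artists.setdefault(r["artist"], []).append(r["title"])
    for artist, titles in sorted(artists.items()):
        print(f"{artist}: {', '.join(titles)}")

render_timeline(records)
render_by_artist(records)
```

If a library preserves only the records, any future scholar can write a third interface; if it preserves only a rendered site, the data is trapped in one presentation.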


Clio Wired: Week 7 Reflection

What I thought of the readings:

This week’s topic of spatial history built upon last week’s discussion of data mining and visualizations, exemplified by Franco Moretti’s Graphs, Maps, Trees.  Delving deeper into the “maps” aspect of visualization, the readings this week show how historical topics can be enhanced, explained, modeled, synthesized, etc. through the use of spatial visualizations.  It is important to note for our discussion–as emphasized by Richard White–that these digital visualizations are not simply static illustrations accompanying text, but can be dynamic visual aids which allow the user/reader to understand how events unfold over time and space, to ask new questions, and to scrutinize assumptions.

Todd Presner presents a rich spatial history resource, HyperCities, which allows many users to create mapping projects through its interface.  Presner makes the important point that HyperCities differs from simple, commercial mapping projects in that rather than focusing on information like traffic, weather, and commercial interests, these visualizations’ main focus is humanities scholarship related to “urban, cultural, and historical transformations of city spaces.”  Projects ranging from presenting the history of Los Angeles from prehistoric times until now to the mapping of protests in Iran’s 2009 elections show how HyperCities in particular, and spatial history in general, has the ability to present a breadth of scholarship in dynamic and innovative ways.

Presner’s emphasis on spatial history and visualization as legitimate forms of humanities scholarship is also addressed by Jo Guldi and Martyn Jessop.  Although she does not directly address visualizations, Guldi explores the “spatial turn” in a myriad of scholarly areas, explaining how in fields as diverse as psychology, anthropology, history, and art history, scholars between 1880 and 1960 came to reflect on humans’ “nature as beings situated in space.”  Rather than continuing to concentrate on great personalities, for example, historians began to focus on history as a function of nation or city, and later, as a function of region or center/periphery.  Jessop shows how graphic aids to humanities scholarship are not actually new or out of the blue, but rather have a long history, ranging from early modern Kunstkammern, to museums, to film, to theater.  For Jessop, digital technology has simply created a new medium for visualization.

Trying it out myself:

Jessop’s assertion that humanists lack education in visual literacy certainly hit home for me as I was attempting to use the various tools this week.  Visualizing events in space has never been a strong suit of mine.  I remember reading Michael Shaara’s Civil War novel The Killer Angels in middle school and hating every minute of it; I couldn’t make heads or tails of Shaara’s descriptions of troop movements, which at the time seemed to make up the entirety of the book.  (If someone had made a nifty visualization of the book back then, maybe I could’ve gotten into it!)  Trying to use many of the tools this week brought back that same sense of frustration.  Neatline, for example, has a very steep learning curve.  I really couldn’t figure out how to do anything effective with the site; its demos only showed what masters were able to create, but did not show how novices could learn to use the tool.  I tried to perform the simple task of plotting my birthday in time and space, but couldn’t even figure out how to do that.

Trying and failing to use Neatline

I was a bit more successful with Google Earth, where I made a map of some of the museums I visited this summer in the Netherlands.  However, as Presner points out, I am not sure that Google Earth on its own is really a digital humanities tool, though clearly some other digital humanities sites, like the historical maps repository at the David Rumsey Map Collection, have made use of its data.

Museum visits on Google Earth

The David Rumsey Map Collection using Google Earth

Clearly, many of the spatial visualization tools on this week’s list are very useful and can help scholars produce unique and intellectually rigorous projects.  I think, though, that I would need a lot more training in order to produce something worthwhile.


Clio Wired: Week 6 Reflection

What I thought of the readings:

This week’s readings were enlightening because they demonstrate how digital tools are useful not only in presenting history to the public or other audiences, but also in the process of researching and creating historical scholarship.

Franco Moretti’s Graphs, Maps, Trees was a nice introduction to what exactly can be done with manipulating and visually presenting historical data.  For Moretti, visualizations of trends, patterns, and cycles in literary history do not replace close reading of individual texts.  Rather, they add new layers of information, and sometimes even debunk generally held assumptions about literature’s history.  Tim Burke praises Moretti’s approach, noting that viewing quantitative data about literature can problematize many commonplace assumptions about it.  However, Burke cautions that, while numbers can seem quite concrete and infallible, they can still be misleading.  For example, quantifying publication does not actually tell us about readership.  He also criticizes Moretti’s lack of emphasis on authors’ agency and on the breaks and ruptures (as opposed to gradual divergence) in literary history.  However, I think Moretti is still useful in demonstrating how these tools can be used not just in the social and hard sciences, but also in the humanities.  Burke’s criticisms show that despite these visualizations’ seeming authoritativeness, the way in which they are interpreted and presented is still quite subjective.

While Moretti mostly deals with publication data for various genres, the rest of the authors focus on data mining specific texts or corpora of texts in order to analyze them in new ways.  Daniel Cohen and Gregory Crane focus on the new scholarly opportunities presented by large digital collections such as Google Books or Project Gutenberg.  In conjunction with close examination of a limited number of texts, scholars who use various data mining/text mining tools can, in the words of Cohen, “find patterns, determine relationships, categorize documents, and extract information from massive corpuses.”  For example, one might perform a statistical analysis of how often two keywords or phrases appear together, or find specific types of documents (such as syllabi) by assessing frequently used words in those texts.
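Cohen and Crane don’t provide code, but the co-occurrence idea is simple enough to sketch: count how many documents in a corpus contain both phrases.  The mini-corpus here is invented:

```python
# Toy co-occurrence count: in how many documents do both phrases appear?
# The documents are invented stand-ins for a large digital library.
corpus = {
    "doc1": "the vaccine for smallpox spread through london",
    "doc2": "smallpox outbreaks in london strained the parish",
    "doc3": "the vaccine debate reached edinburgh",
}

def cooccurrence(corpus, phrase_a, phrase_b):
    """Count documents containing both phrases."""
    both = sum(1 for text in corpus.values()
               if phrase_a in text and phrase_b in text)
    return both, len(corpus)

hits, total = cooccurrence(corpus, "smallpox", "london")
print(f"'smallpox' and 'london' co-occur in {hits} of {total} documents")
# -> 'smallpox' and 'london' co-occur in 2 of 3 documents
```

Scaled up to millions of books, this same count (with proper statistics behind it) is what lets scholars claim two ideas traveled together.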

Unfortunately, these large digital libraries can have some drawbacks, such as “noise” from incorrect OCR, missing texts due to copyright restrictions or cost of digitization, and inability to present or crawl texts in non-Roman alphabets.  For these reasons, scholars need to be careful about drawing conclusions from potentially-incomplete data sets.

Trying it out myself:

Playing around with some web-based text mining tools, I quickly saw that some of them are better suited to entertainment than to serious scholarship.  Wordle, which generates text clouds of the most frequently used words in a document, creates aesthetically pleasing visualizations.  However, aside from giving a general idea about the topics or keywords of a text, I am not sure that this tool has any serious scholarly use.  Here is my text cloud for Grimm’s Fairy Tales:

Wordle for Grimm’s Fairy Tales
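Wordle doesn’t publish its algorithm, but the core of any word cloud is just a frequency count, usually with common “stopwords” filtered out.  A minimal sketch (the snippet stands in for the full Project Gutenberg text):

```python
# Toy word-cloud backend: count word frequencies, optionally dropping
# stopwords. The snippet stands in for the full Project Gutenberg text.
from collections import Counter
import re

text = "the king said to the girl that the forest said nothing"
STOPWORDS = {"the", "to", "that", "a", "and", "of"}  # tiny illustrative list

words = re.findall(r"[a-z']+", text.lower())
raw = Counter(words)
filtered = Counter(w for w in words if w not in STOPWORDS)

print(raw.most_common(3))       # [('the', 3), ('said', 2), ('king', 1)]
print(filtered.most_common(3))  # [('said', 2), ('king', 1), ('girl', 1)]
```

Whether “said” survives the cut depends entirely on whose stopword list you inherit–a point I come back to below with Voyant.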

Another tool which was entertaining but probably not statistically sound is Google’s Ngram Viewer.  Because you cannot control which texts are included in the analyzed corpus, the data may be misleading.  However, for general information rather than scholarly purposes, the Ngram Viewer can give a nice idea of when certain terms may have come in and out of fashion.  For example, in the Ngram below, you can see the shift from using the term Great War to the term World War:

Ngram: Great War vs. World War
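For anyone who wants to reproduce the chart, the viewer’s queries are just URL parameters.  A sketch of building one–the parameter names reflect the viewer’s public URL format as I understand it, so treat this as illustrative:

```python
# Building a Google Ngram Viewer query URL. Parameter names reflect the
# viewer's public URL format at the time of writing; treat as illustrative.
from urllib.parse import urlencode

params = {
    "content": "Great War,World War",  # comma-separated ngrams
    "year_start": 1900,
    "year_end": 2000,
    "smoothing": 3,                    # moving-average window in years
}
url = "https://books.google.com/ngrams/graph?" + urlencode(params)
print(url)
```

The `smoothing` parameter is worth noticing: it averages each year with its neighbors, which makes the curves prettier but can blur exactly the kind of sharp shift the Great War/World War example shows.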

Because of the user’s ability to choose texts, and because of its myriad analytical tools, Voyant was the most promising tool for scholarly research.  I chose to analyze the same Grimm’s Fairy Tales text I tried in Wordle, available through Project Gutenberg.  I like how the user can manipulate the data provided by Voyant in many ways.  Not only can you see the most frequently used words, but you can also compare the frequency of two words against each other and see words in context.  Voyant also provides a word cloud, which seems to be generated using a different algorithm than Wordle’s, as they came out differently.

Voyant analysis of Grimm’s Fairy Tales

Although I felt like I couldn’t take full advantage of Voyant’s tools, since I wasn’t undertaking an actual text-mining project, I did find it interesting that Voyant identified “said” as the most frequently used word in Grimm’s Fairy Tales.  This might say something useful about the structure of the tales, or about how the narrative action is pushed forward.  As you can see above, Wordle actually eliminated “said” from its word cloud, perhaps because it is too common; this shows how a lack of control over the algorithms and data manipulation of tools like Wordle and the Ngram Viewer can lead to misleading information.
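Voyant’s words-in-context view is a classic concordance (keyword in context), and the underlying operation is easy to sketch–useful for following up on how “said” actually functions in the tales.  The sentence here is an invented stand-in:

```python
# Toy keyword-in-context (KWIC) display, the operation behind a
# concordance view. The sentence is a stand-in for the full text.
def kwic(text, keyword, window=3):
    words = text.lower().split()
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            print(f"{left:>30} | {keyword} | {right}")

kwic("then the wolf said open the door and the girl said no", "said")
```

Lining up every occurrence this way would show at a glance whether “said” is doing the dialogue-heavy storytelling work I suspect it is.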
