I spent two wonderful weeks this August at the first Kress Digital Mapping and Art History Summer Institute, hosted on the campus of Middlebury College by geographer Anne Knowles and art historian Paul Jaskot. It wasn’t a bad location to retreat to.
This was the summer of digital art history institutes, with the Getty Foundation funding three separate institutes around the country in addition to the Kress. The Kress and the Getty clearly had separate goals. The Getty programs, as a general rule, emphasized breadth: participants set up their own websites and Omeka databases, joined Twitter, sampled geographic mapping, network analysis, text mining, and data manipulation, and discussed the place of digital projects in the world of academic credit and publishing. The Kress, on the other hand, favored a depth-first approach. They wanted applicants to arrive with a full project proposal, and the directors worked carefully with all of us in the lead-up to the institute to refine our databases. Our two weeks were filled with seminar discussions not only about digital art history, but also about the method and theory underpinning the practice of geography and historical GIS. We also had ample time to develop our projects in the lab, and everyone seems to have come out of the institute having made a great deal of progress on their projects, and with a better understanding of how exactly spatial questions could advance their art historical research.
There will be a lot of discussion over the next year about the relative merits of the breadth-first and depth-first approaches to teaching digital methodologies for art history. I am put in mind of a thoughtful response that Thomas Padilla had to my “Tool Trouble” post a few months back. I fretted that presenting digital methods as “tools” undermined computer-aided scholarship in the eyes of critics bent on sniffing out “positivism”. Worse, I thought it also ingrained in aspiring digital humanists the idea that such a method operates solely in service of their “research question” — a troublesome idea, I think, as if research questions are ever conceived absent some a priori methodological framework, however tacit that framework may be. (See this useful Twitter exchange for more thoughts.) My knee-jerk reaction, then, is to favor depth-first digital humanities teaching. Thomas argued, though, that a big part of teaching digital methods is trying to make these methods approachable in the first place:
A tool based approach to introducing DH is in line with a recognition that some people become interested enough in a thing to explore it further via different paths…. This approach favors presentation of results first and the tools that made them possible. These results might be wrong. Their initial interpretation could be somewhere out in left field. However, whatever grist they are able to add to the interpretation mill may spark enough interest and curiosity in DH to commit hard fought time and attention resources to venturing down the rabbit hole – a journey which requires critical engagement with the literature that gave birth to the tool, the methods employed, the tool itself, and interpretation of results.
Is the methodological smorgasbord of a breadth-first institute a good way to foster the interest that he calls for? Does a depth-first curriculum necessarily alienate or intimidate novices? How long — a day, a week, a semester — do you need for a breadth-first vs. a depth-first model of DH teaching? It will be interesting to see in the years to come how these programs grapple with the depth-first/breadth-first decision, or whether they develop a workable hybrid of the two.