Mis-evaluating digital scholarship in art and architectural history
The College Art Association has partnered with the Society of Architectural Historians on a Mellon-funded project to develop guidelines for evaluating digital scholarship in art and architectural history in promotion and tenure decisions.
This is excellent news.
Yesterday I finally got around to taking a survey sent by CAA on our individual perspectives on evaluating digital scholarship. I was largely impressed with the questions the survey asked, particularly with regard to various points of evaluation, such as peer review, documentation, archivability and sustainability, and making underlying data available.
However, one question gave me pause:
“In your opinion, what types of scholarly digital activities should receive consideration for tenure or promotion?”
- Applying data visualizations
- Using three-dimensional models
- Using geospatial models
- Scholarly blogging
- Creating digital research tools
- Creating teaching tools
My reply to this is best encapsulated in the closing comment that I added to my survey response (edited for clarity):
I would argue that it is seldom the particular medium of digital scholarship that should determine its appropriateness for scholarly evaluation. For example, GIS is not innately more or less worthy than other forms of data visualization. (I would also note that the list provided is a strange mix of relatively specific methods, such as 3D modeling, and highly general concepts, such as “digital research tools” (?). I am not sure how or why the list was conceived in the way it was, but it strikes me as an example of fuzzy thinking in an otherwise thoughtfully crafted survey.)
It is the critical rigor that scholars bring to their work, and the effectiveness and impact of the arguments or resources that result from that scholarship, that ought to be the guiding principle for evaluation. The central consideration when evaluating digital scholarship should be the extent to which scholars take full advantage of their digitally aided analytical methodology (plumbing its theoretical implications, creatively applying computational methodologies to humanistic questions, and thoughtfully balancing the humanistic object with its various representations) and/or publishing platform (using interoperable data standards, publishing reproducible code and datasets, carefully navigating choices about linear/non-linear argumentation and multimedia presentation). Evaluating the extent of this scholarly rigor will necessarily involve close working knowledge of methods and platforms, but again, this is distinct from making judgments based on the nominal identity of that method or platform. It is the relationship of given methods, be they digitally aided or not, to theory and conclusions that must lie at the heart of scholarly critiques.
This formulation may seem old hat to the “DH” crowd. But it clearly bears repeating, especially to CAA and SAH as they undertake the hard and valuable work of designing guidelines for evaluation.