Enthusiasm for humanities computing has recently renewed anxiety about whether and how humanities subjects should be represented as computable, quantifiable data. Yet such concerns are relevant to all humanists, not just the digitally-inclined. Whether we work with computational tools or with more traditional methods, we are pressured to hew to a rhetoric that partitions the qualitative and quantitative aspects of our work.
However, this partition is a fallacy. Rather than an immutable divide, the distinction between qualitative and quantitative models is more akin to the wave-particle duality of light: a model of coexistence in which the frame of either "wave" or "particle" is determined by the perspective and purpose of the researcher at a given instant.
We present experimental visualizations to suggest the interleaved, fractal-like nature of quantitative assumptions and arguments built into seemingly-qualitative humanities research. No question, not even one predicated on highly subjective interpretation such as the tracking of intellectual or artistic movements, is independent of a sense of comparative quantity that makes it possible for us to claim a trend is becoming more or less prevalent.
The following case studies suggest how blurry the qualitative/quantitative boundary truly is in humanities research, and how an iterative process of models, measures, and data can help humanist scholars reckon with the many quantitative assumptions that are interwoven in our work. It is this iterative, recursive process that we argue lies at the heart of any practice in humanistic experimental design.