Reframing Failure in Digital Scholarship

Chapter 8 Can we be failing?

Joris J. van Zundert

It cannot be failure that is bugging me. Failure is the bread and butter of research and science. Trial and error. Philology is scholarship for the brave. We mostly lack real evidence when studying historical texts. We make do with the scraps we have. We glue them together with major leaps of conjecture and imagination, and we create a story about the text that we give as an explanation of it. We build inverted pyramids of reasoning and hope they will stand. We accept that we know so little about the texts we find and their historical contexts that it would be just as well to throw them at the wall and see how the spectacular explosion of characters from the manuscript might make sense. What sticks?

In this sense philology is all about failure. We know our task is to uncover what Jerome McGann called this ‘impossible truth’, ‘the obligation to protect human memory from neglect and erasure – as much of it as possible’ (McGann 2015). An impossible truth not only because we are fighting a lack of evidence. We also cannot trust the evidence we have. Most historical texts bring us fake news, conjured up by authors in service of the politics of whoever paid them. Audaciously we assert that we can tell true intent from intentional mystification. As if we can understand a person who is several centuries remote from us, who lived in a society whose rationale would appal us, and whose faith, beliefs, motivations and morality would deeply confuse us.

Knowingly we set our philological selves up for failure. But there is method in the madness. Wahrheit und Methode, n’est-ce pas? So, we ward off uncertainty and subjectivity, those devils of interpretation. Because we have method. We follow Lachmann (Chiesa 2020), Maas (1958), Greetham (1994), Tanselle (1995), Mathijsen (2003), Sahle (2013), or any other we might know and choose as our Virgil on our journey through this hell of scholarship.

And this is what we still do in the digital era. Our texts may have been transmedialised (Sahle 2016, 32) into bundles of ones and zeroes, but we treat them as the books we still see in them. Online we rebuild the world we know. We produce books. Page through the digital scholarly editions that we have produced (for instance via Steinkrüger et al. 2013) and one finds that we still produce books. The convention is a scan of a page side by side with a transcription, while annotations pop up or appear in an additional column. Do not be fooled by added media, pictures and sounds as illustrations. They are books. They are page based. They have indices, in a time of full-text search.

For sure these are marvellous scholarly achievements! I will be the last to condemn them. We should celebrate these wonderful fruits of hard and precarious digital scholarly labour. More praise unto them. They brought us wonderful things. Access anywhere, anytime, to texts long hidden in dusty archives and dark corners of library repositories. We can argue with the transcription now that the facsimiles are before our eyes, we can freely search the texts, and we may be supported by an excellent register or faceted search that quickly takes us to mentions of persons, places, events, and what have you. It is all a glorious marvel.

Still, I wonder. Is this new world of digital data and computational power merely a new way of presenting the same document to us? Or should it push our envelope harder? When we entered this realm of digital scholarly editing in the early 1990s, what did we expect? Did we really assume it would just be a new way of finding the text and reading it? The Poughkeepsie Principles (Fortier 1987), on which the Text Encoding Initiative (TEI) was built, read: ‘The guidelines are intended to provide a standard format for data interchange in humanities research.’ What fascinates me here is that word ‘interchange’. It suggests some dynamic, right? We see scholars ‘interchanging’ the texts they have created in TEI. It suggests something like ‘I’ll let you use my XML source if you’ll let me use yours’. And fair enough, the TEI-XML download button is more or less a must-have for state-of-the-art web-based digital scholarly editions.
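
To make concrete how little that baseline amounts to, here is a minimal sketch of the reuse the download button affords, assuming an edition that exposes its TEI source at some URL. The URL below is hypothetical and the extraction is deliberately naive; it is an illustration, not a recommended workflow.

```python
# A minimal sketch of the reuse that the TEI-XML download button affords:
# fetch someone else's TEI source and pull out the plain text for further use.
# The URL is hypothetical; any edition exposing its TEI-XML would do.
import urllib.request
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"
url = "https://example.org/edition/chapter1.tei.xml"  # hypothetical edition

with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

# Collect the text content of every paragraph element in the edition.
paragraphs = ["".join(p.itertext()).strip() for p in tree.iter(TEI_NS + "p")]
print(f"Reused {len(paragraphs)} paragraphs from someone else's edition.")
```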

But that’s it? We went through some three decades and more of digital technology development for a download button? That seems underwhelming.

Let’s back up a little: we have wonderful worldwide access, anytime, we can search, and we can link to other resources. Great! Just saying: with PDF or ePub we would have had that too. Which means we have built infrastructure and text models at very high cost to add a download button for XML sources. Mind you, the cost and effort are staggering (cf. Beavan et al. 2025; Drucker 2021; Hinsen 2019; Otis 2023; Smithies et al. 2019). PDFs or ePubs merely require some server and maybe a tailored, tinkered-with off-the-shelf database if you want some smart library-lookalike catalogue access. Really mundane requirements that can be supported by any decently skilled IT support staff. But to power our XML-y workflows we demand dedicated web-based XML production and publication environments. It is not just oXygen and TEI Publisher we want. We convinced our institutes that there should be on-site integrated workflows for real serious digital scholarly editing: XML databases, document servers, XSLT transformers, document templates, encoding schemas, web servers, web development frameworks, and so forth. The stack is deep and wide. To keep things up and running, and to keep pace (at least a little bit) with GUI and UX design fads and fashions, we need skilled data stewards, web engineers, frontend developers and GUI designers. And we all suffer from ‘not invented here’ syndrome: the list of reinvented scholarly editing platforms and supporting large-scale digital research infrastructures is long, because every institution wants its own. If you take out your pocket calculator and start adding, your screen might not be wide enough to report the total cost of the duplicated effort in the name of sustainability. Per annum. For a download button.

In my own cluster of institutes I know of at least three parallel attempts at providing some scholarly editing platform, workflow, tool set, or whatever you want to call them. Please do not let my director read this. In defence of the three: they all have different and well-motivated angles of attack. One is based on standoff markup, embraces the IIIF model philosophy, and believes in distributed services for text, data and annotations. I hope they bear in mind that the devil is in the detail, such as the base encoding of a text file switching from UTF-8 to CP-1252. Another puts the money on standardised TEI models and templates for different categories of text, trusting that 80 per cent of digital editions can be covered by 20 per cent of the TEI guidelines. They might be in for a surprise when they discover how the Pareto principle actually plays out in scholarly digital editing: 80 per cent of the scholarly value (and likely even more) sits in the 20 per cent of any edited document that does not fit the template. The third effort promises a next-generation markup language supporting overlap and the kitchen sink, backed by a new computational grammar built from the ground up. The last one is a daring and fundamental move, no doubt revolutionary if successful. But so was Livingstone’s mission, I presume. These attempts follow at least three earlier attempts made since 2004, all of which failed, one of them of my very own making (Beaulieu et al. 2012).
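
For readers who wonder why a switch of base encoding deserves mention as a devilish detail: standoff annotations anchor themselves to the text by offsets, and offsets are only as stable as the serialisation they were computed against. A minimal sketch, with an invented sample text and invented offsets:

```python
# Minimal sketch of the standoff-markup pitfall: an annotation anchors the
# word 'koningen' by offsets into the base text. Character offsets hold, and
# so do byte offsets in CP-1252 (one byte per character), but the same byte
# offsets silently drift once the file is re-encoded as UTF-8.
# Sample text and offsets are invented for illustration.

text = "Dá Fríske koningen"
annotation = {"start": 10, "end": 18}  # intended to cover 'koningen'

print(text[annotation["start"]:annotation["end"]])              # koningen
print(text.encode("cp1252")[annotation["start"]:annotation["end"]]
      .decode("cp1252"))                                         # koningen
print(text.encode("utf-8")[annotation["start"]:annotation["end"]]
      .decode("utf-8", errors="replace"))                        # e koning
```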

I am not arguing that these attempts are misinformed, misguided or failures waiting to happen. To be perfectly honest, I do think most of them will fail for the most part, but beautiful knowledge and lessons will be learned in the process. The salient point for me is that these continual attempts all over the globe do signal that in the domain of digital scholarly editing we are not in it merely for a download button. Apparently, we feel that there is more to digital editions than the flat surface of the page and the screen. Given the enormous and repeated efforts, I suspect there is some subliminal motivation other than just the obligatory ‘access’ and ‘preservation’. Are we subconsciously convinced that our digital editions ought to be more than just flat representations of a text stream, living isolated on web pages?

If I can take a stab at what is in store, I would argue that we dream of digital scholarly editions that are dynamic knowledge sites. Editions that are much closer to the ideals that Vannevar Bush (1945), Douglas Engelbart and William English (1968) and Theodor Nelson (1993) had in mind, and of which Tim Berners-Lee’s variant, known as the Web, is a mere shadow. Digital editions that allow themselves to be connected to other digitally represented knowledge wherever it is available. I do not mean mere hyperlinks pointing out from a digital edition’s text towards other sites, but semantically explicit and scholarly meaningful relations connecting very specific loci in one edition to very specific loci in other digital editions or resources. The scholarly meaningfulness of those relations should be expressed explicitly by the scholar or by the computational tools the researcher applies to create such links. So, imagine links that express things like ‘This paragraph in this text source here in Tokyo University’s Komaba Library says that this verse in this digital transcript of this Old Frisian law text on the site of the Frisian Academy of Arts and Sciences in Leeuwarden in the Netherlands expresses that Frisian kings recognised private ownership of pastures’. And, for another example: ‘This digitised seal in the collection of Sankt Gallen Abbey’s library evidences that Frisian kings did not acknowledge private ownership of pastures’. One can see that there is a logical contradiction between those two. Now, imagine that it would be possible for anyone to express such interpretations by creating links between specific loci deep within arbitrary online scholarly resources. Scholars would be able to express competing conjectures, and we would be able to actually calculate the relative support for one or the other automatically. We would, in effect, be building a giant scholarly reasoning engine. We would be explicitly and collectively expressing and connecting the knowledge that we create every day, making our knowledge more insightful for a common understanding.

Now, before you run away screaming from this ‘commune-like enforced dictatorship of majority interpretation’: no, this idea does not assert that there must be one common authoritative interpretation. Anyone can express any interpretation. The idea would merely be to express interpretations in a formal and explicit language that links interpretations to supporting evidence. One could even choose not to link to supporting materials at all, in order to express a pure conjecture. All that would be perfectly fine. But the major pay-off of such a system would be that it enables us to really compute how strong our individual and aggregate reasoning is. Automatically being notified of new corroborating or opposing evidence would be a bonus thrown in for free.
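
To make the idea slightly less abstract, here is a deliberately naive sketch of what such links might look like at the data level, using the Frisian example above. Everything in it is invented for illustration: the URLs, the relation names and the crude tally are placeholders, not a proposal for an actual model.

```python
# Deliberately naive sketch: scholarly assertions as typed links between
# precise loci in independent online resources, plus a crude tally of
# relative support. All URLs and identifiers below are invented; a real
# system would need stable, deeply addressable loci and a community-agreed
# vocabulary of relations.
from dataclasses import dataclass

@dataclass
class Link:
    claim: str         # the interpretation being asserted
    locus: str         # deep link to the evidence in some online resource
    stance: str        # 'supports' or 'contradicts'
    asserted_by: str   # the scholar or tool that created the link

CLAIM = "Frisian kings recognised private ownership of pastures"

links = [
    Link(CLAIM, "https://example.org/komaba/ms123#para-7", "supports", "scholar A"),
    Link(CLAIM, "https://example.org/fryske-akademy/old-frisian-law#verse-12", "supports", "scholar A"),
    Link(CLAIM, "https://example.org/stiftsbibliothek-sg/seal-45#impression", "contradicts", "scholar B"),
]

# The arithmetic is beside the point; the point is that the links are
# explicit enough to be computed over at all.
pro = sum(link.stance == "supports" for link in links)
contra = sum(link.stance == "contradicts" for link in links)
print(f"{CLAIM!r}: {pro} supporting vs {contra} opposing loci")
```

In a sketch like this, being notified of new corroborating or opposing evidence reduces to watching for new links that target the same claim.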

Obviously, these possibilities are far beyond our current digital scholarly editing ecosystem. But they are very much not beyond our current digital and computational technologies. Although I am not a big fan of RDF and semantic web technology for boring but fundamental technical reasons (hint: it is not read and write, but read-only), the technology shows that interpretative language can be formalised. Knowledge graphs, which have all the dynamics we speak about, are a technology now so well understood that they make serious computer science people yawn. Moreover, even people within our community are theorising about and experimenting with these more advanced knowledge representations (Vogeler 2019; Andrews et al. 2024; Cugliana et al. 2024).
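
As an illustration of what ‘formalised’ means in practice, here is a minimal sketch of the first Frisian claim expressed as explicit triples with rdflib. The vocabulary and resource URIs are invented; this is meant as an existence proof of the kind of formalisation referred to above, not as an endorsement of RDF as the right substrate.

```python
# Minimal sketch: one interpretative claim made machine-readable as triples.
# The vocabulary (EX) and all resource URIs are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("https://example.org/vocab/")

g = Graph()
claim = URIRef("https://example.org/claims/frisian-pastures-1")

g.add((claim, RDF.type, EX.Interpretation))
g.add((claim, RDFS.label,
       Literal("Frisian kings recognised private ownership of pastures")))
g.add((claim, EX.assertedAbout,
       URIRef("https://example.org/fryske-akademy/old-frisian-law#verse-12")))
g.add((claim, EX.supportedBy,
       URIRef("https://example.org/komaba/ms123#para-7")))
g.add((claim, EX.contradictedBy,
       URIRef("https://example.org/stiftsbibliothek-sg/seal-45#impression")))

print(g.serialize(format="turtle"))
```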

But can we be failing? So far, I think no digital scholarly edition has resulted in any significant epistemological gain. Again, improved access and discoverability are great, but they scale what we already did; they do not change our praxis. For all the angle brackets that we have produced in the last four decades, we have not changed how we think about how we come to know things in philology and textual scholarship. Apart from an add-on course in TEI-XML, any newly arrived student can still get ahead in the field by taking Greetham to hand. Is this a problem? In some ways, no. We might choose to be perfectly fine with being a conservative field, and in the end we are also about the conservation of intellectual history, so why not? On the other hand: should we not also explore the potential of the textual tools that the twenty-first century throws at us? Do we have an intellectual obligation to investigate whether they provide us with better means of creating and representing our editions and the knowledge we create in the process?

I often worry that the field has been locked in by one technology (XML) and one model (TEI), and that we are victims of our own huge success. Neither XML nor TEI is particularly amenable to networking and to combining information dynamically across digital sources. It is not in the bones of these technologies; it is not their philosophy. But that ‘interchange’ thing keeps bugging me. I think the motivation behind it must have been genuine and deeply felt: the possibility of having different scholarly sources and resources interact. To have them ‘talk’, ‘inform’ and ‘enlighten’ each other, so that the sum would return more knowledge than the individual nodes represented. But somehow we turned our model and markup into a rather omphalocentric technology focused on representing one document in isolation: more like a static book, less like active knowledge.

If the aim is new ways to gain knowledge from philology by applying different techniques to model, connect, infer and evaluate information, then I think we are indeed failing. If this is truly the aim, we need to think vastly differently about our digital and computational scholarly editing infrastructure. Otherwise we are just building glorified digital shelves for our digitised books.

References

  • Andrews, Tara, Márton Rózsa, Katalin Prajda, Lewis Read and Aleksandar Anđelović. ‘Re-Evaluating the Eleventh Century through Linked Events and Entities’, Historical Studies on Central Europe 4, no. 1 (2024): 217–45. https://doi.org/10.47074/HSCE.2024-1.12.
  • Beaulieu, Anne, Karina van Dalen-Oskam and Joris van Zundert. ‘Between Tradition and Web 2.0: eLaborate as a Social Experiment in Humanities Scholarship’. In Social Software and the Evolution of User Expertise: Future Trends in Knowledge Creation and Dissemination, edited by Tatjana Takševa, 112–29. IGI Global, 2012. Accessed 24 November 2024. https://pure.knaw.nl/portal/en/publications/between-tradition-and-web-20-elaborate-as-a-social-experiment-in-.
  • Beavan, David, Andre Piza, Stuart Gillespie, Cyara Buchuck-Wilsenach, Claire Bailey-Ross, Jake Bickford, Edward Chalstrey, et al. ‘Towards a National Research Software Engineering Capability in Arts and Humanities Research: A Roadmap’. The Alan Turing Institute. Zenodo. 2025. https://doi.org/10.5281/zenodo.15083396.
  • Bush, Vannevar. ‘As We May Think’, The Atlantic, 1 July 1945. Accessed 24 November 2024. http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/.
  • Chiesa, Paolo. ‘Principles and Practice’. In Handbook of Stemmatology, edited by Philipp Roelli, 292–356. De Gruyter, 2020. Accessed 24 November 2024. https://www.degruyter.com/view/title/569065.
  • Cugliana, Elisa, Aengus Ward, Joris J. van Zundert, Andreas Kuczera and Max Grüntgens. ‘Computational Approaches and the Epistemology of Scholarly Editing’, International Journal of Digital Humanities 6 (2024): 169–88. https://doi.org/10.1007/s42803-024-00088-z.
  • Drucker, Johanna. ‘Sustainability and Complexity: Knowledge and Authority in the Digital Humanities’, Digital Scholarship in the Humanities 36, Supplement_2 (2021): ii86–94. https://doi.org/10.1093/llc/fqab025.
  • Engelbart, Douglas C. and William K. English. ‘A Research Center for Augmenting Human Intellect’. In AFIPS Conference Proceedings of the 1968 Fall Joint Computer Conference, 395–410. AFIPS, 1968. Accessed 24 November 2024. https://dougengelbart.org/content/view/140/#.
  • Fortier, P.A. ‘The Preparation of Text Encoding Guidelines – Closing Statement of the Vassar Planning Conference’. Paper presented at Text Encoding Initiative, Poughkeepsie, 13 November 1987. https://tei-c.org/Vault/PC/pcp1.tei.
  • Greetham, David. Textual Scholarship: An Introduction. Garland Reference Library of the Humanities 1417. Garland Publishing Inc., 1994.
  • Hinsen, K. ‘Dealing With Software Collapse’, Computing in Science & Engineering 21, no. 3 (2019): 104–8. https://doi.org/10.1109/MCSE.2019.2900945.
  • Maas, Paul. Textual Criticism. Translated by Barbara Flower. Clarendon Press, 1958.
  • Mathijsen, Marita. ‘Een knieval voor de luie lezer? Hertaling als enig redmiddel voor historische literatuur’. Nederlandse Letterkunde no. 2 (2003): 116–29. Accessed 24 November 2024. https://www.dbnl.org/tekst/_ned021200301_01/_ned021200301_01_0010.php.
  • McGann, Jerome. ‘Truth and Method: Humanities Scholarship as a Science of Exceptions’, Interdisciplinary Science Reviews 40, no. 2 (2015): 204–18. https://doi.org/10.1179/0308018815Z.000000000113.
  • Nelson, Theodor Holm. Literary Machines. The Report on, and of, Project Xanadu Concerning Word Processing, Electronic Publishing, Hypertext, Thinkertoys, Tomorrow’s Intellectual Revolution, and Certain Other Topics Including Knowledge, Education and Freedom. First published 1981. Mindful Press, 1993.
  • Otis, Jessica. ‘ “Follow the Money?”: Funding and Digital Sustainability’, Digital Humanities Quarterly 17, no. 1 (2023): 000666. Accessed 24 November 2024. https://www.digitalhumanities.org/dhq/vol/17/1/000666/000666.html.
  • Sahle, Patrick. Digitale Editionsformen. Zum Umgang mit der Überlieferung unter den Bedingungen des Medienwandels. 3 Teile [Finale Print-Fassung]. Schriften des Instituts für Dokumentologie und Editorik. Books on Demand, 2013. Accessed 24 November 2024. https://kups.ub.uni-koeln.de/5352/.
  • Sahle, Patrick. ‘What Is a Scholarly Digital Edition?’. In Digital Scholarly Editing: Theories and Practices, edited by Matthew James Driscoll and Elena Pierazzo, 19–40. Open Book Publishers, 2016. Accessed 24 November 2024. http://www.openbookpublishers.com/reader/483.
  • Smithies, James, Carina Westling, Anna-Maria Sichani, Pam Mellen and Arianna Ciula. ‘Managing 100 Digital Humanities Projects: Digital Scholarship & Archiving in King’s Digital Lab’, Digital Humanities Quarterly 13, no. 1 (2019): 000411. Accessed 24 November 2024. https://www.digitalhumanities.org/dhq/vol/13/1/000411/000411.html.
  • Steinkrüger, Philipp, Ulrike Henny-Krahmer and Frederike Neuber. ‘Editorial – State of Play’, RIDE – A Review Journal for Digital Editions and Resources (2013). Accessed 24 November 2024. https://ride.i-d-e.de/about/editorial/.
  • Tanselle, G. Thomas. ‘The Varieties of Scholarly Editing’. In Scholarly Editing: A Guide to Research, edited by D.C. Greetham, 9–32. The Modern Language Association of America, 1995. Accessed 24 November 2024. https://cmohge1.github.io/lrbs-digital-scholarly-editing/readings/tanselle_varieties_of_editing.pdf.
  • Vogeler, Georg. ‘The “Assertive Edition”’, International Journal of Digital Humanities 1 (2019): 309–22. https://doi.org/10.1007/s42803-019-00025-5.
