Reframing Failure in Digital Scholarship: Chapter 4. Software at play



Chapter 4. Software at play

David De Roure

Software is a pervasive part of digital scholarship. Here we reflect on its role in the affordances of the digital, how it enables innovation, and its vital role in what we will describe as play within new scholarly practice. Narratives and practices around software, experimentation and innovation have been shaped by the beneficial experience of failure, and we shall argue for permission to be creative and permission to fail.

Working with digital data and tools brings a set of affordances benefitting humanities scholarship. One of these affordances is that ‘digital’ doesn’t respect historical disciplinary boundaries, and indeed it enables us to cross them. Practices around experimentation and innovation can flow across too, and we will look in particular at the concept of reproducible research, focusing on software as a medium for innovation – while also recognising a 'software waste cycle'.

The Digital and Computational

The affordances of ‘the Digital’ are well articulated (Naughton 2014). One is the ability to bring together objects that could not come together physically, in digital juxtaposition or superimposition. Democratised access is an affordance in terms of access to content and also in terms of analysis, illustrated for example by crowdsourcing that can engage volunteers at scale. Most pervasively we have the common medium of digital data – or the bitstream – to represent all forms of cultural artefact, blurring the boundary between born-digital and remediated content.

An immediate critical note is needed: we are not asserting that the digital representation of an object is a replacement for its material form – for example, there is typically a loss of information associated with any selection or encoding. Furthermore, no infrastructure is agnostic, and this is as true in the digital as the analogue sphere. A common presumption that ‘digital’ is somehow more authoritative or complete must be challenged. In other words, disaffordances of the digital can be identified too, and this balance is surely necessary in any discussion about failure.

Scale and automation are well-known digital affordances – in fact computational ones: digital data opens the door to today's computational approaches and prepares us for tomorrow's. Less widespread perhaps is simulation, which enables us to explore ‘what if’ scenarios in our research inquiry; indeed it lets us experiment and fail safely. We might look at this affordance as close reading by digital prototyping, a kind of 'experimental' humanities (De Roure and Willcox 2017). We see adoption of simulation in Augmented and Virtual Reality as a humanities method too, for example in historical reconstruction of buildings and 3D modelling of artefacts. Because these digital artefacts are easy to construct, we can readily try out others in our process of enquiry – and in fact this process can be supported by automation.

Software

All these affordances have digital content in common, and they are enabled by software tools. What is important about these tools is that we are able to create them, to adopt and to adapt them. This is a powerful evolution in our research infrastructure, and prompts the suggestion that programming is a crucial affordance of the digital too. No infrastructure is agnostic, so available software tools are a lens which might lock us into particular modes of analysis – we must never forget this, but knowing it will help us address it because software is a particularly versatile and adaptable instrument.

We note too that software can itself be the subject of critical discourse, and its presence in the archive – even if it no longer works – is evidence of our understanding, aims and methods. There is excellent scholarship around early software (Berry and Marino 2024), and what is now termed Digital Humanities has a history as Humanities Computing, or even non-numerical computing (Michie 1963).

However, software can be hazardous too. Unsworth articulates the role of software and eloquently discusses the ‘software waste cycle’ (Unsworth 2020):

Software developed by individual researchers and labs is often experimental and hard to get, hard to install, under-documented and sometimes unreliable. Above all most of this software is incompatible. As a result it is not at all uncommon for researchers to develop tailor-made systems that replicate much of the functionality of other systems, and in turn create programs that cannot be reused by others and so on in an endless software waste cycle.

This is a disaffordance, but we can do something about it. The Software Sustainability Institute (SSI) was founded in the UK in 2010, dedicated to improving software in research across all disciplines – indeed to addressing this waste cycle, helping to ensure benefit from funder investment in software creation.1 As well as providing training, SSI introduced the notion of the ‘Research Software Engineer’ (RSE): these are skilled colleagues who contribute to research by developing software and engaging with the problem. The RSE role has enjoyed significant uptake (Cohen et al 2021), and this spread extends to the arts too (Ma et al 2024).

Software and skills

SSI undertook a large-scale survey of the UK arts and humanities research community to establish practices and skills in the use of digital tools and software (Taylor et al 2022). The report highlights diversity of practice, evolving interdisciplinarity and collaboration, a wide spectrum of engagement, and the importance of communities of practice. Recommendations also identify skills and knowledge development, data management and sustainability, and supporting careers including early career researchers and research technical professionals.

Indeed, supporting communities of practice in software creation, documentation and training may be a particularly effective intervention in the Humanities context: it is a means of sharing practice from the 'early adopters' to build capacity. A strong example of this approach is the consortia in Huma-Num, the French Research Infrastructure for the Social Sciences and Humanities, where collective consultation defines communities and their technical resources.2

SSI itself has been sustained for 14 years to date, helped by its 'collaborate, not compete' ethos and wide disciplinary breadth. Its current work focuses on four of the software sustainability priorities we perceive today: capable research communities, widespread adoption of research software best practice, evidence-driven research software policy and guidance, and broadened access and contributions to the research software community.3 In this way, SSI is building resilience against failure into the research community, and addressing the software waste cycle.

The new primitives

While digital computational resources have grown over recent decades, so too has the engagement of citizens with the digital world – we were once a community of scholars with access to a small number of computers, now we are in a world of crowd and cloud.

Widely available 'compute' now supports our computational affordances, especially where we have data too. While this is not always the case in cultural heritage contexts, we can anticipate increasing application of large language models in aspects of humanities enquiry. Machine learning, and the creation and application of models, are now accessible tools for our digital practice.

It is interesting to reflect on the various affordances we have identified in the context of scholarly primitives (Unsworth 2000). Some of them might suggest, or at least align with, candidates for new primitives; for example modelling, prototyping, training a machine learning model, and crowdsourcing.

We suggest that another new affordance of the digital, and perhaps a scholarly primitive, is play (De Roure and Willcox 2020). Play is failing safely, and it enables innovation. Performance, feedback and revision can occur many times in an hour, a minute, or, with automation, faster still – perhaps the most basic of digital affordances is the ability to copy, paste and edit trivially. This is the evolution of practice at a pace unimaginable before readily available compute power.

Play encourages risk-taking and immediate learning from failure. We can test rapidly, for example trying an approach through using it with different data: does a method of analysing a corpus of eighteenth-century French literature reveal equally useful insights when applied to the same genre, but in nineteenth-century Italian? Play as a scholarly primitive is embedded in our scholarly practices and enables failure and innovation at the same time.
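The retesting described above can be sketched in a few lines of code. This is an illustrative toy, not a method from the chapter: the function and corpora are hypothetical stand-ins for the kind of analysis one might replay across different collections.

```python
from collections import Counter

def top_terms(corpus, n=3):
    """Count the most frequent words across a corpus (a list of texts)."""
    words = [w.lower().strip(".,;") for text in corpus for w in text.split()]
    return [term for term, _ in Counter(words).most_common(n)]

# Two toy 'corpora' standing in for the French and Italian collections.
corpus_a = ["the salon was full of wit", "wit and letters filled the salon"]
corpus_b = ["the opera moved the crowd", "the crowd adored the opera"]

# The same method, replayed on different data in moments.
print(top_terms(corpus_a))  # most frequent terms in corpus A
print(top_terms(corpus_b))  # most frequent terms in corpus B
```

The point is not the analysis itself but the speed of the cycle: a method written once can be rerun against new material immediately, and a failed or uninteresting result costs almost nothing.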

We would argue that our enduring infrastructure should also be designed for play, for example through programmatic interfaces. These new affordances empower us to create new infrastructure ourselves, and to share it. This is powerful, though we must be aware of the responsibilities that go with it. When we create infrastructure we are party to it not being agnostic, and with disaffordances in mind too, we should be transparent about the costs: both machine learning and crowdsourcing can be expensive, in energy and the time of volunteers.

Software, innovation, and failure

Innovation in software tools and their application occurs right across the disciplinary landscape. This involves experiments in designing and applying digital methods, and these are possible due to the ease of creating and adapting software. Key to this process of experimentation is failure. This is not just tolerated but absolutely fundamental to the process, and the flexibility with which we can conduct these digital experiments means this can be low risk.

Does this apply equally to the sciences and humanities? To a great extent, the computational aspects are applicable, and practices can be shared. This is illustrated well by the practice of open source software development, and code sharing on GitHub. We might even view GitHub as a social edition of software, which we are all able to use and to curate, with a comprehensive provenance.

Relatedly, the notion of reproducibility is promoted in the digital research world and is directly relevant to Digital Humanities and Digital Scholarship. In this context, reproducibility (which we might think of as ‘computational reproducibility’) means making data and code available so that the computations can be executed again with equivalent results. This is hugely valuable in enabling outputs to be tested, compared and interpreted, and for new experiments to be conducted – supporting innovation.
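A minimal sketch can make computational reproducibility concrete. The experiment below is hypothetical and deliberately trivial: the essential moves are fixing the source of randomness so a computation can be re-executed with equivalent results, and recording provenance alongside the output so others can rerun it.

```python
import hashlib
import json
import random

def run_experiment(data, seed=42):
    """A toy analysis: draw a fixed-size sample from the data.
    Seeding the generator makes the computation replayable."""
    rng = random.Random(seed)   # seeded, hence deterministic
    return sorted(rng.sample(data, k=3))

data = list(range(100))
result_one = run_experiment(data)
result_two = run_experiment(data)    # a 're-execution' of the experiment
assert result_one == result_two      # equivalent results, as reproducibility requires

# Record provenance alongside the result so the run can be repeated and checked.
record = {
    "seed": 42,
    "data_digest": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
    "result": result_one,
}
print(json.dumps(record, indent=2))
```

Sharing the code, the data (or a digest identifying it), and the parameters together is what allows someone else to execute the computation again and compare outputs.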

However there is a stronger notion, and long history, of reproducible research in the sciences, and it is very much about failure. To test a proposed addition to the common knowledge, the scientific method requires independent replication of an experiment – by different researchers, in a different laboratory, typically implementing the same methodology. Replication is essential to give confidence in scientific results, hence repetition and failure are fundamental parts of the process.

This approach to independent replication is perhaps where the established cultures most differ. Science necessarily has a culture of documenting failure and of ‘repeating’, as part of an accepted and principled method. There are many comparisons to be drawn, but traditional humanities scholarship does not report failure in this way. We can imagine a scientific reproducibility approach to archival research: do two people get the same result from the same archive? Do we build confidence in a result by consulting more than one?

Perhaps then what we are proposing with the digital approaches is to support this process of interpretation and model building – augmenting our mental models with digital ones, which give us new means of observing, testing and sharing.

But for the digital part, we can innovate and play, fail and improve, and we can document this through open software practices. Because the software itself is shared, computational reproducibility offers a less independent form of replication than in science – it might be compared to sharing a lab, or at least apparatus. But this is useful in an ecosystem that is growing in communities of practice and rapidly evolving – where labs are scarce and shared ones welcome.

Permission to be creative: Permission to fail

At some point in the future we might lose the ‘digital’ prefix and digital scholarship will just be called scholarship, with the digital methods assumed where they are relevant. But the only way to get to that point is first to develop the digital methods, skills, capacity and career paths. We have shown that innovation in methods involves failing and learning from that failure. What is essential is to give people the power of play, permission to be creative and permission to fail, so that this innovation can occur.

References

Berry, David M. and Mark C. Marino. 2024. ‘Reading ELIZA: Critical Code Studies in Action’, Electronic Book Review, 3 November 2024. Accessed 25 November 2024. https://electronicbookreview.com/essay/reading-eliza-critical-code-studies-in-action/.

Cohen, Jeremy, Daniel S. Katz, Michelle Barker, Neil Chue Hong, Robert Haines, and Caroline Jay. 2021. ‘The Four Pillars of Research Software Engineering’, IEEE Software 38 (1). https://doi.org/10.1109/MS.2020.2973362.

De Roure, David and Willcox, Pip. 2020. ‘Scholarly Social Machines: A Web Science Perspective on our Knowledge Infrastructure.’ Proceedings of the 12th ACM Conference on Web Science (WebSci '20). Association for Computing Machinery. https://doi.org/10.1145/3394231.3397915

De Roure, David and Willcox, Pip. 2017. ‘Numbers into Notes: digital prototyping as close reading of Ada Lovelace’s “Note A”’. Paper presented at ADHO 2017, McGill University, Montréal. Accessed 25 November 2024. https://dh2017.adho.org/abstracts/540/540.pdf.

Ma, Bofan, Ellen Sargen, David De Roure, and Emily Howard. 2024. ‘Learning to Learn: A Reflexive Case Study of PRiSM SampleRNN.’ Paper presented at AIMC 2024, Oxford, 9-11 September 2024. https://aimc2024.pubpub.org/pub/fnpykfd.

Michie, Donald. 1963. ‘Advances in Programming and Non-Numerical Analysis.’ Nature 200. https://doi.org/10.1038/200641a0

Naughton, John. 2014. ‘Lecture: Getting from Here to There’. MedieKultur: Journal of Media and Communication Research 30 (57). https://doi.org/10.7146/mediekultur.v30i57.18609.

Taylor, Rebecca, Johanna Walker, Simon Hettrick, Philippa Broadbent, and David De Roure. 2022. ‘Shaping Data and Software Policy in the Arts and Humanities Research Community: A Study for the AHRC’. Zenodo. https://doi.org/10.5281/ZENODO.10518740.

Unsworth, John. 2020. ‘Scholarly Primitives 20 Years Later’. Video. Version 1.0.0. DARIAH-Campus. Accessed 25 November 2024. https://campus.dariah.eu/resource/posts/scholarly-primitives-20-years-later.

Unsworth, John. 2000. ‘Scholarly primitives: What methods do humanities researchers have in common, and how might our tools reflect this.’ Paper presented at the Symposium on Humanities Computing: Formal Methods, Experimental Practice, King’s College, London. May 2000. Accessed 25 November 2024. https://people.brandeis.edu/~unsworth/Kings.5-00/primitives.html.


1. See https://www.software.ac.uk/.

2. See https://documentation.huma-num.fr/humanum-en/.

3. See https://www.ukri.org/news/ukri-continues-investing-in-improving-research-software-practices/.

Pre-review version (January 2025)
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International