Chapter 2 Risk, failure and the assessment of innovative research
Digital humanities research may be organised in numerous different ways within institutions – in departments, centres, hubs, institutes – but the unit of the laboratory is one that has been widely adopted. And, ‘While the design lab or art studio has had a great influence on many digital humanities labs, the model of the science laboratory is most often invoked by those that imagine the digital humanities lab-space’ (Earhart 2015, 392). Central to our understandings of such a laboratory is that it is a place where experiments happen, drawing on explicitly scientific methods. New knowledge is generated through experimentation, through trial and error, through what has been described as ‘thinkering’ – ‘the action of playful experimentation with technological and digital tools for the interpretation and presentation’ (C2DH, n.d.). Such experimentation necessarily goes hand in hand not just with the potential for failure but with its inevitability. Mistakes are made. A 3D print of a complex object collapses before it is finished, the errors in a script only become apparent twenty-four hours after it was set running, a web scraper gets caught in a crawler trap, the wrong choice of stop words turns out to exclude an important set of entities during text analysis. This is the everyday experience of doing digital humanities research. It can, to say the least, be annoying, but it can also be a highly valuable learning experience or perhaps even generate new insights. The sharing of these setbacks and failures with other members of a team can prevent them from making the same mistakes, might help to explain a similar problem that has been holding them back, or might trigger a new idea or train of thought.
So far so good. This is the kind of failure that we can all get on board with. It is an acceptable, even desirable part of the research process. It is generally only talked about within a team, and often occluded by public descriptions of seamless pipelines and workflows, but there would be no negative consequences were others to hear about it. But what happens when experimentation is scaled up, when it is not at the level of an individual task but at the heart of a large research project for which you are seeking funding? The stakes are suddenly much higher, both at the research proposal stage and subsequently (should you be lucky enough to get through peer review) during reporting on project delivery and outcomes. How can the possibility of failure be negotiated; how can it be acknowledged as inextricable from experimental processes, as an enabler of innovation?
To read the funding calls published by research support agencies is to be struck by an enthusiasm for and interest in experimentation, innovation and interdisciplinarity. A quick glance at recent calls published by UK Research and Innovation (UKRI) open to arts and humanities researchers reveals the presence of words such as ‘ambitious’, ‘transform’, ‘alternative’, ‘innovative’, ‘breakthrough’, ‘disruptive’ and ‘transcend’ (UKRI, n.d.). Researchers are encouraged to think big, to push at the boundaries of current knowledge and to devise projects that will be transformative both in form and in findings. But how does this aspiration hold up in the face of traditional peer-review criteria that include value for money, feasibility and track record of success? These are, of course, important factors to consider – especially for publicly funded research – but within the strict grading systems operated by many funding bodies, they can serve almost systemically to weed out innovation and experimentation. What are the mechanisms for evaluating the feasibility of a method that cannot be fully explained at the outset of a project because it will only develop as the research unfolds? This might be the case, for example, in a project that draws on systemic action research. It is possible to outline the broad principles of the action research cycle – plan, act, observe, reflect, plan forward – but not precisely where they will lead in the context of a particular project (Heron and Reason 2006). The lack of fixity is precisely the point, but it carries with it risk, and with risk comes the potential for failure. We are encouraged to ‘manage out’ risk; if it is acknowledged at all, it is in the reassuring form of a carefully weighted risk register. But in attempting to remove risk completely we are in danger of stifling innovative practice, discouraging interdisciplinary collaboration and holding back the development of new forms of scholarly output.
One way of addressing this challenge is to devise funding schemes that support straightforwardly exploratory and experimental research. The Curiosity Awards scheme launched by UKRI in 2023, for example, is ‘intentionally flexible’ and encourages proposals concerned, among other things, with ‘high risk and high potential concepts’ (a rare positive mention of risk!) and ‘scoping and piloting … early-stage proof of concept for ideas or change of direction’ (UKRI 2023). As might be expected, the available funding is relatively limited (up to £100,000) but that in itself removes a degree of pressure from applicants and opens the door to generative failure as a reported research finding. Such schemes will work most effectively if they are accompanied by peer-review training processes that encourage recognition of risk, uncertainty and failure as inherent in some of the most exciting and challenging research and not as an automatic indicator of unfeasibility. There has to be scope to admit that risk is present, and that a project may fail, but that it is worth supporting despite and perhaps even because of that fact.
The connection between ‘high risk’ and ‘high potential’ in some areas of research is clearly beginning to be recognised. Encouraging risk in principle, however, is not the same thing as accepting it and its companion – failure – in practice. How can institutions, funding bodies, research teams and individual researchers work together to create spaces where it becomes not just possible but essential to report on the things that have failed, and what was learnt from that failure, as well as to celebrate project successes? This is hard to do within evaluation and reporting structures that are concerned with metrics – how many journal articles were published by a project team, how many conferences did they attend, what software and datasets were created, how many people interacted with project outputs of varying kinds? There may often simply be no mechanism for telling the story of a project in official reporting, of contextualising both achievements and failure to achieve intended goals, of exploring the innovation that can arise from having to redesign an ongoing project that is not quite working.
There are, though, some positive developments in relation to the evaluation of research, including the acceptance not just that ‘Research assessment shapes research culture’, but that ‘Funders can initiate positive culture change through careful design and implementation of research assessment’ (Global Research Council 2020, 5). Initiatives such as the Declaration on Research Assessment (DORA) and the Coalition for Advancing Research Assessment (CoARA) encourage more qualitative approaches to the assessment of research and its impact, allowing complex narratives about research trajectories to be constructed, valued and evaluated. The dial is beginning to move, but it takes a long time to change a research culture, and perhaps even longer to overcome institutional aversion to risk and the public acknowledgement of failure.
Finally, it is important to recognise that the option to talk openly about risk and failure is not available to everyone, and that the potential consequences may differ depending on career stage, gender, ethnicity or a range of other factors. Established researchers are much more able to talk about their failures, about their rejected grant proposals and publications, about the projects that had to be steered back on to the rails, without damaging their careers. This brings a responsibility to do precisely that, while acknowledging that such openness reflects privilege. We can all, however, respond to consultations from funding bodies looking to design and implement responsible research assessment processes. We can all work within our institutions to raise awareness of what it means to be a DORA signatory in both principle and practice. We can all make space within our teams for the sharing of failure without judgement or consequence. It may take time, but we can change research cultures and bring failure openly into our research and practice as a positive force.
References
- C2DH. ‘Thinkering’, n.d. Accessed 25 June 2024. https://www.c2dh.uni.lu/thinkering.
- Earhart, Amy. ‘The Digital Humanities as a Laboratory’. In Between Humanities and the Digital, edited by Patrik Svensson and David Theo Goldberg. MIT Press, 2015.
- Global Research Council. Responsible Research Assessment. 2020. Accessed 25 June 2024. https://globalresearchcouncil.org/fileadmin/documents/GRC_Publications/GRC_RRA_Conference_Summary_Report.pdf.
- Heron, John and Peter Reason. ‘The Practice of Co-operative Enquiry: Research “With” Rather than “On” People’. In Handbook of Action Research, edited by Peter Reason and Hilary Bradbury, 179–88. Sage Publications, 2006.
- UKRI. ‘AHRC Responsive Mode: Curiosity Award Round One’. 2023. Accessed 25 June 2024. https://www.ukri.org/opportunity/ahrc-responsive-mode-curiosity-award/.
- UKRI. ‘Funding Finder: AHRC’, n.d. Accessed 25 June 2024. https://www.ukri.org/opportunity/?filter_council%5B%5D=814.