
Chapter 9 Doing, failing, learning: understanding what didn’t work as a key research finding in action research

Arran J. Rees

As a researcher and a practitioner in cultural heritage, and even as a person full stop, I’ve always had a natural inclination to just get stuck in, whether that means ignoring the instructions and learning by doing (and often re-doing when it goes wrong), joining professional committees and steering groups, or actively experimenting with scenarios I am thinking about in my research. Some of what I learn is based on what worked, but much more is based on what didn’t. Knowing what didn’t work, what was hard, what failed, has always been part of how knowledge is generated and exchanged, but too often this is confined to ‘candid conversations’ over coffee and in side discussions, facilitated through social and professional networks. It is rare to find an honest reflection on what didn’t work, and on what we can learn from it, in any of the formal presentations of findings that research and professional practice expect us to produce.

I get the sense that much of what we find written about in academic and professional contexts is sanitised. Is there perhaps a presumption that funders don’t want to know what you found difficult? That admitting you failed, or didn’t manage to do all that you’d planned, will somehow affect your chances of getting funding again or a promotion at work? More often than not, our written accounts tend to tell stories of how our findings align neatly with our theories, how newly minted research insights will revolutionise practices in museum education or exhibition design. I always found it difficult to connect with this type of writing, as it did not match my prior experience of working in the sector. I still remember the internal battles I had with myself during my PhD, wanting to push back against overly theoretical texts, branding them as out of touch with the complex, nuanced and contextual realities of working in cultural heritage; or as Duncan Grewcock notes about heavily theoretical studies of museums, ‘one could be forgiven for thinking that visiting a museum has not actually formed any part of the study’ (2014, 190).


Figure 9.1: A plan, act, observe, reflect action research cycle diagram. © Arran J. Rees.

However, over time I came to appreciate that, throughout my professional career, every activity, process or procedure I had undertaken had emerged from a theory or set of assumptions derived to give it meaning (McCarthy 2016). All of these had been informed by what did and did not work: an experiment attached to a theory. Therefore, if we are trying to make change, to influence practice by sharing what we learn in our research, it is about time we started sharing more openly what didn’t work and what didn’t support our initial theories, paying attention to the particular contexts we were in and learning from those experiences.

It was through seeking a way to do practical research, one that acknowledged my insights from working in cultural heritage and enabled my research experiences, both positive and negative, to inform my eventual findings, that I came to action research. Action research is best understood as a broad orientation towards inquiry rather than a specific methodology. The action research cycle sets up the research as a series of stages that repeat iteratively as you learn more, traditionally described as planning, acting, observing, reflecting and re-planning based on what you learnt during the first cycle (see Figure 9.1). It seeks different ways of knowing that are deeply related to practice, grounding itself in pragmatism and practical forms of knowing (Reason 2003; Brydon-Miller and Coghlan 2019). It is in this orientation towards practical knowing, and in taking care to understand contextual relevance, that my reflections on the idea of ‘failure’ come into play.

Failure as research development in action

Within the phases of an action research cycle there are multiple points at which failure happens, and where we have opportunities to treat these points as generative learning experiences. To illustrate this, I will discuss an example from the recent ‘Congruence Engine’ project, which used systemic action research to explore approaches to developing a united digital collection as part of the larger ‘Towards a National Collection’ funding programme.1 From the beginning, Helen Graham, my action research co-facilitator, and I sought to build structures for the project that would allow for a degree of self-organising – recognising that everyone would have a different starting place on the project, different forms of knowledge and different motivations for being involved. In doing this we embraced a complexity worldview in the project: one that ‘reminds us of the limits to certainty, [that] emphasises that things are in a continual process of “becoming” and that there is potential for startlingly new futures where what emerges can be unexpected and astonishing’ (Boulton et al. 2015). In understanding that things are in a ‘continual process of becoming’, we immediately open ourselves to seeing everything we do, whether it works or not, as part of that process.

It is difficult to begin from a place of collective understanding in an interdisciplinary project the size of Congruence Engine. The project itself is large and complex, with over fifteen organisations and around sixty people involved to varying degrees. We started out by developing a working group model, taking insights from literature on the merits and critiques of Holacracy’s forms of organising (Bernstein et al. 2016; Schell and Bischof 2021), on the role of distributed leadership and open boundaries between inquiry streams in systemic action research (Burns 2007) and on collaborating in cultural heritage (Knudsen and Olesen 2018). We specified that each working group needed a facilitator responsible for feeding the group’s work back into central team meetings which, in theory, would work to cohere a sense of direction for the whole project. We dedicated time at the beginning of the project to making space for divergence as well as congruence, and veered away from articulating collective agreement or understanding as the only way forward for the project. In doing this we were seeking to enable parallel and distributed action that would help unleash people’s potential to innovate, both practically and conceptually.

From the outset we were upfront that this approach was not static and would probably need to change as the project evolved. This was both an admission and an allowance of failure: we knew that some things wouldn’t work, but we had to start somewhere. Such iterative development, knowing something won’t work properly at first, is central to an action research approach. It builds failure in as part of the learning process.

Nine months in, after observing and reflecting on how the working groups were developing, we noticed that they were not as dynamic as we had hoped they might be. It is difficult to pinpoint exact reasons for this; there were no doubt multiple factors that inhibited their adoption, from the unfamiliarity of working in less directed and hierarchically structured ways, to the strength of disciplinary silos, but I want to focus on the hybrid nature of our project. Congruence Engine was hybrid from the outset – starting in November 2021, the opening project conference in February 2022 was, for some, the first in-person event of that ilk since the COVID-19 pandemic began. Pre-pandemic, both of the project’s action research facilitators had been facilitating in-person inquiry groups: being physically present, noticing the atmosphere of a room, paying attention to body language and to slight intonations in a person’s voice. These were important factors for us in facilitating, improvising and cultivating relationships in action research. Attempting to do this type of work, to manifest energy and dynamism through working groups, is much harder digitally, and this is potentially one of the stumbling blocks we encountered. Whilst there were many pre-existing connections between some colleagues on Congruence Engine, the majority of us had never worked together, and we came from different disciplines and working cultures. So, whilst we knew parallel work was happening, we realised we needed more active facilitation methods that acknowledged the differing environmental and communicative dynamics of digital and hybrid working. This was, in a more traditionalist sense, a failure of our first attempt.


Figure 9.2: The Congruence Engine Basecamp, showing the different project working groups. © Arran J. Rees.

Keen to adapt our approach while maintaining the core principles of parallel, distributed action led by those with whom the work resonates, we devised a new structure for the project that built in more directed, digitally led engagement with each other and more facilitated cross-project meetings. Within the structure of the action research cycle, this was re-planning based on what we’d learnt during the first phase of the project. Utilising the language of iterative, incremental and adaptive change, the action research facilitators actively sought to communicate that nothing was fixed and that structures could be changed throughout the course of the project without that change needing to be framed as the failure of one approach.

The project had been using Basecamp to try to facilitate the working groups in their original instantiation (Figure 9.2), but the platform had not enabled the open and generative form of collaboration we had hoped for.2 Research on undertaking action research in online environments points to the benefits of rapid and open communication services, whilst acknowledging the incompleteness of communicating online, but also to the potential to work both synchronously and asynchronously across digital platforms (Embury 2015). Knowing we had groups of researchers who worked on different days, on different fractional contracts, with Congruence Engine forming a larger or smaller proportion of their workload, we concentrated on developing both synchronous and asynchronous modes, opting to introduce Notion to support our work.3 For some, this added yet another digital system into the mix – something new to learn, something else to keep an eye on – but for others it opened up the project and enabled far more openly and collectively developed ideas to be shared. Alongside the introduction of Notion, we formed a new Action Research and Project Management group and designed a new central meeting to cohere all the investigations, embedding collaborative and live documentation as part of those meetings. One of the intentions here was to provide a new digital space for synchronous and asynchronous documentation and research development, alongside a more actively facilitated set of meetings to encourage sharing across strands of investigation more explicitly.

This second cycle of the project’s structure and management was no doubt disorientating for those who had adopted and got used to the working group model, but there was a lot to be learnt about the dynamics of a new interdisciplinary project and the types of structure needed to support a wide variety of researchers and practitioners. Whilst the structure of using Notion, supported by two ongoing meetings to actively facilitate the movement of the research inquiries, continued throughout the project, the approach was continually tweaked, acknowledging when a meeting, a form of organising or an approach wasn’t working as intended. Instead of seeing these adjustments as a series of failures, we saw each one as a step closer to understanding what was necessary for the context of our research.

Writing about action research as a mode for talking about failure

Action research is an inherently pragmatic approach to managing a research process, grounded in learning through practical experimentation. It sees any form of action as deeply contingent on the individual contexts you are operating within, which means that the strategies put in place for Congruence Engine were a product of the people, institutions and wider political, social and economic realities we found ourselves in. These will be different each time, and therefore the approach will need to be different each time. This orientation towards learning by doing, observing what works and what doesn’t, is an alternative to the negative connotations of failure. It is a generative reframing, enabled by the iterative and complexity-embracing nature of action research.

Difficulties arise when we are asked to write about the project methodology and give overviews or accounts of how the project developed. Judi Marshall and Peter Reason argue that it is easy for accounts of action research to become bland and prosaic, and urge that quality action research respect the aliveness of the method – its wildness, at times (Marshall and Reason 2008). With that in mind, how can we use the opportunity of an action research framing of failure to write productively about what didn’t go to plan?

As a mode of doing research, action research does not necessarily align with the models of success and significance found in traditional forms of academic inquiry. Rather than looking for ‘truth’ and replicability, it is concerned with engagement, dialogue, pragmatic and locally useful outcomes, and an ‘emergent, reflexive sense of what is important’ (Bradbury and Reason 2001). Here we begin to step away from reportage of what was successful towards what was learnt from moments of tension, from changes of process, from what was difficult or impossible to get done. Marshall and Reason sum up the potential of not knowing, or of failing to know, when they note that ‘if you think you know what you are doing as an action researcher, have it comfortably in hand, you are really not doing it, are not on a learning edge’ (Marshall and Reason 2008). Capturing the liveliness of research that pushes you and your collaborators to the edge of your learning and comfort zone is a sign that significant research and learning experiences are taking place.

In talking and writing about Congruence Engine, I have always attempted to be (painfully, at times) self-reflexive and critical about the urge to tell a good story – to gloss over the complexity, the tensions and the failures. There is always an underlying fear that surfacing the challenges and the improvisations you made along the way will eclipse the traditional success stories we wish to tell our funders. However, in using action research as a method, in demonstrating the challenges by defining what didn’t work, we have the opportunity to release ourselves from that urge.

Action research is an orientation towards research that undermines the traditionalist sense of failure. Instead of defining what didn’t work as a failure, action researchers see it as a development in their research: a finding that helps us move towards a better understanding of how to create a more effective research intervention during the next cycle of work. Knowing what doesn’t work is as useful as knowing what does in a complex and changeable environment.

Acknowledgements

My thanks to Helen Graham, my co-facilitator in the action research of the Congruence Engine project, whose insights into the action research methodology and the forms of investigation on which this chapter is based are forever helping me learn.

Notes

  1. Towards a National Collection (TaNC) was a five-year, £18.9 million investment by the UK Arts and Humanities Research Council, looking to understand the requirements of developing a united digital collection made up of the cultural heritage materials held in the UK’s museums, libraries, archives and heritage organisations. https://www.nationalcollection.org.uk/Discovery_Projects (accessed 24 November 2024).

  2. Basecamp is a project management tool by 37signals. It was used by the Congruence Engine team from March 2022 until the end of the project in January 2025.

  3. Notion is a documentation tool and wiki developed by Notion Labs Inc. It was used by the Congruence Engine project from January 2023 until the end of the project in January 2025.

References

  • Bernstein, E., J. Bunch, N. Canner and M. Lee. ‘Beyond the Holacracy Hype’, Harvard Business Review (July–August 2016): 38–49.
  • Boulton, J.G., P.M. Allen and C. Bowman. Embracing Complexity: Strategic Perspectives for an Age of Turbulence. Oxford University Press, 2015.
  • Bradbury, H. and P. Reason. ‘Conclusion: Broadening the Bandwidth of Validity: Issues and Choice-Points for Improving the Quality of Action Research’. In The Handbook of Action Research: Participative Inquiry and Practice, edited by P. Reason and H. Bradbury, 695–707. SAGE Publications, 2001.
  • Brydon-Miller, M. and D. Coghlan. ‘First-, Second- and Third-Person Values-Based Ethics in Educational Action Research: Personal Resonance, Mutual Regard and Social Responsibility’, Educational Action Research 27 (2019): 303–17.
  • Burns, D. Systemic Action Research: A Strategy for Whole System Change. Policy Press, 2007.
  • Embury, D. ‘Action Research in an Online World’. In The SAGE Handbook of Action Research, edited by H. Bradbury, 529–35. SAGE Publications, 2015. Accessed 24 November 2024. https://methods.sagepub.com/book/the-sage-handbook-of-action-research-3e.
  • Grewcock, D. Doing Museology Differently. Routledge, 2014.
  • Knudsen, L.V. and A.R. Olesen. ‘Complexities of Collaborating: Understanding and Managing Difference in Collaborative Design of Museum Communications’. In The Routledge Handbook of Museums, Media and Communication, edited by Kirsten Drotner, Vince Dziekan, Ross Parry and Kim Christian Schrøder, 205–18. Routledge, 2018.
  • Marshall, J. and P. Reason. ‘Taking an Attitude of Inquiry’. In Towards Quality Improvement of Action Research: Developing Ethics and Standards, edited by B. Boog et al. Sense Publishers, 2008.
  • McCarthy, C. ‘Theorising Museum Practice Through Practice Theory: Museum Studies as Intercultural Practice’. In The Routledge International Handbook of Intercultural Arts Research, edited by P. Burnard, E. Mackinlay and K. Powell, 24–34. Routledge, 2016.
  • Reason, P. ‘Pragmatist Philosophy and Action Research: Readings and Conversation with Richard Rorty’, Action Research 1 (2003): 103–23.
  • Schell, S. and N. Bischof. ‘Change the Way of Working. Ways into Self-Organization with the Use of Holacracy: An Empirical Investigation’, European Management Review 19 (2021): 123–37.
