Reframing Failure in Digital Scholarship

Chapter 19 Writing about research methods: sharing failure to support success

Anisa Hawes and Riva Quiroga

Traditional research outputs, such as academic articles, offer authors minimal scope to describe or reflect on their methods, prioritising instead the presentation of novel and successful results. This limitation reduces methodology to a sequence of linear steps, suggesting a straightforward path to achieving the desired outcome (Rickly and Cook 2017, 120–21). Such representations can give the impression that decision-making is simple and swift, downplaying the complexity of iterative processual adjustments and obscuring the fact that, often, failure plays a key role. Unsuccessful attempts, or challenges encountered along the way, are rarely discussed, keeping valuable insights about what didn’t work well hidden from our academic discourse. How might a form like the Programming Historian lesson provide space for researchers to reflect upon their methodological failures, and support others towards success?

In 2012, Programming Historian, a project that had emerged four years earlier as a series of blog posts for historians wanting to learn the programming language Python, was relaunched as an open-access, peer-reviewed scholarly methods journal. Its core goal was to create a venue for sharing the computational methods becoming increasingly relevant to humanities researchers, at a time when there were few opportunities to learn or practise programming, and the digital was under-explored in university curricula (Crymble 2018). By publishing lessons that sit at the intersection of research and pedagogy, Programming Historian allows authors to turn their work with specific methods into case studies for lessons that demonstrate how others could use those methods themselves (Walsh 2021).

Inadvertently, discussion of the methodological work rendered invisible by other academic journals became Programming Historian’s point of focus. By making explicit the iterative, sometimes non-linear path towards results, these methodological lessons represent an economy of community effort because methods are open and shared for reuse, and equally present an opportunity to transform experiences of failure into productive catalysts that help others succeed.

Our open publishing workflow has also developed iteratively. We use a public GitHub repository to host the code, files and data that configure our website and serve lessons to our readers. GitHub also provides an environment where we collaborate to develop lessons and manage our open peer-review processes.1 Working openly from the beginning, we understood how critical it is to ensure a safe space for contributors. Very early in the project’s history, we implemented an anti-harassment policy which now forms part of a broader code of conduct. This gives community participants the assurance that acknowledging the difficulties they face when learning about digital methods is welcomed, and the freedom to thoroughly scrutinise ideas, ask questions, make suggestions or request clarification when reviewing a lesson.2 We recognised this to be especially valuable considering that, at the time, many online forums such as Stack Overflow were notorious for their hostility toward inexperienced coders and people from marginalised groups (Hanlon 2018). The code of conduct has helped us enhance the positive aspects of working openly. By choosing an open peer-review process, we’ve reframed the role of reviewers as collaborators, credited in every lesson, who actively contribute to shaping its content. This has allowed us to welcome challenging questions during the review process as honest attempts to help a lesson achieve its full potential.

Embracing and encouraging writing that is honest about what is challenging, and that highlights – rather than hides – the parts of the research process that are difficult, became one of the characteristics that made Programming Historian distinct. As the project’s commitment to, and definition of, openness broadened to comprise global access and multilingualism, our understanding of how failure can support success broadened too. Creating effective learning resources for (and with) a global audience who have varying computational resources and internet connection speeds, as well as multilingual communities using different character sets and data examples, required us to increase our awareness of – and sensitivity to – circumstances where the failure of a lesson depends upon the technical and social context within which a method is applied. For example, during our open review process we have sometimes only realised that a particular tool requires a minimum amount of RAM when reviewers failed to run it on their own computers.3

We came to see a need to be more explicit about what readers actually need in order to work through a lesson successfully. And we recognised how the process of translating lessons was helping us to understand the limitations of tools and the barriers to methods. As we developed our community of translation practice, we were also discovering ‘localised’ failures – exposing ways that certain methodologies simply fail when we apply them in languages other than English.4 Over the years, we’ve encouraged editors to take a critical stance, supporting authors and translators to be upfront where there are limitations or barriers so that readers can learn to be aware of implicit bias and navigate ethical challenges.

During 2024, we published our 250th lesson. With the growth of our directory, the expansion of our contributor community and an increase in the number of lesson proposals we receive has also come an opportunity to revisit the guidelines we provide to support good practice across the project.

An aptitude for explanation and an interest in the reflective practice of writing about method aren’t stated prerequisites in our author guidelines, yet these are the qualities we’ve valued from the start. The ability to use a tool or a method does not automatically correlate with the ability to explain and demonstrate that use in writing – these are two distinct skill sets. One of our objectives as a publisher is to provide guidance that develops and nurtures those skills within our community. It was time to revisit and rethink. We needed to approach our author guidelines afresh, to write about writing about method. Our aim has been to find a way to express clearly what we think an effective Programming Historian lesson is.

At a moment when many learners are exploring the potential (and potential limitations) of programming with the help of AI, we find ourselves thinking about where the greatest strength of a methods journal written for, by and with peers persists.

So, this is our opportunity to play to our strengths: to explicitly encourage authors to write as though they are explaining their method to a colleague or peer, and to draw in the kind of reflective, analytical discussion of decisions and workflows which we think characterises our most successful lessons. The process of rewriting our author guidelines has involved revisiting lessons to consider how that success has been attained, which characteristics those lessons share and how we might provide a scaffolding upon which future authors can build effective learning resources within this distinct form.

As our directory of published resources has grown, so too has the challenge of maintaining lessons. The work of troubleshooting errors in code, and adjusting instructions to accommodate new versions of software libraries, changing dependencies and updates in operating systems has become an increasing burden. This has prompted us to reflect on how the problem came to be. We’ve found ourselves thinking at length about the introductory sections of lessons which, when carefully developed, can open valuable space for authors to set out the context, requirements and conditions within which a method has been developed, and within which it can be applied. Until now, we think we’ve failed to emphasise the importance of this groundwork, and that is the reason it has sometimes been overlooked. One part of the future solution is prevention. How might we better support authors to write sustainably? How could a methodological lesson remain valuable after the tool at its centre becomes obsolete?

Going forwards, we want authors to provide our readers with at-a-glance statements about which operating system, programming languages and software versions each lesson has been developed for and tested within. We’re providing a sequence of questions that will guide authors to set out the technical understanding and computational environment needed to complete a lesson. The inclusion of these statements as a basic requirement is as much a question of sustainability as of clarity. The answers will help us establish what the computing context is, or was, at the time of writing, anticipating that things may change (and potentially fail) with future updates.
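To illustrate the idea – this is a hypothetical sketch, not part of our published guidelines – an author writing a Python-based lesson might run a few lines like the following and copy the output into the lesson’s opening statement. Here pandas stands in for whichever libraries the lesson actually depends on:

```python
# Hypothetical sketch: capture the environment details an author could report
# in a lesson's at-a-glance statement of tested versions.
import platform

print(f"Operating system: {platform.system()} {platform.release()}")
print(f"Python version:   {platform.python_version()}")

# Report the version of each library the lesson depends on (pandas is only an example)
try:
    import pandas
    print(f"pandas version:   {pandas.__version__}")
except ImportError:
    print("pandas is not installed")
```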

Our revised framework also includes an expanded overview which comprises a series of prompt questions that invite authors to discuss the contextual ‘caveats’ of the method or tool they’re teaching, from both a technical and a social perspective. For example, we want to encourage authors to explain what kinds of data or formats a particular method or tool can handle well, and to articulate how this understanding can support readers’ decision-making about whether the method will be suitable for their research use case. Our prompts seek insights into common stumbling blocks that make a software package or its libraries challenging to install, and ‘known issues’ that make initial set-up tricky. The idea is that these questions cue writing with the generosity of voice we use naturally when we are sharing a workflow with a colleague – when we’re trying to raise their awareness of what could go wrong, or admitting where we perhaps failed in our initial attempts.

At the centre of the framework we’re constructing is a learning example. In our experience, a real dataset that readers can handle, accompanied by sample code they can experiment with, is key to a successful Programming Historian lesson. In this section, we are guiding authors to narrate their workflow in practical units so that readers will be able to follow and understand the process. In the writing, we encourage descriptions of the process in general terms, rather than details of every action, click or gesture. Here, we emphasise the value of reflective notes that draw readers’ attention to specific steps which – in the author’s experience – were not intuitive, proved challenging or initially failed, and we ask authors to share suggestions that will support their reader to navigate them with the benefit of this insight.

In our view, this kind of processual commentary is a great strength of peer-to-peer writing – key to equipping another researcher to learn a method that they’ll be able to apply in their own work. For example, in their recent lesson ‘Understanding and Creating Word Embeddings’, Blankenship, Connell and Dombrowski gently reassure their reader, as a friend or colleague might reassure us when we are doubting ourselves over a puzzling output:

One important thing to remember is that the results you get from each of these function calls do not reflect words that have similar definitions … While some of the words you’ll get in your results are likely to have similar meanings, your model may output a few strange or confusing words. This does not necessarily indicate that something is wrong with your model or corpus … It always helps to go back to your corpus and get a better sense of how the language is actually used in your texts. (Blankenship et al. 2024)
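To make concrete the kind of function call the authors refer to, a similarity query might look something like the sketch below. It uses the gensim library and a tiny invented corpus, not the lesson’s own code or data; with so little text, the neighbours it returns are exactly the sort of ‘strange or confusing’ results the lesson warns about, because similarity here reflects shared contexts rather than shared definitions.

```python
# A generic sketch (not the lesson's own code): train a tiny word2vec model with
# gensim and ask for the nearest neighbours of a word. On a toy corpus like this,
# the 'similar' words reflect shared contexts in the corpus, not shared definitions.
from gensim.models import Word2Vec

toy_corpus = [
    ["the", "ship", "sailed", "from", "the", "harbour"],
    ["the", "cargo", "arrived", "at", "the", "port"],
    ["a", "letter", "was", "sent", "from", "the", "port"],
]

model = Word2Vec(sentences=toy_corpus, vector_size=50, window=3, min_count=1, seed=1)

# Print the three words whose vectors sit closest to 'port' in this model
for word, score in model.wv.most_similar("port", topn=3):
    print(f"{word}\t{score:.3f}")
```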

The accelerating, ever-new possibilities of AI pair programming have contributed to our understanding that we haven’t done enough to emphasise these uniquely human qualities of peer-to-peer writing, which reflects, reconsiders, interprets and explains. As the excerpt from Blankenship, Connell and Dombrowski’s lesson demonstrates, in some lessons we succeed, but in others we have clearly failed to sufficiently support authors towards this kind of methodological writing, which is rarely sought by other academic publishers, who tend to prioritise the summarisation and presentation of results.

As our lesson directories have grown, so too has the risk that rapid technological change will threaten the long-term utility of our past publications. It is likely that we will continue to face an increasing number of complex lesson-maintenance challenges unless we revise the guidance we provide to contributors. The new guidance must actively steer authors towards writing that pivots away from specific steps and towards methodological overview, discussion and reflection.

Thinking back to the project’s origins, we’ve recognised that among our core offers is a venue for researchers to openly share the computational methods they’ve applied, adapted or advanced in their work – surfacing the critical processes of decision-making, failure and iterative modification so that other researchers will be supported to succeed.

Since the beginning, we have worked consciously to create an open scholarly environment where people can become part of our knowledge-exchange community. But as we’ve grown, we think we have failed to explicitly articulate the particular strengths and capabilities of the Programming Historian lesson as a unique form. If that distinctive quality becomes lost in the project’s long memory, then the longevity of our journals is at risk.

The sustainability of our project depends upon re-articulating what defines the Programming Historian lesson, and providing a clear scaffold upon which authors, editors and reviewers can collaboratively build.

Our lesson framework and revised suite of guidelines remain in progress, and imperfect. This is not the first time the project has taken the initiative to reflect on where we have failed, and how we can make improvements. In the past, issues such as the lack of gender diversity in our contributor community have prompted us to openly recognise failure, and take action (Crymble 2017; Sichani et al. 2019).5 This is how the project has always operated and, perhaps, what has enabled our longevity. Change is an ongoing process, necessarily involving honest conversations which reflect on shortcomings, accept failures and plan for change.

Notes

  1. All the editorial work related to a lesson takes place as an issue in this GitHub repository: https://github.com/programminghistorian/ph-submissions/issues (accessed 24 November 2024). Although the entire process takes place in the open, published lessons do not link directly to their peer-review discussions in the way that other venues, such as the Journal of Open Source Software, do. That could be an area for improvement.

  2. The first iteration of this policy on GitHub can be read in this issue: https://github.com/programminghistorian/ph-submissions/issues/3#issuecomment-196629337. Our current code of conduct is available in four languages, and can be accessed via our main repository: https://github.com/programminghistorian/jekyll/blob/gh-pages/CODE_OF_CONDUCT.md (both accessed 24 November 2024).

  3. For example, this happened during the review of an original lesson in Spanish about ImagePlot: https://github.com/programminghistorian/ph-submissions/issues/254#issuecomment-594219900. This issue was then raised for discussion with the whole editorial board: https://github.com/programminghistorian/jekyll/issues/1712 (both accessed 24 November 2024).

  4. An example can be found in the following issue: https://github.com/programminghistorian/jekyll/issues/647#issue-271191579. This led to a discussion about which lessons are translatable from English into other languages: https://github.com/programminghistorian/jekyll/issues/756 (both accessed 24 November 2024).

  5. The open discussion about how to address the gender imbalance can be read here: https://github.com/programminghistorian/jekyll/issues/152 (accessed 24 November 2024).

References

  • Blankenship, Avery, Sarah Connell and Quinn Dombrowski. ‘Understanding and Creating Word Embeddings’, Programming Historian 13 (2024). https://doi.org/10.46430/phen0116.
  • Crymble, Adam. ‘A Decade of Programming Historians’. Network in Canadian History & Environment, 23 March 2018. Accessed 24 November 2024. http://niche-canada.org/2018/03/23/a-decade-of-programming-historians/.
  • Crymble, Adam. ‘White, Male, and North American: Challenges of Diversifying the Programming Historian’. Université de Lausanne, Switzerland (23–24 March 2017).
  • Hanlon, Jay. ‘Stack Overflow Isn’t Very Welcoming: It’s Time for That to Change’. 26 April 2018. Accessed 24 November 2024. https://stackoverflow.blog/2018/04/26/stack-overflow-isnt-very-welcoming-its-time-for-that-to-change/.
  • Rickly, Rebecca and Kelli Cargile Cook. ‘Failing Forward: Training Graduate Students for Research – An Introduction to the Special Issue’, Journal of Technical Writing and Communication 47, no. 2 (2017): 119–29. https://doi.org/10.1177/0047281617692074.
  • Sichani, Anna-Maria, James Baker, Maria José Afanador Llach and Brandon Walsh. ‘Diversity and Inclusion in Digital Scholarship and Pedagogy: The Case of The Programming Historian’, Insights 32 (2019): 16. https://doi.org/10.1629/uksg.465.
  • Walsh, Brandon. ‘The Programming Historian and Editorial Process in Digital Publishing’. Modern Languages Association Conference 2021, 7–10 January 2021. Accessed 24 November 2024. http://walshbr.com/blog/the-programming-historian-and-editorial-process-in-digital-publishing/.

© the Authors 2025