As we approach the end of another Gregorian calendar year it's natural to become reflective about what's gone before and what's potentially to come. The past year has seen a lot of movement in the computer-assisted qualitative data analysis software (CAQDAS) space; a lot more than we've seen in any 12-month period for a good while. Sparked by the emergence of generative AI, qualitative researchers have been reconsidering their craft. But as the furore settles down, it's clear that many of the same issues abound in this new era of computer-assistance in qualitative data analysis.
It was back in the 1980s that software designed for qualitative analysis became available. Since then there's been much debate, and many methodological and technological developments have impacted what's possible. Controversy has characterised this field since the get-go, with advocates and sceptics staking their claims in the literature, in practice, and in how they teach. These debates have ebbed and flowed over the years, but the capabilities of generative AI and its potential impact on how we think about, do, and teach qualitative analysis are more profound than anything we've seen since the genre of CAQDAS first emerged.
This is because the capabilities of generative AI now mean that computers can actively contribute to the analytic process in ways not hitherto possible. Amongst the concerns of the sceptics has always been the fear that the machine would "take over". CAQDAS advocates have countered this, but the capabilities of generative AI do put a different complexion on the debate.
What this actually means for the practice of qualitative research in different contexts has yet to be fully revealed. But one thing is clear: these tools, like those that came before, must be used appropriately for the task at hand.
This has always been the message of the CAQDAS advocates, and nothing is changing on that front, whether in relation to the integration of generative AI into existing CAQDAS packages, or the use of a new genre of tools developing in direct response to the capabilities of generative AI technology.
As Helen Kara frequently reminds us, ethics are everywhere
Whatever the tool, it needs to be used appropriately
The difference with generative AI is that the ethical dimension is even more pressing. In all the readings, discussions, presentations, and events I've been involved in these past 12 months about generative AI and qualitative analysis, the most pressing concern has been the ethics of its use for qualitative research purposes. And so it should be, because those concerns are fundamental to the ethos of what we do. And generative AI brings ethics to the forefront in ways not encountered before.
Which is why how we teach these tools is so important. And let's be clear, we do need to teach them. Whatever our concerns might be, these tools are here to stay and they will be used; we therefore need to ensure they are used appropriately.
Just as schools have an important role in equipping children to navigate the digital world, the qualitative research community has a crucial role in equipping the next generation of qualitative researchers to harness the qualitative AI space appropriately. Researchers, universities, funding bodies and publishers also have key responsibilities here: providing transparent accounts of use in practice, developing guidelines for ethical use, and creating frameworks for reviewing qualitative work undertaken using these new tools.
This is more pressing than ever before, and I hope to see more discussion on these topics in 2024.
For more on the role of AI in qualitative analysis see the recordings of the two-part symposium on the topic hosted by the CAQDAS Networking Project and the Social Research Association.