Professor Amanda Fulford and Professor David Aldridge, Edge Hill University

Artificial Intelligence (AI) is no longer the preserve of a few with an interest in large language models and machine learning; it has passed into our everyday parlance and, furthermore, is fundamentally reshaping the higher education landscape. Evidence suggests, and education professionals report, that AI-generated content is already having a significant impact on academic text production, which calls into question the nature of knowledge creation in the academy.

It is clear that AI has huge potential to support scholars with routine tasks related to their research, freeing them to work in more innovative ways so that their research has wider impact. The affordances of AI could support increased productivity, transform the research journey, and potentially lead to higher-quality outputs. Yet at the same time there are significant concerns. The ethical use of AI systems, and their responsible use as part of the research process, is a fast-moving and growing field of enquiry. In this paper we reflect on a further question in an (as yet) under-researched, yet vital, area of the Academy's work: How is AI impacting on doctoral training for researchers in higher education? How can universities continue to cultivate creative, critical, autonomous, and ethical researchers who at the same time embrace the transformative technology that AI offers?

The paper will initially report on a larger project that engages with the ‘moral panic’ around AI by arguing that AI technology has been able to establish itself so swiftly on the educational scene because educational practices and institutions have already become, to an extent, ‘artificially intelligent’: the problems and questions being addressed in those institutions are ones to which AI naturally presents itself as an answer. The outcomes that are feared to result from embedding AI – lack of creativity, plagiarism, reproduction of information without deep understanding, and so on – are already deeply embedded in higher education, even at the level of doctoral research. The teaching of research methods on doctoral training courses in the social sciences, as exemplified by an examination of some of the classic textbooks, is a case in point.
Information about ‘methodology’, ‘paradigms’, ‘ontological and epistemological assumptions’ and so on is reproduced without deep understanding through processes of memetic transfer, procedural approaches to undertaking research, and non-specialist teaching; to such an extent that the ‘ontology and epistemology’ section of most empirical research theses could be written more efficiently by AI technology, would be more intelligible, and would save everyone a lot of time and bother. Or such sections could be abandoned altogether.

Following this, the paper will argue that while the advent of AI has led universities to adopt pragmatic policies on undergraduates' use of generative tools such as ChatGPT and Microsoft Copilot, the use of such tools within the postgraduate research community has been less well documented and regulated. The authors attempt to lay out some of the principles governing what might constitute the ethical use of AI in doctoral education, consider the changing modes of doctoral education that might be needed in a future where AI plays an increasingly significant role, and explore what this means for the idea of the academic supervisor.