In July 2024, FLANZ’s former president and current secretary, Ralph Springett, presented alongside Associate Professor Chris Deneen at a joint event organised by ASCILITE and Turnitin, titled “Navigating the Ethical Terrain of AI and Research Integrity.” Ralph shared compelling insights into the complexities and evolving challenges of maintaining academic integrity in an era increasingly influenced by generative AI. He also offered a straightforward message for practitioners: we must take responsibility for guiding learners with specific requirements and support.
Understanding the struggle for research integrity
Ralph began by highlighting a significant issue he has encountered over many years of education practice and management: many learners struggle to comprehend or appreciate the importance of research integrity. As an educator, I have observed this struggle myself. Students, often under pressure and short on time, may overlook ethical considerations or feel confused about what constitutes ethical AI use in their work. This confusion is compounded when guidelines are unclear or inconsistent.
The importance of clarity and context
Ralph emphasised that while the principle of “honest attribution” remains a stable foundation for academic work, understanding what constitutes “honest practice” with AI is still evolving. He urged educators to ask themselves:
- What are the limits of referencing GenAI services?
- What are the limits of using GenAI services to paraphrase?
- Is it acceptable if GenAI services enhance the structure or coherence of an assessment response?
The answers, Ralph argued, are not straightforward and depend on various factors, including institutional policies, discipline-specific norms, and individual assignments.
The foundation is attribution
Reflecting on Ralph’s emphasis on the importance of attribution and transparency, I was reminded of Associate Professor Benito Cao’s award-winning presentation at the HERDSA 2024 conference, which advocated a straightforward approach: “Don’t be sorry, just declare it: Pedagogical Principles for the Ethical Use of ChatGPT.”

This idea emphasises the need for learners to be open about how they use AI tools, fostering a culture of honesty and clarity around AI use in their academic work.
The role of education practitioners: clarity, support, and critical analysis
The need for appropriate attribution aligns with the growing call for specific guidelines on AI usage across educational contexts, ensuring that both learners and educators have a clear understanding of what is acceptable. Ralph’s presentation underscored the importance of educators being specific and clear about the acceptable use of AI in their courses.
This also includes educating students to critically analyse AI-generated content, recognise its limitations, and be aware of the potential biases embedded in these tools. AI is here to stay, and it is our responsibility as educators to support students in navigating this complex landscape ethically.
Practice informs policy
While there is an evident need for national and international clarity on AI use in academic contexts, Ralph suggested that clarity should start at the ground level, with educators and institutions. Rather than waiting for institutional or national policies to provide guidance, he urged educators to take the initiative. In my understanding, this could include defining parameters for AI use within our disciplines and ensuring our students understand these boundaries. By developing and refining best practices, we can shape institutional policies that will eventually contribute to broader regulatory frameworks.
Ralph’s key message resonates strongly with my experiences: we must not only clarify how AI tools should be used but also support our students in understanding why ethical considerations matter for their learning, their future work, and their lives. As Ralph wisely points out, “Clarity starts at home.” By providing specific, practical guidance, we can help students navigate the ethical terrain of AI with confidence and integrity.
Moving forward
Reflecting on Ralph’s presentation, I see the need for flexibility and adaptability as AI technologies rapidly evolve. This also resonates with Chris Deneen’s insights in the presentation, which highlighted the balance between the limitations and benefits of large language models (LLMs), demonstrating their potential to improve efficiency and to support assessment, learning analytics, and networked learning, while still requiring careful and ethical use.
