Research Celebration Semester 1
Please join us for the School of Historical and Philosophical Inquiry Research Celebration!
- Friday 6 June
- 1:30-3pm Research Seminar ‘AI and the Humanities’ in E302 Forgan Smith.
- 3-4pm Research Celebration Afternoon Tea outside the tearoom, E318 Forgan Smith.
- 4pm onward 'pay your own way' drinks at Saint Lucy Caffé e Cucina, Blair Drive (next to the Tennis Pro Shop).
Dr Annabel Florence, Lecturer in Classics and Ancient History and HPI Integrity Officer.
ChatGPT Ate My Homework: Stories from an Integrity Officer's Desk
As AI increasingly permeates the academic landscape, the very foundations of academic integrity are being tested in unprecedented ways. Drawing upon firsthand experiences as a university academic integrity officer, my presentation offers a candid and anecdotal exploration of this evolving challenge. From early encounters with AI-assisted misconduct to navigating complex investigations and student appeals, these stories from the "AI frontier" illuminate the practical realities of upholding integrity in this new era. We will delve into the difficulties of detection, the evolving tactics observed, and the crucial lessons learned in adapting our approaches. Ultimately, this reflection underscores the need for a collective, proactive, and education-focused strategy to safeguard academic honesty in the face of rapidly advancing artificial intelligence.
Alex Paterson, HPI HDR Student
Techno-Social Engineering and AI in the Humanities: Outsourcing, Thresholds, and Calculative Thinking
This research begins with Brett Frischmann and Evan Selinger's concept of techno-social engineering (the idea that our attitudes and capacities are shaped by the tools we use and by our social world's embrace of those tools), and argues that when the humanities embraces AI, we all become complicit in techno-socially engineering students of the humanities. I then focus on two consequential harms, by way of the thought of Martin Heidegger. First, outsourcing thinking to AI engineers students away from the threshold spaces where they could meaningfully engage with what is Other to them, robbing them of moments of potential existential transformation. Second, endorsing AI use encourages what Heidegger calls 'calculative thinking', a mode of thinking which I argue is detrimental to the humanities.
Professor Deborah Brown, Professor of Philosophy and Director of the Critical Thinking Project
Generative AI and Critical Thinking: An Opportunist’s Perspective
Amid growing evidence that over-reliance on generative AI is negatively correlated with critical thinking, the question arises: how should we, as educators responsible for advancing our students' thinking capabilities, respond? It is obvious that business-as-usual approaches grounded in the dominance of curriculum (i.e., content) over pedagogy and the transmission of extant knowledge from expert to novice will not survive. To the extent that human thinking is algorithmic (itself an open question), whatever we have thought up in the past, AI can think up too. It is also obvious that focusing exclusively on assessment security merely treats the symptoms rather than the root cause of the disease. This talk documents research by the UQ Critical Thinking Project on the intersection of AI use in teaching and student thinking. It points to the opportunities lurking in the threat of AI to rethink how we teach rather than what we teach, placing curiosity and critical and creative thinking at the centre of teaching practice. Finally, it offers some examples of how AI might be incorporated into teaching practice to augment rather than replace students' critical and creative thinking, positioning study in the Humanities as an antidote to the risks of generative AI.
Professor Nic Carah, Director of the Centre for Digital Cultures & Societies and Professor in the School of Communication and Arts
The algorithmic flow of post-millennial life
In this talk I draw on a number of projects in which we have been exploring how automated models classify and curate our visual culture. To do that work, we develop computational models that help us critically simulate what digital platforms do with our images. I offer some reflections on the experience of living in a culture where we are immersed in flows of images sequenced and augmented by machines, and situate that experience within longer-standing debates about 'flow' in media and popular culture. I draw on this work to consider how and why the humanities should be developing its own approaches to conceptualising and using AI.
Chaired by A/Prof Tom Aechtner, Associate Professor in Religion and Science, Discipline Convenor Religious Studies.