Epistemic trespassing occurs when an individual with expertise in one domain mistakenly takes that expertise to license passing judgement in a distinct domain. There are already examples of people adept at machine learning using that facility to launch intellectual misadventures in other fields, including such high-stakes domains as medical trials. There is an obvious risk that generative artificial intelligence will be used as an engine for epistemic trespassing. Someone who boasts skills in 'prompt engineering' (being able to extract desired outputs from generative AI) can fool themselves, and others, into treating the outputs of the generative AI as testimony from a relevant expert. I argue that this danger is additional to, and distinct from, the danger of epistemic pollution from AI-generated hoaxes, or from so-called AI hallucinations. I also contrast the dangers of AI-driven epistemic trespassing with benign uses of other tools that aggregate large amounts of data, such as checklists in medicine and engineering and (some) recommender systems. I conclude with some suggestions about how best to manage the risks involved; in particular, I argue that 'human-in-the-loop' safeguards treat the symptoms and not the cause.

Venue

Room: Hybrid (Room 01-E302 and online; contact Dr Guillermo Badia at g.badia@uq.edu.au for the Zoom link)