I agree with Prof Andrew Moran and Dr Ben Wilkinson (Letters, 2 March) that cheap and easy‑to‑use AI tools create problems for universities, but the reactions of many academics to these new developments remind me of the way some people responded to the arrival of cheap pocket calculators in the 1970s.
Reports of the imminent death of maths teaching in schools proved exaggerated. Maths teachers had to adapt, not least to teach students the longstanding rule “garbage in, garbage out”; if students had no idea of the fundamental principles and ideas behind maths, they would not realise their answer was meaningless. Today’s humanities teachers are going to have to adapt in similar ways.
Our students need to recognise, for example, when AI has harvested such poor-quality information that its responses are inaccurate. But, more importantly, they need to learn how to make their work genuinely stand out in a sea of increasingly generic AI-generated essays, not least because the same will be true of their job application letters, reports and other written work.
AI has been defined as teaching computers to do badly what people do well, and despite recent breakthroughs, that definition still largely holds. Genuinely good writing exhibits genuinely human qualities – such as individuality and empathy. True intelligence is still something people do best.
Jim Endersby
Professor of the history of science, University of Sussex