An NSW-based law lecturer recently undertook an experiment, pitting his criminal law cohort against 10 separate AI-generated responses for an end-of-semester exam. The results might surprise you.
In the wake of its still-recent explosion into mainstream consciousness, much has been made of the capacity for generative AI (GenAI) to perform the duties of legal professionals, resulting in a revival of the discourse surrounding the replacement of lawyers by emerging technology.
Dr Armin Alimardani, a lecturer in law and emerging technologies at the University of Wollongong (UOW), has been investigating whether GenAI can outperform law students — or, indeed, an overwhelming majority of them.
His findings form the basis of a new paper, Generative Artificial Intelligence vs. Law Students: An Empirical Study on Criminal Law Exam Performance, which was published yesterday (Tuesday, 24 September) in the Journal of Law, Innovation and Technology.
He said: “The OpenAI claim was impressive and could have significant implications in higher education; for instance, does this mean [that] students can just copy their assignments into generative AI and ace their tests?”
“Many of us have played around with generative AI models, and they don’t always seem that smart, so I thought why not test it out myself with some experiments.”
The experiment
Last year, in UOW’s second semester, Alimardani – in his capacity as the subject coordinator for criminal law – compiled AI-generated answers to the end-of-semester exam. Five responses were generated using different versions of ChatGPT, and another five were produced using various prompt engineering techniques intended to improve the quality of the responses.
“My research assistant and I hand-wrote the AI-generated answers in different exam booklets and used fake student names and numbers. These booklets were indistinguishable from the real ones,” Alimardani said.