Lawyers and others face a future in which criminals can use artificial intelligence (AI) tools like ChatGPT to enhance their ability to commit fraud, Europe’s leading law enforcement organisation has warned.
In a report focusing on recent advances in the AI chatbot ChatGPT, which draws on a vast database of human knowledge, Europol concluded that, alongside its benefits to society, ‘bad actors’ will harness the technology to unleash a range of attacks.
Safeguards built into the AI by developers to prevent its misuse can easily be bypassed by people seeking to commit crimes, Europol suggested.
Apps based on large language models (LLMs) like ChatGPT can ‘understand’ a range of human-like text, translate between languages, answer questions on a huge variety of topics, interpret images, and write code in most common programming languages.
Meanwhile, the measures lawyers currently use to avoid becoming victims of cybercrime may no longer be up to the task. For example, fraud attempts that might previously have been spotted because of poor English grammar are likely to be a thing of the past, the report said.
“ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge… the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime…
“[This technology] may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to… adopt a specific writing style.
“Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance to promote a fraudulent investment offer.”