DeepSeek: The GenAI floodgates are open
In the last few days of January 2025, Chinese GenAI startup DeepSeek skyrocketed to the top of app stores in the United Kingdom, the United States and across the globe, displacing ChatGPT and sending shockwaves through the stock markets with its open source large language model DeepSeek R1. For law firms and legal organisations more widely, its arrival presents geopolitical tensions, doubts around its provenance that could affect downstream adoption, and, if you use the web-based consumer version, massive privacy issues.
However, the risks need to be contextualised and understood. There are benefits, perhaps impossible to grasp at this stage, that will flow downstream from this release in terms of development opportunity and cost.
Regardless of where you sit in the debate or what happens now, the GenAI floodgates are truly open.
Your privacy fears are, within reason, founded
If you, your colleagues or staff are using the web-based consumer version of DeepSeek, the first thing to note is that any data you put in can be used to improve the model and, at its bluntest, be read by the Chinese government.
The public app does indeed allow DeepSeek to collect all your personal information, your IP address and the prompts you enter in order to improve its service. That includes monitoring interactions and usage across your devices and analysing how people are using it. DeepSeek retains the right to share your information with third parties. Oh, and there is no opt-out.
Isabella Bedoya, co-CEO of Infinite Artificial Intelligence, was one of the first to make people aware that DeepSeek tracks your keystroke patterns, commenting on LinkedIn: “Didn’t we just have a whole issue last week with TikTok because it was owned by China? My friend Tara Thompson made me aware of their terms & conditions and did you know they track your keystroke patterns? No one is talking about this.”
Given that only a short while ago TikTok was facing a US ban on privacy and national security grounds, it is surprising indeed that there wasn’t an immediate outcry from the US, although on Tuesday 28 January, US officials said they are looking at the national security implications. Italian regulators were quick off the mark to demand DeepSeek inform them of what personal data is collected, for what purposes and on what legal basis. Other regulators will no doubt follow.
In the UK, law firms are looking at how to guide their staff on use of the public app, which within days of its launch had around three million downloads. Not only does it take us back to the early days of ChatGPT, with all the client confidentiality implications, but there are additional geopolitical concerns.
Speaking to Legal IT Insider, Elliot White, director of innovation and legal technology at UK top 50 firm Addleshaw Goddard, which is one of the most progressive law firms when it comes to GenAI exploration and adoption, said: “There are a few factors we have to look at that are the same as when ChatGPT first came out. So there’s the provenance of the model, so who built it and where it has come from? If we just take the consumer view for a second, there’s the normal things you’d have to consider on that anyway, such as where’s your data going? What are they doing with your data as you feed it into it, and that’s probably slightly more complex because now it’s feeding into a Chinese model, a Chinese app that is hosted out of China, so there’s probably even less control on what’s going on with your data in that sense. I’m not necessarily saying there is anything wrong with that but these factors should be considered.
“Then, if you think about it from a business perspective, the questions around provenance and open source means we have to scrutinise this even more and it potentially raises even more security considerations than we have had to deal with when considering the use of OpenAI.”
It’s important to be pragmatic, however. Andrew Dunkley, director of data services at Consilio, who was previously a data scientist at UK law firm BLM, says: “Privacy concerns are legitimate but probably don’t change much for enterprise. If you are already nervous of your data being stored in China, you won’t like this either – in which case don’t use it and stop your employees doing so either. Accidental employee exposure of data is the big risk here – and it does mean that companies that have never really had to think about whether their data can go to China will now need a policy on it.”
Another concern if you’re using the public version of DeepSeek is state supervision. Dunkley says: “We know DeepSeek is at the very least anticipating this because the model won’t return results around Tiananmen Square. We simply don’t know what controls are built into the model to protect Chinese state interests. This is a paranoid take, but a worst case scenario would be the model unexpectedly failing in some circumstances when implemented for critical tasks and/or interference to bias results in favour of Chinese parties in legal-type use cases.”