Fortune Article: Why this law firm only works on artificial intelligence

One upstart law firm specializing in A.I.-related legal matters is betting that companies will increasingly investigate the many ways their machine learning systems could put their businesses in legal hot water. The firm, based in Washington, D.C., pitches itself as a boutique practice that caters to lawyers and technologists alike.

Having a solid understanding of A.I. and its family of technologies, such as computer vision and deep learning, is crucial, the firm’s founders believe, because solving complicated legal issues related to A.I. isn’t as simple as patching a software bug. Ensuring that machine learning systems are secure from hackers and that they don’t discriminate against certain groups of people requires a deep understanding of how the software operates. Businesses need to know what comprises the underlying datasets used to train the software, how that software can change over time as it feeds on new data and user behavior, and the various ways hackers can break into it—a difficult task considering researchers keep discovering new ways miscreants can tamper with machine learning software.
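The discrimination risk described above can be made concrete with a simple statistical check. Below is a minimal sketch (not taken from the firm; the function name and data are hypothetical) of the “four-fifths rule” long used in U.S. employment-discrimination analysis: if a protected group’s rate of favorable outcomes falls below 80% of the reference group’s, the model warrants closer scrutiny.

```python
def adverse_impact_ratio(preds, groups, protected, reference):
    """Ratio of favorable-outcome rates between two groups (four-fifths rule)."""
    def selection_rate(g):
        outcomes = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical binary model outputs (1 = favorable decision) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = adverse_impact_ratio(preds, groups, protected="b", reference="a")
if ratio < 0.8:  # the four-fifths threshold
    print(f"possible disparate impact: ratio = {ratio:.2f}")
```

Real audits go well beyond this one ratio (confidence intervals, other fairness metrics, intersectional groups), but it illustrates why lawyers reviewing an A.I. system need access to the model’s outputs and training data, not just its source code.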


Andrew Burt is managing partner at BNH and chief legal officer at Immuta. He is also a visiting fellow at Yale Law School’s Information Society Project.

Previously, Andrew was Special Advisor for Policy to the head of the FBI Cyber Division, where he served as lead author on the FBI’s after-action report on the 2014 Sony data breach, in addition to serving as chief compliance and chief privacy officer for the division, among other assignments.

A frequent speaker and writer, Andrew has published articles on law and technology for the New York Times, the Financial Times, and Harvard Business Review, where he is a regular contributor. He holds a JD from Yale Law School.


Patrick Hall is principal scientist at BNH. Patrick also serves as a visiting professor in the Department of Decision Sciences at The George Washington University. He is a frequent writer, speaker, and advisor on the responsible and transparent use of AI and ML technologies.

Before co-founding BNH, Patrick led H2O.ai’s efforts in responsible AI, resulting in one of the world’s first widely deployed commercial solutions for explainable and fair machine learning. He also held global customer-facing and R&D roles at SAS Institute. Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.

Also watch the video: Lawyer Andrew Burt on AI’s Biggest Barriers

Episode Highlights from Machine Meets World

This week’s guest is Andrew Burt, managing partner of the AI-focused law firm BNH. Andrew’s interview is full of great insights like these:

“The biggest barrier to the adoption of AI and machine learning is not actually technical. The actual technology is fairly commoditized. The biggest barriers are risk-related and they’re policy-related and they’re law-related.”

“AI is great, but if you want to be serious about responsible AI, you need to be ready to respond when something actually goes wrong.”

“Even without new regulations on AI, there are a whole host of laws and ways that AI can create legal liability right now.”

“It’s good in some senses to move fast and break things — you can innovate, you can get there faster — but with AI it’s very, very dangerous.”