
As businesses continue to adopt artificial intelligence technologies, corporate lawyers and in-house data scientists should prepare to get better acquainted. Lawmakers are increasingly indicating that A.I. regulations are coming, which means that businesses will need to ensure that their machine learning systems aren’t violating laws governing privacy, security, and fairness.
One upstart law firm specializing in A.I.-related legal matters is betting that companies will increasingly investigate the various ways their machine learning systems could put their businesses in legal hot water. Washington, D.C.-based bnh.ai pitches itself as a boutique firm that caters to lawyers and technologists alike.
Having a solid understanding of A.I. and its family of technologies, such as computer vision and deep learning, is crucial, the firm’s founders believe, because solving complicated legal issues related to A.I. isn’t as simple as patching a software bug. Ensuring that machine learning systems are secure from hackers, and that they don’t discriminate against certain groups of people, requires a deep understanding of how the software operates. Businesses need to know what’s in the underlying datasets used to train the software, how that software can change over time as it feeds on new data and user behavior, and the various ways hackers can break into it. That last task is especially difficult considering that researchers keep discovering new ways miscreants can tamper with machine learning software.
One of the problems companies face, however, is that data scientists and lawyers don’t really speak the same language, bnh.ai Managing Partner Andrew Burt explained.
“The gap is like really, really wide, and it’s really, really deep” between data scientists and lawyers, he said. “Frankly, it’s uncomfortable. Lawyers don’t like being put in positions where it’s extremely hard to understand what’s going on. It can be very intimidating to sit across from a data scientist who just spouts a bunch of statistical terminology and math.”
The same is true of data scientists, who may be intimidated by lawyers who speak in their own esoteric jargon, often saying “Latin things,” he said.
That said, Burt believes the “future of technology is dependent on those meetings” between attorneys and data scientists. Lawyers need to understand the technical nitty-gritty of A.I. systems so they can convey the potential legal risks to data scientists in realistic terms and give them blueprints for troubleshooting these complicated systems. And unlike traditional software, which is a relatively “set it and forget it” product, machine learning software is ever changing, so companies must continuously monitor it for the risks it could pose to their businesses.
Burt concedes that initial meetings between data scientists and lawyers can be “awkward,” partly because technologists “don’t want lawyers in their business” and “they don’t want to be thinking about deeply ambiguous problems with no real solutions yet.”
He said his co-founder Patrick Hall once told him that lawyers will feel accomplished if they “sit in a room and talk” about legal issues. Data scientists, on the other hand, will feel like “they just wasted their time” if they attend a meeting where people talk but no one writes code.
Despite the differences between the two professions, they can find common ground, at least in Burt’s experience; they just need a little help getting on the same page. Once there, they can work on issues like figuring out the best ways to segment populations in datasets to comply with current fairness rules, and determining when to retrain a machine learning model so that it’s powered by the most relevant and appropriate data.
Burt believes that when it comes to A.I. and business, “it’s bad practice to wait until something bad happens to think about risk.” The most visionary CEOs will have considered the legal ramifications of A.I. long before they land in regulators’ crosshairs.
"Those are really the two threads," he said, "betting big on A.I. and caring about risk.”
Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com