Our COO and GC, Laura Safdie, and Chief Innovation Officer, Pablo Arredondo, recently invited Bennett Borden, Partner and Chief Data Scientist at DLA Piper, to have a conversation about the AI Executive Order.
Bennett: To be honest, my hopes weren’t actually very high, because an Executive Order only carries so much weight. It’s not new legislation. But it turns out to be, I think, the longest Executive Order in history, and the largest marshaling of federal executive authority absent wartime, a depression, or a pandemic. It orders every major federal department to develop guidelines on really astonishing timeframes of between 90 and 365 days, with most of it required to happen within 180–270 days of publication. All of which means there’s gonna be a whole lot of new regulation coming a year from now.
A key element is that the National Institute of Standards and Technology (NIST) was ordered to produce a companion piece to its very popular AI Risk Management Framework, focused on generative AI, which I think is absolutely a good thing. When we advise clients, NIST is the go-to standard for setting up AI governance systems.
I think the most immediately important elements for companies developing or using AI are the statements telling independent regulatory agencies like the EEOC, FTC, CFPB, and DOJ that this should be their top enforcement priority. And then saying if you don’t have the authority you need to enforce the way you think necessary, propose the new rules you need. So I think immediately you’re gonna see these independent agencies go after violations in their jurisdictional authority that are done using or through AI.
Pablo: So this reminds me of conversations I had with the co-creators of the curriculum for the last CodeX FutureLaw. One idea was to make it all about generative AI and LLMs. And then people said, but what about access to justice? And the thing we all realized as we talked is that generative AI is the most important thing happening for access to justice.
So coming back to the instructions to agencies: more likely than not, if used properly, LLMs could help them accomplish all the other work they need to do much more efficiently. Maybe there are exceptions, but say your work is about ensuring people comply with regulations and laws: for instance, protecting tenants from threatening landlord letters that claim to draw on law but really don’t (this really happened to me, and needless to say I called their bluff). LLMs could police these sorts of documents at scale in a way that would be impossible otherwise. So it’s not “Make this a priority instead of your agency’s primary aims,” but rather “Make this a priority so that you can better achieve your agency’s primary aims.”
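To make that scaled-up screening concrete, here is a minimal sketch, assuming a generic chat-model client. The `call_llm` stub, the prompt, and the FLAG/OK labels are all illustrative assumptions, not any agency’s actual tooling.

```python
# Illustrative sketch only: `call_llm` stands in for any chat-completion
# client; it is not a real library call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion client here")

SCREEN_PROMPT = """You are reviewing a landlord-to-tenant letter.
Does the letter claim a legal basis (statute, case, or regulation)
without citing one, or cite authority that does not support the claim?
Answer FLAG or OK, then one sentence of explanation.

Letter:
{letter}
"""

def screen_letter(letter_text: str) -> tuple[bool, str]:
    """Return (flagged, explanation) for one letter."""
    reply = call_llm(SCREEN_PROMPT.format(letter=letter_text))
    return reply.strip().upper().startswith("FLAG"), reply

def screen_batch(letters: list[str]) -> list[str]:
    """Run the check across a whole corpus; this is the 'at scale' part."""
    return [letter for letter in letters if screen_letter(letter)[0]]
```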
Laura: I’ll push back a little on that. You’re right, of course, that AI is among the most important tools available to address a host of societal issues. But that isn’t the instruction to the agencies. The instruction is “Look at your portfolio, and where AI might even be adjacently contributing to violations under your authority, addressing that is your most important problem to focus on right now.”
But there is a real risk of having, say, six to twelve different agencies all coming out with overlapping guidance and instructions that draw on different information about the risks and opportunities of AI. Turning all that into a comprehensive, consistent, innovation-promoting set of instructions for all of us, users and developers alike, is going to be a huge challenge, I think.
Pablo: Right, yes, that’s a good point. And I want to make sure we’re clear that the order also mandates that these agencies figure out how best to use AI.
Laura: True, yes, of course—it’s about both aspects. I also thought they made a strong statement about the world-positive applications of AI.
As a practical matter, though, among the instructions is “Go regulate within the next year across all these different agencies and all these different use cases and industries,” and my biggest concern is that some of these conversations currently sound like they’re based on information that is at best a 40,000-foot understanding of the technology and its risks and opportunities.
This need for AI literacy extends to the judiciary as well. It’s a separate issue, but closely related and constantly coming up. State Supreme Courts are creating AI committees, but those committees aren’t even sure where to get reliable information, or who to talk to. So I think this will be the biggest public policy challenge in 2024: How do we promote AI literacy and ensure the people who most need to be informed are well-informed?
Bennett: Yes, and I keep going back—because this is what I study—to looking at all this as a revolution. So if you consider the last big industrial revolution, the exact same thing happened, which leads me to predict this is going to be pretty chaotic for about three to five years. There’s just no way around it.
I talk quite often with the people at these executive agencies, and they’re saying they have tried-and-true ways of regulating, of looking for violations, measuring wrongdoing, because everyone needs clear rules. But now there’s an entirely new way that regulations can be violated, and they don’t know how to deal with that. And right now all we’re really going to get are broad standards that won’t be much different than “Be fair and transparent.” The specifics will all get worked out in court cases and enforcement actions. That’s where the granularity of the contours is going to emerge.
Even though I think it’s going to be really unsettled for at least three to five years, the good part is the regulators are listening. Much of our work right now is in stakeholder sessions, which involves reaching out to academia and industry, and explaining to our clients what the issues are and are going to be. All of which presents a great opportunity for companies to be able to establish best practices and actually help create good law and good regulation.
Laura: I 100% agree. It’s a huge opportunity, and significant regulatory activity will be happening in the next year. It will define so much about how we take advantage of the opportunities AI offers, how we control the very real risks, and ultimately how we as a society position ourselves to realize the enormous upside.
Bennett: All of which is happening while the technology keeps changing, so fast.
Laura: Yes, exactly: If you regulate in too much detail, it’ll soon be obsolete.
Pablo: Yeah, we joke about this, but it’s not far off from reality: if we’re in the middle of a debate about a new paper, before we’re even done talking, another paper that directly affects what we’re discussing, and literally changes things, has come out.
Pablo: One thing I don’t feel right now is “Gee, I wish this all weren’t so slow.” I mean, generative AI has become the focus for capitalism. For everybody.
Laura: It’s more about the fact that regulation has to account for that unprecedented speed, and that is a really new approach.
Bennett: That’s why it’s all going to be outcome-based.
Laura: Which is a really new paradigm.
Bennett: Exactly, this is what we keep coming back to: The technology is going to do what it does, and you really can’t control it. It’s getting better. It’s getting easier to use for all sorts of applications. It’s less explainable than earlier technology, and so the regulation has to focus on the instances when it touches the world.
If that touch is harmful, we’re going to hold you accountable, whether it’s done to an employee, a customer, or a citizen. So we’re seeing a coalescence around outcome testing, which isn’t very different from product liability or anything else we do now.
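The speakers don’t name a specific test here, but one widely used outcome test in the employment context is the EEOC’s “four-fifths rule”: a group’s selection rate shouldn’t fall below 80% of the most-favored group’s rate. A minimal sketch of that check, with made-up numbers:

```python
# Adverse-impact check via the four-fifths (80%) rule: compare each
# group's selection rate to the highest group's rate. Data is made up.
outcomes = {
    # group: (selected, total applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {g: sel / total for g, (sel, total) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    status = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {status}")
```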
Laura: Such a good analogy.
Bennett: Thank you. And what’s hard is there are companies that touch the world all the time with automated decision-making systems or AI systems, and literally have no idea whether those systems have some sort of harmful or disparate impact on the world. It’s like any other industry before we had laws in place: think of selling snake oil, like the Wizard of Oz.
So considering what’s at stake, it’s vital that we don’t just wait for clarity to come. It’s not too much to expect that people and companies behave like responsible adults. It really is as simple as: When you’re building something that affects people, make sure it doesn’t harm anyone, even if there isn’t yet an agreed-upon standard of what “harm” means. Be reasonable. Be a grown-up. Do the right thing.
Pablo: To me, that’s true for the legal profession and its regulation to some extent, which is to say, if you take our existing rules of professional conduct and our existing principles and you apply them, you get a lot of the way there, right?
So as far as being able to rely on AI’s output: if an intern who’d never practiced law did work for you, you’d read what they wrote before using it. You’d double-check the cases, right? It’s no different. Assuming the same responsibility you already had before AI covers most if not all the bases.
I wonder how much of this is going to be just applying what we already have, versus truly new regulation. And my sense is that where the latter comes in is actually more about the doom scenario stuff, right? Like, what if it suddenly becomes conscious?
Bennett: We should definitely have a regulation against that.
Pablo: Haha, that would definitely help. It’s true though, isn’t it, that it’s kind of new that there are ideas about having to inform agencies if you’re going to build something bigger than, say, GPT-4? It’s an interesting dynamic, because a lot of people think those kinds of requirements are completely overblown and being used as a cynical way to regulate smaller companies.
Bennett: In reality I think 90% or more of regulation is going to consist of acting reasonably by analogy to existing law, right? It’s really, really interesting to watch what happens in court cases when new technology enters the world, because judges decide things by analogy. The only thing they have to rely on is precedent, right?
If you read the first cases about wiretapping, when telegraph wires were laid, they analogized wiretapping to eavesdropping—which was illegal—which is literally hanging from an eave to listen in at a window. Ultimately the issue is privacy.
Pablo: You just blew my mind. Today I learned …
Bennett: Good, that was my goal. So in the court cases, they said, we’ve never heard of wiretapping before, but if it’s someone consciously acting to interfere with your right to privacy, which we’ve already established is against the law, we’re going to punish you.
So for most of the new AI applications, when we’re faced with questions about what we should measure and how, the right approach is to ask what we’re trying to promote or prevent, how we deal with that issue today, and how we can analogize that treatment to AI.
The places where it’s going to get worked out are things like the copyright cases against the frontier models. We haven’t had doctrine on that before. You put something out in the public sphere and then I look at it and build something off it. Is that wrong? It’s just going to take time to sort it all through. But I really think that with a little bit of thought, some creative lawyers, and good data scientists, we can figure out the right way most of the time.
Rapidly draft common legal letters and emails.
How this skill works
Specify the recipient, topic, and tone of the correspondence you want.
CoCounsel will produce a draft.
Chat back and forth with CoCounsel to edit the draft.
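As a rough illustration of that specify-draft-refine loop, here is a hypothetical sketch in Python. The `call_llm` stub and the message format are assumptions standing in for a generic chat-completion client; none of this is CoCounsel’s actual interface.

```python
# Illustrative only: `call_llm` stands in for any chat-completion client;
# none of this is CoCounsel's real interface.
def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your chat-completion client here")

def start_draft(recipient: str, topic: str, tone: str) -> list[dict]:
    """Seed the conversation with recipient, topic, and tone."""
    messages = [{
        "role": "user",
        "content": (f"Draft a letter to {recipient} about {topic}. "
                    f"Use a {tone} tone."),
    }]
    messages.append({"role": "assistant", "content": call_llm(messages)})
    return messages

def revise(messages: list[dict], instruction: str) -> list[dict]:
    """One round of 'chat back and forth': append feedback, redraft."""
    messages.append({"role": "user", "content": instruction})
    messages.append({"role": "assistant", "content": call_llm(messages)})
    return messages

# e.g. thread = start_draft("opposing counsel", "a discovery extension",
#                           "firm but courteous")
#      thread = revise(thread, "Shorten it and add a proposed deadline.")
```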
Get answers to your research questions, with explanations and supporting sources.
How this skill works
Enter a question or issue, along with relevant facts such as jurisdiction, area of law, etc.
CoCounsel will retrieve relevant legal resources and provide an answer with explanation and supporting sources.
Behind the scenes, Conduct Research generates multiple queries using keyword search, terms and connectors, Boolean, and Parallel Search to identify the on-point case law, statutes, and regulations; reads and analyzes the search results; and outputs a summary of its findings (i.e., an answer to the question), along with the supporting sources and applicable excerpts.
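Read as a pipeline, that description amounts to: generate several queries, run the searches, read the hits, and synthesize a cited answer. A hypothetical sketch of that shape, where `call_llm` and `run_search` are stubs, not Casetext’s actual internals:

```python
# Hypothetical research pipeline matching the description above; the
# stubs are placeholders, not CoCounsel's real internals.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

def run_search(query: str) -> list[dict]:
    """Stand-in for keyword/Boolean/Parallel Search; returns
    [{'source': ..., 'excerpt': ...}, ...]."""
    raise NotImplementedError

def conduct_research(question: str, facts: str) -> str:
    # 1. Generate several differently phrased queries from the question.
    queries = call_llm(
        f"Write 5 search queries (one per line) for: {question}\n"
        f"Relevant facts: {facts}"
    ).splitlines()

    # 2. Retrieve and pool results across all queries.
    hits = [hit for q in queries if q.strip() for hit in run_search(q)]

    # 3. Read the results and synthesize an answer with supporting excerpts.
    context = "\n\n".join(f"[{h['source']}] {h['excerpt']}" for h in hits)
    return call_llm(
        f"Answer the question using only these sources, citing each "
        f"[source] you rely on.\n\nQuestion: {question}\n\nSources:\n{context}"
    )
```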
Get a thorough deposition outline in no time, just by describing the deponent and what’s at issue.
How this skill works
Describe the deponent and what’s at issue in the case, and CoCounsel identifies multiple highly relevant topics to address in the deposition and drafts questions for each topic.
Refine topics by including specific areas of interest and get a thorough deposition outline.
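One plausible reading of that flow is a two-stage generation: propose topics first, then draft questions per topic. A hypothetical sketch, with `call_llm` again a stub rather than the product’s real code:

```python
# Hypothetical two-stage outline builder; `call_llm` is a stub.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

def deposition_outline(deponent: str, issues: str,
                       focus_areas: list[str] | None = None) -> dict:
    # Stage 1: identify relevant topics (optionally steered by the
    # user's specific areas of interest).
    topic_prompt = (f"List deposition topics for deponent: {deponent}. "
                    f"At issue: {issues}.")
    if focus_areas:
        topic_prompt += f" Be sure to cover: {', '.join(focus_areas)}."
    topics = [t for t in call_llm(topic_prompt).splitlines() if t.strip()]

    # Stage 2: draft questions for each topic.
    return {
        topic: call_llm(f"Draft deposition questions for the topic "
                        f"'{topic}' given: {issues}").splitlines()
        for topic in topics
    }
```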
Ask questions of contracts that are analyzed in a line-by-line review.
How this skill works
Upload a set of contracts and a set of questions.
CoCounsel will provide an answer to each question for each contract, or, if a question is not relevant to a given contract, say so.
Upload up to 10 contracts at once
Ask up to 10 questions of each contract
Relevant results will hyperlink to identified passages in the corresponding contract
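In code, that is essentially a contracts-by-questions grid. A hypothetical sketch, with the `call_llm` stub and the 10-contract and 10-question limits taken from the description above; nothing here is CoCounsel’s real API:

```python
# Hypothetical contracts-by-questions grid; stubs, not CoCounsel's API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

MAX_CONTRACTS = 10  # limits taken from the description above
MAX_QUESTIONS = 10

def review_contracts(contracts: dict[str, str],
                     questions: list[str]) -> dict[tuple[str, str], str]:
    """Answer every question against every contract; the model is asked
    to say so explicitly when a question doesn't apply."""
    assert len(contracts) <= MAX_CONTRACTS and len(questions) <= MAX_QUESTIONS
    answers = {}
    for name, text in contracts.items():
        for q in questions:
            answers[(name, q)] = call_llm(
                f"Contract '{name}':\n{text}\n\nQuestion: {q}\n"
                "If the question is not relevant to this contract, "
                "say so; otherwise answer and quote the clause you relied on."
            )
    return answers
```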
Get a list of all parts of a set of contracts that don’t comply with a set of policies.
How this skill works
Upload a set of contracts and then describe a policy or set of policies that the contracts should comply with, e.g. "contracts must contain a right to injunctive relief, not merely the right to seek injunctive relief."
CoCounsel will review your contracts and identify any contractual clauses relevant to the policy or policies you specified.
If there is any conflict between a contractual clause and a policy you described, CoCounsel will recommend a revised clause that complies with the relevant policy. It will also identify the risks presented by a clause that does not conform to the policy you described.
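A hypothetical sketch of that policy pass: for each contract and policy, find the relevant clauses, flag conflicts, and ask for a compliant revision plus the associated risk. The stub and prompt are assumptions, not the real implementation:

```python
# Hypothetical policy-compliance pass; stubs, not the real implementation.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

def check_policies(contracts: dict[str, str],
                   policies: list[str]) -> list[dict]:
    """For each contract and policy: find relevant clauses, flag
    conflicts, and suggest a compliant revision plus the risk."""
    findings = []
    for name, text in contracts.items():
        for policy in policies:
            findings.append({
                "contract": name,
                "policy": policy,
                "report": call_llm(
                    f"Policy: {policy}\n\nContract '{name}':\n{text}\n\n"
                    "Identify clauses relevant to the policy. If a clause "
                    "conflicts, propose a compliant revision and describe "
                    "the risk the non-conforming clause presents."
                ),
            })
    return findings

# e.g. check_policies(
#     {"msa.txt": msa_text},
#     ["contracts must contain a right to injunctive relief, "
#      "not merely the right to seek injunctive relief"])
```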
Get an overview of any document in straightforward, everyday language.
How this skill works
Upload a document, e.g., a legal memorandum, judicial opinion, or contract.
CoCounsel will summarize the document using everyday terminology.
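Functionally this is a single summarization call. A trivial hypothetical sketch, with `call_llm` once more standing in for a generic model client:

```python
# Hypothetical one-call summarizer; `call_llm` is a stub.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

def summarize(document_text: str) -> str:
    """Plain-language overview of a memo, opinion, or contract."""
    return call_llm(
        "Summarize the following document in straightforward, everyday "
        f"language, avoiding legal jargon:\n\n{document_text}"
    )
```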
Find all instances of relevant information in a database of documents.
How this skill works
Select a database and describe what you're looking for in detail, such as templates and precedents to use as a starting point for drafting documents, or specific clauses and provisions you'd like to include in new documents you're working on.
CoCounsel identifies and delivers every instance of what you're searching for, citing sources in the database for each instance.
Behind the scenes, CoCounsel generates multiple queries using keyword search, terms and connectors, Boolean, and Parallel Search to identify the on-point passages from every document in the database; reads and analyzes the search results; and outputs a summary of its findings (i.e., an answer to the question), citing applicable excerpts in specific documents.
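Where Conduct Research synthesizes one answer, this skill promises every instance. A hypothetical sketch of that exhaustive pass, chunking each document and keeping all responsive passages with their sources; the stubs are assumptions, not the actual retrieval stack:

```python
# Hypothetical exhaustive-search sketch: chunk every document, test each
# chunk for relevance, and return all hits with their sources. Stubs only.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

def chunk(text: str, size: int = 2000) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def find_all_instances(database: dict[str, str], request: str) -> list[dict]:
    """Return every passage responsive to the request, citing its source
    document, rather than stopping at the first good match."""
    hits = []
    for doc_name, text in database.items():
        for passage in chunk(text):
            verdict = call_llm(
                f"Looking for: {request}\n\nPassage:\n{passage}\n"
                "Reply MATCH or NO."
            )
            if verdict.strip().upper().startswith("MATCH"):
                hits.append({"source": doc_name, "passage": passage})
    return hits
```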