Large Language Models (LLMs) represent a major breakthrough in Artificial Intelligence (AI), with countless applications, including legal use cases like contract review. While this technology could be inserted throughout your Contract Lifecycle Management (CLM) process, the typical division is between post-execution (i.e. after a contract has been signed) and pre-execution (i.e. while it is being drafted or negotiated).
Most earlier-stage companies don’t have a large enough volume of contracts for post-execution AI applications, like organizing documents, extracting renewal or notice obligations, or analyzing legal data, to be worth the time and cost it takes to implement them.
However, budget-sensitive startups and time-starved executives might be very interested in pre-execution AI solutions around contracts. We’ll discuss good and bad use cases, pros and cons, and finally offer our verdict on the whole.
Keep in mind that you should only rely on AI for contract review if you have no other choice, or if you can do it in conjunction with a lawyer; humans and AI both make mistakes, but you can much better vet a human’s work product through various signals than you can vet the work product of AI.
Beyond the generalized pros and cons of using AI, there are a few ways those tradeoffs play out uniquely when AI is applied in the context of law, especially for Silicon Valley startups:
The main idea is to use AI on provisions that (1) are not overly technical, (2) are not overly sensitive, and (3) are not context-dependent, unless you’ve given the AI a lot of background on you, your counterparty, the deal, the risks on your mind, your key objectives, etc.
You should use this approach if you are very budget-sensitive, or if you have worked closely with a lawyer to apply it only to those provisions that are not (1) technical (i.e. where a small word change makes a huge difference or carries a different legal meaning), (2) sensitive (i.e. where the provision makes a big difference in liability), or (3) context-dependent (i.e. where review requires deep knowledge of the parties, the transaction, or even the industry), at least unless you’ve thoroughly caught your AI tool up.
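The three-part rubric above can be sketched as a simple triage check. This is a toy illustration, not a real product or legal test; the provision names and flags below are hypothetical placeholders:

```python
# Toy triage helper encoding the three-part rubric: a provision is a
# candidate for AI review only if it is not technical, not sensitive,
# and not context-dependent (or the AI has been given deal context).
from dataclasses import dataclass

@dataclass
class Provision:
    name: str
    technical: bool          # small wording changes carry legal meaning
    sensitive: bool          # materially affects liability
    context_dependent: bool  # needs deep knowledge of parties/deal/industry

def ok_for_ai_review(p: Provision, ai_has_context: bool = False) -> bool:
    """Return True only if the provision passes all three checks."""
    if p.technical or p.sensitive:
        return False
    if p.context_dependent and not ai_has_context:
        return False
    return True

# Hypothetical examples:
boilerplate = Provision("notices clause", False, False, False)
indemnity = Provision("indemnification", True, True, True)
print(ok_for_ai_review(boilerplate))  # True
print(ok_for_ai_review(indemnity))    # False
```

Note that technical and sensitive provisions fail the check regardless of context, matching the advice above: extra background helps with context-dependence, but not with provisions where small wording changes carry real legal weight.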
Two things here:
First, this isn’t a use case where being “generative” is necessarily an advantage; instead, you’d rather stay as close to the beaten path as possible, so you don’t accidentally miss or add something you didn’t mean to.
Second, this is an asset that you are going to use many, many times, so any shortcomings are multiplied by the number of times you use it.
For these reasons, we’d first, of course, recommend working with a lawyer. If that’s not an option, the next best bet might be to look at very comparable peers and draw inspiration from their terms. You could then layer an AI review onto *that* new work product, first giving it context on your business, then asking it what other risks it would consider and what other changes it would suggest making.
This is an area where you especially do not want to be “generative.” You want to be super static, and in fact, ideally just like everyone else. That’s true for a lot of reasons: you’ll avoid extra scrutiny when your investors do diligence; you’ll avoid mistakes that have a more significant legal impact on the entity itself; and you’ll avoid violating more specific tax and securities law rules that can be very technical and sensitive.
We would not recommend using AI, therefore, to do things like issue option or stock grant agreements, prepare board and stockholder consents, make changes to your Charter, or other corporate workstreams.
While much of the same analysis as above applies, one great use case is saving the time you’d otherwise spend with a lawyer getting the basic facts down. For example, if you’re doing a SAFE round, consider uploading the Y Combinator (YC) primer on SAFEs into your AI agent and asking it your questions; the same goes for other, heavier legal documentation.
Once you have a deep lay of the land, you can set up time to (1) confirm your findings, especially in the context of Silicon Valley Venture Capital, (2) ask any outstanding questions, and (3) make a game plan about what kinds of documents you need and the best way to prepare them.
Tl;dr: Great for educating yourself more efficiently on the basics, or for help digesting resources put out by the subject-matter experts who matter most to you; less good to rely on entirely.
We won’t speak for these products, but they’ve established a considerable presence, so they may be worth checking out, especially for pre-execution workstreams (e.g. negotiating contracts).
Beyond the legal-specific best practices described above, consider the more general one: give your AI agent lots of context, including guidance from Silicon Valley-specific resources; spend time prompting your LLM to test out its responses for future reference; and continually review and refine based on its answers.
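As a sketch of what “lots of context” can look like in practice, here is a hypothetical prompt template. The company names, deal details, and objectives are all illustrative placeholders, and this is just one way to structure context, not any tool’s required format:

```python
# Hypothetical prompt template for a context-rich contract-review request.
# All names and details below are illustrative placeholders.
def build_review_prompt(company: str, counterparty: str,
                        objectives: list[str], contract_text: str) -> str:
    """Assemble background, objectives, and the contract into one prompt."""
    context = (
        f"You are reviewing a contract for {company}, a Silicon Valley "
        f"startup, negotiating with {counterparty}.\n"
        "Key objectives:\n"
        + "\n".join(f"- {o}" for o in objectives)
    )
    return (
        context
        + "\n\nContract under review:\n"
        + contract_text
        + "\n\nList the risks you see and the changes you would suggest. "
          "Flag anything you are unsure about for review with a lawyer."
    )

prompt = build_review_prompt(
    company="Acme Robotics, Inc.",
    counterparty="BigCo Enterprises",
    objectives=["cap liability at fees paid", "keep IP ownership"],
    contract_text="[paste contract text here]",
)
```

The design point is simply that background (who you are, who the counterparty is, what you care about) comes before the document and the ask, so the model’s feedback is anchored to your situation rather than generic contract commentary.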
AI tools can definitely be leveraged to reduce legal spend and the time it takes to work with a lawyer. Like any powerful tool, though, they should be used carefully: in this case, by knowing when to use them (especially if you are budget-sensitive or have scoped out a clear application together with a lawyer for highly recurring tasks), and by reserving them for workstreams that are less technical, sensitive, or context-dependent.
Again, keep in mind that you should only rely on AI for contract review if you have no other choice, or if you can do it in conjunction with a lawyer; humans and AI both make mistakes, but you can much better vet a human’s work product through various signals than you can vet the work product of AI.