Our first post in this series provided an overview of how large language models (LLMs) work, the benefits and limitations of these models, and why the quality of their output depends on the quality of the prompts you give them. We also explained why evaluating and deciding how to apply LLM-powered tools requires a degree of AI literacy, or at least a fundamental understanding of how LLMs work.
In part two, we offer tips for getting the most from LLMs by providing sufficient context with prompts.
LLMs like GPT-4 are trained on vast amounts of data and can handle a wide range of tasks across topics, but they still have limitations. While LLMs can pick up on subtleties of tone, even sarcasm, they’re limited to the data on which they’re trained. They also lack humans’ abstract reasoning abilities and can struggle to make accurate assumptions when given insufficient information.
Ultimately, LLMs are just predictive models that mimic human logic through sophisticated pattern recognition. If your prompts are unclear, the AI might not understand your intent.
Adding context supplies information an LLM wasn’t trained on and closes reasoning gaps. Context, as it relates to AI, is any information that shapes the model’s understanding and responses, and it’s key to writing good prompts. A lack of context leaves room for ambiguity, which can lead to erroneous conclusions and unhelpful, or even wrong, answers.
Specific-use AI that’s specialized in a particular area of knowledge, like CoCounsel, is less vulnerable to mistaking your intent due to vague prompts. That’s because it pulls from a specific knowledge base, such as a database of current case law, statutes, and regulations, as well as the back-end prompting that guides your request. But specific-use AI still benefits from sufficient contextual background in your prompt, which helps the AI interpret your requests more accurately, leading to better output.
To ensure sufficient context in prompts to legal AI, include the type of case you’re working on (e.g., personal injury, employment), basic facts about the case, or the types of documents you want to have reviewed (e.g., contracts, discovery). You should also consider whether specific dates or time frames are critical to understanding your inquiry, as well as the type of output you’re looking for (e.g., a long summary, a brief paragraph, or a certain kind of analysis).
By including these contextual details in a prompt, you’re more likely to get tailored, high-quality results that answer your inquiry the first time around.
When you’re just looking for basic information, you don’t need to provide as much context. For example, if you want a general overview of a legal document or case, a simple prompt should suffice. This is also the case when you’re seeking a high-level explanation of the law in a jurisdiction or conducting general legal research.
When you’re dealing with complex or specialized information—such as complex legal statutes and acronyms, or niche terms in a particular practice area—it’s important to give the AI more detail. Additional context is critical when what you intend might not be immediately clear, or when you’re seeking deeper reasoning from the AI. By adding context, you’ll help the AI fully understand what you’re asking for, leading to a more accurate response.
A straightforward formula for well-structured prompts is: intent + context + instruction.
Start with a clear expression of the intent behind your query. This sets the stage for the type of information or answer you’re seeking. Then, provide context to anchor the AI’s response in a relevant frame of reference. Include specific conditions or background terms that might help the AI better understand the case or scenario. Finally, add the instruction, the actionable part of the prompt where you tell the AI what task you want it to perform.
As an example, you might write “I’m seeking to discredit an expert witness” for your intent. For context, you can add the type of case you’re working on and the document you’re asking the AI to review: “The document contains all prior testimony of the expert I’m seeking to discredit in a medical malpractice case.” Your instruction could be: “Does the document contain any statements, explicit or implied, that are inconsistent with the expert’s prior testimony?”
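As a rough sketch, the formula can be expressed in code. The `build_prompt` helper below is hypothetical, not part of any particular tool’s API; it simply shows the three parts assembled in order.

```python
def build_prompt(intent: str, context: str, instruction: str) -> str:
    """Assemble a prompt in the order: intent, then context, then instruction."""
    return "\n\n".join([intent, context, instruction])

# The example strings mirror the expert-witness scenario above.
prompt = build_prompt(
    intent="I'm seeking to discredit an expert witness.",
    context=(
        "The document contains all prior testimony of the expert "
        "I'm seeking to discredit in a medical malpractice case."
    ),
    instruction=(
        "Does the document contain any statements inconsistent "
        "with the expert's prior testimony?"
    ),
)
print(prompt)
```

Keeping the three parts as separate, labeled pieces makes it easy to reuse the same intent and context while swapping in new instructions.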
Assigning a persona—such as a lawyer in a particular practice area—can be helpful in narrowing the scope of your prompt. For example: “You are an attorney reviewing discovery in a product liability case. Analyze the documents provided and identify any potential liability and the parties involved.” This context assigns a persona to the AI, instructing it to provide an answer from a specific perspective.
Declaring a condition, precedent, or presupposition is another technique to ensure the AI’s analysis is contextually appropriate and relevant to your needs. Setting a specific condition that must be acknowledged or fulfilled before the AI executes an instruction can save you time by reducing irrelevant results. For example: “If the document discusses employment agreements, summarize the sections related to termination and severance pay.” Writing the prompt this way tells the AI you only want information related to employment agreements.
Prompt reinforcement is a simple yet powerful way to enhance the clarity of your prompts, especially when adherence to instructions is paramount. Simply repeating the content of the prompt—through reiteration or specifying areas on which to focus—serves as reinforcement and helps obtain more precise results.
Perhaps you want the AI to respond in a particular format. You can ensure it does so by establishing example patterns. Say you want CoCounsel to list every medical diagnosis indicated in the document alongside its date of diagnosis, with the date and response in a certain format. You can write an example with placeholder text, such as [mm/dd/yyyy]: [description of diagnosis], which the AI will follow when generating its response. Indicators such as brackets also help delineate the placeholder text you want the AI to replace. Avoid parentheses, however; because they’re so commonplace in ordinary text, they may confuse the AI.
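A minimal sketch of this technique, assuming nothing beyond plain string formatting (the wording and template are illustrative, not a required CoCounsel syntax):

```python
# Bracketed placeholder text shows the AI the exact response format to follow.
TEMPLATE = "[mm/dd/yyyy]: [description of diagnosis]"

prompt = (
    "List every medical diagnosis indicated in the document, "
    "one per line, with its date of diagnosis, "
    f"in this exact format: {TEMPLATE}"
)
print(prompt)
```

The brackets mark what the AI should replace; everything outside them, such as the colon and the one-per-line layout, is the fixed pattern it should reproduce.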
Lastly, LLM responses can always be incrementally improved, so you should iterate on and refine your prompt to get the best results possible. And a good prompt can always be refined with another prompt. You can even instruct the LLM to help improve your prompt.
Just as important as adding the right information to your prompts is avoiding pitfalls.
Adding too much information, or data that’s immaterial to your instruction, can influence or confuse the AI. Unlike a human, an AI cannot distinguish relevant from irrelevant information and will consider everything you give it when “thinking.”
Be aware of the “lost in the middle” effect, a product of primacy and recency bias: an LLM is more likely to forget or fail to consider information in the middle of a prompt than information at the beginning or end. Keep the most important information, the instruction to the AI, at the start or end of the prompt.
Majority label bias refers to an LLM’s tendency to place more consideration or weight on words or concepts that appear more frequently in a prompt. Repetition signals emphasis to the AI, and while it can be used to your advantage with reinforcement prompting, it can unintentionally result in less on-point responses.
Avoid lumping topics together. Most LLMs offer a way to separate different tasks, topics, or projects. For instance, CoCounsel segregates matters into individual chat environments with independent context windows. Begin a new chat when you want to work on a new topic.
Remember the AI’s limitations; ignoring them can lead you down the wrong path. Respecting an LLM’s weaknesses ensures better performance and helps you avoid wasted time and frustration. Math, counting, and sorting are difficult tasks for LLMs because they’re predictive models that don’t actually perform calculations.
And don’t assume the AI will make assumptions. Consider what the AI knows and what it can access beyond your prompt, as well as what has been recorded and lost in the context window.
Generally, it’s good to be very specific about what you want from the AI. Ambiguity can confuse LLMs, so make sure to avoid generic or vague references. And specify not only what you want, but how you want the AI to respond.
To get the most from AI, it’s important to provide sufficient context when you input prompts, which will help tailor and refine your results, and to be aware of pitfalls that may muddy intent or produce irrelevant or incorrect answers.
And here are a few more resources our team has found quite valuable for improving prompting skills:
Rapidly draft common legal letters and emails.
How this skill works
Specify the recipient, topic, and tone of the correspondence you want.
CoCounsel will produce a draft.
Chat back and forth with CoCounsel to edit the draft.
Get answers to your research questions, with explanations and supporting sources.
How this skill works
Enter a question or issue, along with relevant facts such as jurisdiction, area of law, etc.
CoCounsel will retrieve relevant legal resources and provide an answer with explanation and supporting sources.
Behind the scenes, Conduct Research generates multiple queries using keyword search, terms and connectors, Boolean operators, and Parallel Search to identify on-point case law, statutes, and regulations; reads and analyzes the search results; and outputs a summary of its findings (i.e., an answer to the question), along with the supporting sources and applicable excerpts.
Get a thorough deposition outline in no time, just by describing the deponent and what’s at issue.
How this skill works
Describe the deponent and what’s at issue in the case, and CoCounsel identifies multiple highly relevant topics to address in the deposition and drafts questions for each topic.
Refine topics by including specific areas of interest and get a thorough deposition outline.
Ask questions of contracts, which are analyzed in a line-by-line review.
How this skill works
Upload a set of contracts and a set of questions.
CoCounsel will answer each question for each contract, or note when a question is not relevant to a contract.
Upload up to 10 contracts at once
Ask up to 10 questions of each contract
Relevant results will hyperlink to identified passages in the corresponding contract
Get a list of all parts of a set of contracts that don’t comply with a set of policies.
How this skill works
Upload a set of contracts and then describe a policy or set of policies that the contracts should comply with, e.g. "contracts must contain a right to injunctive relief, not merely the right to seek injunctive relief."
CoCounsel will review your contracts and identify any contractual clauses relevant to the policy or policies you specified.
If there is any conflict between a contractual clause and a policy you described, CoCounsel will recommend a revised clause that complies with the relevant policy. It will also identify the risks presented by a clause that does not conform to the policy you described.
Get an overview of any document in straightforward, everyday language.
How this skill works
Upload a document, e.g., a legal memorandum, judicial opinion, or contract.
CoCounsel will summarize the document using everyday terminology.
Find all instances of relevant information in a database of documents.
How this skill works
Select a database and describe what you're looking for in detail, such as templates and precedents to use as a starting point for drafting documents, or specific clauses and provisions you'd like to include in new documents you're working on.
CoCounsel identifies and delivers every instance of what you're searching for, citing sources in the database for each instance.
Behind the scenes, CoCounsel generates multiple queries using keyword search, terms and connectors, Boolean operators, and Parallel Search to identify the on-point passages from every document in the database; reads and analyzes the search results; and outputs a summary of its findings (i.e., an answer to the question), citing applicable excerpts in specific documents.