Prompt Engineering for Legal Applications

Bias, Relevance, and Fit: Quality Control Beyond Accuracy

Even when an AI output is factually correct, it can still be unhelpful, biased, or misaligned with the jurisdiction, client goals, or forum expectations.

Bias checks for legal outputs

  • Look for stereotypes or loaded language.
  • Check whether the output assumes facts not in evidence.
  • Ask the model to produce a neutral rewrite and compare it with the original draft.
  • Have a second reviewer scan for tone and fairness.
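
The bias checks above can be made repeatable by templating the review prompts rather than improvising them each time. A minimal sketch in Python (the function name, prompt wording, and sample draft are illustrative assumptions, not a prescribed format):

```python
def bias_check_prompts(draft: str) -> dict[str, str]:
    """Build bias-review prompts for a drafted legal output.

    The prompt wording here is illustrative; adapt it to your
    model and practice area.
    """
    return {
        # Surface loaded language, stereotypes, and assumed facts
        # for human review.
        "flag_language": (
            "Review the following text and list any loaded language, "
            "stereotypes, or facts assumed but not in evidence:\n\n"
            + draft
        ),
        # Produce a neutral version to compare against the original.
        "neutral_rewrite": (
            "Rewrite the following text in neutral, source-grounded "
            "language, changing as little substance as possible:\n\n"
            + draft
        ),
    }

# Example: a draft containing loaded language ("predictably").
prompts = bias_check_prompts("The defendant, predictably, failed to comply.")
```

Diffing the neutral rewrite against the original draft makes loaded phrasing easy for the second reviewer to spot.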

Relevance checks

Ask: does the output answer your question, for your jurisdiction, and for your audience? If not, refine the prompt or narrow the input.

Output scoring rubric (quick)

  • Accuracy: No fabricated citations; facts match sources.
  • Jurisdiction fit: Correct forum, governing law, and procedural posture.
  • Completeness: Key issues addressed; no major omissions.
  • Tone: Appropriate for client/court/colleague.
  • Risk: Flags uncertainties; avoids overclaiming.
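
To keep reviews consistent across a team, the rubric can be recorded as a lightweight checklist. A minimal sketch (the dimension names follow the rubric above; the 0-2 scoring scale and the pass rule are assumptions you can adjust):

```python
from dataclasses import dataclass, field

# Rubric dimensions and what to look for, from the table above.
RUBRIC = {
    "accuracy": "No fabricated citations; facts match sources.",
    "jurisdiction_fit": "Correct forum, governing law, and procedural posture.",
    "completeness": "Key issues addressed; no major omissions.",
    "tone": "Appropriate for client/court/colleague.",
    "risk": "Flags uncertainties; avoids overclaiming.",
}

@dataclass
class RubricScore:
    # Each dimension scored 0-2: 0 = fail, 1 = needs work, 2 = pass.
    scores: dict = field(default_factory=dict)

    def record(self, dimension: str, score: int) -> None:
        if dimension not in RUBRIC:
            raise ValueError(f"Unknown dimension: {dimension}")
        if score not in (0, 1, 2):
            raise ValueError("Score must be 0, 1, or 2")
        self.scores[dimension] = score

    def passed(self) -> bool:
        # An output passes only when every dimension has been scored
        # and none of them failed outright.
        return (
            set(self.scores) == set(RUBRIC)
            and min(self.scores.values()) >= 1
        )
```

The strict `passed` rule reflects the point of this section: a single failing dimension (say, wrong jurisdiction) sinks an otherwise accurate output.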

Try it

Exercise: Ask the model to summarize a case or statute you provide. Then ask:

  • “List the top 5 claims you made that require verification.”
  • “Rewrite the summary for a client with no legal background.”
  • “Rewrite for a judge in a formal tone with citations.”

Compare the three outputs and identify where hallucinations or bias might appear.