# The Ethical Risk Landscape in Legal AI
Most AI problems in legal work fall into a few predictable categories. Once you can name them, you can build controls around them.
## Four common failure modes
- Hallucinations: confident but false statements, citations, or fabricated facts.
- Hidden assumptions: the model fills gaps with plausible guesses.
- Bias and unfairness: outputs that reflect stereotypes or skew in the training data.
- Data exposure: sensitive client information shared with an unapproved system.
## Risk tiering: choose the right level of guardrails
Not every task needs the same level of oversight. Use a simple two-axis assessment:
- Likelihood of error (How often does this go wrong?)
- Impact if wrong (What happens if we rely on a bad output?)
{{UPLOAD_ASSET:ai_usecase_risk_matrix.png}}
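To make the two-axis idea concrete, the matrix can be sketched as a small scoring function. This is a minimal illustration, not a firm policy: the 1–3 scales and the threshold values are hypothetical and should be calibrated to your own practice.

```python
def risk_tier(likelihood: int, impact: int) -> str:
    """Map a likelihood x impact score to a risk tier.

    Both inputs are rated 1 (low) to 3 (high). Thresholds on the
    combined score are illustrative only.
    """
    score = likelihood * impact
    if score >= 7:   # only high likelihood AND high impact land here
        return "Critical"
    if score >= 5:
        return "High"
    if score >= 3:
        return "Moderate"
    return "Low"
```

For example, a frequent-error task with severe consequences (3 × 3) lands in Critical, while a rare-error, low-stakes task (1 × 1) lands in Low.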

## Controls that match the risk
| Risk tier | Typical legal tasks | Minimum controls |
|---|---|---|
| Low | Formatting, neutral summaries of non‑confidential material | Human review; no client secrets |
| Moderate | Drafting internal analysis, extracting facts from a record | Human review + spot‑checks; log prompts/outputs |
| High | Legal research, citations, drafting arguments | Independent verification; attorney sign‑off; documented sources |
| Critical | Client advice, filings, privilege determinations | Attorney‑led; strict inputs; full verification; audit trail |
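The tier-to-controls mapping above can be encoded as a simple lookup so a checklist can be produced before anyone touches a tool. This is a sketch; the control wording follows the table, and the default-to-strictest behavior mirrors the escalation tip below.

```python
# Minimum controls per risk tier, taken from the table above.
MINIMUM_CONTROLS = {
    "Low": ["Human review", "No client secrets"],
    "Moderate": ["Human review + spot-checks", "Log prompts/outputs"],
    "High": ["Independent verification", "Attorney sign-off", "Documented sources"],
    "Critical": ["Attorney-led", "Strict inputs", "Full verification", "Audit trail"],
}

def checklist(tier: str) -> list[str]:
    # Unknown or unclassified tiers fall back to the strictest controls,
    # consistent with "when unsure, treat it as higher-risk".
    return MINIMUM_CONTROLS.get(tier, MINIMUM_CONTROLS["Critical"])
```

Calling `checklist("High")` returns the three controls for High-tier work; an unrecognized tier returns the Critical list.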
## Activity: classify 3 tasks you do this week
- Pick 3 real tasks on your plate.
- Place each on the matrix (likelihood × impact).
- Write the controls you will use before you touch an AI tool.
Tip: If you’re not sure, treat it as a higher‑risk task and escalate for attorney guidance.