Course Content
Module 1: Ethical Risk Landscape & Professional Duties
  • Welcome & How to Use This Course
  • The Ethical Risk Landscape in Legal AI
  • Professional Duties When AI Is Involved
  • Module 1 Knowledge Check (Self‑Check)
Module 2: Supervised Use, Documentation & Verification
  • What “Supervised Use” Means (and Why It Matters)
  • Documentation & Communication: Make AI Reviewable
  • Verification Techniques for AI‑Assisted Legal Work
  • Module 2 Knowledge Check (Self‑Check)
Module 3: Avoiding Unauthorized Practice of Law
  • Avoiding Unauthorized Practice of Law (UPL) in the Age of AI
  • UPL Boundary Spectrum: Safe Tasks vs. Legal Advice
  • Prompting With Role Guardrails (Templates You Can Reuse)
  • Module 3 Knowledge Check (Self‑Check)
Module 4: Confidentiality & Handling Sensitive Outputs
  • Confidentiality, Privilege & Data Privacy: Safe Inputs
  • Handling Sensitive Outputs: Review, Redaction, Storage
  • Incident Response & Vendor Due Diligence
  • Module 4 Knowledge Check (Self‑Check)
Module 5: Scenarios, Checklists & Continuous Improvement
  • Scenario Lab: Ethical Decision‑Making With AI
  • Quick Reference Cards: Checklists You Can Use Immediately
  • Implementation Playbook: Policy, Training, Governance
  • Wrap‑Up, Resources & Final Assessment
AI Ethics for Legal Professionals

The Ethical Risk Landscape in Legal AI

Most AI problems in legal work fall into a few predictable categories. Once you can name them, you can build controls around them.

Four common failure modes

  • Hallucinations: confident but false statements, citations, or fabricated facts.
  • Hidden assumptions: the model fills gaps with plausible guesses.
  • Bias and unfairness: outputs may reflect biased training data or stereotypes.
  • Data exposure: sensitive client information is shared with an unapproved system.

Risk tiering: choose the right level of guardrails

Not every task needs the same level of oversight. Use a simple two‑axis assessment:

  • Likelihood of error (How often does this go wrong?)
  • Impact if wrong (What happens if we rely on a bad output?)

{{UPLOAD_ASSET:ai_usecase_risk_matrix.png}}

Figure: Risk matrix for AI legal tasks, plotting likelihood of error against impact if wrong. Use this matrix to decide when AI is appropriate and what level of supervision is required.

Controls that match the risk

Risk tier | Typical legal tasks                                         | Minimum controls
Low       | Formatting, neutral summaries of non‑confidential material  | Human review; no client secrets
Moderate  | Drafting internal analysis, extracting facts from a record  | Human review + spot‑checks; log prompts/outputs
High      | Legal research, citations, drafting arguments               | Independent verification; attorney sign‑off; documented sources
Critical  | Client advice, filings, privilege determinations            | Attorney‑led; strict inputs; full verification; audit trail
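If your firm tracks AI use in a spreadsheet or intake form, the tiering logic above can be expressed in a few lines of code. The sketch below is purely illustrative: the 1–3 rating scale, the tier boundaries, and the function name are assumptions for this example, not a prescribed policy, and the control strings are taken from the table above.

```python
# Illustrative sketch: map likelihood-of-error and impact ratings
# (1 = low, 2 = medium, 3 = high) to a risk tier from the table above.
# The scale and tier boundaries are assumptions, not firm policy.

def risk_tier(likelihood: int, impact: int) -> str:
    """Classify an AI-assisted task by likelihood x impact."""
    if impact == 3 and likelihood >= 2:
        return "Critical"   # likely errors with severe consequences
    score = likelihood * impact
    if score >= 6:
        return "High"
    if score >= 3:
        return "Moderate"
    return "Low"

# Minimum controls per tier, copied from the table above.
CONTROLS = {
    "Low": "Human review; no client secrets",
    "Moderate": "Human review + spot-checks; log prompts/outputs",
    "High": "Independent verification; attorney sign-off; documented sources",
    "Critical": "Attorney-led; strict inputs; full verification; audit trail",
}

# Example: legal research with citations -- errors are common
# (likelihood 3) and reliance is costly (impact 2).
tier = risk_tier(3, 2)
print(f"{tier}: {CONTROLS[tier]}")
```

The point is not the code itself but the discipline it encodes: rate both axes before using a tool, and let the tier, not convenience, dictate the controls.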

Activity: classify 3 tasks you do this week

  1. Pick 3 real tasks on your plate.
  2. Place each on the matrix (likelihood × impact).
  3. Write the controls you will use before you touch an AI tool.

Tip: If you’re not sure, treat it as a higher‑risk task and escalate for attorney guidance.