Course Content
Module 1: Ethical Risk Landscape & Professional Duties
  • Welcome & How to Use This Course
  • The Ethical Risk Landscape in Legal AI
  • Professional Duties When AI Is Involved
  • Module 1 Knowledge Check (Self‑Check)
Module 2: Supervised Use, Documentation & Verification
  • What “Supervised Use” Means (and Why It Matters)
  • Documentation & Communication: Make AI Reviewable
  • Verification Techniques for AI‑Assisted Legal Work
  • Module 2 Knowledge Check (Self‑Check)
Module 3: Avoiding Unauthorized Practice of Law
  • Avoiding Unauthorized Practice of Law (UPL) in the Age of AI
  • UPL Boundary Spectrum: Safe Tasks vs. Legal Advice
  • Prompting With Role Guardrails (Templates You Can Reuse)
  • Module 3 Knowledge Check (Self‑Check)
Module 4: Confidentiality & Handling Sensitive Outputs
  • Confidentiality, Privilege & Data Privacy: Safe Inputs
  • Handling Sensitive Outputs: Review, Redaction, Storage
  • Incident Response & Vendor Due Diligence
  • Module 4 Knowledge Check (Self‑Check)
Module 5: Scenarios, Checklists & Continuous Improvement
  • Scenario Lab: Ethical Decision‑Making With AI
  • Quick Reference Cards: Checklists You Can Use Immediately
  • Implementation Playbook: Policy, Training, Governance
  • Wrap‑Up, Resources & Final Assessment
AI Ethics for Legal Professionals

Quick Reference Cards: Checklists You Can Use Immediately

These checklists are designed to be copied directly into your matter workflow or printed as one‑page desk reminders.

Card 1: Before Using AI Tools (Pre‑Use Checklist)

  • [ ] Attorney approval obtained (for the task and tool)
  • [ ] Confirm the tool is approved for this task type
  • [ ] Define scope: what the tool should and should not do
  • [ ] Sanitize inputs (no privileged/confidential identifiers)
  • [ ] Decide risk tier and required verification steps
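For teams that want to automate part of the "sanitize inputs" step, it can be sketched as a simple pattern‑based scrubber. This is an illustrative Python sketch only, not a substitute for human review: the patterns and placeholder labels here are assumptions you would replace with firm‑specific rules, and regexes alone will miss context‑dependent identifiers (client names, matter facts).

```python
import re

# Illustrative patterns only -- real matters need firm-specific rules
# and a human reviewer before anything is pasted into an AI tool.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace obvious identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(sanitize("Reach Jane at jane.doe@example.com or 555-123-4567."))
```

Even with a scrubber like this, the checklist's human steps still apply: the reviewer, not the script, decides whether the remaining text is safe to submit.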

Card 2: During AI Use (Prompt & Output Controls)

  • [ ] Use a role‑safe prompt wrapper (no legal advice; cite sources; flag uncertainty)
  • [ ] Ask for structured output (bullets, tables, issue lists)
  • [ ] Request a verification checklist in the output
  • [ ] Watch for red flags (fake citations, over‑confidence, invented facts)
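The "role‑safe prompt wrapper" item can be illustrated as a reusable template that bakes the card's controls (no legal advice, cite sources, flag uncertainty, request a verification checklist) into every prompt. The wording below is a hypothetical example, not firm‑approved policy language; adapt it rather than copying it verbatim.

```python
# Hypothetical role-safe wrapper: the guardrail wording and {task} slot
# are illustrative assumptions, not approved policy text.
WRAPPER = """You are a drafting assistant for a supervised legal support task.
Do NOT provide legal advice or recommend strategy for a specific client.
Cite a source for every factual or legal statement, and write "UNVERIFIED"
where you cannot. Flag any uncertainty explicitly.

Task: {task}

End your answer with a short verification checklist the reviewer can use."""

def wrap(task: str) -> str:
    """Embed a sanitized task description inside the guardrail template."""
    return WRAPPER.format(task=task)

print(wrap("Summarize the attached deposition transcript in bullet points."))
```

Keeping the wrapper in one shared template, rather than retyping guardrails per prompt, makes it easier to audit what instructions the tool actually received.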

Card 3: Avoiding UPL (Role Boundaries)

  • [ ] Do not send AI‑assisted client communications without attorney review
  • [ ] Do not ask AI to recommend legal strategy for a specific client
  • [ ] Use AI for drafting/summarizing, not final judgment
  • [ ] Escalate when output approaches legal advice

{{UPLOAD_ASSET:upl_boundary_spectrum.png}}

UPL boundary spectrum chart from low-risk support tasks to high-risk legal advice
A visual reminder: the closer a task gets to legal advice, the more supervision and guardrails you need.

Card 4: Handling Sensitive Outputs

  • [ ] Review for confidential facts and identifiers
  • [ ] Verify citations, quotes, and numbers
  • [ ] Redact with approved tools + second reviewer
  • [ ] Store in the matter file per policy; avoid personal drives

Card 5: Incident Response (If Something Goes Wrong)

  • [ ] Stop/contain and notify supervisor
  • [ ] Preserve prompts/outputs/logs
  • [ ] Assess scope and remediation steps
  • [ ] Document and improve policy/training
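The "preserve prompts/outputs/logs" step can be supported with an append‑only record so nothing is lost before the incident is assessed. The sketch below is a minimal illustration; the directory name, file name, and field names are assumptions your IT or governance team would replace with your firm's logging standard.

```python
import json
import datetime
from pathlib import Path

def preserve_record(prompt: str, output: str, tool: str,
                    log_dir: str = "incident_logs") -> Path:
    """Append a timestamped prompt/output record to a JSON Lines file.

    Field names and paths are illustrative, not a firm standard.
    """
    Path(log_dir).mkdir(exist_ok=True)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "output": output,
    }
    path = Path(log_dir) / "ai_incident_log.jsonl"
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return path

preserve_record("example prompt", "example output", "example-tool")
```

Appending rather than overwriting preserves the sequence of events, which matters when the incident is later documented for policy and training improvements.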

{{UPLOAD_ASSET:incident_response_flowchart.png}}

Incident response flowchart for AI-related confidentiality or accuracy incidents
If something goes wrong, speed and documentation matter. Use this flow as a starting point.