Course Content

Module 1: Ethical Risk Landscape & Professional Duties
  • Welcome & How to Use This Course
  • The Ethical Risk Landscape in Legal AI
  • Professional Duties When AI Is Involved
  • Module 1 Knowledge Check (Self‑Check)

Module 2: Supervised Use, Documentation & Verification
  • What “Supervised Use” Means (and Why It Matters)
  • Documentation & Communication: Make AI Reviewable
  • Verification Techniques for AI‑Assisted Legal Work
  • Module 2 Knowledge Check (Self‑Check)

Module 3: Avoiding Unauthorized Practice of Law
  • Avoiding Unauthorized Practice of Law (UPL) in the Age of AI
  • UPL Boundary Spectrum: Safe Tasks vs. Legal Advice
  • Prompting With Role Guardrails (Templates You Can Reuse)
  • Module 3 Knowledge Check (Self‑Check)

Module 4: Confidentiality & Handling Sensitive Outputs
  • Confidentiality, Privilege & Data Privacy: Safe Inputs
  • Handling Sensitive Outputs: Review, Redaction, Storage
  • Incident Response & Vendor Due Diligence
  • Module 4 Knowledge Check (Self‑Check)

Module 5: Scenarios, Checklists & Continuous Improvement
  • Scenario Lab: Ethical Decision‑Making With AI
  • Quick Reference Cards: Checklists You Can Use Immediately
  • Implementation Playbook: Policy, Training, Governance
  • Wrap‑Up, Resources & Final Assessment
AI Ethics for Legal Professionals

Incident Response & Vendor Due Diligence

Even with good controls, incidents happen: a staff member pastes sensitive text into the wrong tool, or an AI output contains harmful errors. What matters is how quickly you respond and how well you document the response.

Incident response quick flow

{{UPLOAD_ASSET:incident_response_flowchart.png}}

Incident response flowchart for AI-related confidentiality or accuracy incidents
If something goes wrong, speed and documentation matter. Use this flow as a starting point.

Vendor due diligence (minimum questions)

  • Is the tool approved for client data? (If not, stop.)
  • Where is data stored, and for how long?
  • Is data used to train models? If yes, can we opt out?
  • What encryption and access controls exist?
  • Do we get audit logs?
  • What happens if there is a breach?

Common incident types

  • Data exposure: sensitive text pasted into a public tool.
  • Bad authority: hallucinated cases or statutes used in a draft.
  • Bias: discriminatory or unfair language in output.
  • Misleading communication: client-facing text implies certainty without support.

Activity: tabletop exercise

Run a 10‑minute tabletop exercise with your team:

  1. Assume a user pasted privileged content into an unapproved AI tool.
  2. List the first five actions you would take within the first 30 minutes.
  3. Decide who must be notified (supervisor, IT, privacy, client?).