Course Content
Module 1: Ethical Risk Landscape & Professional Duties
  • Welcome & How to Use This Course
  • The Ethical Risk Landscape in Legal AI
  • Professional Duties When AI Is Involved
  • Module 1 Knowledge Check (Self‑Check)
Module 2: Supervised Use, Documentation & Verification
  • What “Supervised Use” Means (and Why It Matters)
  • Documentation & Communication: Make AI Reviewable
  • Verification Techniques for AI‑Assisted Legal Work
  • Module 2 Knowledge Check (Self‑Check)
Module 3: Avoiding Unauthorized Practice of Law
  • Avoiding Unauthorized Practice of Law (UPL) in the Age of AI
  • UPL Boundary Spectrum: Safe Tasks vs. Legal Advice
  • Prompting With Role Guardrails (Templates You Can Reuse)
  • Module 3 Knowledge Check (Self‑Check)
Module 4: Confidentiality & Handling Sensitive Outputs
  • Confidentiality, Privilege & Data Privacy: Safe Inputs
  • Handling Sensitive Outputs: Review, Redaction, Storage
  • Incident Response & Vendor Due Diligence
  • Module 4 Knowledge Check (Self‑Check)
Module 5: Scenarios, Checklists & Continuous Improvement
  • Scenario Lab: Ethical Decision‑Making With AI
  • Quick Reference Cards: Checklists You Can Use Immediately
  • Implementation Playbook: Policy, Training, Governance
  • Wrap‑Up, Resources & Final Assessment
AI Ethics for Legal Professionals

Scenario Lab: Ethical Decision‑Making With AI

Scenarios turn abstract rules into concrete habits. Work through these with your team or on your own.

Scenario 1: “Draft the client email”

Situation: A supervising attorney asks you to draft an email explaining the next steps after a hearing. You consider using an AI tool to produce a draft quickly.

Risks to watch: UPL, inaccurate statements, tone, and confidentiality.

Your task:

  1. Write a role‑safe prompt that uses only attorney‑approved outline points.
  2. List five things you will verify before sending the draft to the attorney.
  3. Decide whether this task is low, moderate, or high risk, and explain why.

Scenario 2: “The hallucinated case citation”

Situation: An AI tool generated a paragraph containing a case citation that looks real, but you cannot find the case in your research database.

Your task:

  1. What do you do immediately?
  2. How do you communicate this to the supervising attorney?
  3. How do you prevent it next time?

Hint: Remove the citation, verify the underlying legal point against real authority, and document the verification.

Scenario 3: “Sensitive data in the prompt”

Situation: You pasted an unredacted draft demand letter (containing client identifiers) into a public AI chatbot, then realized the tool was not approved for confidential data.

Your task:

  1. Follow the incident response flow (contain → notify → preserve logs).
  2. List who must be notified in your organization.
  3. List what information you need to collect for assessment.

{{UPLOAD_ASSET:incident_response_flowchart.png}}

Incident response flowchart for AI‑related confidentiality or accuracy incidents.
If something goes wrong, speed and documentation matter. Use this flow as a starting point.

Discussion prompts (optional)

  • When should we disclose AI use to clients (if at all) under our policies?
  • What tasks should be prohibited from AI use?
  • What would “competence” look like for our team (training + approvals)?