Course Content

Module 1: Ethical Risk Landscape & Professional Duties
  • Welcome & How to Use This Course
  • The Ethical Risk Landscape in Legal AI
  • Professional Duties When AI Is Involved
  • Module 1 Knowledge Check (Self‑Check)

Module 2: Supervised Use, Documentation & Verification
  • What “Supervised Use” Means (and Why It Matters)
  • Documentation & Communication: Make AI Reviewable
  • Verification Techniques for AI‑Assisted Legal Work
  • Module 2 Knowledge Check (Self‑Check)

Module 3: Avoiding Unauthorized Practice of Law
  • Avoiding Unauthorized Practice of Law (UPL) in the Age of AI
  • UPL Boundary Spectrum: Safe Tasks vs. Legal Advice
  • Prompting With Role Guardrails (Templates You Can Reuse)
  • Module 3 Knowledge Check (Self‑Check)

Module 4: Confidentiality & Handling Sensitive Outputs
  • Confidentiality, Privilege & Data Privacy: Safe Inputs
  • Handling Sensitive Outputs: Review, Redaction, Storage
  • Incident Response & Vendor Due Diligence
  • Module 4 Knowledge Check (Self‑Check)

Module 5: Scenarios, Checklists & Continuous Improvement
  • Scenario Lab: Ethical Decision‑Making With AI
  • Quick Reference Cards: Checklists You Can Use Immediately
  • Implementation Playbook: Policy, Training, Governance
  • Wrap‑Up, Resources & Final Assessment
AI Ethics for Legal Professionals

Module 3 Knowledge Check (Self‑Check)

This self‑check focuses on UPL boundaries and role‑safe prompting.

Questions

  1. Q1. What is the biggest UPL risk when non‑attorney staff use generative AI?
    • The AI tool is too slow
    • The AI tool gives legal advice that gets forwarded to a client
    • The AI tool uses too much memory
    • The AI tool formats text poorly

    Answer: The AI tool gives legal advice that gets forwarded to a client
    Why: Forwarding AI‑generated legal advice to a client without attorney review is the most direct way non‑attorney staff cross role boundaries into the practice of law.

  2. Q2. True or False: If AI drafted it, UPL rules do not apply to the human who used it.
    • True
    • False

    Answer: False
    Why: Responsibility remains with the firm and supervising attorney; staff must avoid crossing boundaries.

  3. Q3. Which task is safest for staff to ask AI to do without legal judgment?
    • Recommend which claims to file
    • Summarize a transcript and extract dates
    • Advise the client what to do next
    • Decide which documents are privileged

    Answer: Summarize a transcript and extract dates
    Why: It’s factual organization, not legal advice.

  4. Q4. What is a “role‑safe prompt wrapper” designed to do?

    Answer: Keep AI outputs in a drafting/support role and prevent it from generating legal advice.
    Why: The wrapper sets constraints and directs the model to flag advice requests.
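    A role‑safe prompt wrapper of the kind Q4 describes might be sketched as follows. This is an illustrative example only; the function name, guardrail wording, and flag phrase are assumptions, not an official template from this course:

    ```python
    # Illustrative sketch of a "role-safe prompt wrapper" (Q4).
    # The guardrail text and names below are hypothetical examples.

    GUARDRAILS = (
        "You are a drafting and organization assistant supporting law firm staff. "
        "You must NOT provide legal advice, legal conclusions, or recommendations "
        "about what a client should do. "
        "If the task below asks for legal advice, respond only with: "
        "'FLAG: This request calls for legal advice and must go to the "
        "supervising attorney.'"
    )

    def wrap_task(task: str) -> str:
        """Prepend role guardrails to a staff task before sending it to an AI model."""
        return f"{GUARDRAILS}\n\nTask: {task}"

    # Example: a safe, factual-organization task (cf. Q3)
    prompt = wrap_task("Summarize this transcript and extract all dates mentioned.")
    print(prompt)
    ```

    The point of the wrapper is that staff never send a bare task to the model: the constraints travel with every request, so advice‑seeking tasks are flagged rather than answered.
    
    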

  5. Q5. Name one policy rule that reduces UPL risk.

    Answer: No AI‑assisted client communications without attorney review and approval.
    Why: Client‑facing outputs are where UPL risk is highest, so a mandatory review gate there catches the most exposure.

  6. Q6. If an AI draft includes new legal advice, what should you do?

    Answer: Remove it, flag it, and escalate to the supervising attorney; do not send externally.
    Why: New advice requires attorney judgment.

  7. Q7. What should you request from AI to support attorney review?

    Answer: Structured output plus a verification checklist and citations (if any).
    Why: This makes review and verification easier.
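    The structured‑output request in Q7 could be sketched like this; the JSON field names (`summary`, `citations`, `verification_checklist`) are illustrative assumptions, not a format prescribed by the course:

    ```python
    import json

    # Hypothetical sketch of a prompt asking the model for structured output
    # plus a verification checklist (Q7). Field names are illustrative only.

    REVIEW_FORMAT = {
        "summary": "<plain-language summary of the draft>",
        "citations": ["<each source cited, or an empty list if none>"],
        "verification_checklist": [
            "<each factual claim the supervising attorney should verify>",
        ],
    }

    def review_ready_prompt(task: str) -> str:
        """Ask the model to return its answer in a fixed JSON shape for attorney review."""
        return (
            f"Task: {task}\n\n"
            "Return your answer as JSON matching exactly this shape:\n"
            f"{json.dumps(REVIEW_FORMAT, indent=2)}"
        )

    print(review_ready_prompt("Summarize the attached deposition transcript."))
    ```

    A fixed shape like this makes attorney review faster: the reviewer can scan the checklist and citations instead of hunting for claims buried in prose.
    
    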

  8. Q8. Why does AI “confidence” increase UPL risk?

    Answer: Because confident language can make speculative advice look authoritative.
    Why: Tone can mislead; guardrails are required.