AI Ethics for Legal Professionals

Course Content

Module 1: Ethical Risk Landscape & Professional Duties
  • Welcome & How to Use This Course
  • The Ethical Risk Landscape in Legal AI
  • Professional Duties When AI Is Involved
  • Module 1 Knowledge Check (Self‑Check)

Module 2: Supervised Use, Documentation & Verification
  • What “Supervised Use” Means (and Why It Matters)
  • Documentation & Communication: Make AI Reviewable
  • Verification Techniques for AI‑Assisted Legal Work
  • Module 2 Knowledge Check (Self‑Check)

Module 3: Avoiding Unauthorized Practice of Law
  • Avoiding Unauthorized Practice of Law (UPL) in the Age of AI
  • UPL Boundary Spectrum: Safe Tasks vs. Legal Advice
  • Prompting With Role Guardrails (Templates You Can Reuse)
  • Module 3 Knowledge Check (Self‑Check)

Module 4: Confidentiality & Handling Sensitive Outputs
  • Confidentiality, Privilege & Data Privacy: Safe Inputs
  • Handling Sensitive Outputs: Review, Redaction, Storage
  • Incident Response & Vendor Due Diligence
  • Module 4 Knowledge Check (Self‑Check)

Module 5: Scenarios, Checklists & Continuous Improvement
  • Scenario Lab: Ethical Decision‑Making With AI
  • Quick Reference Cards: Checklists You Can Use Immediately
  • Implementation Playbook: Policy, Training, Governance
  • Wrap‑Up, Resources & Final Assessment

Module 4 Knowledge Check (Self‑Check)

This self‑check focuses on confidentiality, sensitive outputs, and incident response.

Questions

  1. Q1. What is the safest default rule for handling client secrets when using public AI chatbots?

    Answer: Do not paste them; use firm‑approved tools and sanitized inputs.
    Why: Public tools may store or use prompts; confidentiality is at risk.

  2. Q2. True or False: If you anonymize names, it is always safe to paste the rest of the document into any AI tool.
    • True
    • False

    Answer: False
    Why: Other facts can still identify clients or reveal strategy; tool approval matters.

  3. Q3. Which is a best practice for redaction?
    • Use manual black boxes in Word
    • Use approved redaction tools and verify with a second reviewer
    • Skip redaction if the document is long
    • Redact only names and nothing else

    Answer: Use approved redaction tools and verify with a second reviewer
    Why: Manual black boxes in Word often leave the underlying text selectable or extractable; approved redaction tools plus a second reviewer catch these technical errors before sensitive data is exposed.

  4. Q4. In an AI incident, what should you do first?

    Answer: Contain the issue (stop use), notify supervisor/IT, and preserve logs and outputs.
    Why: Containment and preservation support assessment and remediation.

  5. Q5. Name two vendor due diligence questions.

    Answer: Any two of: Where is data stored, and for how long? Is client data used for model training? What encryption and access logging exist? What is the breach‑response process?
    Why: These questions assess confidentiality and control.

  6. Q6. Why should you preserve prompts and outputs during an incident?

    Answer: They are evidence needed to understand scope, impact, and remediation steps.
    Why: Documentation supports defensibility and client communication.

  7. Q7. What is data minimization?

    Answer: Providing only the minimum necessary information to complete the task.
    Why: Less data reduces exposure risk.

  8. Q8. If an output contains a hallucinated case citation, what is the correct response?

    Answer: Remove it, verify the point with real authority, and flag to the supervising attorney.
    Why: False authority cannot remain in a legal draft.