Course Content
Module 1: Ethical Risk Landscape & Professional Duties
• Welcome & How to Use This Course
• The Ethical Risk Landscape in Legal AI
• Professional Duties When AI Is Involved
• Module 1 Knowledge Check (Self‑Check)
Module 2: Supervised Use, Documentation & Verification
• What “Supervised Use” Means (and Why It Matters)
• Documentation & Communication: Make AI Reviewable
• Verification Techniques for AI‑Assisted Legal Work
• Module 2 Knowledge Check (Self‑Check)
Module 3: Avoiding Unauthorized Practice of Law
• Avoiding Unauthorized Practice of Law (UPL) in the Age of AI
• UPL Boundary Spectrum: Safe Tasks vs. Legal Advice
• Prompting With Role Guardrails (Templates You Can Reuse)
• Module 3 Knowledge Check (Self‑Check)
Module 4: Confidentiality & Handling Sensitive Outputs
• Confidentiality, Privilege & Data Privacy: Safe Inputs
• Handling Sensitive Outputs: Review, Redaction, Storage
• Incident Response & Vendor Due Diligence
• Module 4 Knowledge Check (Self‑Check)
Module 5: Scenarios, Checklists & Continuous Improvement
• Scenario Lab: Ethical Decision‑Making With AI
• Quick Reference Cards: Checklists You Can Use Immediately
• Implementation Playbook: Policy, Training, Governance
• Wrap‑Up, Resources & Final Assessment
AI Ethics for Legal Professionals

Module 1 Knowledge Check (Self‑Check)

Use this self‑check to confirm you can spot the most common ethical risks before you move on.

Note: A graded quiz CSV is included in the package for Tutor LMS quiz import.

Questions

  1. Q1. Which risk is most associated with AI generating fake citations or cases?
    • Bias
    • Hallucinations
    • Encryption failure
    • Conflict waiver

    Answer: Hallucinations
    Why: Generative AI can fabricate plausible‑sounding but false citations or facts.

  2. Q2. True or False: If AI produces a confident‑sounding answer, it is safe to treat that answer as authoritative.
    • True
    • False

    Answer: False
    Why: Confidence is not accuracy. Verification is always required for legal work.

  3. Q3. Which task is typically the highest risk tier?
    • Formatting a brief
    • Summarizing a public article
    • Drafting client advice on next steps
    • Extracting dates from a transcript

    Answer: Drafting client advice on next steps
    Why: Client advice is high‑impact and sits closest to the exercise of legal judgment.

  4. Q4. Name two minimum controls for any AI use in legal work.

    Answer: Human review of all outputs, plus confidentiality protection (sanitize inputs or use only approved tools)
    Why: Every workflow needs review and confidentiality safeguards.

  5. Q5. Why do non‑attorney staff need to escalate uncertain AI outputs?

    Answer: Because legal judgment belongs to the supervising attorney and errors can create ethical exposure.
    Why: Escalation supports supervision and competence.

  6. Q6. What does “risk tiering” help you decide?

    Answer: How much oversight, verification, and documentation a task needs before/after AI use.
    Why: Different tasks require different controls.

  7. Q7. Which is a sign of hidden assumptions in AI output?

    Answer: The model fills missing facts with plausible details not supported by the record.
    Why: LLMs may “complete the story” when information is missing.

  8. Q8. If a policy is unclear, what is the safest default?

    Answer: Minimize inputs, verify outputs, document decisions, and escalate to a supervisor.
    Why: These controls reduce harm when rules are uncertain.