Course Content
Module 1: Ethical Risk Landscape & Professional Duties
  • Welcome & How to Use This Course
  • The Ethical Risk Landscape in Legal AI
  • Professional Duties When AI Is Involved
  • Module 1 Knowledge Check (Self‑Check)
Module 2: Supervised Use, Documentation & Verification
  • What “Supervised Use” Means (and Why It Matters)
  • Documentation & Communication: Make AI Reviewable
  • Verification Techniques for AI‑Assisted Legal Work
  • Module 2 Knowledge Check (Self‑Check)
Module 3: Avoiding Unauthorized Practice of Law
  • Avoiding Unauthorized Practice of Law (UPL) in the Age of AI
  • UPL Boundary Spectrum: Safe Tasks vs. Legal Advice
  • Prompting With Role Guardrails (Templates You Can Reuse)
  • Module 3 Knowledge Check (Self‑Check)
Module 4: Confidentiality & Handling Sensitive Outputs
  • Confidentiality, Privilege & Data Privacy: Safe Inputs
  • Handling Sensitive Outputs: Review, Redaction, Storage
  • Incident Response & Vendor Due Diligence
  • Module 4 Knowledge Check (Self‑Check)
Module 5: Scenarios, Checklists & Continuous Improvement
  • Scenario Lab: Ethical Decision‑Making With AI
  • Quick Reference Cards: Checklists You Can Use Immediately
  • Implementation Playbook: Policy, Training, Governance
  • Wrap‑Up, Resources & Final Assessment
AI Ethics for Legal Professionals

Professional Duties When AI Is Involved

AI doesn’t change professional obligations—it changes how you satisfy them.

Key ethics anchors (in plain English)

  • Competence: understand the tool enough to use it safely.
  • Confidentiality: protect client information and privilege.
  • Communication: don’t mislead clients or the court about how work was produced.
  • Supervision: lawyers supervise non‑lawyer staff and vendors; non‑lawyers escalate questions to a supervising attorney.
  • Fees: bill fairly; don’t charge for time you didn’t spend.

Current guidance to be aware of

The American Bar Association issued Formal Opinion 512 (July 29, 2024) addressing lawyers’ use of generative AI tools and emphasizing duties like competence, confidentiality, communication, and reasonable fees. Many state bars have also issued guidance. Use your jurisdiction’s rules and your firm policy as the primary authority.

Practice tip: When policies and rules are unclear, default to: minimize inputs, verify outputs, document decisions, and escalate.

Governance: how teams keep AI use under control

Even small firms benefit from a lightweight governance loop:

{{UPLOAD_ASSET:ai_governance_cycle.png}}

AI governance cycle, inspired by the NIST AI RMF (Govern, Map, Measure, Manage): set rules, map use cases, measure performance, and manage risks.
  • Govern: approved tools, training, and a clear escalation path.
  • Map: what tasks you use AI for (and what you never use it for).
  • Measure: accuracy checks, bias checks, and incident tracking.
  • Manage: update prompts, templates, and controls when you see failure patterns.

Mini‑template: “AI Use Disclosure” for internal work

You can paste this into an internal draft to make review easier:

AI use: Draft assisted with [Tool Name] on [Date]. Inputs were sanitized (no client‑identifying or privileged information). Output was reviewed for accuracy, citations were verified, and final judgment remains with the supervising attorney.