Module 3 Knowledge Check (Self‑Check)
This self‑check covers unauthorized practice of law (UPL) boundaries and role‑safe prompting.
Questions
- Q1. What is the biggest UPL risk when using generative AI as non‑attorney staff?
  - The AI tool is too slow
  - The AI tool gives legal advice that gets forwarded to a client
  - The AI tool uses too much memory
  - The AI tool formats text poorly
Answer: The AI tool gives legal advice that gets forwarded to a client
Why: Forwarding advice without attorney review can cross role boundaries.
- Q2. True or False: If AI drafted it, UPL rules do not apply to the human who used it.
  - True
  - False
Answer: False
Why: Responsibility remains with the firm and supervising attorney; staff must avoid crossing role boundaries.
- Q3. Which task is safest for staff to ask AI to do without legal judgment?
  - Recommend which claims to file
  - Summarize a transcript and extract dates
  - Advise the client on what to do next
  - Make privilege determinations
Answer: Summarize a transcript and extract dates
Why: It’s factual organization, not legal advice.
- Q4. What is a “role‑safe prompt wrapper” designed to do?
Answer: Keep AI outputs in a drafting/support role and prevent it from generating legal advice.
Why: The wrapper sets constraints and directs the model to flag advice requests.
- Q5. Name one policy rule that reduces UPL risk.
Answer: No AI‑assisted client communications without attorney review and approval.
Why: Client‑facing outputs carry the highest UPL risk.
- Q6. If an AI draft includes new legal advice, what should you do?
Answer: Remove it, flag it, and escalate to the supervising attorney; do not send externally.
Why: New advice requires attorney judgment.
- Q7. What should you request from AI to support attorney review?
Answer: Structured output plus a verification checklist and citations (if any).
Why: This makes review and verification easier.
- Q8. Why does AI “confidence” increase UPL risk?
Answer: Because confident language can make speculative advice look authoritative.
Why: Tone can mislead; guardrails are required.
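To make Q4 and Q6 concrete, here is a minimal sketch of what a "role‑safe prompt wrapper" and an advice‑flagging check could look like. All names, guardrail text, and marker phrases below are illustrative assumptions, not a real library or your firm's actual policy language; any production version would need attorney‑approved wording.

```python
# Hypothetical sketch of a role-safe prompt wrapper (Q4) and an
# escalation check (Q6). Guardrail wording and markers are illustrative only.

ROLE_GUARDRAILS = (
    "You are assisting non-attorney staff in a drafting/support role. "
    "Do NOT give legal advice, recommend legal strategy, or predict case outcomes. "
    "If the request calls for legal advice, respond only with: NEEDS_ATTORNEY_REVIEW."
)

# Example phrases that suggest a draft contains new legal advice (assumed list).
ADVICE_MARKERS = ("you should file", "we recommend", "your best legal option")

def wrap_prompt(task: str) -> str:
    """Prepend role constraints so the model stays in a support role."""
    return f"{ROLE_GUARDRAILS}\n\nTask: {task}"

def needs_escalation(output: str) -> bool:
    """Flag drafts that look like new legal advice for attorney review."""
    lowered = output.lower()
    return "needs_attorney_review" in lowered or any(
        marker in lowered for marker in ADVICE_MARKERS
    )

print(wrap_prompt("Summarize the attached transcript and extract all dates."))
print(needs_escalation("We recommend you file the claim immediately."))  # True
print(needs_escalation("Summary: hearing dates are Jan 3 and Feb 9."))   # False
```

A real deployment would not rely on keyword matching alone; the sketch only shows the pattern from Q4 (constrain the role up front) and Q6 (flag and escalate rather than send).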