Module 1 Knowledge Check (Self‑Check)
Use this self‑check to confirm you can spot the most common ethical risks before you move on.
Note: A graded quiz CSV is included in the package for Tutor LMS quiz import.
Questions
- Q1. Which risk is most associated with AI generating fake citations or cases?
- Bias
- Hallucinations
- Encryption failure
- Conflict waiver
Answer: Hallucinations
Why: Generative AI can fabricate plausible‑sounding but false citations or facts.
- Q2. True or False: If AI produces an answer, it is safe to treat it as authoritative if it sounds confident.
- True
- False
Answer: False
Why: Confidence is not accuracy. Verification is always required for legal work.
- Q3. Which task typically falls in the highest risk tier?
- Formatting a brief
- Summarizing a public article
- Drafting client advice on next steps
- Extracting dates from a transcript
Answer: Drafting client advice on next steps
Why: Client advice is high‑impact and close to legal judgment.
- Q4. Name two minimum controls for any AI use in legal work.
Answer: Human review and confidentiality protection (sanitize inputs / use approved tools)
Why: Every workflow needs review and confidentiality safeguards.
- Q5. Why do non‑attorney staff need to escalate uncertain AI outputs?
Answer: Because legal judgment belongs to the supervising attorney and errors can create ethical exposure.
Why: Escalation supports supervision and competence.
- Q6. What does “risk tiering” help you decide?
Answer: How much oversight, verification, and documentation a task needs before/after AI use.
Why: Different tasks require different controls.
- Q7. Which is a sign of hidden assumptions in AI output?
Answer: The model fills missing facts with plausible details not supported by the record.
Why: LLMs may “complete the story” when information is missing.
- Q8. If a policy is unclear, what is the safest default?
Answer: Minimize inputs, verify outputs, document decisions, and escalate to a supervisor.
Why: These controls reduce harm when rules are uncertain.