Module 4 Knowledge Check (Self‑Check)
This self‑check focuses on confidentiality, sensitive outputs, and incident response.
Questions
- Q1. What is the safest default rule for client secrets and public AI chatbots?
Answer: Do not paste them; use firm‑approved tools and sanitized inputs.
Why: Public tools may store or use prompts; confidentiality is at risk.
- Q2. True or False: If you anonymize names, it is always safe to paste the rest of the document into any AI tool.
- True
- False
Answer: False
Why: Other facts (dates, amounts, unique circumstances) can still identify clients or reveal strategy, and tool approval still matters.
- Q3. Which is a best practice for redaction?
- Use manual black boxes in Word
- Use approved redaction tools and verify with a second reviewer
- Skip redaction if the document is long
- Redact only names and nothing else
Answer: Use approved redaction tools and verify with a second reviewer
Why: Manual redaction errors (e.g., text hidden under a black box but still selectable) can expose sensitive data.
- Q4. In an AI incident, what should you do first?
Answer: Contain the issue (stop use), notify supervisor/IT, and preserve logs and outputs.
Why: Containment and preservation support assessment and remediation.
- Q5. Name two vendor due diligence questions.
Answer: Any two of: Where is data stored, and for how long? Is customer data used for model training? What encryption and logging are in place? What is the vendor's breach response process?
Why: These questions assess confidentiality protections and data control.
- Q6. Why should you preserve prompts and outputs during an incident?
Answer: They are evidence needed to understand scope, impact, and remediation steps.
Why: Documentation supports defensibility and client communication.
- Q7. What is data minimization?
Answer: Providing only the minimum necessary information to complete the task.
Why: Less data reduces exposure risk.
- Q8. If an output contains a hallucinated case citation, what is the correct response?
Answer: Remove it, verify the point against real authority, and flag the issue to the supervising attorney.
Why: False authority cannot remain in a legal draft.