Course Content
Module 1: Introduction to Large Language Models (LLMs) in Law
  • What LLMs Are (and Aren’t): A Lawyer‑Friendly Mental Model
  • Legal Use Cases & Risk Tiers
Module 2: Fundamentals of Effective Prompt Design for Legal Tasks
  • The ICI Framework: Intent + Context + Instruction
  • Advanced Prompt Techniques for Legal Work
  • Prompt Debugging: Lost Middle, Ambiguity, and Token Hygiene
Module 3: Verifying and Validating AI-Generated Legal Content
  • Validation Mindset: Why Verification Is Non‑Negotiable
  • Hallucinations in Legal Content: Red Flags & Fixes
  • Bias, Relevance, and Fit: Quality Control Beyond Accuracy
Module 4: Ethical Considerations and Responsible AI Use in Law
  • Confidentiality & Data Handling: What You Can Paste Into AI
  • Competence, Supervision, and Accountability with AI
  • Build Your Firm AI Policy Template
Module 5: Building a Personal Prompt Library and Future Trends
  • Designing a Personal Prompt Library
  • Future Trends: Specialized Legal Models, RAG, and Agents
  • Build 10 High-Value Prompts You’ll Actually Reuse
  • Final Assessment: Applied Prompt Engineering Scenario
Prompt Engineering for Legal Applications

Module 4 Knowledge Check (Self‑Check)

This self-check mirrors the Module 4 quiz; use it to test your understanding. If your site uses Tutor LMS Quiz Import, you can import the CSV quiz file included in the package instead of (or in addition to) this self-check.

  1. A primary ethical concern when inputting sensitive client information into general-purpose LLMs is:

    a. The AI might become too intelligent.
    b. The risk of breaching client confidentiality.
    c. The AI might generate too much information.
    d. The AI might not understand the legal context.
  2. The duty of “technological competence” for lawyers implies:

    a. Being able to code complex AI algorithms.
    b. Understanding the capabilities and limitations of AI tools.
    c. Relying on AI for all legal tasks.
    d. Ignoring traditional legal research methods.
  3. True or False: If an AI makes an error in legal work, the AI system itself is solely accountable, not the supervising attorney.

    a. True
    b. False
  4. Which of the following is a recommended mitigation strategy for ethical risks in AI use?

    a. Blindly trusting AI outputs.
    b. Avoiding any discussion of AI with clients.
    c. Implementing robust data security and privacy protocols.
    d. Disregarding professional rules of conduct.

Answer key

  1. b
  2. b
  3. b
  4. c