Course Content
Module 1: Introduction to Large Language Models (LLMs) in Law
    • What LLMs Are (and Aren’t): A Lawyer‑Friendly Mental Model
    • Legal Use Cases & Risk Tiers
Module 2: Fundamentals of Effective Prompt Design for Legal Tasks
    • The ICI Framework: Intent + Context + Instruction
    • Advanced Prompt Techniques for Legal Work
    • Prompt Debugging: Lost Middle, Ambiguity, and Token Hygiene
Module 3: Verifying and Validating AI-Generated Legal Content
    • Validation Mindset: Why Verification Is Non‑Negotiable
    • Hallucinations in Legal Content: Red Flags & Fixes
    • Bias, Relevance, and Fit: Quality Control Beyond Accuracy
Module 4: Ethical Considerations and Responsible AI Use in Law
    • Confidentiality & Data Handling: What You Can Paste Into AI
    • Competence, Supervision, and Accountability with AI
    • Build Your Firm AI Policy Template
Module 5: Building a Personal Prompt Library and Future Trends
    • Designing a Personal Prompt Library
    • Future Trends: Specialized Legal Models, RAG, and Agents
    • Build 10 High-Value Prompts You’ll Actually Reuse
    • Final Assessment: Applied Prompt Engineering Scenario
Prompt Engineering for Legal Applications

Module 3 Knowledge Check (Self‑Check)

This self-check mirrors the Module 3 quiz. Use it to test your understanding. If your site uses Tutor LMS Quiz Import, you can import the CSV quiz file provided in the package instead of, or in addition to, this self-check.

  1. What is the most fundamental step in verifying AI-generated legal content?

    a. Trusting the AI implicitly.
    b. Cross-referencing with authoritative legal sources.
    c. Relying solely on the AI’s internal consistency.
    d. Assuming the AI has no biases.
  2. Hallucinations in AI-generated content refer to:

    a. The AI’s ability to create highly imaginative legal arguments.
    b. The generation of fabricated or false information, such as non-existent case citations.
    c. The AI’s capacity to understand complex legal concepts.
    d. The AI’s visual output capabilities.
  3. True or False: As a legal professional, you are ultimately responsible for all work product, even if AI was used in its creation.

    a. True
    b. False
  4. Which of the following is a valid strategy for identifying bias in AI-generated legal content?

    a. Assuming the AI is always neutral.
    b. Looking for stereotypical language or unfair generalizations.
    c. Relying on the AI to self-correct its biases.
    d. Ignoring any language that seems subjective.

Answer key

  1. b
  2. b
  3. a
  4. b