Cavaridge Academy
Cavaridge AI for Service Teams
Module 5 of 5

When NOT to use AI

The honest taxonomy of tasks where Cavaridge AI is the wrong tool — and what to do instead.

Video — pending production
Read the transcript below. Once recording is complete, the video will replace this notice.
_Draft — AI-generated first-pass transcript, pending video production and SME review._

The most useful lesson on Cavaridge AI is when _not_ to use it. Get this right and you avoid the two ways teams burn trust: confident-but-wrong output sent to customers, and over-automation of decisions that need a human signature.

## The taxonomy

Three categories.

### Don't use AI here — ever

- **Compliance attestations.** A HIPAA risk assessment, a SOC 2 control attestation, a vendor security questionnaire response that goes on the record — these need a qualified human to sign. AI can draft. AI must not approve. The platform enforces this with audit-mode blocks.
- **Disciplinary or HR communications.** Even if the language is right, the accountability is the manager's. Don't outsource it.
- **Patient or client decisions in a regulated context.** Medical, legal, or financial decisions affecting a person's care or status.

### Use AI as a draft tool — human approves

- Customer-facing emails.
- Status reports.
- Security finding summaries.
- Quote and SoW drafts (Cavaridge Market handles this with a built-in review gate).
- Code suggestions for review.

The pattern is always the same: AI drafts, a qualified human approves before send.

### Use AI freely — internal velocity

- Researching a topic before a meeting.
- Summarizing your own notes.
- Brainstorming options before you commit to one.
- Running document analysis on your own docs (provenance still applies).
- Asking Ducky to explain something you don't understand.

## The "confident but wrong" failure mode

The single biggest mistake is sending output that sounds right but isn't grounded. Two practices defend against it:

1. **Citations on every customer-facing claim.** Already covered in Module 2. Re-verify what's cited.
2. **Pulse the failure when it happens.** When AI is wrong in a way that mattered, file a Pulse domain event for the relevant app and link the Langfuse run. The product team uses these to tighten the prompts and the routing.

## When in doubt

Default to: **AI drafts, human approves**. The friction is small. The trust dividend is large.

That's the framework. The next time you're about to send AI output to a customer, ask yourself: _am I qualified to sign my name to this?_ If yes, ship. If no, get the right person involved before it goes out.

This wraps up the path. After the final assessment, you'll receive a signed credential you can share publicly via your `verify` URL.
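The "AI drafts, human approves" gate described above can be sketched in code. This is a minimal illustrative sketch, not the Cavaridge implementation: the `Draft` class and its methods are hypothetical names invented for this example. The point it shows is structural — sending is impossible until a named human has signed off, so accountability always lands on a person.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """A hypothetical AI-generated draft that cannot be sent until a human approves it."""
    body: str
    approved_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        # Record the reviewer's name so the sign-off is attributable to a person.
        self.approved_by = reviewer

    def send(self) -> str:
        # Enforce the gate: no recorded human approval, no send.
        if self.approved_by is None:
            raise PermissionError("AI drafts, human approves: no sign-off recorded")
        return f"sent (approved by {self.approved_by})"

# Usage: approving before sending succeeds; skipping approval raises.
draft = Draft(body="Hi, here is the status update you asked for...")
draft.approve("j.doe")
draft.send()
```

In a real system the gate would live in the sending pipeline rather than the draft object, but the invariant is the same: the send path checks for a human signature, not the AI's confidence.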

Knowledge check

  1. Question 1 · select one
    Which of these is NOT a good Cavaridge AI use case for a service team?
  2. Question 2 · select one
    For a regulated decision (e.g., HIPAA gap), the right pattern is
  3. Question 3 · select all that apply
    Where should you go to flag a feature where AI was wrong?