---
title: Chat with citations
status: draft
note: AI-generated first-pass transcript pending video production + SME review.
---
In this lesson you'll work in Studio's sandbox tenant — the **forge-msp-trio**
seed gives you three sample MSP clients with realistic ticket history,
licensing data, and a small docs corpus. You'll use chat, you'll get
citations, and we'll talk about what to do with both.
## The flow
Open Studio. Start a chat. Ask a real question — something a service-team
member would ask. Try this:
> Summarize the open tickets for ContosoCare and tell me which would
> need a follow-up call this week.
Studio responds with a summary. Look at the bottom of the response: each
non-trivial claim has a citation. Click one. The cited source opens in a
side panel.
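To make the shape of that response concrete, here is a minimal sketch of an answer-plus-citations structure. This is illustrative only: `Citation`, `ChatResponse`, and the field names are assumptions, not Studio's actual API, and the sample ticket IDs are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_id: str   # hypothetical: a ticket or doc ID in the connected corpus
    snippet: str     # the passage the claim is attributed to

@dataclass
class ChatResponse:
    answer: str
    citations: list[Citation] = field(default_factory=list)

# A response shaped like the flow above: a summary plus per-claim citations.
resp = ChatResponse(
    answer="ContosoCare has 4 open tickets; T-1042 needs a follow-up call.",
    citations=[
        Citation(
            source_id="ticket:T-1042",
            snippet="Customer asked for a callback by Friday.",
        )
    ],
)
print(len(resp.citations))  # 1
```

The point of the sketch: every non-trivial claim in `answer` should map to an entry in `citations`, and clicking one is what opens the side panel.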
## What citations actually mean
A citation is a **claim of grounding**: the model is saying _I drew this
from this source_. It is not a guarantee that the source actually
supports the claim. Two things to check:
- The cited document genuinely says what the answer says.
- The cited document is **the right kind of source** for the question.
A roadmap doc isn't a security finding; an old ticket isn't a current
inventory.
When the answer is right and the source supports it, you have a usable
output. When the source doesn't support the claim, that's a hallucination
even with a citation. Treat it like one.
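The two checks above can be sketched as a tiny review helper. Note the hedging: real review is a human read of the source, and the keyword-overlap test here is a deliberately crude stand-in for "does the source support the claim"; `source_kind` and the example inputs are hypothetical.

```python
def citation_passes_review(
    claim: str, source_text: str, source_kind: str, expected_kinds: set[str]
) -> bool:
    # Check 1: the cited text actually supports the claim.
    # (Crude stand-in: shared long words. A human read is the real check.)
    supports = any(
        word in source_text.lower()
        for word in claim.lower().split()
        if len(word) > 4
    )
    # Check 2: the source is the right *kind* for the question.
    right_kind = source_kind in expected_kinds
    return supports and right_kind

# A ticket cited for a ticket question: passes both checks.
ok = citation_passes_review(
    claim="Ticket T-1042 requires a callback",
    source_text="Customer asked for a callback by Friday on ticket T-1042.",
    source_kind="ticket",
    expected_kinds={"ticket"},
)

# A roadmap doc cited for a security question: wrong kind, fails check 2.
bad = citation_passes_review(
    claim="No open security findings",
    source_text="Q3 roadmap: ship SSO.",
    source_kind="roadmap",
    expected_kinds={"security_finding"},
)
print(ok, bad)  # True False
```

Either check failing means the same thing: a citation is present, but the output is not grounded.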
## When there are no citations
Sometimes Studio answers without citations. This is honest behavior — it's
saying _I'm answering from training, not from your data_. Treat that
output as a starting point, never the final answer for a customer-facing
or compliance-relevant decision. If you need grounding, ask: "answer
only from our connected sources, and cite each claim."
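One way to make that grounding request routine is to append it to every question you send. This is a sketch of the habit, not a Studio feature; the function name and suffix wording are assumptions (the quoted instruction comes from the lesson text).

```python
GROUNDING_SUFFIX = (
    "Answer only from our connected sources, and cite each claim. "
    "If the sources don't cover it, say so instead of answering."
)

def grounded_prompt(question: str) -> str:
    # Appending the instruction turns ungrounded answers into refusals,
    # which is what you want for customer-facing or compliance work.
    return f"{question}\n\n{GROUNDING_SUFFIX}"

print(grounded_prompt("Which ContosoCare tickets need a follow-up call this week?"))
```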
## What about retrieval-augmented chat?
Studio's retrieval is over the docs and data sources you've connected to
the workspace. If a topic isn't connected, citations can't appear. The
answer to "why didn't I get citations" is usually "we haven't connected
that data source yet" — not "the AI is being lazy."
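The scoping rule is worth internalizing: retrieval only sees what's connected. A minimal sketch, with an assumed set of connected sources (the seed's tickets, licensing data, and docs) and an invented `crm_notes` source for contrast:

```python
# Hypothetical: the data sources connected to this workspace.
connected_sources = {"tickets", "licensing", "docs"}

def can_cite(topic_source: str) -> bool:
    # Retrieval is scoped to connected sources; anything outside
    # that set cannot produce citations, no matter how you ask.
    return topic_source in connected_sources

print(can_cite("tickets"))    # True
print(can_cite("crm_notes"))  # False: not connected, so no citations
```

So before blaming the model, check the connection list.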
## The minimum review you should give every customer-facing output
1. Open every citation.
2. Confirm the cited source supports the specific claim.
3. Check the source is current and relevant.
4. If anything's off — even a small thing — edit before sending.
This is the same standard you'd apply to a draft from a junior analyst.
Studio is fast; you stay accountable.
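The four-step review above can be encoded as a gate: the draft ships only if every citation clears every check. The dict keys and the sample draft are illustrative assumptions; in practice each flag is a human judgment, not a stored field.

```python
def passes_minimum_review(citations: list[dict]) -> bool:
    """True only if every citation clears steps 1-3; False means edit first."""
    for c in citations:
        if not c.get("opened"):                # 1. open every citation
            return False
        if not c.get("supports_claim"):        # 2. source supports the claim
            return False
        if not c.get("current_and_relevant"):  # 3. source is current, relevant
            return False
    return True  # 4. anything off would have returned False: edit before sending

draft = [
    {"opened": True, "supports_claim": True, "current_and_relevant": True},
    {"opened": True, "supports_claim": False, "current_and_relevant": True},
]
print(passes_minimum_review(draft))  # False: second citation fails step 2
```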
## Hands-on exercise
In the sandbox, ask three questions:
- One that's well-grounded in the seed data (you should get good citations).
- One that's **partially** in scope (you should see Studio decline parts).
- One that's outside scope (you should see no citations or a refusal).
Compare the three responses. Notice how Studio's confidence and citation
pattern change with grounding. That pattern is a tool you'll use every
day.
Module 2 of 5 · sandbox: forge, seed forge-msp-trio · 60 min