---
title: "AI Visibility Audit: Are You Being Cited?"
description: "Step-by-step guide to auditing whether AI models cite your content. Find gaps, track competitors, and measure progress."
pillar: "LLMO"
level: "beginner"
date: "2026-01-20"
url: "https://theglitch.ai/academy/llmo/ai-visibility-audit"
---

# AI Visibility Audit: Are You Being Cited?

Step-by-step guide to auditing whether AI models cite your content. Find gaps, track competitors, and measure progress.



> **The Glitch's Take:** "You can't optimize what you don't measure. Start with knowing where you stand."

**Part of:** [LLMO: How to Get Cited by AI](/articles/llmo/llmo-complete-guide)
**Level:** Beginner
**Reading Time:** 10 minutes

---

## The Point

Before optimizing for AI citations, you need to know your current state. This guide walks through a systematic audit to understand if, when, and how AI models cite you.

---

## TL;DR

- **Create 20 queries** your audience would ask
- **Test on 3 platforms:** Claude, ChatGPT, Perplexity
- **Track:** Are you cited? Competitors? What sources?
- **Run monthly** to measure progress
- **Time investment:** about 4 hours for the initial audit

---

## The Audit Process

### Step 1: Create Your Query List

**Source queries from:**
- Customer support logs (what do people ask?)
- Sales calls (what questions come up?)
- Search console (what queries find you?)
- Competitor content (what do they rank for?)

**Query types to include:**

| Type | Example |
|------|---------|
| Definition | "What is [your category]?" |
| Comparison | "[Your product] vs [competitor]" |
| Best-of | "Best [your category] tools" |
| How-to | "How to [problem you solve]" |
| Recommendation | "Which [category] should I use for [use case]?" |

**Aim for 20 queries** across these types.
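The query types above can be generated programmatically from a few seed values. A minimal sketch (the brand, category, competitor, and use-case values are placeholders; substitute your own):

```python
# Placeholder values -- replace with your actual brand, category, and competitors.
BRAND = "YourProduct"
CATEGORY = "project management software"
COMPETITORS = ["CompetitorA", "CompetitorB"]
USE_CASES = ["small teams", "freelancers"]

# One template group per query type from the table above.
templates = {
    "definition": [f"What is {CATEGORY}?"],
    "comparison": [f"{BRAND} vs {c}" for c in COMPETITORS],
    "best-of": [f"Best {CATEGORY} tools"],
    "how-to": ["How to keep projects on schedule"],  # phrase the problem you solve
    "recommendation": [f"Which {CATEGORY} should I use for {u}?" for u in USE_CASES],
}

queries = [q for group in templates.values() for q in group]
for q in queries:
    print(q)
```

Expand each group until you reach roughly 20 queries with all five types represented.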

### Step 2: Test Each Platform

For each query, test on:

1. **Claude** (claude.ai)
   - With web search enabled
   - Without web search

2. **ChatGPT** (chatgpt.com)
   - With browsing enabled
   - Without browsing

3. **Perplexity** (perplexity.ai)
   - Always has web access

### Step 3: Document Results

For each query, record:

| Field | What to Track |
|-------|---------------|
| Query | The exact question |
| Platform | Which AI |
| Your mention | Yes/No/Partial |
| Quote used | What text was cited |
| Competitors mentioned | Which ones |
| Sources cited | URLs if shown |
| Date | When tested |
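If you prefer a script over a spreadsheet, the fields above map directly onto CSV rows. A minimal sketch (the file path and field names are illustrative, not a fixed schema):

```python
import csv
import os
from datetime import date

# One column per field in the tracking table above.
FIELDS = ["query", "platform", "your_mention", "quote_used",
          "competitors_mentioned", "sources_cited", "date"]

def record_result(path, query, platform, your_mention,
                  quote_used="", competitors="", sources=""):
    """Append one (query, platform) test result; write the header on first use."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "query": query,
            "platform": platform,
            "your_mention": your_mention,          # Yes / No / Partial
            "quote_used": quote_used,
            "competitors_mentioned": competitors,  # comma-separated names
            "sources_cited": sources,              # comma-separated URLs
            "date": date.today().isoformat(),
        })
```

Call `record_result` once per query per platform; the resulting CSV feeds the summary metrics below.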

### Step 4: Analyze Patterns

Look for:

**Where you're strong:**
- Queries where you're consistently cited
- Topics where you're the authority

**Where you're weak:**
- Queries where competitors are cited, not you
- Topics where you have content but no citations

**What sources win:**
- Which URLs get cited?
- What format works (lists, definitions, data)?

---

## Audit Template

### Query Tracking Spreadsheet

```
| Query | Claude (web) | Claude (no web) | ChatGPT | Perplexity | You Cited | Competitors | Sources |
|-------|--------------|-----------------|---------|------------|-----------|-------------|---------|
| "What is [topic]?" | | | | | | | |
| "Best [category] tools" | | | | | | | |
| "[You] vs [competitor]" | | | | | | | |
```

### Summary Metrics

After completing the audit:

| Metric | Value |
|--------|-------|
| Total queries tested | /20 |
| Queries where you're cited | /20 |
| Queries where competitors cited | /20 |
| Citation rate | % |
| Top competitor citations | Name (count) |
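The summary metrics can be computed from the recorded rows. A minimal sketch, assuming rows are dicts with `your_mention` and a comma-separated `competitors_mentioned` field (as in the tracking table above):

```python
from collections import Counter

def summarize(rows, total_queries=20):
    """Compute citation rate and top competitor from audit result rows."""
    # A query counts as cited if any platform returned a "Yes" for it.
    cited = {r["query"] for r in rows if r["your_mention"] == "Yes"}
    competitor_counts = Counter(
        name.strip()
        for r in rows
        for name in r["competitors_mentioned"].split(",")
        if name.strip()
    )
    return {
        "queries_cited": len(cited),
        "citation_rate": round(100 * len(cited) / total_queries, 1),
        "top_competitor": competitor_counts.most_common(1),
    }
```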

---

## Interpreting Results

### Citation Rate Benchmarks

| Rate | Status |
|------|--------|
| 0-10% | Not visible to AI |
| 10-30% | Emerging visibility |
| 30-50% | Moderate presence |
| 50%+ | Strong authority |
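The benchmark bands above translate directly into a lookup, handy if you track rates in a script:

```python
def citation_status(rate_pct):
    """Map a citation rate (0-100) to the benchmark bands above."""
    if rate_pct >= 50:
        return "Strong authority"
    if rate_pct >= 30:
        return "Moderate presence"
    if rate_pct >= 10:
        return "Emerging visibility"
    return "Not visible to AI"
```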

### What Different Results Mean

**You're cited, competitors aren't:**
- You're the authority on this topic
- Protect and expand this position

**Competitors cited, you aren't:**
- Gap to close
- Analyze why their content wins

**Neither cited:**
- Topic may not have authoritative sources
- Opportunity to become the source

**Both cited:**
- Competitive space
- Differentiation needed

---

## Competitor Analysis

### Who to Track

- Direct competitors (same product category)
- Content competitors (same topics)
- Authority sites (industry publications)

### What to Learn

For competitors who get cited:

1. **Content format:** How is their content structured?
2. **Authority signals:** Where else are they mentioned?
3. **Unique elements:** Data, research, definitions?

---

## Monthly Tracking

### What to Monitor

| Metric | Direction |
|--------|-----------|
| Citation rate | ↑ over time |
| New citations | + each month |
| Lost citations | Investigate why |
| Competitor changes | Track movement |

### Progress Dashboard

```
Month | Queries | You Cited | Rate | Change
Jan   | 20      | 3         | 15%  | -
Feb   | 20      | 5         | 25%  | +10%
Mar   | 20      | 7         | 35%  | +10%
```

---

## Action Items from Audit

### Priority 1: Quick Wins

Queries where:
- You have relevant content
- Content isn't being cited
- Small updates could help

**Action:** Restructure content for AI extraction

### Priority 2: Gap Filling

Queries where:
- You don't have content
- Competitors are cited

**Action:** Create authoritative content

### Priority 3: Authority Building

Queries where:
- Your content exists
- Nobody is cited

**Action:** Build external authority signals

---

## Quick Reference

### Audit Checklist

- [ ] 20 queries created
- [ ] All 3 platforms tested
- [ ] Results documented
- [ ] Patterns identified
- [ ] Action items prioritized
- [ ] Monthly schedule set

### Time Investment

| Activity | Time |
|----------|------|
| Query creation | 30 min |
| Testing (20 queries × 3 platforms) | 2-3 hours |
| Documentation | 30 min |
| Analysis | 30 min |
| **Total initial audit** | **3.5-4.5 hours** |
| **Monthly re-audit** | **2 hours** |

---

## Next Steps

- [Content Structure for AI](/articles/llmo/content-structure-for-ai)
- [LLMO Complete Guide](/articles/llmo/llmo-complete-guide)

---

*Last verified: 2026-01-20.*

