User Manual
For marketing teams. No technical background needed.
Welcome! This guide will walk you through everything you need to know to start using the tool, understand your results, and share them with clients and teammates.
What Does This Tool Do?
Think of it like a mystery shopper for your website — except instead of one shopper, you send 27+ different "visitors" to your pages at the same time.
These visitors include:
- Real browsers (Chrome, Safari, Firefox — desktop and mobile)
- Search engine bots (Google, Bing, DuckDuckGo)
- AI crawlers (ChatGPT, Claude, Perplexity)
The tool captures what each visitor sees — screenshots, page content, and metadata — then compares them side by side so you can spot differences instantly.
Why does this matter? If Google's bot sees something different from what your customers see, it can hurt your rankings, traffic, and revenue.
Getting Started (Your First Test)
The fastest way to learn is to run a single test. It takes about 2 minutes.
Step 1 — Open the Workspace
Go to the app's home page. You'll see two tabs at the top:
- User Agent Test — This is what you want. It tests URLs against different visitors.
- Screaming Frog Upload — For advanced crawl analysis (skip this for now).
Stay on the User Agent Test tab.
Step 2 — Enter a URL
Paste a single URL into the input field. Use a page you know well — your homepage is a great first choice.
Make sure it starts with https:// (or http://).
Step 3 — Pick Your Visitors (User Agents)
Below the URL input you'll see a list of user agents with checkboxes. Don't overthink this — use a Quick Preset instead:
| Preset | What It Tests | When to Use It |
|---|---|---|
| SEO Essential | Core search bots plus baseline browser checks | Your go-to for most audits |
| LLM Agents | Major AI crawlers and AI user-agent variants | Checking AI visibility |
| All Crawlers | Broad set of search and AI bots | Comprehensive bot audit |
| Browsers | Major desktop and mobile browsers | Cross-browser rendering check |
Click a preset button and the right agents are selected for you.
Step 4 — Hit Run Test
Click the Run Test button. A progress bar will appear showing each visitor testing the page.
This usually takes 15 seconds to 1 minute depending on how many visitors you selected.
Step 5 — View Your Results
When the test finishes, you'll see:
- A score (0–100) summarizing the page's SEO health
- A link to the comparison report showing screenshots and metadata side by side
- A Download button for the full export ZIP
- A Rerun Test button to re-test the same URL(s) with the same settings (see “Rerunning Tests & Comparing Results” below)
Click the report link to dive in, or head to the Dashboard to see all your past runs.
Saving Your Own Presets
How to save, reuse, and remove custom user-agent presets
- Select the user agents you want.
- Enter a name in the Preset name field.
- Click Save Preset.
- Your preset appears below the form as a clickable chip.
- Click the chip to re-apply it later.
- To remove a saved preset, click the × on that chip.
Testing Multiple URLs at Once
How to run a bulk test with a CSV file
- Create a spreadsheet with your URLs in the first column (one URL per row).
- Save it as a `.csv` file.
- In the Workspace, click the CSV Upload area and select your file.
- Pick a Quick Preset (or select agents manually).
- Click Run Test.
The tool will test every URL in your file. Results for each URL appear on the Dashboard.
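If you prefer to generate the URL file with a script rather than a spreadsheet, a few lines of Python will do. This is a minimal sketch — the filename `urls.csv` and the URLs are examples; the only requirement is one URL per row in the first column:

```python
import csv

# URLs to test -- one per row, first column only (no header row needed)
urls = [
    "https://www.example.com/",
    "https://www.example.com/pricing",
    "https://www.example.com/blog",
]

# "urls.csv" is an example filename; any .csv file works
with open("urls.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for url in urls:
        writer.writerow([url])
```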
Rerunning Tests & Comparing Results
Already ran a test and want to see what changed? The Rerun & Compare feature lets you re-test the same URL(s) with the same settings, then see a side-by-side comparison automatically.
How to rerun a test
- Open the Dashboard and click View Details on any past run.
- On the Run Details page, click the Rerun Test button (next to the Download button).
- The tool re-tests using the same URL(s) and user agents as the original run.
- When the rerun finishes, you are automatically redirected to the Comparison page.
Understanding the Comparison page
The Comparison page shows the original run and the rerun side by side. At the top you will see:
- A rerun context banner showing the version numbers (e.g., “v1 → v2”) and the time elapsed between runs.
- A Score Hero section with large score cards for SEO, CRO, and AI Readiness. Each card shows the old score, the new score, and the change (color-coded: green for improvement, red for regression, gray for no change).
Below that, the page shows:
- Issue Changes — new issues introduced since the last run, resolved issues that are no longer flagged, and unchanged issues.
- Metadata Changes — any differences in title, meta description, canonical URL, robots tags, or HTTP status between runs.
You can also click Export Comparison to download a ZIP of the comparison data.
Run chains and version badges
Each time you rerun a test, the runs are linked into a run chain. The original test becomes version 1, the first rerun becomes version 2, and so on.
On the Dashboard, you will see:
- Version badges on each card or table row (e.g., “v2”) showing which version of a chain a run belongs to.
- A “Latest only” checkbox (checked by default) that collapses chains so you only see the most recent version. Uncheck it to see every version.
On the Run Details page for a rerun, a banner at the top shows:
- The version number (e.g., “Version 2”)
- A link back to the original run
- A View Comparison button to jump directly to the side-by-side comparison
Understanding Your Score
Every test produces a score from 0 to 100. Here's what it means:
| Score | Rating | What It Tells You |
|---|---|---|
| 90–100 | Excellent | Your page looks great across all visitors. Minor or no issues. |
| 70–89 | Good | Mostly solid, but there are a few things worth optimizing. |
| 50–69 | Needs Attention | Notable issues found — worth investigating and fixing. |
| Below 50 | Critical | Significant problems that are likely affecting your SEO or traffic. |
How is the score calculated?
You start at 100 points and lose points for each issue found:
- High severity issue = minus 25 points (e.g., page is blocked from indexing)
- Medium severity issue = minus 10 points (e.g., missing meta description)
- Low severity issue = minus 5 points (e.g., images missing alt text)
The minimum score is 0. The more issues found, the lower the score.
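In code form, the deduction model above looks roughly like this. A minimal sketch — the function name and the issue-list format are invented for illustration, not taken from the tool's actual implementation:

```python
# Points deducted per issue severity (from the list above)
DEDUCTIONS = {"high": 25, "medium": 10, "low": 5}

def page_score(issue_severities: list[str]) -> int:
    """Start at 100, subtract per issue, never go below 0."""
    score = 100 - sum(DEDUCTIONS[sev] for sev in issue_severities)
    return max(score, 0)

# Example: one high issue and two medium issues
# 100 - 25 - 10 - 10 = 55 ("Needs Attention" in the table above)
score = page_score(["high", "medium", "medium"])
```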
What the Tool Checks
Can search engines find and index your page?
The tool checks for:
- HTTP errors (404, 500, etc.)
- `noindex` tags telling search engines to stay away
- Redirect chains (too many redirects in a row)
- Canonical URL problems (pointing search engines to the wrong page)
Why it matters: If search engines can't index your page, it won't appear in search results — no matter how great the content is.
Do all visitors see the same thing?
The tool compares what different visitors see side by side:
- Does Google's bot see the same title as a real browser?
- Does the AI crawler get the same content as a human?
- Are screenshots consistent across devices?
Why it matters: If bots see different content than humans ("cloaking"), search engines may penalize your site. If AI crawlers can't read your content, you won't appear in AI-generated answers.
Are your SEO basics in place?
The tool checks for:
- Page title present and the right length (10–70 characters)
- Meta description present and the right length (50–170 characters)
- H1 heading present (and only one per page)
- Canonical URL set
- Structured data (schema markup) for rich results — the tool evaluates both the type and quality of your markup:
- Rich-result-eligible types like Product, Recipe, Event, and LocalBusiness are scored higher than generic types
- Required properties are validated — for example, a Product schema should include a name and at least one of review, rating, or offers
- If the tool detects page content that matches a type (e.g., pricing content, recipe ingredients) but the matching schema is missing, it flags this as a finding
Why it matters: These are the fundamentals. Missing any of them means missed opportunities in search results.
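The length and count checks above are easy to express in code. This is an illustrative sketch of the thresholds only — the function name and return format are invented for the example, and it is not the tool's own logic:

```python
def check_basics(title: str, meta_description: str, h1_count: int) -> list[str]:
    """Flag the on-page basics using the thresholds described above."""
    findings = []
    if not 10 <= len(title) <= 70:
        findings.append("Title missing or outside 10-70 characters")
    if not 50 <= len(meta_description) <= 170:
        findings.append("Meta description missing or outside 50-170 characters")
    if h1_count != 1:
        findings.append("Page should have exactly one H1 heading")
    return findings
```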
How is your structured data scored?
The AI Readiness Score includes a dedicated Structured Data category. Points are awarded based on:
| Signal | Points |
|---|---|
| Any structured data present | +30 |
| Open Graph tags present | +15 |
| Each rich-result-eligible type (e.g., Product, Recipe, Event) | +15 per type (max 3) |
| Each generic type (e.g., WebPage, WebSite) | +5 per type (max 2) |
| Each rich-result type with all required properties | +5 per type (max 3) |
Rich-result-eligible types are schema types that Google supports for enhanced search results. The tool recognizes 15 types from Google's Search Gallery, including: Product, Recipe, Event, LocalBusiness, Article, VideoObject, JobPosting, BreadcrumbList, SoftwareApplication, Review, Organization, and others.
Page-type detection — the tool also scans your page content for signals like pricing information, recipe ingredients, event details, or business hours. If these signals are present but the corresponding schema is missing, you will see a finding such as “Pricing content without Product/Offer schema.”
Why it matters: Rich-result-eligible schema with complete properties gives your pages the best chance of appearing with enhanced listings (star ratings, prices, images, FAQs) in Google search results and being understood by AI systems.
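The points table above can be sketched as a small function. This is illustrative only — all names are invented for the example, and how this category total feeds into the overall AI Readiness Score is not specified here:

```python
def structured_data_points(
    has_any: bool,             # any structured data present at all
    has_open_graph: bool,      # Open Graph tags present
    rich_types: int,           # count of rich-result-eligible types found
    generic_types: int,        # count of generic types (WebPage, WebSite, ...)
    complete_rich_types: int,  # rich types with all required properties
) -> int:
    """Apply the points table above, including the per-signal caps."""
    points = 0
    if has_any:
        points += 30
    if has_open_graph:
        points += 15
    points += 15 * min(rich_types, 3)        # max 3 rich types counted
    points += 5 * min(generic_types, 2)      # max 2 generic types counted
    points += 5 * min(complete_rich_types, 3)  # max 3 complete rich types
    return points
```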
Is your content deep enough?
The tool flags:
- Thin content — fewer than 150 words on the page
- Light content — 150–500 words (may not compete well)
- Missing H2 subheadings for structure
Why it matters: Pages with thin content rarely rank well. Search engines and AI systems need enough text to understand what your page is about.
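The word-count bands above can be sketched as a simple classifier. An illustrative example only — names and return format are invented, not the tool's code:

```python
def content_depth(word_count: int, h2_count: int) -> list[str]:
    """Flag the content-depth issues described above."""
    findings = []
    if word_count < 150:
        findings.append("Thin content (fewer than 150 words)")
    elif word_count <= 500:
        findings.append("Light content (150-500 words)")
    if h2_count == 0:
        findings.append("No H2 subheadings for structure")
    return findings
```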
Are your images optimized?
The tool checks:
- What percentage of images have alt text
- Flags pages where 30%+ of images are missing descriptions
Why it matters: Alt text helps search engines understand your images, improves accessibility, and can drive traffic from image search.
Is your page ready for AI search?
The tool tests whether AI crawlers (ChatGPT, Claude, Perplexity) can access your content:
- Can they reach the page at all?
- Do they see the same content as humans?
- Is there significant content loss for AI visitors?
Why it matters: More and more people find information through AI assistants. If your content isn't accessible to these bots, you're invisible in AI-generated answers.
How heavy is your page?
The tool measures page weight (file size) across different visitors:
- HTML document over 2 MB = Flagged (Google's indexing limit)
- Over 3 MB total page weight = Medium concern
- Over 1 MB total page weight = Worth watching
Why it matters: Google will only index the first 2 MB of an HTML document. Pages exceeding this limit risk partial or no indexing. Heavier total page weight also hurts user experience and can lower your search rankings — especially on mobile.
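The three weight thresholds above can be sketched as a check over byte counts. Illustrative only — names are invented, and the tool's real measurement details (compression, which resources count) are not specified here:

```python
def weight_findings(html_bytes: int, total_bytes: int) -> list[str]:
    """Apply the page-weight thresholds described above."""
    MB = 1024 * 1024
    findings = []
    if html_bytes > 2 * MB:
        findings.append("HTML exceeds Google's 2 MB indexing limit")
    if total_bytes > 3 * MB:
        findings.append("Total page weight over 3 MB (medium concern)")
    elif total_bytes > 1 * MB:
        findings.append("Total page weight over 1 MB (worth watching)")
    return findings
```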
Using the Dashboard
Finding and managing your past runs
The Dashboard shows every test you've run. You can:
- Search by URL, domain, test ID, or label
- Filter by run type (single URL, bulk, Screaming Frog)
- Limit by date range (All Time, 24h, 7 Days, 30 Days)
- Sort by date or total run size
- Switch views between cards and table
- Add/edit labels to organize runs (e.g., "Q1 Audit", "Client: Acme Corp")
- Delete runs you no longer need (you'll be asked to confirm)
- “Latest only” filter (checked by default) — shows only the most recent version of each rerun chain. Uncheck it to see every version.
- Version badges — each card and table row shows a version badge (e.g., “v2”) if the run is part of a rerun chain
At the top you'll see summary stats: total runs, average score, and your most recent run.
Rerun chains on the Dashboard
When you rerun a test, the original and all reruns are grouped into a run chain. On the Dashboard:
- The “Latest only” checkbox (on by default) hides older versions so you see a clean list.
- Uncheck “Latest only” to reveal all versions in a chain — useful when you want to compare a specific earlier version.
- Each run shows a version badge (e.g., “v2”) so you can tell at a glance which version you are looking at.
- You can select two runs (using the checkboxes) and click Compare to see them side by side, even if they are not from the same chain.
Keyboard Shortcuts
Shortcuts in the Workspace
Use Cmd on Mac or Ctrl on Windows/Linux:
- `Cmd/Ctrl + Enter` — Run the active test/audit
- `Cmd/Ctrl + 1` — Switch to User Agent Test
- `Cmd/Ctrl + 2` — Switch to Screaming Frog Upload
- `Cmd/Ctrl + A` — Select all user agents
- `Cmd/Ctrl + D` — Clear selected user agents
- `Cmd/Ctrl + /` — Open/close help panel
- `Escape` — Clear the active form (or close help if it is open)
Exporting and Sharing Results
How to download and share a report
- From the Dashboard or Run Details page, click Download Results (ZIP).
- The ZIP file contains everything you need:
  - `00_START_HERE.md` — Read this first; it explains the folder structure
  - `01_Gamma_Deck/` — A ready-made presentation deck (upload directly to Gamma or Graphy)
  - `02_Source_Data/` — CSV tables, CRO analysis, detailed findings
  - `03_Run_Artifacts/` — Screenshots, HTML snapshots, raw metadata
For client presentations: Upload the contents of 01_Gamma_Deck/ to Gamma for instant slide decks.
For deeper analysis: Open the CSVs in 02_Source_Data/tables/ in Excel or Google Sheets.
Screaming Frog Upload (Advanced)
How to upload a Screaming Frog crawl
If your team uses Screaming Frog SEO Spider for full-site crawls, you can upload that data here for deeper analysis.
- In Screaming Frog, export these CSVs:
- Internal (All) — required
- Page Titles — required
- Meta Descriptions — required
- H1 / H2 — required
- Canonicals — required
- Response Codes — required
- Directives — required
- Structured Data — optional but recommended
- Inlinks — optional but recommended
- Put all CSV files into a single ZIP file.
- In the app, switch to the Screaming Frog Upload tab.
- Upload your ZIP.
- Optionally enter the site's main URL and check "JS was enabled" if applicable.
- Click Run Audit.
The tool will analyze the entire crawl and generate findings, internal link opportunities, and a full export pack.
Things to Avoid
These are the most common mistakes new users make:
Don't test too many URLs with too many agents at once
Each combination of URL + user agent runs a real browser session. Testing 50 URLs against 27 agents = 1,350 browser sessions.
Start small. Use 1–5 URLs with a Quick Preset (4–7 agents). Scale up once you're comfortable with the results.
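The arithmetic behind this warning is simply URLs × agents. A quick sanity check before launching a big run (function name invented for the example):

```python
def session_count(num_urls: int, num_agents: int) -> int:
    """Each URL/user-agent pair runs one real browser session."""
    return num_urls * num_agents

big_run = session_count(50, 27)   # 1,350 sessions -- too big for a first run
small_run = session_count(5, 7)   # 35 sessions -- a sensible starting point
```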
Don't forget to select user agents before running
The test won't start if no agents are selected. If nothing happens when you click Run Test, check that at least one checkbox is ticked — or just click a Quick Preset.
Don't expect JS-heavy pages to look perfect for bot profiles
Most bot profiles run with JavaScript disabled on purpose. If a page relies heavily on JavaScript to load its content, many bots will see a blank or broken version. That's the point: the tool is showing you what non-rendering crawlers actually see.
Don't panic at a low score
A score of 40 doesn't mean your site is broken. It means the tool found issues worth investigating. Some findings might be intentional (e.g., a noindex on a staging page). Use the findings list to understand what's flagged and decide what actually needs fixing.
Don't upload incomplete Screaming Frog exports
If you use the Screaming Frog upload feature, make sure your ZIP contains all 7 required CSV files. Missing files will cause the analysis to fail or produce incomplete results.
Don't confuse this tool with a site speed test
Page weight is measured, but this is not a performance testing tool like Lighthouse or PageSpeed Insights. The focus is on what different visitors see, not how fast they see it.
Don't rerun too quickly after making changes
If you have just updated your page, give it a moment before clicking Rerun Test. Some changes — like cache invalidation or CDN propagation — can take a few minutes to go live. Rerunning before your changes are live will show no difference from the previous run.
Quick Reference
| I want to... | Do this |
|---|---|
| Run my first test | Paste a URL, click SEO Essential preset, click Run Test |
| Test multiple pages | Upload a CSV file with URLs |
| Reuse a custom user-agent mix | Select agents, name it, click Save Preset |
| See past results | Go to the Dashboard |
| Share results with a client | Download the ZIP, upload the Gamma Deck folder to Gamma |
| Check AI visibility | Use the LLM Agents preset |
| Do a full site audit | Upload a Screaming Frog crawl ZIP |
| Rerun a past test | Open the run, click Rerun Test — comparison opens automatically |
| Compare two runs | Select two runs on the Dashboard with checkboxes, click Compare |
| Monitor a bulk test in progress | Click View Progress below the progress bar during a bulk run |
| Get help in the app | Click the ? button or press Cmd/Ctrl + / |
Glossary
| Term | What It Means |
|---|---|
| User Agent | The identity a visitor sends to your website. Google's bot, Chrome browser, and ChatGPT's crawler all have different user agents. |
| Metadata | Hidden information on your page (title, description, canonical URL) that search engines use to understand and display your content. |
| Canonical URL | Tells search engines which version of a page is the "main" one (important if you have duplicate or similar pages). |
| Structured Data | Code on your page (schema markup) that helps search engines show rich results like star ratings, prices, and FAQs. The tool evaluates quality based on type (rich-result-eligible vs. generic) and whether required properties are present. |
| Cloaking | When a website shows different content to search bots than to humans. Search engines penalize this. |
| Indexability | Whether a search engine is allowed to add your page to its search results. |
| CRO | Conversion Rate Optimization — improving your page so more visitors take action (buy, sign up, contact). |
| Screaming Frog | A popular desktop tool that crawls entire websites. This app can analyze its exported data. |
| Gamma Deck | A presentation-ready export you can upload to Gamma or Graphy for instant slide decks. |
| Quick Preset | A one-click button that selects a recommended group of user agents for common audit types. |
| Rich-Result-Eligible Type | A structured data type that Google supports for enhanced search listings — for example, Product, Recipe, Event, or LocalBusiness. The tool gives these types a higher score than generic types like WebPage. |
| Run Chain | A linked series of runs created by rerunning the same test. The first test is version 1, the first rerun is version 2, and so on. Chains are grouped on the Dashboard. |
| Run Comparison | A side-by-side view of two runs showing score changes, new/resolved issues, and metadata differences. Automatically opens after a rerun completes. |
| Score Delta | The difference between two scores shown on the Comparison page. A positive delta (green) means the score improved; a negative delta (red) means it got worse. |
| Score Hero | The large score summary cards at the top of the Comparison page showing SEO, CRO, and AI Readiness score changes between two runs. |
| Version Badge | A small label (e.g., “v2”) shown on Dashboard cards and table rows indicating which version of a run chain a test belongs to. |