Claude Ads Tutorial: Run a 7-Platform PPC Audit in 30 Minutes
The average SaaS Google Ads account wastes 36.1% of its spend on non-converting terms, sloppy match types, and irrelevant audiences (GrowthSpree, 2026). Finding that waste used to mean 4-8 hours a week per client, exporting CSVs, building pivot tables, and writing commentary. Claude Ads, an open-source Claude Code skill, compresses the same audit into about 30 minutes across seven ad platforms, without handing account credentials to a third-party SaaS.
TL;DR
- Claude Ads is an MIT-licensed Claude Code skill that runs 250+ audit checks across Google, Meta, YouTube, LinkedIn, TikTok, Microsoft, and Apple Ads using six parallel subagents.
- Agencies compress 4-8 hour manual audits into under 30 minutes, recovering 15-25 hours per week on a 50-account book.
- The average SaaS Google Ads account still wastes 36.1% of spend on non-converting terms, which is what a good audit surfaces.
- The tool is diagnostic, not executional. It doesn't mutate campaigns, and with MCP disabled, zero account data leaves your machine.
What is Claude Ads and who is it for?
Claude Ads is an open-source Claude Code skill that runs 250+ paid-advertising audit checks across seven platforms using parallel subagents. It ships with 25 built-in reference files covering platform specs, bidding decision trees, and compliance rules.
A "skill" in Claude Code is a bundled set of slash commands, subagents, and reference docs that teach Claude to run a specific multi-step playbook end to end. Claude Ads exposes /ads audit, /ads plan, /ads math, /ads test, and /ads report, each coordinating specialist subagents behind the scenes.
The target user is a technical marketer or agency analyst who's comfortable in a terminal. In practice, that covers three roles: in-house PPC leads running audits on their own accounts, boutique agency owners standardising audits across 20-50 clients, and growth engineers building internal marketing tooling on top of Claude Code. If you've never opened a terminal, start with a SaaS audit tool instead.
Why does PPC audit automation matter in 2026?
Manual PPC reporting burns 4-8 hours per client per week. Automation-first agencies do the same work in under 30 minutes and recover 15-25 hours weekly across a 50-account book. Even for a small agency, that's the difference between a profitable retainer and a margin-crushing one.
The shift is structural, not cosmetic. Successful PPC practitioners now spend roughly 20% of their time on tactical campaign management, most of which is automated anyway, and 80% on strategic analysis. A decade ago, the ratio was inverted. AI-driven Performance Max and AI Max campaigns compound the pressure: they absorb 13-18% of monthly spend and deliberately obscure the signals a traditional audit leans on, forcing auditors to ask new questions about asset groups, audience signals, and attribution windows.
Our take: The real unlock isn't the time saved on any single audit. It's that a 30-minute audit can run monthly instead of quarterly. Audit frequency is the variable that actually moves account health, and manual economics cap most agencies at "once a year per client."
What do you need before installing Claude Ads?
Claude Ads requires Claude Code (Anthropic's agentic terminal tool), Python 3.10 or later, and optionally Playwright for landing-page analysis and reportlab for PDF export. Installation is a single curl command that registers the skill locally and pulls in the 25 reference files the audit agents consult.
You'll need:
- Claude Code CLI (latest)
- Python 3.10+
- Playwright (optional, for landing-page checks)
- reportlab (optional, for /ads report PDF output)
- ~5 minutes to install, ~15-30 minutes for your first audit
- Tested on macOS 14, Ubuntu 22.04, and Windows 11 (WSL2)
Install the skill with one command:
# Install the Claude Ads skill into your Claude Code config
curl -fsSL https://raw.githubusercontent.com/AgriciDaniel/claude-ads/main/install.sh | bash
Then verify:
# Confirm the skill is registered
claude /ads --help
Expected output:
Claude Ads — paid advertising audit & optimization
Commands:
/ads audit Full multi-platform analysis
/ads [platform] Deep platform-specific audit
/ads plan [type] Industry-specific strategy
/ads math PPC financial calculators
/ads test A/B test design framework
/ads report Generate PDF deliverable
If you want live API access instead of CSV exports, install the relevant MCP server (Google Ads, Meta Marketing, or a neutral broker like DataForSEO) and add it to ~/.config/claude-code/mcp.json. The base audit works with no API credentials at all. It parses CSV exports and interactive inputs, which is the skill's key privacy-first design choice.
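If you do enable an MCP, the config edit is small. The helper below is a hedged sketch of the general shape: the mcpServers key follows the common MCP config convention, but the server name, command, and package in the usage comment are placeholders, so copy the real values from your MCP server's own README and check your Claude Code version's docs for the exact schema.

```python
# Sketch of merging one MCP server entry into an mcp.json config file.
# The "mcpServers" key follows the common MCP config convention; the
# server name and command in the usage example are placeholders.
import json
from pathlib import Path

def register_mcp_server(cfg_path: Path, name: str,
                        command: str, args: list[str]) -> dict:
    """Merge one MCP server entry into an existing (or new) mcp.json."""
    cfg = json.loads(cfg_path.read_text()) if cfg_path.exists() else {}
    cfg.setdefault("mcpServers", {})[name] = {"command": command, "args": args}
    cfg_path.parent.mkdir(parents=True, exist_ok=True)
    cfg_path.write_text(json.dumps(cfg, indent=2))
    return cfg

# e.g. (hypothetical package name -- use the real one from its README):
# register_mcp_server(
#     Path.home() / ".config" / "claude-code" / "mcp.json",
#     "google-ads", "npx", ["-y", "example-google-ads-mcp"],
# )
```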
How do you run your first audit with /ads audit?
The /ads audit command spawns six parallel subagents, one per major ad platform. Each runs a weighted checklist and returns scored findings to a coordinator agent, which produces a unified Ads Health Score out of 100. A full cross-platform audit finishes in about 15 minutes on a mid-sized account, and around 30 minutes including PDF generation.
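The fan-out/fan-in pattern this describes can be sketched in a few lines of Python. The agent names and check counts match the sample transcript later in this section, but the pass counts and scoring math here are illustrative placeholders, not Claude Ads internals:

```python
# Illustrative fan-out/fan-in: run six "agents" in parallel, merge their
# pass rates into one 0-100 score. Pass counts here are placeholders.
from concurrent.futures import ThreadPoolExecutor

AGENTS = {
    "google-ads-agent": 54,
    "meta-ads-agent": 41,
    "youtube-ads-agent": 28,
    "linkedin-ads-agent": 32,
    "tiktok-ads-agent": 26,
    "microsoft-ads-agent": 29,
}

def run_agent(name: str, checks: int) -> tuple[str, float]:
    """Stand-in for a subagent: returns the share of checks that passed."""
    passed = checks - 10  # placeholder; a real agent scores real exports
    return name, passed / checks

def coordinator() -> int:
    """Fan out to all agents in parallel, then merge into a 0-100 score."""
    with ThreadPoolExecutor(max_workers=6) as pool:
        results = dict(pool.map(lambda kv: run_agent(*kv), AGENTS.items()))
    return round(100 * sum(results.values()) / len(results))
```

The real skill adds severity weighting and benchmark comparisons on top of this merge step, but the parallel structure is why a six-platform audit takes minutes rather than hours.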
Before running the command, export the data Claude Ads expects. For a Google Ads audit:
- Account performance export (last 30 and last 90 days)
- Search terms report (last 30 days)
- Quality Score by keyword (last 30 days)
- Asset performance (if running PMax)
For Meta, export campaign, ad set, and creative-level performance plus the placement report. Drop everything in one folder. Then:
# Run the full cross-platform audit
claude /ads audit --data ./exports --industry saas
The --industry flag tells the coordinator which benchmark pack to load. Options include saas, ecommerce, b2b, local, and lead-gen. Skip it for platform-agnostic scoring.
Expected output (trimmed):
[coordinator] Loaded saas benchmark pack (25 reference files)
[google-ads-agent] running 54 checks ✓ done in 2m 41s
[meta-ads-agent] running 41 checks ✓ done in 2m 18s
[youtube-ads-agent] running 28 checks ✓ done in 1m 47s
[linkedin-ads-agent] running 32 checks ✓ done in 1m 52s
[tiktok-ads-agent] running 26 checks ✓ done in 1m 39s
[microsoft-ads-agent] running 29 checks ✓ done in 1m 44s
[coordinator] Merging findings, calculating weighted score…
Ads Health Score: 62/100 (below industry benchmark)
Critical issues: 7
High-priority: 14
Medium: 23
Low: 31
Watch out: The agents only score what you give them. Skip the search terms report and the Google Ads agent flags "insufficient data for negative keyword analysis" rather than silently passing that check. Provide every export on the list for a complete score.
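A small pre-flight script catches missing exports before you spend an audit run on them. The filenames below are assumptions; adjust them to however you name your own exports:

```python
# Pre-flight check: confirm the export folder has everything the Google
# Ads agent needs. Filenames are assumptions -- rename to match yours.
from pathlib import Path

REQUIRED = [
    "account_performance_30d.csv",
    "account_performance_90d.csv",
    "search_terms_30d.csv",
    "quality_score_30d.csv",
]

def missing_exports(folder: str) -> list[str]:
    """Return the required export filenames not found in the folder."""
    present = {p.name for p in Path(folder).glob("*.csv")}
    return [f for f in REQUIRED if f not in present]

# e.g. missing_exports("./exports") returns [] when the folder is complete
```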
How do you interpret the Ads Health Score?
Ads Health Scores below 60 typically correspond to 20-35% recoverable spend, which lines up with the 36.1% waste benchmark cited earlier. The score is weighted: critical issues like broken conversion tracking or policy violations count roughly three times as much as stylistic flags such as ad-copy variation.
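To make the weighting concrete, here is a hypothetical version of that math in Python. The 3x/2x/1.5x/1x weights and the per-tier check counts are assumptions extrapolated from the "roughly three times" figure above, not the skill's actual formula:

```python
# Hypothetical severity weights: critical issues count roughly 3x as much
# as low/stylistic flags. Exact weights and check counts are assumptions.
WEIGHTS = {"critical": 3.0, "high": 2.0, "medium": 1.5, "low": 1.0}

def weighted_health_score(findings: dict[str, tuple[int, int]]) -> int:
    """findings maps severity -> (checks_failed, checks_run)."""
    lost = sum(WEIGHTS[s] * failed for s, (failed, _) in findings.items())
    total = sum(WEIGHTS[s] * run for s, (_, run) in findings.items())
    return round(100 * (1 - lost / total))

# Using the finding counts from the sample transcript (7/14/23/31) with
# made-up per-tier check totals:
score = weighted_health_score({
    "critical": (7, 20), "high": (14, 50), "medium": (23, 80), "low": (31, 100),
})
```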
Findings are ranked into four severity tiers. Action them top-down: critical issues on day one, high-priority in week one, medium within a month, and low as quarterly housekeeping. Scores in the 60-80 range usually mean you're on the right track but leaving 8-20% of spend on the table. Scores above 80 indicate a well-maintained account where further optimisation is about testing, not fixing.
The follow-up command /ads plan [industry] re-reads the audit output and produces a prioritised 30-60-90 day plan. This is where the skill's reference pack earns its keep: SaaS plans emphasise LTV:CAC, match-type hygiene, and long-sales-cycle attribution; ecommerce plans lead with shopping feed quality, bidding against margin, and cart-value segmentation. Accounts with Google Quality Scores of 8-10 pay 30-50% less per click than scores of 4-6, so the plan typically frontloads Quality Score work when there's opportunity there.
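The Quality Score claim translates into a quick back-of-envelope calculator. The spend figure and low-QS share in the example are made up; the 40% discount is simply the midpoint of the 30-50% range cited above:

```python
# Back-of-envelope savings from Quality Score work, using the article's
# 30-50% CPC gap between QS 4-6 and QS 8-10 keywords. Example figures
# are illustrative.
def qs_savings(monthly_spend: float, low_qs_share: float,
               cpc_discount: float = 0.40) -> float:
    """Spend recoverable by moving the share of budget sitting on
    QS 4-6 keywords up to QS 8-10 pricing."""
    return monthly_spend * low_qs_share * cpc_discount

# e.g. $20k/month with 35% of spend on low-QS keywords, midpoint discount:
recoverable = qs_savings(20_000, 0.35)  # about $2,800/month
```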
Our take: The Ads Health Score is most useful as a trend line, not an absolute number. A score of 62 on its own is noise. A 62 that went from 54 the month before tells you the last month of optimisation worked. Run the audit monthly and the trend line starts doing real work.
How do you generate a client-ready PDF report?
/ads report compiles the audit findings into a white-labelled PDF, usually 12-20 pages depending on how many platforms were audited, in under 60 seconds using Python's reportlab library. Agency logo, brand colour, and footer text are set via a one-time config file, and sensitive keyword lists can be redacted with a flag.
# Generate a branded PDF for the most recent audit
claude /ads report --brand ./agency-brand.json --redact-keywords
The --brand JSON accepts logo_path, primary_color, accent_color, agency_name, and footer. Leave it out and you get plain Claude Ads branding. The --redact-keywords flag swaps actual search terms for category labels, useful when the report is going to a prospect before contracts are signed.
The PDF follows a predictable structure: executive summary with the headline Health Score, per-platform scorecards, a prioritised recommendation list, a spend-recovery model (from the /ads math calculators), and an appendix with the full raw findings. In agency pitch contexts, the executive summary is often all a prospect reads, so the skill front-loads the three highest-impact findings on page one.
When should you not use Claude Ads?
Claude Ads is a diagnostic tool. It doesn't push changes into ad accounts. For live bid management you still need platform-native automated rules, Google Ads Scripts, or a dedicated bid-optimisation SaaS; Claude Ads complements those rather than replacing them. It also has no real-time alerting, so it won't tell you at 2am that a Performance Max campaign is burning budget on a junk placement.
A second limitation is data dependence. Without an MCP integration, the agents work off whatever exports you provide. Skip the search terms report and the Google Ads agent flags the gap instead of inventing findings, which is the right behaviour, but it means the score is only as good as your inputs. Budget an extra five minutes for clean exports if you want a reliable number.
Our finding: The non-obvious strength here is privacy, not speed. Most audit SaaS tools require OAuth access to your Google Ads or Meta accounts, which many regulated-industry legal teams simply won't approve. Claude Ads runs entirely on your local machine; with MCP disabled, no account data leaves the device at all. For healthcare, finance, and legal advertisers, the real competitor isn't a faster SaaS. It's the slow manual spreadsheet audit the legal team tolerates.
Skip Claude Ads if any of these describe you
- Never opened a terminal. A SaaS audit tool is a better starting point.
- Need real-time alerting. Claude Ads is a scheduled diagnostic. Pair it with platform-native rules for live alerts.
- Want automatic campaign changes. The skill surfaces recommendations; it doesn't mutate campaigns. That's a feature for governance-conscious teams and a limitation for push-button automation teams.
For the adjacent reporting-automation problem, where the bottleneck isn't the audit but the weekly multi-platform dashboard, see how to automate marketing reporting with Windsor.ai.
Frequently Asked Questions
Is Claude Ads free?
Yes. Claude Ads is MIT-licensed and the skill itself costs nothing. You pay only for Claude Code API usage, which for a single full audit typically runs under $2 at Sonnet 4.6 rates. Heavy users running dozens of audits a week can expect tens of dollars per month in API costs, not hundreds.
Which ad platforms does it cover?
Claude Ads covers Google Ads, Meta (Facebook and Instagram), YouTube, LinkedIn, TikTok, Microsoft Advertising, and Apple Search Ads. Seven platforms, audited by six parallel subagents. The coverage mirrors the spend mix most agencies see, with the Google and Meta agents doing the heaviest lifting on the typical account.
Do I need API credentials to run an audit?
No for the base audit. It works from CSV exports and interactive inputs. You only need credentials if you want live data pulls via the optional MCP integrations (Google Ads, Meta Marketing API, or a neutral broker like DataForSEO). Many users deliberately keep MCPs disabled for privacy reasons and export data manually.
How long does an audit actually take?
Around 15 minutes for a standard cross-platform audit on a mid-sized account, and up to 30 minutes including PDF report generation. Very large accounts with 500+ campaigns and multi-market setups can stretch to 45-60 minutes.
Can I customise the audit for my industry?
Yes. Pass --industry saas, --industry ecommerce, --industry b2b, --industry local, or --industry lead-gen to /ads audit and /ads plan, and the coordinator loads the matching benchmark pack before running the weighted checklist. The reference files include sector-specific CPA targets, match-type expectations, and attribution-window assumptions.
How to run your first audit this week
Install the skill with the one-line curl command above and run your first audit tonight. If you're running it against a real client account, start with a 30-day window and the --industry flag set correctly. The Health Score and plan quality both improve sharply with the right benchmark pack loaded. For teams already using Claude Code for other marketing workflows, pair Claude Ads with a weekly cron so the audit runs every Monday and lands a fresh PDF in your inbox before the client status call.
Manual PPC audits aren't a skill issue. They're an economics issue, and in 2026 the economics shifted. A 30-minute automated audit is cheap enough to run monthly, which is the frequency that actually changes account trajectories. Start with one client, time it, and decide from there.