Inside Charlie Hills's Open-Source Social Media Skills
On 22 April 2026, a UK marketer with 350,000 followers across LinkedIn, Instagram, Substack, X, and YouTube, and roughly 100 million views a year, open-sourced his entire content system as 17 Claude Code skills under MIT licence. Five days later, the repo had 335 stars and 101 forks (GitHub charlie947/social-media-skills, 2026).
Most "AI for social media" coverage is fragmented prompts, single-platform hacks, and tools that flatten every creator into the same neutral voice. The result tends to read like a press release from ChatGPT, not from the person whose name is on the post. Charlie Hills's repo is something different: a working production stack, end to end, with a single architectural decision underneath that's worth more than any of the 17 skills on their own.
For a worked example of what a single Claude Code skill looks like in real marketing production, see our walkthrough of running a 7-platform PPC audit with Claude.
- The repo ships 17 Claude Code skills covering the full content stack across 5 platforms, MIT-licensed and forkable today (GitHub, 2026).
- The architecture is voice-first: a voice-builder skill produces voice.md, and every other skill reads it before drafting a line.
- 80% of organisations running AI agents now report measurable ROI (Anthropic 2026 Agentic Coding Trends Report), and this repo is the clearest creator-side worked example yet.
- You don't need all 17 skills. Start with voice-builder plus one downstream skill and judge it against one rule: does the output sound indistinguishable from your longhand work?
Why does this repo matter (and why now)?
Three things landed in the same six-month window. Anthropic released the Agent Skills specification as an open standard in December 2025, OpenAI adopted the same format for Codex CLI and ChatGPT shortly after (The New Stack, 2026), and the creator economy crossed an estimated $250 billion globally in 2026 (CreatorIQ State of Creator Marketing Report, 2025-2026). The standard arrived. The market is the size of a small country. What's been missing is a real, end-to-end production system from a named creator.
That's the gap the charlie947/social-media-skills repo fills. Anthropic's own anthropics/skills repository ships general-purpose skills (skill-creator, document-builder, design helpers). It doesn't show you what a content stack actually looks like at scale, where the joints are, or which skills earn their keep. Charlie's repo does. Every skill comes with trigger phrases, inputs, dependencies, and a clear position in the dependency graph.
The MIT licence matters too. You can fork the repo, modify it, ship the output commercially, and never write Charlie a cheque. With more than 200 million people globally identifying as content creators, 48% of them solo (Circle Creator Economy Statistics, 2026), the addressable audience for "a working AI content system you can fork tonight" is somewhere north of 96 million people.
For a different angle on the same shift, our Claude Projects guide for marketing teams covers the persistent-context approach, the closest cousin of the skills pattern.
How does the system flow from voice to platform?
The architecture has one foundation and one source. The foundation is the voice-builder skill, which produces two files (about-me.md and voice.md) from an interview plus three to five of your past writing samples. Every other skill reads both files before drafting a single line. The source is the newsletter-voice skill, which extends voice-builder with newsletter-specific structure rules and produces newsletter-voice.md. From there, every downstream piece (LinkedIn posts, Reels scripts, YouTube thumbnails, carousels, quote posts) descends from the newsletter.
That dependency graph is what makes the repo interesting. It's not 17 independent helpers stitched together. It's a tree: tune the file at the root and every leaf changes.
The root is voice.md. The post-writer, reels-scripting, hook-generator, and quote-post skills are thin shells; the compounding value lives in the voice file every one of them reads first. That single design decision is what most creators trying to "AI my LinkedIn" miss, and it's why their output sounds like ChatGPT.
The dependency math shows up clearly. If you tune a single prompt inside post-writer, you've improved the output of one skill. If you tune voice.md, you've improved the output of every skill in the repo at once. Anthropic's 2026 agent research found that 80% of organisations running production AI agents now report measurable ROI (Anthropic 2026 Agentic Coding Trends Report), and the highest-payoff agents tend to share this pattern: a small, well-tuned core that many specialised skills consume.
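The compounding claim can be sketched in a few lines. This is a toy model, not the repo's actual implementation: every downstream skill builds its prompt on top of the same voice file, so one edit at the root changes every leaf.

```python
import tempfile
from pathlib import Path

SKILLS = ["post-writer", "reels-scripting", "hook-generator", "quote-post"]

def build_prompt(skill: str, topic: str, voice_file: Path) -> str:
    voice = voice_file.read_text()            # shared root: voice.md
    task = f"Run {skill} on topic: {topic}"   # thin, skill-specific shell
    return f"{voice}\n\n{task}"

with tempfile.TemporaryDirectory() as d:
    voice = Path(d) / "voice.md"
    voice.write_text("Voice rules v1: short sentences, no jargon.")
    before = [build_prompt(s, "AI agents", voice) for s in SKILLS]

    # One edit at the root...
    voice.write_text("Voice rules v2: short sentences, no jargon, UK spelling.")
    after = [build_prompt(s, "AI agents", voice) for s in SKILLS]

# ...changes every downstream prompt at once.
changed = sum(b != a for b, a in zip(before, after))
print(changed)  # 4
```

Tuning the topic string inside one skill would have changed one element of the list; tuning the voice file changed all four.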
Tuning voice.md improves every downstream skill in the repo at once; tuning a single prompt improves one. Source: editorial analysis of the repository dependency graph, 2026.

For a parallel example of a named creator open-sourcing a Claude Code skill, see Guizang's PPT skill for magazine-style HTML decks.
What do voice-builder and newsletter-voice actually produce?
Two skills sit above everything else. voice-builder runs an interview plus three to five of your past writing samples to produce about-me.md (who you are, what you write about, what you'd never write) and voice.md (sentence rhythm, vocabulary tier, banned phrases, signature moves). newsletter-voice extends both with newsletter-specific structure rules to produce newsletter-voice.md. According to recent platform data, 41% of LinkedIn users are now using AI tools like ChatGPT (LinkBoost LinkedIn Engagement Benchmarks, 2026). The gap between "using AI" and "AI that sounds like you" is exactly what these two files close.
What's actually in voice.md? The same NNGroup four-dimension tone framework that the claude-blog plugin uses for its personas: funny vs. serious, formal vs. casual, respectful vs. irreverent, enthusiastic vs. matter-of-fact. The interview prompt structures it. Your samples ground it. The output is a markdown file with banned phrases ("at the end of the day", "circle back", whatever your tells are), preferred sentence patterns, signature openings, and the specific things you'd never write under your own name.
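As a toy illustration of the kind of signal writing samples carry, here's a sketch that computes sentence rhythm and candidate openers from three samples. The real voice-builder is interview-driven and far richer; the metrics and sample texts below are illustrative, not from the repo.

```python
import re
from collections import Counter

# Hypothetical writing samples standing in for a creator's past posts.
samples = [
    "Short sentences win. They carry rhythm. Readers feel it.",
    "At the end of the day, the voice file is the product.",
    "Ship the newsletter first. Everything else descends from it.",
]

# Split each sample into sentences on terminal punctuation.
sentences = [s for text in samples
             for s in re.split(r"(?<=[.!?])\s+", text) if s]

# Sentence rhythm: average words per sentence.
avg_len = sum(len(s.split()) for s in sentences) / len(sentences)

# Repeated first words surface as candidate signature moves (or tells).
openers = Counter(s.split()[0].lower() for s in sentences)

print(round(avg_len, 1))  # 5.0
print(openers.most_common(2))
```

A voice file built on stats like these is only a starting point; the banned-phrase list and signature moves still come from a human read of the samples.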
Why does this work? Because every other skill in the repo reads it as input before the topic-specific prompt fires. So the post-writer doesn't ask "write a LinkedIn post about X." It asks "write a LinkedIn post about X, in the voice defined here, avoiding these phrases, using these patterns." The topic prompt is short. The voice prompt is long. That ratio is the whole architectural bet.
voice-builder is only as good as the samples you feed it. Throw in three of your weakest posts and you'll get a voice.md that produces output you don't want to publish. Pick three posts that represent how you sound at your best, your most representative rather than your most popular, and the file gets tighter.
For the marketing-team analogue (uploading documents to Claude Projects so every session starts pre-briefed), see our 10-use-case guide on Claude Projects for marketing.
Which LinkedIn skills carry the most weight?
Five skills run LinkedIn end-to-end, and on this platform the format choice does most of the heavy lifting. profile-optimizer rebuilds your headline, about, experience, and featured section plus four image-generation prompts. post-writer drafts posts in your voice using the voice files. graphic-designer decides between an HTML/CSS graphic and an AI-generated infographic based on the post content. post-scorer pulls your post history via Apify and scores any draft against what actually performs for you. post-formatter translates a topic into a finished post using PAS, AIDA, BAB, STAR, or SLAY frameworks.
The standout is post-scorer. It's the only skill in the entire stack that grounds the system in your engagement data, not industry benchmarks. Most AI scoring tools test against generic "best practice": long form, no hashtags, post on Tuesday morning. That advice is averaged across millions of posts that have nothing to do with your audience. post-scorer reads your last few hundred posts via Apify, models what works for you, and grades the draft on the curve that actually matters.
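The core idea behind post-scorer can be shown in miniature: build a baseline from your own history, then grade a draft against your curve, not the industry's. Everything below is a hypothetical sketch; the field names and buckets are illustrative, and the real skill pulls post history via Apify.

```python
from statistics import mean

# Hypothetical post history (the real skill pulls this via Apify).
history = [
    {"words": 40,  "has_question": True,  "engagement": 420},
    {"words": 250, "has_question": False, "engagement": 90},
    {"words": 60,  "has_question": True,  "engagement": 380},
    {"words": 300, "has_question": False, "engagement": 110},
]

def bucket(post):
    # Two illustrative features: length band and whether it asks a question.
    return ("short" if post["words"] < 150 else "long", post["has_question"])

# Baseline: mean engagement per bucket, from *your* posts only.
baseline = {}
for post in history:
    baseline.setdefault(bucket(post), []).append(post["engagement"])
baseline = {k: mean(v) for k, v in baseline.items()}

def score(draft):
    # Predicted engagement relative to this creator's overall mean.
    overall = mean(p["engagement"] for p in history)
    return baseline.get(bucket(draft), overall) / overall

draft = {"words": 55, "has_question": True}
print(round(score(draft), 2))  # 1.6 -> above this creator's own average
```

A generic "best practice" scorer would grade the same draft against averages from millions of unrelated posts; this one only ever compares you to you.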
post-formatter is the format-translator. PAS = Problem / Agitate / Solve. AIDA = Attention / Interest / Desire / Action. BAB = Before / After / Bridge. STAR and SLAY are the long-form storytelling siblings. Pick the framework that fits the post and the same idea ships in five different shapes. Combined with post-scorer's feedback loop and graphic-designer's graphic-vs-infographic decision, the LinkedIn pipeline is the most production-ready segment of the repo.
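The framework lookup at the heart of post-formatter is simple enough to sketch. The expansions below come straight from the acronyms above; STAR and SLAY are omitted because their stage names aren't spelled out in the repo description, and the outline function is a hypothetical stand-in for the skill's prompt logic.

```python
# Framework -> section labels (PAS/AIDA/BAB expansions as defined above).
FRAMEWORKS = {
    "PAS":  ["Problem", "Agitate", "Solve"],
    "AIDA": ["Attention", "Interest", "Desire", "Action"],
    "BAB":  ["Before", "After", "Bridge"],
}

def outline(topic: str, framework: str) -> list[str]:
    # Same idea, reshaped into the chosen framework's sections.
    return [f"{stage}: {topic}" for stage in FRAMEWORKS[framework]]

for fw in FRAMEWORKS:
    print(fw, "->", outline("open-source content system", fw))
```

The point of the pattern is cheap reshaping: one topic in, several structurally different drafts out, all still downstream of the same voice file.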
For the reporting equivalent (an Apify-style data pipeline applied to weekly marketing reports rather than post scoring), see how to automate marketing reporting with Windsor.ai.
How does the visual tier work, and what does it cost?
Five skills cover everything visual, and they share a common pattern: scrape an outlier, analyse what worked, then rebuild it in your voice. reels-scripting reverse-engineers an outlier Reel via Apify plus Gemini 2.5 Flash, then writes a fresh script in your voice from your newsletter source. youtube-thumbnail turns a video title into a branded thumbnail prompt. gemini-infographic produces the whiteboard style that Charlie reports pulled 480k impressions from three posts. gemini-carousel does slide-by-slide carousel generation with an approval gate. quote-post writes the quote and Gemini bakes it into the image.
What reels-scripting feeds on: your newsletter source, paired with an Apify scrape of outlier Reels in your niche.

The AI voice-cloning market is forecast to grow from $3.28 billion in 2025 to $4.06 billion in 2026 at a 23.9% compound rate (Research and Markets / GII Research, 2026). The same trend, applied to written voice, is what this tier of the repo captures. The Reels script that gets generated isn't a generic "viral hooks" template. It's reconstructed from an outlier Charlie's Apify scraper actually surfaced, then rewritten through voice.md so it sounds like the human, not the algorithm.
gemini-infographic is locked to a single whiteboard aesthetic: beautiful, but recognisable. If it spreads, the look will eventually feel like a meme. The approval gate in gemini-carousel exists for the same reason image generation always needs one: the model is non-deterministic, the cost is real, and you don't want it shipping ten slides you'd never have approved.
For an alternative image-generation stack used by marketing teams (ChatGPT Images 2.0 instead of Gemini), see our 8-use-case rundown on ChatGPT Images for marketing.
What do the standalone skills actually do?
The remaining skills round out the system. analytics-dashboard turns a LinkedIn Analytics export into an interactive React dashboard plus five data-backed recommendations. pinned-comment writes meme-style pinned comments with matching image prompts. hook-generator produces six clickbait two-line hooks per topic. content-matrix pairs your pillars with eight formats for 32+ post ideas in one table (Justin Welsh style). niche-research drives Claude for Chrome to scroll Reddit, X, and Google with verified dates and surfaces the 20 most relevant stories from the last seven days.
niche-research is the underrated entry. The system has a front door for ideas, not just a back door for output. Most AI content stacks start at "give me a topic." Charlie's starts at "find me 20 timely stories in my niche this week." Pair that with content-matrix's pillars-times-formats grid and you've got a 32-row planning table feeding into the rest of the stack, not the other way around.
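The pillars-times-formats grid is a Cartesian product, which makes the "32+ ideas" arithmetic concrete. The pillar and format names below are examples of mine, not the repo's defaults.

```python
from itertools import product

# Hypothetical pillars and formats; 4 x 8 = 32 rows, matching the
# content-matrix table described above.
pillars = ["AI tooling", "Creator economy", "LinkedIn growth", "Newsletters"]
formats = ["listicle", "story", "hot take", "how-to",
           "case study", "myth-bust", "checklist", "Q&A"]

matrix = [f"{fmt} on {pillar}" for pillar, fmt in product(pillars, formats)]
print(len(matrix))   # 32
print(matrix[0])     # listicle on AI tooling
```

Add a fifth pillar and the table grows to 40 rows without touching anything downstream, which is why the grid sits at the front of the stack rather than the back.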
analytics-dashboard is the other quiet hero. Most creators stare at LinkedIn's native analytics with zero structure and walk away with vibes. A generated dashboard with five concrete recommendations (post type X is up Y%, post Z under-performed because of W, your audience peaks Tuesday at 8am) is genuinely better than the dashboard LinkedIn ships. Creators who post consistently three or more times a week see 3.5× more profile views than those who post occasionally (LinkBoost, 2026), so the recommendation engine that tells you what to post next earns its keep fast.
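The aggregation step behind a "what to post next" recommendation is plain group-by arithmetic. This is a toy sketch with invented rows and column names; the real skill works from an actual LinkedIn Analytics export and renders a React dashboard on top.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows from a LinkedIn Analytics export.
export = [
    {"type": "text",     "day": "Tue", "impressions": 9000},
    {"type": "carousel", "day": "Tue", "impressions": 15000},
    {"type": "text",     "day": "Fri", "impressions": 4000},
    {"type": "carousel", "day": "Fri", "impressions": 11000},
]

# Group impressions by post type.
by_type = defaultdict(list)
for row in export:
    by_type[row["type"]].append(row["impressions"])

# Recommendation: the format with the highest mean impressions for *you*.
best_type = max(by_type, key=lambda t: mean(by_type[t]))
print(best_type)  # carousel
```

The same group-by over the day column yields the "your audience peaks Tuesday" style of recommendation.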
The foundation skills (voice-builder, newsletter-voice) compound through every downstream skill, which is why their payoff estimates are highest. Source: editorial estimate, 2026.

For a wider survey of where AI design and creative tooling fit into a marketing stack, see AI design use cases across marketing in 2026.
How do you fork the repo in one week?
A realistic first week ships three foundations. Day 1: clone the repo and run voice-builder against three of your most representative past posts (not necessarily your most popular). Days 2-3: review and edit voice.md until it actually sounds like you; this is the only step that matters, because everything downstream inherits this file. Day 4: run newsletter-voice and produce a single newsletter draft. Days 5-7: run post-writer against the newsletter, generating one LinkedIn post per day, and compare each post to a baseline you'd have written longhand.
Three install paths from the README. Option one: the Claude Code plugin marketplace (/plugin marketplace add charlie947/social-media-skills then /plugin install social-media-skills). Option two: clone the repo and copy the skill folders into ~/.claude/skills/ for personal use, or into project-local .claude/skills/ for team work. Option three: zip individual skill folders and upload them via Customise Skills in Claude Desktop. Pick whichever matches your stack. The skills themselves are platform-agnostic markdown.
Version control is just Git. The skills are plain markdown; diff them, branch them, review them like any other code. Cost reality check: Claude API usage during the build week will run a few dollars, Apify usage for post-scorer and reels-scripting is metered (~£10-30/month for solo use), and Gemini 2.5 Flash for image generation is cheap per call but easy to multiply.
The eval question that decides whether the week was worth it is one sentence long: is the post-writer output indistinguishable from your longhand work, or is it close-but-clearly-AI? If it's the first, you've inherited a system that compounds. If it's the second, the work is in the voice file, not the skills.
Frequently Asked Questions
What are Claude Code skills?
Skills are folders containing a SKILL.md file (YAML frontmatter plus markdown instructions) and any optional scripts or reference files. Claude loads them on demand when the description matches a user request, following the open Agent Skills standard released by Anthropic in December 2025 and adopted by OpenAI for Codex CLI and ChatGPT shortly after.
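The file shape described here (YAML frontmatter plus markdown instructions) is easy to see in code. The SKILL.md content and the naive frontmatter parser below are illustrative assumptions for this sketch; real on-demand loading is handled by Claude itself, and a robust parser would use a YAML library.

```python
import tempfile
from pathlib import Path

# A minimal SKILL.md: YAML frontmatter, then markdown instructions.
SKILL_MD = """---
name: post-writer
description: Draft LinkedIn posts in the creator's voice
---
Read voice.md before drafting.
"""

def parse_frontmatter(text: str) -> dict:
    # Naive split on the frontmatter delimiters; fine for this flat example.
    _, header, _ = text.split("---", 2)
    return dict(line.split(": ", 1) for line in header.strip().splitlines())

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "post-writer" / "SKILL.md"
    path.parent.mkdir()
    path.write_text(SKILL_MD)
    meta = parse_frontmatter(path.read_text())

print(meta["name"])         # post-writer
print(meta["description"])  # the field matched against user requests
```

It's the description field that drives on-demand loading: when a request matches it, the markdown body below the frontmatter enters the model's context.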
Is Charlie Hills's repo free to use?
Yes. The charlie947/social-media-skills repo is published under MIT licence: fork, modify, and ship the output commercially without restriction. You pay only for Claude API usage (Claude Code itself is free) and any optional services the skills call: Apify for post history (£10-30/month for solo use), Gemini for image generation (cents per call but multiplies on carousels).
Do I need to use all 17 skills?
No. The architecture is layered, not monolithic. The two foundation skills (voice-builder, newsletter-voice) are the only true dependencies. Everything else is independently useful and adopted one at a time. Most creators will start with voice-builder plus post-writer and stop there. Add the next tier when a specific bottleneck (visuals, scoring, ideation) actually shows up.
How is this different from generic AI content tools?
Generic tools generate posts in their default voice with light style transfer applied last. This system inverts the order: every output is a function of your voice.md, built from your interview and writing samples. The skills are thin shells around a thick voice file. Get the voice file right and everything else follows. That single architectural decision is the moat.
What's the difference between a Claude Skill and an MCP server for content creators?
Skills are playbooks (markdown plus scripts loaded into context on demand). MCP servers are live tool endpoints (APIs, always available). This repo uses both: skills encode the playbook (reels-scripting, post-writer), and MCP-style integrations call live data (Apify for post history, Gemini for image generation, Claude for Chrome for Reddit/X scraping). Use skills for what to do. Use MCP for real-time data.
Conclusion
The repo is the clearest creator-side worked example of the Agent Skills standard so far. The architectural bet, voice file as the moat, is what differentiates it from generic AI content tools. The newsletter-as-source pattern is the cleanest answer to multi-platform content creation in 2026. And the fact that all of it ships as MIT-licensed plain markdown means you can read every skill, fork what you want, and ignore what you don't.
You don't need all 17 skills. You probably don't even need 10. Start with voice-builder and one downstream skill that maps to your biggest weekly bottleneck. Run them for a week. Judge the output against one rule: does it sound like you, or does it sound like ChatGPT in your hat? If it's the first, you've found the new floor. If it's the second, the voice file is where the work is.
If you're looking for the marketing-team analogue (persistent context applied across a whole content workflow rather than skills loaded on demand), our walkthrough of 10 Claude Projects use cases for marketing covers the closest cousin.