
Automotive Skills Suite for Claude: 152 Skills That Replace Your xlsx Stack (Tutorial)

Photo by Lenny Kuhne / Unsplash

Editorial note: Star count, skill count, and standards coverage are snapshots from the public jherrodthomas/automotive-skills-suite repository as of 7 May 2026. Treat the output of any builder skill as a draft your safety / cybersecurity / quality team reviews; assessor sign-off is still a human responsibility.

Automotive software complexity grew roughly 4× over the last decade while software development capability grew only 1× (Perforce 2025 State of Automotive Software Development, citing McKinsey). The gap shows up everywhere a functional safety, cybersecurity, or quality engineer opens a spreadsheet. HARA, FSC, TSC, FMEDA, TARA, DFMEA, control plan, PPAP, RTM. Every phase ships an xlsx, and every phase re-keys the previous one.

This tutorial walks the automotive-skills-suite end-to-end: a 152-skill MIT-licensed Claude bundle that turns each phase into a builder + reviewer pair and hands off via stable xlsx contracts. We'll install one skill, run a HARA on a sample ECU, chain it through to FSC and TSC, look at the parallel ISO/SAE 21434 cybersecurity chain, then close with the parts you should not hand to AI. Goal: under an hour, end-to-end, on real artifacts. (For another deep Claude Skills bundle worth pairing with this one, see our walkthrough of Charlie Hills's open-source social media skills.)

automotive-skills-suite — 152 installable Claude skills: 76 builders · 76 confirmation reviewers · MIT. Standards covered: ISO 26262 · ISO/SAE 21434 · ISO 21448 · IATF 16949 AIAG-VDA · ASPICE · AUTOSAR · UDS · ODX · SysML · V&V. Flagship chain: HARA → FSC → TSC → Safety Case.
Source: jherrodthomas/automotive-skills-suite README, May 2026.

TL;DR: The open-source automotive-skills-suite (github.com/jherrodthomas/automotive-skills-suite, 887 stars, MIT) ships 152 installable Claude skills (76 builders plus 76 confirmation reviewers) that emit the structured xlsx deliverables required across ISO 26262, ISO/SAE 21434, ISO 21448, AIAG-VDA, ASPICE, AUTOSAR, and V&V. Install one .skill bundle, hand Claude an item definition, and a HARA → FSC → TSC chain rolls out where downstream skills consume upstream xlsx as a stable file-format contract. McKinsey reports automotive software complexity grew 4× while development capability grew 1× over the last decade. Chains like this are how teams close that gap without hiring engineers they can't find.


What Is the Automotive Skills Suite, and What Does It Actually Ship?

It's 152 installable Claude skills (76 builders paired with 76 confirmation reviewers) covering the full automotive engineering lifecycle. Standards on the box: ISO 26262:2018, ISO/SAE 21434:2021, ISO 21448:2022, IATF 16949:2016, AIAG-VDA FMEA Handbook 2019, ASPICE PAM 3.1 / 4.0, AUTOSAR Classic and Adaptive R22-11, ISO 14229-1 (UDS), SAE J2012 (DTCs), ISO 22901-1 (ODX), OMG SysML 1.6 / 2.0, and IEEE 1012 V&V. MIT-licensed, 887 GitHub stars as of 7 May 2026, maintained by jherrodthomas.

The skill-pair pattern is the whole design. Each builder takes a JSON input plus the upstream artifact and emits a structured xlsx with consistent NAVY / Calibri styling and ASIL or CAL color coding. Each matching reviewer ingests that xlsx and writes a visual dashboard (KPI tiles, pie chart, compliance bar, stacked rating breakdown by section, findings table) without touching the source artifact. Gaps land in a Recommended Actions table for a human to close. The same prompt-as-code logic shows up in other open-source Claude bundles we've covered — see our piece on prompt-as-code as a repeatability pattern.

The breadth is what makes this different from a single-artifact AI tool. You're not getting one HARA generator. You're getting a HARA, an FSC, a TSC, four HW lane skills (HW-SR, HW-Architecture, HSI, FMEDA), four SW lane skills (SW-Arch, SW-SR, SW-FMEA, SW-HSIS), and a Safety Case capstone. That's just ISO 26262 concept-through-product. Parallel chains live alongside it: TARA → CS Goals → CS Concept → CS Architecture → IR Plan → Secure Coding for ISO/SAE 21434, SOTIF Analysis → Triggering Conditions → Validation Strategy for ISO 21448, APQP → DFMEA → PFMEA → Control Plan → PPAP for IATF 16949 / AIAG-VDA, and a full ASPICE assessment chain through process evidence.

76 builder pairs, by family: ISO 26262 (12) · ISO/SAE 21434 (9) · Comm Protocols (8) · Diagnostics (8) · Continuous Improvement (8) · Program Mgmt (8) · AUTOSAR (8) · V&V (10) · AIAG-VDA Quality (10) · SOTIF (8) · SysML + MBSE (10) · ASPICE + Calibration (10)
Approximate distribution. Slice counts are rounded for legibility; check the README category tables in jherrodthomas/automotive-skills-suite (May 2026) for the exact builder-pair count per family.

That breadth matters because the same engineer at a Tier-1 supplier often owns the safety case, the cybersecurity concept, and the ASPICE evidence package on the same program. Three separate AI tools means three separate xlsx file formats and zero handoff. One suite where every output sits in a shared schema means the FSC reads its inputs from the HARA the same way every time.


Why Does This Matter Now?

Three signals make 2026 the year teams stop hand-editing this stack. First: McKinsey via Perforce reports automotive software complexity grew 4× over the last decade while development capability grew 1× (Perforce, 2025). The gap is the work. Second: 25 percent of automotive software teams call out scaling for distributed developers as their top productivity blocker in the same Perforce report. Third: the cybersecurity specialist shortage is "placing a heavy burden on the entire industry" per dissecto, and IDC pegged the global developer shortage at 4 million by 2025.

The regulatory ratchet keeps tightening on top of that staffing reality. UN R155 mandates a certified Cybersecurity Management System for vehicle type approval and explicitly references ISO/SAE 21434, which means OEMs and every Tier-1 / Tier-2 in the supply chain now produce TARA, CS Goals, CS Concept, CS Architecture, and an Incident Response Plan as a precondition to selling new types in UNECE-aligned markets (VicOne). UN R156 adds software update management on top. The xlsx pile is not optional.

Why hasn't AI cleared the backlog already? Because the published AI-for-automotive-safety pieces (Embitel, ENCO, New Eagle, Pi Innovo) each cover one artifact at a time. A HARA generator, an FMEA generator, a safety case generator. None of them analyze how artifacts hand off. The single most expensive minute in this work isn't writing a HARA. It's reconciling the HARA spreadsheet against the FSC spreadsheet six months later when the malfunction list changed and nobody propagated it. A chain that compounds across phases is qualitatively different from a smarter single-artifact tool.


How Does the Builder + Reviewer Chain Work?

Each builder takes a JSON input plus the upstream artifact's xlsx and emits the next xlsx in the chain. Each matching reviewer ingests that output and writes a visual dashboard with KPI tiles, a compliance bar, a stacked rating breakdown by section, and a findings table, and it never modifies the source. The interesting part: many reviewers auto-verify quantitative thresholds against the standard, so the dashboard tells you whether the artifact is mathematically conformant before a human sits down to read it.

Here's the canonical concept-phase chain. Item Definition feeds Safety Plan and the Development Interface Agreement (DIA). Those feed HARA. HARA feeds FSC, FSC feeds TSC, and TSC splits into the HW lane (HW-SR → HW-Architecture → HSI → FMEDA) and the SW lane (SW-Arch → SW-SR → SW-FMEA → SW-HSIS). Both lanes converge on the Safety Case capstone. Three parallel chains run alongside: TARA → CS Goals → CS Concept → CS Architecture → IR Plan + Secure Coding for ISO/SAE 21434, SOTIF Analysis → Triggering Conditions → Validation Strategy for ISO 21448, and APQP → DFMEA → PFMEA → Control Plan → PPAP for IATF 16949. Every box has a builder and a matching reviewer.
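The chain above is just a dependency DAG, and a useful sanity check before wiring builders into any automation is confirming your build order respects it. A minimal sketch in Python — the keys are abbreviated for readability and are illustrative, not the suite's actual bundle names:

```python
from graphlib import TopologicalSorter

# Upstream dependencies for the concept-phase chain described above.
# Key names are shorthand, not the suite's literal .skill bundle names.
DEPENDS_ON = {
    "safety-plan": ["item-definition"],
    "dia": ["item-definition"],
    "hara": ["item-definition", "safety-plan", "dia"],
    "fsc": ["hara"],
    "tsc": ["fsc"],
    # HW lane
    "hw-sr": ["tsc"], "hw-architecture": ["hw-sr"],
    "hsi": ["hw-architecture"], "fmeda": ["hsi"],
    # SW lane
    "sw-arch": ["tsc"], "sw-sr": ["sw-arch"],
    "sw-fmea": ["sw-sr"], "sw-hsis": ["sw-fmea"],
    "safety-case": ["fmeda", "sw-hsis"],
}

# A valid build order: every artifact is generated after its inputs exist.
order = list(TopologicalSorter(DEPENDS_ON).static_order())
assert order.index("hara") < order.index("fsc") < order.index("tsc")
```

The same structure tells you the blast radius of a change: everything downstream of a regenerated node needs a re-run and a reviewer pass.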

The auto-verifications are where this tool earns its keep. None of these numbers are matters of opinion, and none of them are fun to compute by hand on a deadline. Five of the most common reviewer thresholds:

| Reviewer | Auto-verified metric | Threshold | Standard reference |
| --- | --- | --- | --- |
| fmeda-reviewer | SPFM / LFM / PMHF | ASIL B: ≥90% / ≥60% / ≤100 FIT · ASIL D: ≥99% / ≥90% / ≤10 FIT | ISO 26262-5 Table 5 |
| ppap-package-reviewer | Cpk | ≥1.33 standard · ≥1.67 special characteristics | AIAG PPAP 4th ed. |
| msa-gage-rr-reviewer | Gauge R&R % | ≤10% acceptable, 10–30% conditional, >30% reject | AIAG MSA 4th ed. |
| sw-fmea-reviewer | Residual risk vs ASIL | Qualitative residual risk per ASIL threshold of the safety goal | ISO 26262-6 |
| spc-chart-reviewer | Cpk vs target + Western Electric / Nelson rules | Per Control Plan target Cpk; rule violations flagged | AIAG SPC 2nd ed. |
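As a concrete sketch of what "mathematically conformant" means, here is the FMEDA threshold check in Python. The ASIL B and D targets mirror the table above; the ASIL C row uses the standard ISO 26262-5 values for completeness, and the function shape is ours, not the reviewer's actual code:

```python
# Hardware architectural metric targets from ISO 26262-5 (SPFM, LFM, PMHF in FIT).
# ASIL B and D match the table in this article; ASIL C added for completeness.
TARGETS = {
    "ASIL B": {"spfm": 0.90, "lfm": 0.60, "pmhf_fit": 100.0},
    "ASIL C": {"spfm": 0.97, "lfm": 0.80, "pmhf_fit": 100.0},
    "ASIL D": {"spfm": 0.99, "lfm": 0.90, "pmhf_fit": 10.0},
}

def fmeda_conformant(asil: str, spfm: float, lfm: float, pmhf_fit: float) -> list[str]:
    """Return the metrics that miss their target (empty list = conformant)."""
    t = TARGETS[asil]
    findings = []
    if spfm < t["spfm"]:
        findings.append(f"SPFM {spfm:.1%} < {t['spfm']:.0%}")
    if lfm < t["lfm"]:
        findings.append(f"LFM {lfm:.1%} < {t['lfm']:.0%}")
    if pmhf_fit > t["pmhf_fit"]:
        findings.append(f"PMHF {pmhf_fit:g} FIT > {t['pmhf_fit']:g} FIT")
    return findings

# An SPFM of 96.4% on an ASIL D goal fails the 99% bar:
assert fmeda_conformant("ASIL D", spfm=0.964, lfm=0.92, pmhf_fit=8.0) == ["SPFM 96.4% < 99%"]
```

None of this requires judgment, which is exactly why it belongs in a reviewer dashboard rather than an engineer's Friday afternoon.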

The chain compounds because every downstream skill consumes the upstream xlsx as a stable file-format contract. Rename a malfunction in HARA, regenerate, and the FSC reviewer flags every safety goal whose derivation no longer matches. That's the part hand-edited xlsx never gets you.
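The contract check itself is mechanically simple, which is the point. A sketch on plain dicts — the field names are hypothetical placeholders, not the suite's actual xlsx schema:

```python
# Sketch of the cross-artifact check an FSC reviewer performs, on plain dicts
# instead of the real xlsx. Field names here are illustrative, not the schema.
hara_malfunctions = {"MF-01": "Unintended braking", "MF-02": "Loss of yaw control"}

fsc_safety_goals = [
    {"id": "SG-01", "derived_from": "MF-01"},
    {"id": "SG-02", "derived_from": "MF-03"},  # stale: MF-03 was renamed in the HARA
]

# Every safety goal whose derivation no longer ties back gets flagged.
orphaned = [sg["id"] for sg in fsc_safety_goals
            if sg["derived_from"] not in hara_malfunctions]
assert orphaned == ["SG-02"]  # lands in the findings table for a human to reconcile
```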


How Do You Install Your First Skill?

Three steps and roughly five minutes. Clone or download the .skill bundle you need from skills/ in the repo, install it, then trigger it by phrasing. Skills install into one of four scopes per Anthropic's Skills documentation: personal at ~/.claude/skills/ for just you, project at .claude/skills/ in a repo root for your team via git, plugin distributed via marketplace, or enterprise pushed through MDM. Tutorial mode here is personal scope.

Concrete commands, copy-paste:

# Clone the suite
git clone https://github.com/jherrodthomas/automotive-skills-suite.git
cd automotive-skills-suite

# Install one skill into personal scope
mkdir -p ~/.claude/skills
cp -r skills/hara-builder ~/.claude/skills/
cp -r skills/hara-reviewer ~/.claude/skills/

# Or install the whole suite
cp -r skills/* ~/.claude/skills/

If you're on Cowork or Claude Desktop, the GUI path is the "Save skill" button on the skill card; same effect, different surface. The Claude Code CLI reads from ~/.claude/skills/ automatically; restart the session and the skill becomes available.

Triggering is by phrasing, not by command. Every skill declares its triggering description in frontmatter, and Claude picks it up from the way you describe what you want. For HARA: "Build me a HARA for a new ECU project — Electronic Stability Control, 12-volt platform, target ASIL D." That phrasing matches hara-builder's declared description, the skill loads, Claude asks the JSON questions it needs, and out comes the xlsx. The community library now lists more than 1,234 agentic skills compatible with the SKILL.md format (Codecademy, March 2026), so getting comfortable with the trigger-by-phrasing model pays off across other suites too.

One install detail worth knowing on day one: the project scope (.claude/skills/ committed via git) is how you give a whole functional safety team the same builders without each engineer doing the install themselves. That's where this stops being a personal-productivity story and starts being a team contract. For the broader pattern of skill-pack distribution and a working install playbook, see our breakdown of Guizang's PPT skill bundle.


How Do You Run a HARA, Then Chain to FSC and TSC?

With hara-builder, hara-reviewer, fsc-builder, fsc-reviewer, tsc-builder, and tsc-reviewer installed in ~/.claude/skills/, the chain runs end-to-end without a single manual xlsx edit between phases. Below is a worked walkthrough on a sample Electronic Stability Control ECU. Pick whichever ECU you have an item definition for; the pattern is identical.

Step 1 — HARA. Hand the builder your item definition (PDF or pasted text) and a JSON list of malfunctions. The builder applies 14 malfunction guide words across the Cartesian product of function × malfunction × environment, scores Severity / Exposure / Controllability per ISO 26262-3 §5, derives ASIL, and writes a Hazard Analysis xlsx with safety goals. Pass that file to hara-reviewer. The dashboard shows ASIL distribution, function coverage, malfunction-guide-word coverage, and a findings table for each safety goal that's missing a safe state, a fault-tolerant time interval, or an emergency operation interval.
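The S/E/C → ASIL lookup is ISO 26262-3 Table 4, and a quick way to spot-check the builder's output is the well-known additive shortcut: with S in 1–3, E in 1–4, and C in 1–3, sums of 7 through 10 map to ASIL A through D, and anything lower is QM. A sketch (verify against Table 4 for your program before relying on it):

```python
def derive_asil(s: int, e: int, c: int) -> str:
    """ISO 26262-3 Table 4 via the additive shortcut (S1-S3, E1-E4, C1-C3)."""
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("S in 1..3, E in 1..4, C in 1..3")
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(s + e + c, "QM")

assert derive_asil(3, 4, 3) == "ASIL D"  # worst case: S3, E4, C3
assert derive_asil(2, 2, 2) == "QM"
```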

Step 2 — FSC. Pipe the HARA xlsx into fsc-builder. The skill produces an FTA per safety goal with a visual fault tree, derives the FSRs against each top event, runs ASIL decomposition where it's mathematically supported, and emits the FSC xlsx. fsc-reviewer checks fault-tree completeness, FSR-to-safety-goal traceability, and ASIL preservation through decomposition. If the HARA changed, regenerate; the reviewer flags every FSR whose derivation no longer ties back.
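"Where it's mathematically supported" refers to the permitted decomposition schemes of ISO 26262-9 — for example, ASIL D may split into C(D)+A(D), B(D)+B(D), or D(D)+QM(D), but never A(D)+A(D). A validity check you can run against the builder's output (a sketch, not the suite's code):

```python
# Permitted ASIL decomposition schemes per ISO 26262-9. Notation: "C(D)" means an
# element developed to ASIL C carrying ASIL D decomposition requirements.
RANK = {"QM": 0, "A": 1, "B": 2, "C": 3, "D": 4}
SCHEMES = {
    "D": [("C", "A"), ("B", "B"), ("D", "QM")],
    "C": [("B", "A"), ("C", "QM")],
    "B": [("A", "A"), ("B", "QM")],
    "A": [("A", "QM")],
}

def valid_decomposition(original: str, part1: str, part2: str) -> bool:
    """True if the (part1, part2) split is a permitted scheme for the original ASIL."""
    pair = tuple(sorted((part1, part2), key=RANK.get, reverse=True))
    return pair in SCHEMES.get(original, [])

assert valid_decomposition("D", "B", "B")
assert not valid_decomposition("D", "A", "A")  # not a permitted scheme
```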

Step 3 — TSC. Feed the FSC xlsx into tsc-builder. The output: HW-TSR / SW-TSR with safety mechanisms, an HSI scaffold, system FMEA, Dependent Failure Analysis, and a draw.io system architecture diagram. tsc-reviewer checks coverage of FSRs by TSRs, allocation of safety mechanisms, and DFA completeness against common-cause initiators. Three builders + three reviewers, six linked xlsx artifacts, zero re-keying.

Hours per artifact (illustrative consultancy-rate baseline), hand-edited xlsx vs. builder + reviewer chain: HARA ≈ 28 h vs ≈ 4 h · FSC ≈ 22 h vs ≈ 3.5 h · TSC ≈ 26 h vs ≈ 5 h · FMEDA ≈ 32 h vs ≈ 5.5 h (auto SPFM/LFM/PMHF).
Illustrative only — do not quote without replicating. Hand-edited baselines are author projections against published consultancy rate cards (not measured), and chain figures reflect run-time on a single sample ESC chain. Time the chain against your own program before publishing or citing these numbers.

One concrete value the reviewer surfaced on a sample ESC run is worth calling out: the FMEDA dashboard flagged the Single-Point Fault Metric at 96.4 percent on a goal we'd targeted as ASIL D, where the standard demands ≥ 99 percent. The fix took the safety mechanism allocation back to architecture for one redistribution cycle. Five minutes to spot, an hour to fix. The same gap on a hand-edited FMEDA xlsx normally surfaces during the assessor's review three weeks before SOP, a far more expensive moment to find it. (For an adjacent pattern, pulling structured artifacts out of legacy specs, we covered Reversa's legacy spec extractor.)
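For intuition on why the reviewer can compute that flag instantly: SPFM is a ratio of failure rates the FMEDA already contains — one minus the single-point and residual fault rates over the total safety-related failure rate. A toy calculation with invented FIT numbers:

```python
# SPFM per ISO 26262-5: 1 - sum(lambda_SPF + lambda_RF) / sum(lambda_safety_related).
# All rates in FIT (failures per 1e9 hours); the numbers below are invented.
elements = [
    # (name, lambda_total, lambda_single_point, lambda_residual)
    ("MCU core",        120.0, 0.5, 1.0),
    ("Watchdog",         15.0, 0.0, 0.2),
    ("CAN transceiver",  40.0, 1.5, 3.0),
]

total  = sum(lam for _, lam, _, _ in elements)          # 175.0 FIT
unsafe = sum(spf + rf for _, _, spf, rf in elements)    # 6.2 FIT
spfm = 1 - unsafe / total

assert spfm < 0.99  # ~96.5%: misses the ASIL D bar, passes ASIL B's 90%
```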


What About the Cybersecurity (TARA) Chain?

The ISO/SAE 21434 chain mirrors the safety chain in shape but enforces different thresholds. TARA → CS Goals → CS Concept → CS Architecture → IR Plan + Secure Coding. Six builder + reviewer pairs. The TARA builder covers STRIDE across assets, runs Risk Value lookup per the standard, and emits the threat scenarios xlsx. CS Goals derives Cybersecurity Assurance Level (CAL) per asset. CS Concept produces CSRs against each cybersecurity property (confidentiality, integrity, availability, authenticity, non-repudiation). CS Architecture writes a cryptographic inventory, key management, and a secure boot chain. IR Plan emits a PSIRT plan with the UN R155, ENISA, and NHTSA notification deadlines wired into the timeline.
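The scenario enumeration step is the mechanical part of a TARA: candidate threats are the Cartesian product of assets and STRIDE categories, which then get screened and rated (impact × attack feasibility → Risk Value) per the standard. A sketch with a hypothetical asset list — the real one comes from your item definition:

```python
from itertools import product

STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

# Hypothetical assets for an ESC ECU; substitute your item definition's asset list.
assets = ["CAN gateway firmware", "Calibration data", "Diagnostic session keys"]

# Candidate threat scenarios: one per asset x STRIDE category. Screening and
# risk rating per ISO/SAE 21434 happen downstream of this enumeration.
scenarios = [f"{threat} of {asset}" for asset, threat in product(assets, STRIDE)]
assert len(scenarios) == 18  # 3 assets x 6 categories
```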

Why this matters in 2026: UN R155 has been mandatory for new vehicle type approvals across UNECE-aligned markets since July 2024 (VicOne). The regulation explicitly references and overlaps with ISO/SAE 21434, which means the entire supply chain is now in scope, not just the OEM. A Tier-2 producing a sensor module is being asked for the same TARA-shaped artifact the Tier-1 produces for the integrated ECU, which is the same shape the OEM consolidates for type approval.

The Secure Coding reviewer is the small but interesting one. It checks against CERT C, MISRA C:2012 + Amendment 1, and the AUTOSAR C++14 secure subset all at once: three rule sets that almost no team reconciles by hand. If your codebase is mixed C and C++ (most automotive ECUs are), that reconciliation is normally where two engineers spend a week. The reviewer surfaces the rule overlaps and the rule conflicts in one findings table.
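The reconciliation itself reduces to indexing the three rule sets by shared topic and diffing coverage. A sketch with placeholder topic keys and rule IDs — deliberately not the real MISRA/CERT/AUTOSAR numbering:

```python
# Sketch of cross-standard rule reconciliation. Topic keys and rule IDs are
# hypothetical placeholders, not the actual MISRA/CERT/AUTOSAR tables.
rulesets = {
    "MISRA C:2012":  {"dynamic-memory": "M-1", "unchecked-return": "M-2"},
    "CERT C":        {"dynamic-memory": "C-1", "integer-overflow": "C-2"},
    "AUTOSAR C++14": {"dynamic-memory": "A-1", "unchecked-return": "A-2"},
}

# Index every topic by which standards address it.
topics = {t for rules in rulesets.values() for t in rules}
overlap = {t: {name: rules[t] for name, rules in rulesets.items() if t in rules}
           for t in topics}

# Topics covered by all three sets vs. only some: the shape of the findings table.
all_three = [t for t, hits in overlap.items() if len(hits) == 3]
assert all_three == ["dynamic-memory"]
```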


Where Does This NOT Belong?

This is the part most AI-tooling write-ups skip, so it's worth being specific. The suite produces structured xlsx; it doesn't produce the engineering judgment behind the malfunction list, the threat model, or the residual risk acceptance rationale. Treat builder output as a draft your team confirms.

Don't hand it:

- The HARA malfunction list authorship. This is where domain experience earns its keep; a builder will give you a competent default, but the ten malfunctions that actually matter on your platform come from someone who's worked it for years.
- The TARA threat scenarios that depend on attacker capability assumptions specific to your fleet.
- The residual risk acceptance memo that needs your safety manager's signature.
- The lessons-learned synthesis that needs human judgment about what was actually surprising.
- Assessor / TPA review of any ASIL D or CAL 4 deliverable.

The reviewer dashboards flag gaps; they don't close them. ISO/PAS 8800 (AI in functional safety) isn't covered as of May 2026 either. Automotive Functional Safety Week 2026 has flagged it as a focus topic, but the suite hasn't shipped 8800 builders yet.

And one operational note that's easy to miss: the suite is community-maintained and MIT-licensed. For production deployment, pin your version, don't auto-pull from main, and run a smoke test on a known artifact when you upgrade. The same caveats you'd apply to any open-source dependency in a regulated workflow apply here.


Frequently Asked Questions

Is the automotive-skills-suite production-ready?

It is MIT-licensed, sits at 887 GitHub stars as of May 2026, and is structured around stable xlsx file-format contracts between phases. Assessor and TPA sign-off remains required for ASIL D and CAL 4 deliverables. Treat outputs as draft artifacts the team confirms, not final submissions.

Do I need a paid Claude plan to use the automotive-skills-suite?

No. Skills install into any Claude environment that supports the SKILL.md format. Claude Skills support four install scopes: personal at ~/.claude/skills/, project at .claude/skills/ shared via git, plugin distributed through marketplaces, and enterprise pushed by IT through MDM.

Does it cover ISO/PAS 8800 for AI in functional safety?

Not as of May 2026. The README does not list ISO/PAS 8800 among covered standards. Automotive Functional Safety Week 2026 has flagged 8800 as a focus topic, so coverage is plausible but not promised. The current suite spans ISO 26262, ISO/SAE 21434, ISO 21448, IATF 16949, ASPICE PAM 3.1 / 4.0, and AUTOSAR R22-11.

Can I extend the suite with my own skills?

Yes. Every .skill bundle is independently installable, and the chain contract is the xlsx file format itself. A custom skill that consumes the same upstream xlsx slots into the chain. The community library lists over 1,234 agentic skills compatible with the SKILL.md format as of March 2026 (Codecademy).
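As a sketch of what "slots into the chain" looks like, here is a minimal downstream consumer using openpyxl. The sheet name and column layout below are assumptions for illustration — check them against the actual artifact — and the function name is ours, not the suite's:

```python
# Minimal downstream consumer of a HARA xlsx. Assumes openpyxl is installed.
# The "Safety Goals" sheet and ID/Goal/ASIL column order are hypothetical;
# verify against the real workbook produced by your builder.
from openpyxl import load_workbook

def read_safety_goals(path: str) -> list[dict]:
    wb = load_workbook(path, read_only=True, data_only=True)
    ws = wb["Safety Goals"]                                # hypothetical sheet name
    rows = ws.iter_rows(min_row=2, values_only=True)       # skip the header row
    return [{"id": r[0], "goal": r[1], "asil": r[2]} for r in rows if r[0]]

# goals = read_safety_goals("hara_output.xlsx")
```

A custom skill that reads this way inherits the chain's main property for free: regenerate the HARA and your consumer sees the change on the next run.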


The Bottom Line

The interesting question is not whether AI can write a HARA. It can, and a dozen single-artifact tools already do that. The interesting question is whether AI can keep an entire functional safety, cybersecurity, and quality program coherent across phases the way a good safety manager does. The automotive-skills-suite bets the answer is "yes, if every output is a stable xlsx and every downstream skill reads it as a file-format contract." That's a small bet stated precisely. It's also the right one.

If you're a functional safety lead at a Tier-1, an automotive cybersecurity engineer post-UN R155, or a quality engineer running APQP through PPAP, clone the repo, install hara-builder and hara-reviewer, run them on an item definition you already have, and time the chain end-to-end against your hand-edited baseline. An hour, one ECU, two xlsx files. That's enough data to know whether to put the rest of the suite in your team's .claude/skills/ next sprint. (For another open-source Claude tooling write-up worth pairing with this one, see our OpenClaude-Portable tutorial.)