Certification Blueprint · Published v1.0
A certification body for AI-native enterprise delivery — graded on shipped artifacts, not memorized answers.
The exam blueprint is public, the rubrics are open, and the assessment is independently audited — so enterprise procurement does not have to take our word for any of it.
Section 01 · Why SNCP exists
Traditional certifications grade candidates on what they can recall under a timer. That is why a procurement team can read a candidate's credential list and still have no idea whether the person can ship anything inside a regulated enterprise environment. The market has been pricing that gap for a decade, and the gap is now the product.
The SNCP is designed around the opposite premise. Every tier is gated by shipped artifacts — a pull request merged into a real repo, a governance memo that cleared second-line review, a 45-minute module delivered to a live audience with an executive in the room. The moat is transparency: the blueprint is published, the rubrics are open, the assessment audit is signed by a named third party, and every credential we issue points back to the artifact that earned it.
Section 02 · The three tiers
Each tier is gated on hands-on proof of shipping. No tier is purchased, transferred, or issued on the basis of prior vendor certifications alone.
SNCP-A · Associate · Recertify every 24 months
SNCP · Practitioner · Recertify every 18 months
SNCP-M · Master · Recertify every 24 months
Section 03 · The seven competency domains
Every domain requires an artifact the candidate has already produced and can defend under a live panel. This is the complete scope covered by the SNCP blueprint.
Domain 01
Proficiency with Claude Code, Cursor, Copilot, and the next-generation IDE agent stack. Candidates publish a working configuration and a commit history that demonstrates consistent AI-leveraged output under real constraints.
.claude/ · .cursor/ · CLAUDE.md · AGENTS.md
Required shipped artifact
Published .claude/ or .cursor/ configuration plus 30 days of commit history in a reference repo.
Domain 02
Designing prompts, tool schemas, memory policies, and retrieval pipelines that move eval-harness numbers. Candidates must show a measurable delta on a shared benchmark, not a demo.
tool-schema.json · eval-harness · retrieval.md
Required shipped artifact
Before-and-after eval-harness report with reproducible deltas on a named benchmark.
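The before-and-after delta in that artifact can be sketched as a minimal comparison script. The metric names, scores, and report shape below are illustrative assumptions, not the SNCP harness itself:

```python
# Minimal sketch of a before/after eval delta report. The metric names and
# score values are hypothetical; a real harness would load them from its
# own result files rather than hard-coding them.

def eval_delta(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Return per-metric deltas (after minus before) for metrics present in both runs."""
    return {m: round(after[m] - before[m], 4) for m in before if m in after}

before = {"exact_match": 0.61, "faithfulness": 0.72, "latency_s": 2.4}
after = {"exact_match": 0.68, "faithfulness": 0.79, "latency_s": 1.9}

deltas = eval_delta(before, after)
for metric, delta in sorted(deltas.items()):
    print(f"{metric}: {delta:+.4f}")
```

The point of the artifact is the reproducible delta: the same benchmark name, the same metric set, and a signed pair of runs.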
Domain 03
Multi-agent systems, human-in-the-loop patterns, and structured failure-mode analysis. The emphasis is on orchestration that survives production incidents, not orchestration that looks clean in a demo.
workflow.yaml · hitl-checkpoints · failure-modes.md
Required shipped artifact
Deployed multi-agent workflow with documented HITL checkpoints and a failure-mode analysis memo.
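A HITL checkpoint of the kind documented in that artifact can be sketched as a gate between agent steps. The function names and the risk policy below are hypothetical, not an SNCP-mandated API:

```python
# Hypothetical sketch of a human-in-the-loop checkpoint: an agent step whose
# output must be approved by a human reviewer before the workflow advances.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Checkpoint:
    name: str
    requires_approval: Callable[[dict], bool]  # when True, pause for a human

def run_step(step_output: dict, checkpoint: Checkpoint,
             approve: Callable[[dict], bool]) -> dict:
    """Advance past a checkpoint, pausing for human sign-off when the policy demands it."""
    if checkpoint.requires_approval(step_output):
        if not approve(step_output):
            return {"status": "halted", "at": checkpoint.name}
    return {"status": "advanced", "past": checkpoint.name}

# Illustrative policy: anything flagged high-risk requires sign-off.
gate = Checkpoint("prod-config-change", lambda out: out.get("risk") == "high")

result = run_step({"risk": "high", "diff": "..."}, gate, approve=lambda out: False)
print(result)  # reviewer rejected, so the workflow halts at the checkpoint
```

The failure-mode memo then documents what happens on each "halted" path, which is exactly the part a demo never shows.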
Domain 04
SR 11-7, NIST AI RMF, the EU AI Act, and ISO/IEC 42001. Candidates produce a model-risk document that survives second-line review at a regulated institution.
SR-11-7 · NIST-AI-RMF · EU-AI-Act · ISO-42001
Required shipped artifact
Model-risk documentation that has been reviewed by a second-line function or independent governance body.
Domain 05
Mainframe COBOL, Java 6, and Oracle PL/SQL estates migrated to modern runtimes with test parity. This is the work enterprise procurement actually pays for, and the reason vendor-supplied slides are not enough.
COBOL · Java-6 · Oracle-PL/SQL · parity-report.md
Required shipped artifact
Migrated service with a test-parity report against the legacy behavior baseline.
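A test-parity report boils down to replaying one case set through both implementations and counting mismatches. The toy routines below are stand-ins for a legacy behavior and its port, invented for this sketch:

```python
# Minimal sketch of a test-parity check: run the same inputs through the
# legacy and migrated implementations and report mismatches. The two toy
# rounding routines stand in for a COBOL routine and its modern port.

def legacy_round(amount_cents: int) -> int:
    # Legacy behavior: truncate down to the nearest 10 cents.
    return (amount_cents // 10) * 10

def migrated_round(amount_cents: int) -> int:
    # Migrated behavior must match the legacy baseline exactly.
    return amount_cents - (amount_cents % 10)

def parity_report(cases: list[int]) -> dict:
    mismatches = [c for c in cases if legacy_round(c) != migrated_round(c)]
    return {"total": len(cases), "mismatches": mismatches, "parity": not mismatches}

report = parity_report([0, 7, 19, 105, 9999])
print(report)
```

The artifact is the report over the full legacy case set, not the two toy functions; parity means the mismatch list is empty against the recorded baseline.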
Domain 06
Evaluations, canary deployments, drift monitoring, and a rollback runbook that has been exercised. Candidates demonstrate production hygiene, not happy-path demos.
canary.yaml · drift-dashboard · rollback-runbook.md
Required shipped artifact
Executed rollback runbook plus a drift-detection dashboard snapshot from a production deployment.
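The drift-detection half of that artifact can be sketched as a baseline-versus-live comparison. A production dashboard would use a richer statistic (PSI, a KS test); the mean-shift rule and the 10% threshold here are assumptions for the sketch:

```python
# Illustrative drift check: compare a live window of a feature against its
# training baseline and flag drift when the mean shifts beyond a relative
# threshold. Threshold and data are invented for this sketch.
from statistics import mean

def drifted(baseline: list[float], live: list[float], threshold: float = 0.1) -> bool:
    """Flag drift when the live mean moves more than `threshold` (relative) from baseline."""
    base = mean(baseline)
    return abs(mean(live) - base) > threshold * abs(base)

baseline = [0.48, 0.52, 0.50, 0.49, 0.51]
stable = [0.50, 0.49, 0.51, 0.50, 0.50]
shifted = [0.71, 0.69, 0.72, 0.70, 0.68]

print(drifted(baseline, stable))   # within tolerance
print(drifted(baseline, shifted))  # drift: trigger the rollback runbook
```

The "exercised" requirement is the point: the runbook entry for a True result has actually been executed, and the dashboard snapshot proves the monitor was live.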
Domain 07
Converting an executive mandate into a shippable engineering specification and a weekly status memo that does not get the sponsor fired. The connective tissue between delivery and budget.
scope-doc · status-memo · exec-mandate
Required shipped artifact
Executive mandate, shipped specification, and 4 weekly status memos from a delivered engagement.
Section 04 · The assessment funnel
The funnel below applies to the Practitioner tier. Associate and Master variants share the same rubric family with tier-appropriate scope changes published alongside this blueprint.
Stage 01 · Async · 7 days · Written + video evidence
Benchmark: inspired by AWS Professional portfolio review
Stage 02 · 8 hours · Open book · Sandbox repo + eval harness + governance memo + Loom
Benchmark: similar in style to the HashiCorp Certified Vault Specialist performance lab
Stage 03 · 90 minutes · Live · Pair-coding + architecture defense
Benchmark: similar to the CKA's live-terminal format, extended to architecture review
Stage 04 · 60 minutes · Live · Teach a 45-min module to a mixed audience
No analog in the AWS, HashiCorp, or CKA tracks — this stage is the SNCP differentiator.
Stage 05 · 5 business days · Async verification
Section 05 · Instructor Operating Standard
Enforced on the ScaledNative platform, attached to the engagement contract, and surfaced in the post-engagement 360.
Hands-on lab ratio ≥ 60%
Every engagement guarantees at least 60% live keyboard-on-client-stack work. Measured on the engagement retrospective, not on the proposal deck.
Live coding in client stack ≥ 20%
A minimum of 20% of instructional time is live coding inside the client's actual repository, not a vendor sandbox.
Slideware hard cap ≤ 15%
No more than 15% of any engagement is slideware. Exceeding the cap triggers an involuntary quality review.
Pre-engagement scope doc within 5 business days
Signed 2-page scope document with named success metrics delivered within 5 business days of kickoff. No scope, no engagement.
Post-engagement reporting SLA: 5 business days
Capability heatmap, shipped-artifacts index, a 90-day readiness plan, and a cohort 360 — delivered inside 5 business days of close.
360 feedback feeds recertification
Every engagement triggers a client 360 that feeds directly into the practitioner's recertification review. Feedback is non-optional and non-editable.
Sustained negative client feedback triggers decertification review
A documented pattern of sub-threshold 360-review outcomes across consecutive engagements opens an involuntary decertification review chaired by two SNCP-Ms and an external observer.
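The three ratio standards above are mechanical enough to check from an engagement's time log. A hypothetical validator, assuming disjoint time categories and field names that are illustrative rather than a ScaledNative platform API:

```python
# Hypothetical validator for the engagement-time standards: at least 60%
# hands-on lab, at least 20% live coding in the client stack, at most 15%
# slideware. Category names and the disjoint-hours model are assumptions.

def check_engagement(hours: dict[str, float]) -> list[str]:
    """Return the list of standard violations for an engagement's hour log."""
    total = sum(hours.values())
    share = lambda k: hours.get(k, 0.0) / total
    violations = []
    if share("hands_on_lab") < 0.60:
        violations.append("hands-on lab ratio below 60%")
    if share("live_coding_client_stack") < 0.20:
        violations.append("live coding in client stack below 20%")
    if share("slideware") > 0.15:
        violations.append("slideware above 15% hard cap")
    return violations

# A compliant 40-hour engagement: 26h lab, 9h live coding, 5h slides.
ok = check_engagement({"hands_on_lab": 26, "live_coding_client_stack": 9, "slideware": 5})
print(ok)  # no violations
```

Because the standard is measured on the retrospective rather than the proposal deck, a check like this runs on logged hours after close, feeding the same quality-review trigger described above.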
Section 06 · Out-of-scope commitments
Twelve commitments that distinguish this credential from every other certification in the market. Attached to the engagement contract. Enforceable by the client.
Does not bill for slideware-only days.
Does not prescribe ceremonies, rituals, or frameworks without a measurable output tied to revenue, cost, or risk.
Does not recommend tools they have not personally shipped with in the last 12 months.
Does not teach AI strategy without writing code in the client's stack the same week.
Does not use vendor-supplied decks untouched — every example is rebuilt against the client's actual data.
Does not claim certification counts, logos, or headcount as proof of competence. Only shipped artifacts.
Does not deliver fixed-scope transformation programs — every engagement is milestone-billed with a kill-switch.
Does not take vendor referral fees without written pre-disclosure to the client.
Does not allow junior shadow-instructors to deliver unsupervised. The named SNCP is on camera every paid hour.
Does not ghost-write status reports for client executives.
Does not use client code, prompts, or data to train external models or enrich other engagements.
Does not accept work outside their SNCP-verified domains — they refer.
If you have shipped — in a regulated environment, on a legacy estate, or under a real production incident — you qualify to apply.
The full exam blueprint — including scoring rubrics and calibration data — is available under NDA for enterprise procurement teams. Contact us.