AI Governance that Works · Insight

ISO/IEC 42001 in the DACH Area

A four-month runway to the EU AI Act high-risk deadline. And an auditor bench that is still being built. What the Vorstand actually needs to decide.

8 min read
April 17, 2026
HandsOn Insights

Globally, fewer than 100 organizations held the certificate as of January 2026, when BCG announced its own. The EU AI Act's binding deadline for high-risk systems lands in August 2026, four months from today, and ISO 42001 could play a decisive role in meeting it.

The pattern we see in DACH boardrooms is consistent. Every CEO asks whether they need to certify. Almost none have decided who would own the management system if the answer turned out to be yes.

The certificate gets the headlines at the moment. The management system behind it decides whether the headlines are earned.

Governance on paper is the majority case — the numbers make it undeniable

Deloitte’s State of AI 2026 work reports that 87% of executives say their organization has an AI governance framework in place. Fewer than 25% have fully operationalized it. In practice, that gap looks like policies signed at executive-committee level and never enforced in procurement; procurement onboarding AI-enabled tools without flagging them to compliance; and compliance running an AI inventory that misses most of the stack.

87%
Framework Exists
Share of executives reporting an AI governance framework in place (Deloitte, State of AI 2026).
<25%
Actually Operational
Share that have fully operationalized the framework in procurement, risk, and monitoring (Deloitte).
11%
Responsible AI Live
Leaders with fully implemented responsible AI across inclusiveness, accountability, transparency (PwC 2025).

In the DACH Mittelstand, the gap is likely wider still: fewer dedicated AI teams, more distributed adoption, a procurement function built to check licence costs rather than model risk, and a risk function that has not yet written AI into its taxonomy.

This is the environment ISO/IEC 42001 walks into. Published in December 2023 by ISO/IEC JTC 1/SC 42, the subcommittee on Artificial Intelligence, it is the first international standard that forces exactly the stress test Deloitte’s survey exposes: whether the stated controls are actually running or sitting in a SharePoint folder.

How ISO 42001 forces governance out of the slide deck

The standard applies to any organization that develops, provides, or uses AI-based systems — regardless of size or industry. It follows the High Level Structure shared by ISO 9001 and ISO 27001, which means Clauses 4 through 10 (context, leadership, planning, support, operation, performance evaluation, improvement) are familiar territory for any quality or information-security function. What’s new sits in Annex A: 38 AI-specific reference controls organized into nine domains, from AI policies and impact assessment to lifecycle, data governance, and third-party relationships.

The logical shape matters more than the clause count. 42001 is an AI Management System (AIMS) standard: it mandates that governance is documented, audited, reviewed by leadership, and continually improved. What it demands is auditable evidence of how you decided, who decided, what inputs were reviewed, and how the system is monitored after deployment, regardless of which AI products you build.

For an EU AI Act high-risk system, this maps directly. Vanta’s analysis places the material overlap between ISO 42001 controls and AI Act requirements at 40–50%, concentrated in data governance, risk management, human oversight, and transparency. ISO 42001 is not yet a harmonised European standard under the AI Act — CEN/CENELEC is developing prEN 18286 as the likely harmonised version. Once that lands, adherence creates a “presumption of conformity.” In the meantime, 42001 is the closest operational draft of what the regulator is about to ask for.
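That overlap is something a programme team can track explicitly rather than take on faith. A minimal sketch of a coverage check follows; the article mappings shown are illustrative assumptions for the named themes, not an authoritative legal crosswalk, and the domain references are examples only:

```python
# Illustrative crosswalk from EU AI Act high-risk themes to ISO/IEC 42001
# Annex A domains. The pairings below are examples, not a legal mapping.
crosswalk = {
    "data governance (Art. 10)":        ["A.7"],  # data for AI systems
    "risk management (Art. 9)":         ["A.5"],  # impact assessment
    "human oversight (Art. 14)":        ["A.9"],  # use of AI systems
    "transparency (Art. 13)":           ["A.8"],  # info for interested parties
    "post-market monitoring (Art. 72)": [],       # no control mapped yet: a gap
}

# A theme counts as covered once at least one control is mapped to it.
covered = {theme for theme, controls in crosswalk.items() if controls}
coverage = len(covered) / len(crosswalk)
print(f"{coverage:.0%} of tracked AI Act themes have a mapped 42001 control")
```

Kept current as the prEN 18286 work lands, a table like this is also the natural input to the bridge assessment discussed later in this piece.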

“Stakeholders increasingly expect evidence grounded in recognized governance frameworks, such as NIST AI RMF and ISO/IEC 42001.”

— Danny Manimbo, Managing Principal, Schellman · ANAB-accredited 42001 auditor (has certified Anthropic and AWS)

Two DACH certifications, four months, one narrow window

The DACH state of play, as of April 2026:

Certification · June 2025
Xayn (Berlin)
First German company to achieve ISO/IEC 42001 certification. Certified by SGS. Xayn develops Noxtua, a sovereign legal AI for European law firms.
Certification · April 2025
Unique AG
Swiss-headquartered with German operations. Certified by TÜV SÜD. One of the earliest DACH-region certifications on record.

That is the entire visible list of DACH-headquartered certifications. No Austrian-headquartered certification has been publicly announced. The auditor bench is still being built: TÜV SÜD, TÜV Rheinland, and Bitkom Akademie all run auditor training programmes for the standard, but qualified 42001 auditors in the German-speaking market remain a constraint, not a commodity.

Global certified · January 2026
< 100
Organizations worldwide held ISO/IEC 42001 certification when BCG announced its own in January 2026. Fewer than 0.15% of the estimated 70,000+ AI companies globally.

The regulatory clock is the forcing function. The EU AI Act entered into force on 1 August 2024. High-risk AI system requirements begin enforcement in August 2026. Germany’s KI-MIG implementation act was approved by the Cabinet on 11 February 2026, designating the Bundesnetzagentur as the central AI supervisory authority and BaFin as the supervisor for high-risk AI in financial services. Bitkom’s May 2025 position paper on competitive AI Act norms explicitly references ISO 42001 as a baseline framework.

Awareness is catching up fast. A June 2025 benchmark study of 1,000 compliance professionals found that 76% intend to use ISO 42001 or an equivalent as their AI governance backbone. The distance between “intends to use” and “has operationalized” is the Mittelstand window — roughly twelve to eighteen months where moving first buys procurement credibility and regulator comfort before the certificate becomes table stakes.

ISO 27001 is the shortest path — and it still needs a decision the board hasn’t made

40%
Faster with 27001
Time-to-compliance reduction for ISO 27001-certified organizations vs. greenfield build (industry estimate).
€30–60K
SME · Year 1
Total first-year cost for SME ISO 42001 programmes including gap, implementation, and audit (Polimity, Vanta).
€80–200K+
Enterprise · Year 1
Total first-year cost for large enterprise programmes — comfortably within typical AI programme budgets.

The mechanism behind the 40% figure is structural: shared High Level Structure, shared policy architecture, shared internal-audit discipline, shared management-review cadence. For a Mittelstand industrial group already carrying 9001 and 27001, the incremental build is data-governance uplift, AI-specific risk and impact assessment, lifecycle controls, and third-party AI oversight. Realistic range: four to six months if the scope is disciplined and leadership owns the decisions.

For companies without 27001 — and most mid-sized DACH manufacturers outside critical infrastructure fall into this bucket — the honest answer is six to twelve months of greenfield build. In both cases, the numbers fit comfortably inside typical AI-programme budgets. The blocker sits earlier — at ownership.

The management system needs one accountable executive, in the sense both the AI Act and the standard use the term: a named role, not a committee. That role does not live inside IT. It lives next to Risk, Legal, or Strategy, with direct access to the Vorstand. This is where most programmes stall: the CIO is offered the role by default, declines it or accepts it without authority, and the programme quietly becomes a procurement exercise rather than a governance one.

What an AI management system looks like through the HandsOn lens

The HandsOn AI Operating Model treats AI governance as an organizational-design problem with two load-bearing domains. ISO 42001 gives you the evidence framework for System Governance (D03). It does not answer the Decision Architecture question (D04). That is a design choice, and it is the more consequential one.

D03 · Foundation
System Governance
How an organization governs AI systems across their full lifecycle, operationally embedded. ISO 42001 gives you the evidence framework for this domain.
D04 · Activation
Decision Architecture
Who is authorized to let AI decide, at what autonomy level, under what conditions. The standard does not answer this — the board does.
Core Artefact
Decision Rights Registry
Formal record of every AI-enabled decision type — autonomy level, authority, evidence standard, recalibration trigger. Maps directly to Annex A.4 and A.8.
Design Core
Human-AI Interface
Four autonomy levels: HITL, AI decides / human reviews, AI decides / human notified, Human-in-the-Exception. Every decision type in the registry gets one.

Build the Decision Rights Registry for governance reasons and the audit trail is a side effect. Build it for audit reasons only and you have a document nobody uses. A HandsOn-aligned 42001 scope therefore sequences as follows: decide the management-system owner; define scope at system level, not organization level, for year one; build the registry for the in-scope systems; map registry entries to Annex A controls; assess the existing 27001 controls that transfer; run the gap on AI-specific controls; then engage the auditor. The pattern corresponds to the “Augmented” stage in the Multidimensional Maturity Profile — the stage where AI is embedded in specific workflows and governance catches up deliberately, not by accident.
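To make the registry concrete, here is a minimal sketch of what one entry might look like as a data structure. All class names, field names, and the example entry are hypothetical illustrations, not part of the standard or the HandsOn artefact itself:

```python
from dataclasses import dataclass, field
from enum import Enum

class AutonomyLevel(Enum):
    """The four Human-AI Interface levels described above."""
    HUMAN_IN_THE_LOOP = "HITL"            # human approves every decision
    AI_DECIDES_HUMAN_REVIEWS = "review"   # AI acts, human reviews afterwards
    AI_DECIDES_HUMAN_NOTIFIED = "notify"  # AI acts, human is informed
    HUMAN_IN_THE_EXCEPTION = "exception"  # human pulled in only on flagged cases

@dataclass
class DecisionRight:
    """One AI-enabled decision type in the Decision Rights Registry."""
    decision_type: str            # e.g. "supplier risk scoring"
    system: str                   # the in-scope AI system making the call
    autonomy: AutonomyLevel
    authority: str                # named accountable role, not a committee
    evidence_standard: str        # what must be logged per decision
    recalibration_trigger: str    # condition that forces a review
    annex_a_refs: list[str] = field(default_factory=list)  # illustrative refs

# Hypothetical entry for one in-scope system
registry = [
    DecisionRight(
        decision_type="supplier risk scoring",
        system="procurement-scoring-v2",
        autonomy=AutonomyLevel.AI_DECIDES_HUMAN_REVIEWS,
        authority="Head of Risk",
        evidence_standard="score, inputs, model version logged per decision",
        recalibration_trigger="drift alert or quarterly review",
        annex_a_refs=["A.4", "A.8"],  # example mapping, per the artefact above
    )
]

# A simple audit check: every entry must name an authority and map to controls
gaps = [r.decision_type for r in registry
        if not r.authority or not r.annex_a_refs]
print(gaps)  # an empty list means every decision type has an owner and controls
```

Even at this fidelity, the structure makes the governance point: every decision type carries a named authority, an autonomy level, and an evidence standard, so the audit trail falls out of the design rather than being bolted on.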

What the board decides in the next 30 days

For a DACH company with AI already in production and no 42001 programme under way, five decisions belong on the next Vorstand agenda. Each one can be made in a single meeting. None of them requires another working group.

Decision 1
01
Owner
Name the accountable executive for the AI Management System. Not a committee. Not the CIO by default. Risk, Strategy, Transformation, or a dedicated Chief AI Officer — Schellman observes these roles emerging in enterprises the way DPO roles emerged after GDPR.
Decision 2
02
Scope for year one
Enterprise-wide scope in year one is how programmes die. Pick the two or three AI systems that are in production, likely to fall under EU AI Act high-risk classification, and most visible to regulators or customers. Everything else is year two.
Decision 3
03
Certify or conform
ISO 42001 certification is voluntary. For some, demonstrable conformity without an external certificate is sufficient in year one. For companies selling AI-enabled products into regulated sectors, the certificate wins deals. Treat this as a commercial question, not a compliance one.
Decision 4
04
Bridge from 27001
If the company is 27001-certified, commission a bridge assessment: a four-to-six-week exercise that quantifies how much of the existing ISMS transfers and what the AI-specific build actually costs. The 40% faster figure is an industry estimate; the number that matters is scope-specific.
Decision 5
05
AI literacy
Article 4 of the EU AI Act has been binding since February 2025. Every employee working with AI needs documented competency. This is also a 42001 Clause 7 requirement. Starting the programme without closing the literacy gap is building on sand.

Downstream decisions — detailed budget, auditor selection, implementation partner, system-by-system remediation — are operational. They belong in the programme.

The consultants are certifying themselves. The room has already changed.

BCG certified itself in January 2026. KPMG, Deloitte, and EY all position ISO 42001 as the backbone framework in their client-facing materials. When the consultants advising your AI strategy are certified and you are not, the dynamic in the room shifts — quietly, but permanently. The August 2026 AI Act deadline is the external forcing function. The internal forcing function is simpler: by the end of this year, you either have a running AI management system or you are explaining to your customers, regulators, and board why you don’t.

HandsOn · AI Operating Model Diagnostic

Is ISO 42001 on your Vorstand’s agenda yet?

Take the HandsOn AI Operating Model Diagnostic to see whether implementing ISO 42001 in your organization is the right next move.
