AI Governance that Works · Insight
ISO/IEC 42001 in the DACH Area
A four-month runway to the EU AI Act high-risk deadline. And an auditor bench that is still being built. What the Vorstand actually needs to decide.
8 min read
April 17, 2026
HandsOn Insights
Globally, fewer than 100 organizations held the certificate as of January 2026, when BCG announced its own. The first binding EU AI Act deadline for high-risk systems lands in August 2026, four months from today, and ISO 42001 is the closest thing to an operational playbook for meeting it.
The pattern we see in DACH boardrooms is consistent. Every CEO asks whether they need to certify. Almost none have decided who would own the management system if the answer turned out to be yes.
The certificate gets the headlines at the moment. The management system behind it decides whether the headlines are earned.
Governance on paper is the majority case — the numbers make it undeniable
Deloitte’s State of AI 2026 work reports that 87% of executives say their organization has an AI governance framework in place. Fewer than 25% have fully operationalized it. In practice, that gap looks like policies signed at executive-committee level but never enforced in procurement; procurement onboarding AI-enabled tools without flagging them to compliance; and compliance running an AI inventory that misses most of the stack.
The DACH Mittelstand pattern is likely wider still: fewer dedicated AI teams, more distributed adoption, a procurement function built to check licence costs rather than model risk, and a risk function that has not yet written AI into its taxonomy.
This is the environment ISO/IEC 42001 walks into. Published in December 2023 by ISO/IEC JTC 1/SC 42, the joint technical committee on Artificial Intelligence, it is the first international standard that forces exactly the stress test Deloitte’s survey exposes — whether the stated controls are actually running or sitting in a SharePoint folder.
How ISO 42001 forces governance out of the slide deck
The standard applies to any organization that develops, provides, or uses AI-based systems — regardless of size or industry. It follows the High-Level Structure shared by ISO 9001 and ISO 27001, which means Clauses 4 through 10 (context, leadership, planning, support, operation, performance evaluation, improvement) are familiar territory for any quality or information-security function. What’s new sits in Annex A: 38 AI-specific reference controls organized into nine domains, from AI policies and impact assessment to lifecycle, data governance, and third-party relationships.
The logical shape matters more than the clause count. 42001 is an AI Management System (AIMS) standard — it mandates that governance be documented, audited, reviewed by leadership, and continually improved. What the standard demands is provable evidence, under audit, of how you decided, who decided, what evidence was reviewed, and how the system is monitored after deployment — regardless of which AI products you build.
For an EU AI Act high-risk system, this maps directly. Vanta’s analysis places the material overlap between ISO 42001 controls and AI Act requirements at 40–50%, concentrated in data governance, risk management, human oversight, and transparency. ISO 42001 is not yet a harmonised European standard under the AI Act — CEN/CENELEC is developing prEN 18286 as the likely harmonised version. Once that lands, adherence creates a “presumption of conformity.” In the meantime, 42001 is the closest operational draft of what the regulator is about to ask for.
“Stakeholders increasingly expect evidence grounded in recognized governance frameworks, such as NIST AI RMF and ISO/IEC 42001.”
— Danny Manimbo, Managing Principal, Schellman · ANAB-accredited 42001 auditor (certified Anthropic, AWS)
Two German certifications, four months, one narrow window
The DACH state of play, as of April 2026: two publicly announced certifications, both at German-headquartered companies, and none headquartered in Austria. That is the entire visible list of DACH-headquartered certifications. The auditor bench is still being built — TÜV SÜD, TÜV Rheinland, and Bitkom Akademie all run auditor training programmes for the standard, but qualified 42001 auditors in the German-speaking market remain a constraint, not a commodity.
The regulatory clock is the forcing function. The EU AI Act entered into force on 1 August 2024. High-risk AI system requirements begin enforcement in August 2026. Germany’s KI-MIG implementation act was approved by the Cabinet on 11 February 2026, designating the Bundesnetzagentur as the central AI supervisory authority and BaFin as the supervisor for high-risk AI in financial services. Bitkom’s May 2025 position paper on competitive AI Act norms explicitly references ISO 42001 as a baseline framework.
Awareness is catching up fast. A June 2025 benchmark study of 1,000 compliance professionals found that 76% intend to use ISO 42001 or an equivalent as their AI governance backbone. The distance between “intends to use” and “has operationalized” is the Mittelstand window — roughly twelve to eighteen months where moving first buys procurement credibility and regulator comfort before the certificate becomes table stakes.
ISO 27001 is the shortest path — and it still needs a decision the board hasn’t made
The head start from ISO 27001 is structural: shared High-Level Structure, shared policy architecture, shared internal-audit discipline, shared management-review cadence. For a Mittelstand industrial group already carrying 9001 and 27001, the incremental build is data-governance uplift, AI-specific risk and impact assessment, lifecycle controls, and third-party AI oversight. Realistic range: four to six months if the scope is disciplined and leadership owns the decisions.
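The 27001-to-42001 transfer can be sketched as a simple set comparison, assuming both control inventories are held in machine-readable form; the control labels below are illustrative shorthand, not official identifiers from either standard.

```python
# Illustrative gap analysis: which 42001 control areas still need a
# greenfield build once existing 27001 coverage is credited.
# Labels are examples, not official identifiers from either standard.

iso27001_covered = {
    "policy-management",
    "internal-audit",
    "management-review",
    "supplier-oversight",
    "incident-handling",
}

iso42001_required = {
    "policy-management",
    "internal-audit",
    "management-review",
    "supplier-oversight",
    "incident-handling",
    "ai-impact-assessment",
    "ai-lifecycle-controls",
    "data-governance-uplift",
    "third-party-ai-oversight",
}

def gap(required: set[str], covered: set[str]) -> list[str]:
    """Controls that must be built from scratch."""
    return sorted(required - covered)

print(gap(iso42001_required, iso27001_covered))
# -> ['ai-impact-assessment', 'ai-lifecycle-controls',
#     'data-governance-uplift', 'third-party-ai-oversight']
```

In this toy inventory the transferred controls are exactly the management-system plumbing, and everything left in the gap is AI-specific — which is the shape of the four-to-six-month incremental build described above.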
For companies without 27001 — and most mid-sized DACH manufacturers outside critical infrastructure fall into this bucket — the honest answer is six to twelve months of greenfield build. In both cases, the numbers fit comfortably inside typical AI-programme budgets. The blocker sits earlier — at ownership.
The management system needs one accountable executive in the sense the AI Act and the standard both use the word: a named role, not a committee. That role does not live inside IT. It lives next to Risk, Legal, or Strategy, with direct access to the Vorstand. This is where most programmes stall: the CIO is offered the role by default, declines it or accepts it without authority, and the programme quietly becomes a procurement exercise rather than a governance one.
What an AI management system looks like through the HandsOn lens
The HandsOn AI Operating Model treats AI governance as an organizational-design problem with two load-bearing domains. ISO 42001 gives you the evidence framework for System Governance (D03). It does not answer the Decision Architecture question (D04). That is a design choice, and it is the more consequential one.
Build the Decision Rights Registry for governance reasons and the audit trail is a side effect. Build it for audit reasons only and you have a document nobody uses. A HandsOn-aligned 42001 scope therefore sequences as follows: decide the management-system owner; define scope at system level, not organization level, for year one; build the registry for the in-scope systems; map registry entries to Annex A controls; assess the existing 27001 controls that transfer; run the gap on AI-specific controls; then engage the auditor. The pattern corresponds to the “Augmented” stage in the Multidimensional Maturity Profile — the stage where AI is embedded in specific workflows and governance catches up deliberately, not by accident.
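One way to keep that sequencing honest is to hold the Decision Rights Registry as structured data, so the Annex A mapping falls out of the governance record instead of being assembled for the auditor after the fact. A minimal sketch in Python, assuming a schema of our own invention; the field names, system name, and control labels are illustrative, not drawn from the standard.

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """One in-scope AI system in a Decision Rights Registry (illustrative schema)."""
    system: str                       # scoped at system level, not organization level
    owner: str                        # the single accountable executive, a named role
    decision: str                     # what was decided about this system
    evidence_reviewed: list[str]      # what the decision rested on
    annex_a_controls: list[str] = field(default_factory=list)  # mapped reference controls

entry = RegistryEntry(
    system="supplier-risk-scoring",
    owner="Chief Risk Officer",
    decision="approved for production with human review of rejections",
    evidence_reviewed=["impact-assessment-2026-03", "bias-eval-report"],
    annex_a_controls=["AI impact assessment", "human oversight", "data governance"],
)

# The audit trail is a side effect: every entry already answers who decided,
# what was decided, and what evidence was reviewed.
print(entry.owner, entry.annex_a_controls)
```

Built this way, the registry serves the governance purpose first; exporting it per Annex A control for the auditor is a query, not a project.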
What the board decides in the next 30 days
For a DACH company with AI already in production and no 42001 programme under way, five decisions belong on the next Vorstand agenda. Each one can be made in a single meeting. None of them requires another working group.
Downstream decisions — detailed budget, auditor selection, implementation partner, system-by-system remediation — are operational. They belong in the programme.
The consultants are certifying themselves. The room has already changed.
BCG certified itself in January 2026. KPMG, Deloitte, and EY all position ISO 42001 as the backbone framework in their client-facing materials. When the consultants advising your AI strategy are certified and you are not, the dynamic in the room shifts — quietly, but permanently. The August 2026 AI Act deadline is the external forcing function. The internal forcing function is simpler: by the end of this year, you either have a running AI management system or you are explaining to your customers, regulators, and board why you don’t.
HandsOn · AI Operating Model Diagnostic
Is ISO 42001 on your Vorstand’s agenda yet?
Conduct the HandsOn AI Operating Model Diagnostic to see whether implementing the ISO 42001 standard within your organization is the right next move.
