Spain’s Bold Move on AI Regulation Is a Wake-Up Call
The EU AI Act as a Call to Leadership
Every great transformation begins not with a question of what, but with a powerful understanding of why. In the age of artificial intelligence, Europe has declared its "why" with clarity: to protect human dignity, to ensure transparency, and to steer innovation toward serving people - not replacing them.
The European Union AI Act, adopted in 2024, goes far beyond a traditional compliance framework. It represents a moral compass in a fast-evolving digital world. It applies a tiered, risk-based structure that categorises AI systems into four levels - unacceptable, high, limited, and minimal risk. This system provides clear guidance for developers, deployers, and regulators, outlining the obligations associated with each category.
For CIOs, this legislation is not just about adhering to a new set of rules. It's a signal - a loud, resonant call to elevate our leadership. Trust, ethics, and responsibility are becoming the true measures of success in digital transformation. As stewards of technology, CIOs must now step up and design systems that inspire confidence and respect, not just efficiency and scale.
Spain Steps Up: Why This Draft Law Matters Now
In March 2025, Spain made headlines by approving a draft law that implements the EU AI Act at the national level. This draft law isn’t merely an administrative step in aligning with Brussels. It is a declaration of Spain’s ambition to lead Europe - and perhaps the world - in ethical AI deployment.
Spain's draft law shows political will, regulatory foresight, and cultural values converging. Rather than waiting for a reactive, fragmented roll-out of the EU law, Spain chose to proactively define its path. By advancing this draft, Spain creates a concrete framework that companies can begin working within - complete with timelines, penalties, and local oversight mechanisms.
This matters enormously for CIOs of companies doing business in Spain, or even thinking about entering the market. It sets a tone of urgency and clarity. More than that, it offers an opportunity to get ahead of the curve. Building your organisation’s AI systems with Spanish compliance in mind means you'll be future-proofing against broader EU expectations - and demonstrating thought leadership in responsible technology.
From Vision to Oversight: The Role of AESIA
To enforce this bold step forward, Spain has established a dedicated regulatory body: the Spanish Agency for the Supervision of Artificial Intelligence, known as AESIA. Based in A Coruña, AESIA is more than just a traditional watchdog - it is positioned to be a proactive guide and steward for AI in the Spanish economy.
AESIA is tasked with a wide-ranging set of responsibilities, including monitoring the compliance of AI systems, conducting independent audits, issuing public guidance, collaborating with other EU authorities, and, when necessary, applying penalties. But more than that, AESIA seeks to play a collaborative role with the tech sector.
For CIOs, this creates a new kind of regulatory relationship. Rather than fearing the regulator, you can work alongside it - seeking advice, demonstrating good faith, and even helping to shape future guidance through transparent practice. For example, a CIO in a healthcare company might engage with AESIA to verify that diagnostic AI tools are not inadvertently introducing bias. Such collaboration not only helps you stay compliant but builds credibility with customers and partners.
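To make that concrete: a bias check of this kind often starts with something as simple as comparing positive-prediction rates across patient groups. The sketch below is a minimal, hypothetical illustration - the metric (demographic parity), the 0.1 threshold, and the data are placeholders, not anything AESIA prescribes.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive predictions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = "flagged for further testing".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # illustrative threshold, not a regulatory figure
    print(f"Review required: parity gap of {gap:.2f} exceeds threshold")
```

Evidence from checks like this - run regularly and documented - is exactly the kind of good-faith material a CIO could bring to a conversation with the regulator.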
Fines with Teeth: Accountability at a Global Scale
One of the most striking features of Spain’s draft law is its emphasis on meaningful penalties. The numbers are designed to make decision-makers sit up and pay attention: up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious breaches.
This is not symbolic. These fines are intended to drive home the point that ethical oversight of AI is now a business priority. Infractions like failing to properly label AI-generated content - particularly synthetic media such as deepfakes - are treated as serious offences. The logic is simple: misleading content, whether intentional or not, can do real harm to democratic processes, consumer trust, and social cohesion.
Other violations include failure to properly classify high-risk systems or neglecting to establish meaningful human oversight in automated decision-making processes. For CIOs, this fundamentally changes the role of IT leadership. It is no longer sufficient to “move fast and break things.” The new mandate is to “move responsibly and document everything.”
The financial risk of non-compliance now matches - and at the top tier exceeds - GDPR penalties, which cap at €20 million or 4% of global turnover. The reputational risk may be higher still. CIOs need to make sure their AI strategies are backed by thorough documentation, regular audits, cross-functional governance, and continuous learning.
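To see why these numbers bite, consider the "whichever is higher" rule in miniature. The sketch below uses an invented turnover figure purely for illustration:

```python
def max_fine_exposure(global_annual_turnover_eur: float) -> float:
    """Top-tier exposure under the EU AI Act model that Spain's draft
    law mirrors: EUR 35 million or 7% of global annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion in global turnover faces a ceiling of
# EUR 140 million, not EUR 35 million - the percentage prong dominates
# once turnover passes EUR 500 million (7% of which is EUR 35 million).
print(f"EUR {max_fine_exposure(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For any large enterprise, in other words, it is the percentage of turnover, not the €35 million floor, that defines the real exposure.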
Spain’s Extraterritorial Scope: No Borders for Compliance
Perhaps the most consequential feature of Spain’s new law is its extraterritorial reach. Like the GDPR before it, Spain’s AI legislation doesn’t only apply to Spanish organisations. It applies to any organisation whose AI systems are used within the Spanish market - regardless of where they were developed or deployed from.
For example, an Australian edtech firm offering AI-driven personalised learning platforms to students in Spain will need to comply with Spanish AI law. The same applies to US-based HR software vendors whose recruitment algorithms are accessed by Spanish companies.
This extraterritorial scope changes the compliance calculus. No longer can CIOs assume that their operations are outside the remit of foreign regulators. The use of an AI system in a foreign market now brings with it a set of obligations to that jurisdiction’s ethical standards.
This underscores the need for globally harmonised governance practices - and it places the CIO at the forefront of managing international compliance across jurisdictions. Think globally, govern locally.
Know Your Risk: How the EU Classifies AI Systems
The EU AI Act is structured around a risk-based framework that classifies AI systems into four categories: unacceptable, high-risk, limited-risk, and minimal-risk. For the purposes of practical governance, CIOs should concentrate especially on the high-risk category and on the lower-risk tiers (limited and minimal risk).
Lower-risk AI systems - those in the limited- and minimal-risk tiers - include applications where the risk to safety, rights, or critical decision-making is small. Examples:
Spam filters that protect inboxes from junk mail
Recommender systems that suggest songs, books, or shows based on user preferences
Simple chatbots that provide customer support without collecting sensitive data
These systems are subject to few regulatory burdens beyond transparency - such as ensuring users know they are interacting with AI.
High-risk AI systems, by contrast, are those that have a direct impact on individuals' rights, safety, or access to essential services. Examples include:
AI used in medical diagnosis or treatment recommendations
Automated CV screening tools for hiring decisions
Credit scoring and financial underwriting systems
AI used in law enforcement or biometric surveillance
High-risk systems must undergo rigorous testing, include detailed documentation, demonstrate transparency, and include human oversight mechanisms. For CIOs, this means every AI project must begin with a risk classification step - a step that should be documented and revisited throughout the system’s lifecycle.
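What might that classification step look like in practice? A minimal sketch follows - the tier definitions track the Act, but the use-case mapping is a simplified assumption and no substitute for legal review:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited - may not be deployed"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency duties (disclose the AI interaction)"
    MINIMAL = "no specific obligations beyond good practice"

# Illustrative intake mapping only; real classification must follow
# the Act's annexes and your counsel's advice.
USE_CASE_TIERS = {
    "social_scoring":    RiskTier.UNACCEPTABLE,
    "cv_screening":      RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
    "credit_scoring":    RiskTier.HIGH,
    "customer_chatbot":  RiskTier.LIMITED,
    "spam_filter":       RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Default to HIGH for anything unrecognised: that forces a
    documented review rather than silent under-classification."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

tier = classify("cv_screening")
print(f"cv_screening -> {tier.name}: {tier.value}")
```

Defaulting unknown use cases to high-risk is a deliberate design choice: it makes under-classification impossible to do by accident.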
Strategic Governance with ISO 42001: A Blueprint for Compliance
Navigating these requirements might seem overwhelming. But there’s a silver lining: international standards like ISO 42001 can provide a roadmap.
ISO 42001 is the world’s first management system standard for AI. It provides a comprehensive structure for designing, deploying, and governing AI systems in a responsible, auditable way. For CIOs looking to scale AI while meeting legal, ethical, and societal expectations, this is your blueprint.
The standard addresses key areas such as:
Risk identification and mitigation: Encouraging early-stage analysis of potential harms and structured interventions
Transparency and traceability: Requiring systems to be explainable, not black boxes
Human oversight: Ensuring clear accountability pathways for automated decisions
Data management: Focusing on quality, diversity, and appropriateness of training data
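One way to operationalise those four areas is a per-system governance record that must be complete before deployment. The sketch below assumes a hypothetical internal schema - ISO 42001 does not prescribe these field names:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIGovernanceRecord:
    """Hypothetical per-system record covering the four areas above."""
    system_name: str
    # Risk identification and mitigation
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    # Transparency and traceability
    model_documentation_uri: str = ""
    # Human oversight
    human_override_contact: str = ""
    # Data management
    training_data_sources: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def ready_for_deployment(self) -> bool:
        """Simple gate: every area needs at least one entry."""
        return all([
            self.identified_risks,
            self.mitigations,
            self.model_documentation_uri,
            self.human_override_contact,
            self.training_data_sources,
        ])
```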
Example in Practice: Consider an Australian telecommunications company rolling out an AI-powered predictive maintenance system for critical infrastructure. By implementing ISO 42001, they:
Identified risks related to false positives and infrastructure safety
Developed escalation protocols for human engineers to override AI decisions
Documented all data inputs, testing parameters, and edge-case scenarios
Provided targeted staff training on AI monitoring and response
The result was twofold: smoother regulatory engagement and deeper client trust. For CIOs, ISO 42001 can turn AI governance into a strategic enabler - not just a compliance tool.
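The escalation protocol in that example can be surprisingly simple in code: a confidence gate that refuses to act autonomously when the model is unsure. A minimal sketch, with invented names and thresholds:

```python
def handle_prediction(asset_id: str, failure_prob: float,
                      confidence: float, min_confidence: float = 0.8):
    """Act on a predictive-maintenance signal only when the model is
    confident; otherwise escalate to a human engineer (illustrative
    thresholds, not values drawn from ISO 42001)."""
    if confidence < min_confidence:
        return f"ESCALATE {asset_id}: confidence {confidence:.2f} too low"
    if failure_prob >= 0.5:
        return f"SCHEDULE maintenance: {asset_id}"
    return f"NO ACTION: {asset_id}"

print(handle_prediction("tower-17", failure_prob=0.9, confidence=0.6))
# -> ESCALATE tower-17: confidence 0.60 too low
```

The point is not the threshold itself but the accountability pathway: every automated decision has a documented route to a named human.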
The Opportunity: Turning Compliance into Competitive Advantage
There’s a fundamental shift taking place. Regulatory compliance is no longer a checkbox exercise; it is becoming a source of differentiation.
Customers want to know that the AI they interact with is fair. Governments want to ensure that AI serves the public good. Investors want to back companies that demonstrate resilience and ethics. And partners want assurance that your systems won’t cause reputational blowback.
CIOs who lead with integrity will find that compliance creates more than risk reduction - it builds brand capital. It opens doors to public sector partnerships. It invites favourable media coverage. It makes talent recruitment easier. In short, responsible AI governance becomes a strategic asset.
Imagine your next board presentation outlining:
The percentage of your AI systems audited for fairness
The roadmap for ISO 42001 certification
Your alignment with AESIA guidance in European markets
Customer satisfaction metrics tied to transparent AI interactions
These aren’t just metrics. They are proof of leadership.
In Closing: Trust is the True Differentiator
Spain’s draft AI law is more than a local policy. It’s a leading indicator of how the global regulatory landscape is evolving. And it reflects the world’s growing insistence that with great technological power comes great ethical responsibility.
CIOs now stand at the nexus of innovation and accountability. You can wait until the rules are enforced, scrambling to retrofit systems and update documentation. Or you can lead by designing AI systems that are trustworthy from the start.
The future belongs to those who can answer not just what they built or how they built it - but why they built it. And if the answer is grounded in service, transparency, and trust, your organisation won’t just survive the regulatory shift.
It will lead it.