Spain’s government has approved a bill imposing heavy fines on companies that fail to label content created using artificial intelligence (AI), in an effort to curb the spread of “deepfakes.”
The bill aligns with the European Union’s AI Act, which mandates stringent transparency measures for high-risk AI systems, Digital Transformation Minister Oscar Lopez told reporters.
“AI is a very powerful tool that can be used to improve our lives … or to spread misinformation and attack democracy,” he said.
Spain is among the first EU nations to implement the bloc’s regulations, which are considered more robust than the United States’ largely voluntary approach.
Lopez warned that anyone could be targeted by “deepfake” attacks—AI-generated videos, photos or audio clips presented as real.
The bill, which still requires approval by the lower house of parliament, categorises non-compliance as a “serious offence” punishable by fines of up to €35 million (£30m) or 7% of a company’s global annual turnover.
Since OpenAI launched ChatGPT in 2022, regulators have prioritised AI safety. The bill also prohibits subliminal AI techniques, such as sounds or images that manipulate vulnerable groups, amid concerns over chatbots promoting gambling or toys encouraging children to take on dangerous challenges.
Additionally, the bill bars organisations from using AI to classify people by their biometric data or behavioural traits when assessing eligibility for benefits or the risk of committing a crime. However, authorities would still be permitted to use real-time biometric surveillance in public areas for national security reasons.
The newly established AI supervisory agency, AESIA, will oversee enforcement, except in specific cases related to privacy, crime, elections, credit ratings, insurance, and capital markets, which will fall under sector-specific regulators.