- Artificial Intelligence Act - already adopted but to be introduced in stages over several years
- More than 100 companies have signed the AI Pact, far fewer than the roughly 1,000 initially said to be interested
- The legislation aims to impose strict rules on what has so far been a rather haphazardly expanding AI sector
- Apple and Meta refuse to join the AI pact
- The AI Act is the first comprehensive EU regulatory mechanism, with billions of euros in fines for non-compliance
Artificial Intelligence Act - already adopted but to be introduced in stages over several years
Artificial Intelligence (AI) is becoming increasingly advanced and popular. However, as AI advances and its use becomes more widespread, its dangers must also be addressed. The Artificial Intelligence Act was created to prevent fundamental rights violations and promote innovation. But what is this legislation, and why do some companies oppose it?
The AI Act's purpose is to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk artificial intelligence systems. It is also meant to promote innovation while keeping Europe at the forefront of AI[1].
The legislation approved by the European Parliament entered into force on 1 August 2024, but most of its provisions will apply only two years later. However, there are some exceptions which will apply earlier or later:
- the bans will come into force six months after the entry into force of the AI Act,
- governance rules and obligations for general-purpose AI models will start to apply 12 months later,
- the rules for AI systems integrated into regulated products will start to apply 36 months later.
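Taking the 1 August 2024 entry into force as the starting point, the staggered timeline above can be sketched as a short date calculation. The milestone labels are illustrative summaries, not the Act's official wording:

```python
from datetime import date

def add_months(start: date, months: int) -> date:
    """Return the same day-of-month `months` after `start` (day 1 here, so always valid)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, start.day)

# The AI Act entered into force on 1 August 2024
ENTRY_INTO_FORCE = date(2024, 8, 1)

milestones = {
    "Bans on unacceptable-risk AI":        add_months(ENTRY_INTO_FORCE, 6),
    "General-purpose AI obligations":      add_months(ENTRY_INTO_FORCE, 12),
    "General application":                 add_months(ENTRY_INTO_FORCE, 24),
    "Rules for AI in regulated products":  add_months(ENTRY_INTO_FORCE, 36),
}

for name, when in milestones.items():
    print(f"{name}: {when.isoformat()}")
```

This puts the bans at the start of February 2025 and the final tranche in August 2027; the Act itself remains the authoritative source for the exact dates.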
However, it is worth noting that while the AI Act is meant to stimulate innovation, critics have already argued that it will instead stifle AI progress. The tech giants have pushed back against the EU's attempt to accelerate AI oversight by asking companies, as a very first step, to sign the AI Pact in good faith.
More than 100 companies have signed the AI Pact, far fewer than the roughly 1,000 initially said to be interested
To encourage early compliance with the AI Act, politicians devised the AI Pact, under which companies that voluntarily sign up commit to complying with the Act's provisions from the outset. Former EU Digital Commissioner Thierry Breton launched the initiative, but he resigned after clashes with European Commission President Ursula von der Leyen, who had pressured the French government to withdraw his candidacy for a second term.
The AI Pact was initially seen as an important document that would significantly impact the regulation of AI technologies. More than 1,000 companies were said to be already interested in the pact, which was presented as a significant step towards a more secure future with AI.
However, despite the initial enthusiasm and hype, the new AI pact has only been endorsed by 115 companies. These companies, including some of the world's largest AI players, supported the EU's initiative to speed up AI control measures[2].
But many big names are still missing. Their refusal to join the pact suggests resistance or skepticism about the stringency of the regulation and its impact on innovation and business.
Among the companies that have signed the pact are important global technology giants such as Germany's Aleph Alpha, Amazon, Google, Microsoft, OpenAI, Samsung, Snap and Palantir. In addition to these, many smaller European and global companies have also joined the pact to contribute to AI control measures.
The legislation aims to impose strict rules on what has so far been a rather haphazardly expanding AI sector
Although some businesses have already signed up to the AI Pact, even those that have not will eventually have to comply with the rules set out in the AI Act as it comes into force over the next few years. Voluntary early sign-up was intended to encourage businesses to start complying with the new rules sooner, so that they can adapt more quickly to the future legal environment.
The aim is to strengthen engagement and the willingness to commit as early as possible. The initiative focuses on promoting the exchange of information so that signatories to the AI Pact help each other meet the requirements of the bloc's AI rules and actively develop common practices. There are also three main (but not the only) actions that signatories to the Pact commit to:
- Adopt an AI governance strategy to promote the uptake of AI in the organization and work towards future compliance with the AI Act,
- Identify and map AI systems that may be classified as high risk under the AI Act,
- Promote employee awareness and AI literacy, ensuring ethical and responsible development of AI.
In addition, the EU AI Office has drawn up a long list of possible commitments after filtering the feedback received from "relevant stakeholders" affected by the AI legislation. The resulting list of commitments allows signatories to choose and agree on which ones are right for them[3].
Among the commitments are requirements to inform people when they interact with AI systems and to flag AI-generated content, in particular 'deep fakes'. The initiative is seen as a lighter version of the AI Act, allowing companies to prepare for the transition to the upcoming mandatory regulation.
Apple and Meta refuse to join the AI pact
The AI Pact has lost popularity not only because of the departure of its initiator, Mr Breton, but also because of high-profile campaigns by the technology industry arguing that Europe's strict regulations are holding back innovation and AI adoption, making it harder for the continent to compete with other global markets. Former Italian Prime Minister Mario Draghi has highlighted similar concerns about Europe's competitive position, arguing that over-regulation can hinder progress in AI.
Therefore, it is not surprising that not all major companies decided to rush to sign up to the new legislation. Even some tech giants, which have previously been at loggerheads with EU governing bodies and existing laws, oppose the AI Pact and, thus, the AI Act.
Although Apple has said that it is "cooperating" with EU regulators to bring Apple Intelligence features to EU consumers, the company has refused to sign the AI Pact. This suggests that neither serious cooperation nor ever more advanced Apple features are likely for the time being. After all, if Apple continues to ignore EU law, it will struggle to bring AI features to EU residents. It will be interesting to see whether the lack of AI features affects iPhone 16 sales in EU countries.
Another tech giant, Meta, has also refused to sign the AI Pact. At an event organized by Meta earlier this month, the company warned that "regulatory decision-making has recently become fragmented and unpredictable" and that the EU risks missing the train on AI technology that could "boost productivity."
These words do not appear to be off the cuff, as some technology companies, including Meta and X, have already paused the deployment of AI products in Europe.
However, Anna Kuprian, a spokesperson for Meta, did not rule out bowing to the EU's wishes and joining the AI Pact in the future: "We welcome the harmonized EU rules and are currently focusing on our work under the AI Act, but we do not rule out the possibility of joining the AI Pact at a later stage," Kuprian said, adding:
"We should also not lose sight of the huge potential of the AI to stimulate European innovation and enable competition. Otherwise, the EU will miss this once-in-a-century opportunity."
On Wednesday, a few hours before it was officially published at EU headquarters in Brussels, the list of signatories still lacked several other well-known companies. Mistral, France's AI champion, did not sign the AI Pact, nor did video-sharing platform TikTok or America's leading AI company Anthropic.
The failure to win support from some of the world leaders developing cutting-edge AI models shows how governments, including the EU, are still struggling to keep pace with the lightning-fast development of the technology. Despite stark warnings over the past few years about the threats posed by AI, policymakers are still searching for ways to regulate this largely rule-free technology, which could have a major impact on society and the world.
The AI Act is the first comprehensive EU regulatory mechanism, with billions of euros in fines for non-compliance
So, what is the AI Act, and why is it so feared by some companies that have not even agreed to sign up to the softer AI pact? The AI Act is the world's first comprehensive legal framework to regulate the use of artificial intelligence, reduce its potential risks, and promote its responsible deployment.
The Act sets out clear requirements and obligations for AI developers and deployers, especially those working with high-risk systems in domains such as medicine, education, or law enforcement. High risks include risks to human health, life, and fundamental rights, and such systems are subject to strict requirements on data quality, transparency, and human oversight.
The AI Act is based on risk classification, dividing AI systems into four categories: minimal risk, limited risk, high risk, and unacceptable risk.
Under the Act, AI systems that pose unacceptable risks, such as social scoring practices, are banned. AI technologies that threaten citizens' rights, such as biometric categorization and emotion recognition in the workplace, are also prohibited.
High-risk AI systems cover areas such as education, public service delivery, and the administration of justice. These systems must comply with strict requirements, including risk mitigation mechanisms, activity registration, and human oversight.
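As an illustration only, the four-tier classification and the obligations described above can be sketched as a simple lookup. The tier names come from the Act, but the example systems and one-line obligation summaries below are simplified assumptions, not legal guidance:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Hypothetical mapping of use cases to tiers, following the article's examples
EXAMPLES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "exam-scoring system in education": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Very condensed summary of what each tier implies under the Act."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited from the EU market",
        RiskTier.HIGH: "risk mitigation, activity registration, human oversight",
        RiskTier.LIMITED: "transparency: users must know they face an AI system",
        RiskTier.MINIMAL: "no additional obligations",
    }[tier]

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value} -> {obligations(tier)}")
```

The point of the tiered design is that obligations scale with potential harm: most everyday AI falls into the minimal tier and is untouched, while a narrow band of practices is banned outright.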
To protect citizens and ensure transparency, the framework sets out responsibilities for providers, deployers, and market surveillance authorities. Systems placed on the market must carry declarations of conformity and, once on the market, providers must monitor risks and report serious incidents. Providers are also required to ensure a high level of reliability and security and to comply with ongoing maintenance requirements.
Finally, the AI Act is a forward-looking framework designed to protect people's rights while promoting innovation in AI. Although the regulation is strict, it provides for regulatory sandboxes that allow innovative companies to test their systems safely. The AI Act also mandates enforcement mechanisms and strengthens Member States' cooperation with EU regulators to enforce the Regulation.
Some companies do not like the AI Act for several reasons:
- High implementation costs: the Act requires strict compliance procedures, especially for high-risk AI systems, which can be financially burdensome for smaller companies and start-ups.
- Restriction of innovation: Strict regulation can slow down the development and deployment of new products.
- Global competitiveness risk: Companies fear that EU AI regulation may make it harder to compete with countries with looser AI regulations, such as the US or China.
- Bureaucratic burden: Companies argue that the Act makes it more difficult for them to operate by requiring complex administrative procedures that may slow down the introduction of products on the market.
- Lack of flexibility: Some argue that the regulation is too rigid and does not adapt to rapidly changing AI technologies, hindering experimentation and adaptation to new market needs.
Penalties for non-compliance with the EU AI Act are severe. They can reach up to 7% of global annual revenue for the use of prohibited AI practices, up to 3% for non-compliance with other obligations under the AI Act, and up to 1.5% for providing false information to regulators. For tech giants like Meta, breaking the rules could therefore cost billions of dollars.
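As a back-of-the-envelope illustration of scale, the revenue-based caps can be computed directly. The revenue figure below is hypothetical, and the Act's fixed euro maximums and "whichever is higher" rule are omitted for simplicity:

```python
# Maximum fine tiers under the AI Act, as shares of global annual revenue
FINE_TIERS = {
    "prohibited AI practices": 0.07,
    "other AI Act obligations": 0.03,
    "false information to regulators": 0.015,
}

def max_fine(global_annual_revenue_eur: float, violation: str) -> float:
    """Upper bound of the revenue-based fine for a given violation type."""
    return global_annual_revenue_eur * FINE_TIERS[violation]

# Illustrative: a company with EUR 100 billion in annual revenue
revenue = 100e9
for violation in FINE_TIERS:
    print(f"{violation}: up to EUR {max_fine(revenue, violation):,.0f}")
```

For a company of that size, even the lowest tier runs to well over a billion euros, which is why the largest players are watching compliance costs so closely.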