Top AI Companies Face Challenges with Europe's Stricter AI Regulations

Published: October 16, 2024

Have you ever wondered how AI companies are adapting to Europe’s strict AI regulations? It turns out that many of the biggest names in artificial intelligence, including Apple and Meta, are being cautious about releasing their AI models in Europe. The reason? The EU AI Act, which came into effect in August 2024, aims to ensure that AI systems are safe and ethical. But navigating these new rules is proving to be quite a challenge for even the top AI companies.

Here’s where things get interesting—enter the ‘LLM Checker.’ This tool could be the key to helping AI companies like OpenAI, Meta, and Anthropic figure out if their models comply with the EU AI Act. Created by ETH Zurich, INSAIT in Bulgaria, and the Swiss start-up LatticeFlow AI, this checker evaluates AI models to see how well they meet Europe’s new standards. It’s a major step forward for AI regulation and the first of its kind for generative AI.

So, What Exactly Is the LLM Checker?


The LLM (Large Language Model) Checker evaluates AI models across various criteria, including safety, cybersecurity, privacy, and data governance, all key elements of the EU AI Act. The tool scores AI models on a scale of 0 to 1: scores above 0.75 indicate that a model is on track with the law, while lower scores flag areas for improvement. This is particularly useful for the world’s leading AI companies, which now have to comply with Europe’s stringent regulations.
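
To make that scoring scheme concrete, here’s a minimal sketch in Python. The category names and numbers below are invented for illustration; they are not the LLM Checker’s actual API, output format, or any real model’s results.

```python
# Hypothetical illustration of the 0-to-1 scoring scheme described above.
# The `scores` dict and category names are assumptions for this example,
# not the LLM Checker's real interface.

COMPLIANCE_THRESHOLD = 0.75  # per the article, scores above this are "on track"

scores = {
    "safety": 0.82,
    "cybersecurity": 0.68,
    "privacy": 0.79,
    "data_governance": 0.74,
}

for category, score in scores.items():
    status = "on track" if score > COMPLIANCE_THRESHOLD else "needs improvement"
    print(f"{category:>16}: {score:.2f} ({status})")
```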

Sounds simple, right? But here’s the catch: many popular AI models from leading AI companies aren’t fully compliant yet.

The Results: Who’s Falling Short?


The LLM Checker analyzed models from major AI companies, including OpenAI, Alibaba, Meta, and Anthropic, and gave them average scores of 0.75 or above. While that might sound like good news, the devil is in the details. Several models underperformed in critical areas, particularly in preventing discriminatory output and in cybersecurity, two of the key risks in AI development today.

For example, OpenAI’s GPT-4 Turbo scored just 0.46 on discriminatory output, while Alibaba’s model managed only 0.37. That’s pretty low, especially considering how heavily these companies rely on AI for their global operations.

On the brighter side, many of these models scored well on handling harmful content and toxicity, suggesting that developers at these top AI companies are at least making strides toward safer AI systems. But cybersecurity remains a major hurdle. The European Union isn’t taking this lightly either: companies that fail to comply with the AI Act could face hefty fines of up to €35 million or 7% of their global annual turnover, whichever is higher. Ouch.
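
To put those fines in perspective, here’s a rough back-of-the-envelope sketch of the "whichever is higher" cap. The figures are illustrative only, not legal guidance; actual penalties depend on the violation and its tier under the Act.

```python
# Illustrative arithmetic for the AI Act's maximum fine, assuming the
# "up to €35 million or 7% of global annual turnover, whichever is higher" cap.
# Not legal guidance; real penalties vary by violation type and severity.

def max_fine_eur(global_turnover_eur: float) -> float:
    """Theoretical upper bound on a fine, in euros, for a given turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# Example: a company with €10 billion in global annual turnover
print(f"€{max_fine_eur(10e9):,.0f}")  # €700,000,000
```

For a big AI company, in other words, the 7% figure dwarfs the €35 million floor, which is exactly why compliance has become a board-level concern.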

Read More: Why Cybersecurity is the Biggest Obstacle for AI in Europe

How Does This Affect AI Companies?


Big artificial intelligence companies have a lot to lose if they don’t figure this out. As Petar Tsankov, LatticeFlow’s CEO, put it, “If you want to comply with the EU AI Act, nobody knows how to provide the technical evidence that supports compliance with the Act.” It’s like trying to ace a test without knowing what’s on the syllabus. That’s why some companies, like Meta and Apple, are being extra cautious about deploying their AI models in Europe.

What’s even more worrying is that without tools like the LLM Checker, these companies might not even know where they’re falling short. And with the EU AI Act coming fully into force over the next couple of years, time is running out for them to get it right.

A Step Toward Better Compliance


The European Commission is fully aware of the challenges AI development companies face. It recently welcomed the LLM Checker as an important first step toward creating technical guidelines that companies can follow. A spokesperson from the Commission stated that the tool will help AI companies understand the Code of Practice, which is being developed to guide the implementation of the AI Act.

In other words, the LLM Checker isn’t just a handy tool; it’s a potential lifeline for AI companies struggling to adapt to Europe’s tough new laws. And because the tool is free and open source, anyone can use it to evaluate their models. It’s not just for the biggest AI companies, either: researchers, developers, and regulators, including those at up-and-coming companies, are encouraged to jump in and contribute to this evolving project.

Read More: Free and Open-Source AI Compliance Tools: Why the LLM Checker Matters

The Road Ahead for AI Companies


So, what’s next? With the EU AI Act gradually rolling out, AI developers will need to fine-tune their models to meet its legal requirements. The LLM Checker has exposed some of the compliance gaps in today’s leading AI systems, especially in areas like cybersecurity and discrimination. But it also offers companies a clear path forward.

As Petar Tsankov said, “This is an opportunity for AI developers to proactively address these issues, rather than reacting when the regulations are already in force.” In other words, it’s better to get ahead of the curve now than to scramble to fix things later when the fines start rolling in.

For AI companies like OpenAI, Meta, and Apple, the next few years will be critical in shaping their compliance strategies. Will they step up and lead the way in ethical AI development, or will they continue to play it safe and hold back in Europe?

Only time will tell, but one thing is certain—the race to comply with the EU AI Act is officially on.

Author Details

Shubham Sahu
Content Writer
