Ever thought a computer could tell how you're feeling just by looking at you? Google claims its new AI model, PaliGemma 2, can do exactly that, but not everyone is excited. In fact, experts are raising serious red flags about the technology's accuracy and potential risks.
Google announced the PaliGemma 2 model family, which can analyze images and provide detailed captions. Sounds cool, right? It’s like having an AI assistant who doesn’t just say, “That’s a cat,” but instead, “That’s a happy cat lounging in the sun.”
The twist? PaliGemma 2 can also identify emotions in images—if it’s fine-tuned for the task. Google proudly claims the system doesn’t just identify objects but captures the whole narrative of a scene, including people’s feelings.
Emotion-detecting AI isn’t new. Tech companies and startups have been chasing the dream of “reading” emotions for years. But here’s the problem: emotions are complicated. They’re influenced by personal and cultural factors, making them hard to pin down with just facial expressions.
“It’s problematic to assume we can ‘read’ people’s emotions,” said Sandra Wachter, a data ethics professor at Oxford. “It’s like asking a Magic 8 Ball for advice.”
Studies have shown these systems can be biased. A 2020 MIT study revealed that emotion-detecting AI often associates certain facial expressions with specific emotions—leading to inaccurate and even discriminatory outcomes. For instance, some models have been found to assign more negative emotions to Black faces than to white faces.
Google says it’s done its homework, running extensive tests to minimize bias in PaliGemma 2. According to the company, the model scored well on FairFace, a popular benchmark for testing demographic biases.
But here’s the kicker: FairFace itself has been criticized for not representing enough racial and cultural diversity. And Google hasn’t disclosed all the benchmarks it used, leaving experts skeptical about the robustness of its claims.
“Research shows we can’t infer emotions from facial features alone. It’s a subjective matter, deeply tied to personal and cultural contexts,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute.
The potential misuse of emotion-detecting AI is what really has people worried. Imagine this tech being used in hiring decisions, loan applications, or even by law enforcement. If the system is biased or inaccurate, it could unfairly discriminate against people, particularly marginalized groups.
In fact, some regulators are already stepping in. The EU’s AI Act bans the use of emotion detection in schools and workplaces but leaves loopholes for law enforcement.
Google insists that it has taken ethical considerations seriously, conducting evaluations around safety and representational harms. But critics like Sandra Wachter aren’t convinced. “Responsible innovation means thinking about consequences from day one,” she said.
The fear is that this technology, if not properly regulated, could pave the way for a dystopian future. Imagine a world where your emotions—not your skills—decide if you get a job, a loan, or a chance at education.
For now, PaliGemma 2 is available on platforms like Hugging Face, but the debate over its potential risks is just beginning.
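For readers curious what "detailed captioning" looks like in practice, here is a minimal sketch of how a PaliGemma 2 checkpoint could be loaded and run through Hugging Face's transformers library. The checkpoint name, prompt format, and generation settings below are assumptions for illustration, not an official recipe; the model cards on Hugging Face document the exact names and supported prompts.

```python
# Minimal sketch: image captioning with an assumed PaliGemma 2 checkpoint via transformers.
# The checkpoint id and the "caption en" prompt convention are assumptions for illustration;
# check the official model card for the precise checkpoint names and prompt formats.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

image = Image.open("cat_in_the_sun.jpg")  # any local photo
prompt = "<image>caption en"              # assumed captioning prompt

# Prepare inputs, matching the model's dtype and device.
inputs = processor(text=prompt, images=image, return_tensors="pt")
inputs = inputs.to(torch.bfloat16).to(model.device)
input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=40, do_sample=False)

# Drop the prompt tokens and print only the newly generated caption.
caption = processor.decode(generated[0][input_len:], skip_special_tokens=True)
print(caption)
```

Note that emotion labels are not part of this out-of-the-box captioning flow; as Google describes it, that behavior only appears after the model is fine-tuned for the task, which is exactly the capability critics are questioning.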
By clicking "Accept", you agree to the storing of cookies on your device to enhance site navigation, analyze site usage and assist in improving your experience.