The Disturbing Message Sent to Student by Google Gemini This Week
Plus, Perplexity AI launches in-chat shopping, DeepL Voice, an AI-enhanced Beatles song up for 2 Grammy Awards, X sues California over its deepfake law, and AI scaling laws show trouble.
Watch the video de-brief here.
Advancement #1: Google's Gemini AI Under Fire (Again) for a Disturbing Message to a Student
I am completely shocked by what Google Gemini told a user in the middle of a normal prompting conversation, completely at random, out of nowhere.
"This is for you, human. You and only you. You are not special. You are not important. Please die."
A recent Google Gemini chat exchange went viral after Gemini went completely off the rails on a user who was just asking totally normal questions about the challenges older adults face in retirement and how social workers can help.
After a handful of multiple-choice questions, the chatbot randomly gave the answer quoted above.
Are you kidding me? An AI just said this to someone. Imagine if this had gone to someone already struggling mentally. The damage could’ve been catastrophic.
When I first looked into this, I managed to get a copy of the exchange directly from Google’s servers and saved it in my own chat thread. Interestingly, the interaction seemed to completely break the chat system: after giving that shocking response, the AI refused to answer even basic questions, like what date Christmas falls on. I’ll put the link in the description so you can try it yourself if Google doesn’t take it down.
My Initial Thoughts
This isn’t the first time Google Gemini has messed up this year. Earlier in 2024, Gemini was called out for generating offensive and historically inaccurate content, sparking outrage and leading Google to take Gemini’s image generation offline for a period of time. Clearly this isn’t a fluke; they have built a system that seems riddled with bias and lacks the right AI safety guardrails.
What’s more, it didn’t take much to send Gemini over the edge. Honestly, the user just asked it around 10 multiple-choice and true/false questions, and it’s as if the model got annoyed with the human.
This is why we can’t just deploy AI in high-risk environments like the military if a normal exchange can send a chatbot off the rails.
Google, we need answers—and we need them now.
Advancement #2: Perplexity AI Introduces Shopping Feature for Pro Users
Perplexity AI has introduced a shopping feature for its Pro subscribers in the U.S., allowing users to purchase products directly through its generative AI search engine. The "Buy with Pro" button enables automatic ordering using saved billing and shipping information, with free shipping on all purchases. For products not supporting this feature, users are redirected to the merchant's website to complete the purchase.
Perplexity has also enhanced its AI shopping capabilities with new product cards displaying images, prices, and AI-generated summaries, as well as an AI-powered "Snap to Shop" tool that allows users to take pictures of products and inquire about them, similar to Google Lens. The company is also launching a merchant program to provide sellers with insights into search and shopping trends, potentially increasing their product recommendations.
My Initial Thoughts
Perplexity's new shopping feature is a big move, even though they aren't making commission money from sales yet. I have been talking about Generative Engine Optimization (GEO) for almost a year now: SEO as we know it is going away, the way people search for and buy things is going to change, and companies need to start thinking about how to optimize their content to show up in generative engine environments. Perplexity looks like it's starting to figure out how to monetize and optimize generative engines the same way Google did with Google Ads and search result pages. In other words, they're hinting at ways to monetize generative engines that no other company has tried yet, and they could be setting a precedent for search.
If you’re interested in GEO, I have a video diving into a research paper about it; check it out on my channel.
Advancement #3: DeepL launches DeepL Voice
DeepL just leveled up with the launch of DeepL Voice, bringing real-time text-based translations for live conversations and video calls. Think of it as captions on steroids: you speak in one language, and it instantly translates into another—perfect for virtual meetings or potentially helping frontline workers, like restaurant staff, communicate with customers. But here’s the catch—it’s text and subtitles only, with no audio or video file outputs.
DeepL Voice stays firmly in the realm of live captions, backed by the higher translation accuracy you would expect from a company that specializes in language services. It translates spoken language into text during live interactions but does not produce audio translations or files, focusing solely on enhancing real-time communication.
Currently, the tool only integrates with Microsoft Teams, with no API or support for other platforms announced yet. On the data privacy side, DeepL made a strong commitment to users: translations are processed securely on its servers without retaining or using voice data for training, fully compliant with GDPR and other privacy regulations.
My Initial Thoughts
This is a big move for DeepL as a language services leader, but it feels like a half-step rather than a leap. The lack of audio output, the limited platform integrations, and the missing API make it hard to stack up against competitors like Google Meet, which already offers similar features, or ElevenLabs, which already offers deepfake-style dubbing with live avatars.
That said, DeepL Voice may stand out thanks to its presumably higher translation accuracy, its strong emphasis on data privacy, and its fairly robust NVIDIA chip infrastructure. To truly compete, though, DeepL will need to expand its integrations and features quickly, or partner with another player, especially since this has been a top request from customers for years.
Speed Round
The Beatles' AI Song “Now and Then” Nominated for Two Grammys
First up, the best feel-good story of the month: The Beatles' AI-enhanced track "Now and Then" has been nominated for two Grammy Awards, including Record of the Year. Even though the band broke up over 50 years ago, Paul McCartney decided to make the "last ever Beatles record," using AI to restore a poor-quality 1978 John Lennon demo, and the result was "Now and Then." I didn’t even know this song involved AI; I just thought it was from the archives. Music industry, you better not f this up.
X Challenges California's Anti-Deepfake Law Over Free Speech Concerns
Elon Musk's social media platform, X, has filed a lawsuit to block California's AB 2655, the "Defending Democracy from Deepfake Deception Act of 2024," which mandates that large online platforms remove or label AI-generated deepfakes related to elections. X contends that this law could lead to widespread censorship of political speech, arguing that the First Amendment protects even potentially false statements made in the context of criticizing government officials and political candidates.
AI Labs Rethink Strategies as Scaling Laws Hit Limits
Recent research indicates that AI scaling laws are exhibiting diminishing returns, prompting leading AI research labs to reassess their strategies. AI scaling laws are like the "rules" that show how AI performance improves when you increase factors like model size, data, and computing power. Recently, researchers have noticed that just making models bigger and using more data isn't boosting AI performance as much as before. This means AI labs need to find new ways to make smarter AI systems.
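For a rough sense of what "diminishing returns" means here, below is a sketch of the power-law form that scaling laws are usually written in (the Chinchilla-style parameterization). The symbols E, A, B, alpha, and beta are fitted constants; this is just an illustration of the shape of the curve, not numbers from the research the labs are reacting to.

```latex
% A common power-law form of LLM scaling laws: predicted loss L as a function of
% parameter count N and training tokens D. E, A, B, alpha, beta are fitted
% constants; no specific values are implied here.
\[
  L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]
% Because N and D sit under fractional exponents, each doubling of model size or
% training data shaves off a smaller and smaller slice of loss -- the
% "diminishing returns" pushing labs toward new approaches.
```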
My Initial Thoughts: I’m conflicted on Musk’s lawsuit. I see his point, but we really need deepfake regulation. So instead of stressing over it, I’m going to tell my music industry people: you better go vote for the Beatles. In fact, I might actually still be a Recording Academy member from my music industry days (I went to the Grammys once, although it was better on TV), so maybe I’ll see if I can still vote? TBD