The Increasing Threat of Provocative AI Deepfakes
Plus: the new OpenAI o1 model (Project Strawberry), SmartCat secures $43m in funding, nations meet to discuss military AI, and are AI search tools killing traditional websites?
This week’s video is jam-packed: 11 minutes covering 10+ topics. Please tune in, watch, and comment. I promise it’s worth it!
I have been monitoring AI news constantly, and I have to say we're in a bit of a stagnant period in terms of advancements. Things are happening, and I'm still waiting for the day I wake up to AGI, but right now I'm seeing a huge shift toward regulatory initiatives and protecting human rights against AI.
Before we get started, I want to give a quick disclaimer that this first section is covering a highly sensitive but urgent topic.
Microsoft Bing this week began fighting a major threat: provocative deepfake images generated by AI, which are targeting even teenage girls at a rampant rate. If this is too hard to read, please skip to the next section.
Advancement #1: Microsoft Bing to Become First Search Engine to Combat AI Deepfakes Made by Undressing Sites
The surge in generative AI tools has caused a troubling increase in synthetic nude images that mimic real people.
This week, Microsoft revealed a new partnership with StopNCII, a group that assists victims by creating digital fingerprints of explicit images to prevent their distribution. This initiative joins similar efforts by major platforms like Facebook, Instagram, and TikTok, and Microsoft has already acted on 268,000 explicit images in Bing search in its pilot program.
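The "digital fingerprint" idea is simple to sketch. Below is a minimal, hypothetical illustration in Python; it uses a cryptographic hash as a stand-in for the perceptual hashes systems like StopNCII actually rely on (perceptual hashes stay stable under resizing and re-encoding, which a cryptographic hash does not), and the function names are my own.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in fingerprint: a cryptographic hash of the raw bytes.
    # Real systems use perceptual hashes, which survive resizing,
    # cropping, and re-encoding; SHA-256 here is only for illustration.
    return hashlib.sha256(image_bytes).hexdigest()

def is_blocked(image_bytes: bytes, blocklist: set[str]) -> bool:
    # An upload is rejected if its fingerprint matches one a victim
    # submitted. Only the hash is shared with platforms, so the image
    # itself never has to leave the victim's device.
    return fingerprint(image_bytes) in blocklist

blocklist = {fingerprint(b"reported-image")}
print(is_blocked(b"reported-image", blocklist))   # True
print(is_blocked(b"unrelated-image", blocklist))  # False
```

The key design point is that platforms exchange fingerprints, not images, which is what makes cross-platform coordination (Facebook, Instagram, TikTok, now Bing) possible without redistributing the abusive content.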
Google has been criticized for not partnering with StopNCII in a similar manner.
The US lacks a federal law on AI deepfake porn, relying instead on a mix of state and local laws to address the problem, but the issue extends beyond adults. AI-generated "undressing" sites, which take a normal clothed photo of a person and generate a fake nude version of it, are gaining significant popularity and are targeting high school students.
"How to describe what's been happening to me for the past 48 hours. Two days ago, someone sent me a message request on Instagram from a faceless account, uh, no followers, no posts, nothing, and it was pictures of me that I had posted. Fully clothed. Completely clothed. Um, and they have put them through some editing AI program to edit me naked.
They basically photoshopped me naked. I tried to post about this, I was like, it's not real by the way. And all the comments were so disgusting, like actually vile. They made me want to throw up multiple times, like they were all like now you gotta post the real ones. Where's the link? Obviously you want more people to see this, that's why you're posting about it.
No, it's because I want you to know they're not real. Please stop."
San Francisco prosecutors recently sued to shut down 16 undressing sites, and while 23 states have passed laws against non-consensual deepfakes, nine have rejected such proposals.
My Initial Thoughts: I'm sure you all heard about the deepfake nudes of Taylor Swift that circulated. That's exactly the situation we're dealing with here.
Microsoft teaming up with StopNCII is a great move, but it's just one piece of a messy puzzle. Right now it feels like we're patching things up with duct tape, because different platforms and states are all doing their own thing and there's no consistent way to protect people.
However, as I sit here, breaking news from Washington.
AI companies including Adobe, Anthropic, Microsoft, and OpenAI pledged Thursday to remove nude images from training data and implement safeguards against sexual deepfakes, when appropriate and depending on the model's purpose.
So it is a voluntary opt-in, but it is a step in the right direction. The commitment, which was brokered by the Biden administration, supports efforts to combat image-based sexual abuse of children and non-consensual AI deepfakes of adults. There is so much conversation happening about this topic this week that I couldn't even get through editing a video without an update coming from Washington.
Advancement #2: Speed Round
SmartCat Secures $43 Million in Series C Funding
SmartCat secured $43 million in Series C funding this week for its AI-powered translation platform. AI is unlikely to completely replace human translators, because AI translations often miss nuance and sound more mechanical, and that gap is exactly what language-industry investors see when backing a company like SmartCat.
What's the difference between a tool like SmartCat and Google Translate?
SmartCat offers a suite that pairs AI with human experts to get the best of both worlds. Other AI-plus-human translation solutions in this space include DeepL, Lilt, and RWS Evolve (among others).
My mom, who owned a language company, has been a SmartCat user since pretty much day one, and she actually consulted during the earlier stages of their platform, so I feel like I've seen them grow from nothing over the last eight years. So, good for you, SmartCat.
90 Nations Meet in Seoul to Discuss Military Use of AI This Week
Next up, over 90 nations, including the USA and China, met at a summit in Seoul to establish guidelines for the military use of AI, aiming to set minimum guardrails and responsible deployment principles aligned with NATO standards.
The main catalyst was recent developments in the Russia-Ukraine war, as Ukraine's AI-enabled drones have created growing urgency. Ukraine is hoping to leverage AI in its drones to help overcome signal jamming and to enable unmanned aerial vehicles, also known as UAVs, to work in larger groups.
Sidebar: if you didn't know about my past life, I actually worked on a project for the Department of Defense programming MUSVs (Medium Unmanned Surface Vessels), leveraging AI for unmanned missions at sea.
The Unmanned Aerial Vehicles, or UAVs, that we're talking about in the context of Ukraine are exactly the same concept: planes or other aerial vehicles that don't have a pilot but can carry out missions.
First International Treaty to Protect Human Rights Against AI
Third, the first international treaty to protect human rights against AI was introduced by the Council of Europe to ensure protection in three key areas: human rights, democracy, and the rule of law.
In addition to providing guidelines, it also requires countries to set up laws to handle any AI risks, though it doesn't actually spell out what those risks are. Surprise, surprise. Major countries like the US, UK, and EU are all on board, but each one still has to ratify it individually, so it's really unclear how long that will actually take.
Mistral AI’s First Multimodal Model
The French startup Mistral AI released its first multimodal model this week, called Pixtral 12B, which can process both images and text. If you're not familiar with Mistral, they're seen as Europe's answer to OpenAI.
Apple Intelligence Mini Launch
Apple Intelligence technically mini-launched this week alongside the iPhone 16, and everyone everywhere is publishing that it's mediocre and sucks. Instead of spending time talking about why they're misinformed, I'm just going to refer you back to my video on Apple Intelligence and how it works.
Watch it. I'm serious. Yes, on the surface, Apple Intelligence is embedded writing tools, a Siri that is finally useful, and some image generation, but what is so amazing is their Private Cloud Compute infrastructure to protect users' data privacy, because data privacy in the world of AI is a big concern right now.
There's nothing like it on the market yet, so as the end user you don't see all of the amazing work Apple has done, but that doesn't mean it's not absolutely incredible. Because you follow me, you can appreciate it. Be proud of yourself. Apple Intelligence is great.
OpenAI Launches Project Strawberry (O1 Model) Which Has Chain-of-Thought Techniques
Last but not least this week, OpenAI launched Project Strawberry, aka the o1 model, alongside the smaller o1-mini.
o1's main focus is improving reasoning in the company's models through chain-of-thought reasoning, which mimics how a human would actually think through a problem. Typically, LLMs just start spitting out the answer without really thinking about it, but chain-of-thought techniques improve that. Again, I'll refer you to a video I made a few months ago about a chain-of-thought technique called Quiet-STaR. It really helps you understand how a model can pause and think about the right answer before actually answering.
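To make the idea concrete, here's a minimal sketch of the prompting difference. The prompt wording is my own illustration, not OpenAI's actual technique; in o1 the chain of thought is trained into the model rather than just prompted.

```python
def direct_prompt(question: str) -> str:
    # Typical LLM usage: ask for the answer immediately.
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    # Chain-of-thought: instruct the model to reason step by step
    # before committing to a final answer, which tends to improve
    # accuracy on multi-step problems.
    return (
        f"Q: {question}\n"
        "Reason through the problem step by step, "
        "then state the final answer on its own line.\nA:"
    )

q = "A train travels 60 miles in 1.5 hours. What is its average speed?"
print(direct_prompt(q))
print(chain_of_thought_prompt(q))
```

With the first prompt, a model commits to an answer token immediately; with the second, the intermediate steps ("60 / 1.5 = 40, so 40 mph") become part of the context the final answer is conditioned on.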
Advancement #3: Senators Think AI Search Tools Engaging in Anti-Competitive Practices
Are AI search tools engaging in anti-competitive practices?
That's what a group of Democratic senators believe. They wrote a letter to the FTC and the Justice Department asking them to investigate whether AI tools that summarize and reproduce online content, like Perplexity AI or any generative search engine, are engaging in anti-competitive practices.
Why are the senators saying this?
Well, traditional search results and news feeds typically direct users to publishers' websites, allowing the creators to benefit from the traffic. But when you introduce AI-generated summaries, users often fail to navigate to the original source, or even know what the original source was, and it enables companies to profit from data collection at the original creator's expense.
Some AI features also misuse third-party content by presenting it as original AI-generated material, which is a whole other problem. But publishers are in a pickle: if they want to prevent their content from being summarized in AI-generated search results, their only current option is to opt out of search indexing entirely, meaning not even showing up on Google, which would mean an incredible loss of referral traffic. It forces content creators into a very difficult position: either allow their work to be used as input for AI tools, or risk being excluded from the online marketplace altogether and never being found.
TLDR version: The senators are arguing that a small number of powerful companies, chiefly Google, dominate the market for monetizing original content through advertising and are manipulating the system to their advantage. Makes enough sense.
My Initial Thoughts: This is clearly a serious issue, and my mind immediately goes to my multilingual magazine interview about 10 months ago, where I said that the biggest impact of AI would be on how we obtain information online, because websites and website design could be rendered completely obsolete if people no longer need to navigate to websites.
Now, we all know what SEO is and how it helps you rank higher on traditional sites, but AI powered search brings this new concept of GEO, Generative Engine Optimization.
It's still in a research phase right now, but GEO discusses potential techniques to optimize your content to be shown within a generative engine environment… because now we have to acknowledge that traditional SEO might not be enough anymore.
I actually made a video about how generative engine optimization works about six months ago based off of a research paper, so check it out on my channel if you're interested.
All in all, the FTC clearly has their work cut out for them. A friend of mine who works in SEO said "that would be great if they banned it," but I would bet my bottom dollar there's no way on earth that they will. The space is too experimental right now and the value to the end user is too high, so I'm really interested to see how this shakes out.
And that's it for this week. Please remember to like and subscribe. It means the world to me.