OpenAI Whistleblower is Dead, Quantum Chips, and Model Launches: Your 10-Minute Post-Holiday AI Recap
Google’s Quantum Chip Willow, 12 Days of OpenAI, Veo 2, Amazon Nova, Gemini 2.0 Flash Thinking Experimental, Meta’s Llama 3.3, and the OpenAI Copyright Whistleblower Who Was Suddenly Found Un-Alive.
Welcome back to the AI Almanac, where you get the top AI news distilled into what actually matters for the week in about 10 minutes or less.
While we were all busy stuffing our faces with Christmas cookies and pretending we didn't have jobs for a while, the AI world went absolutely crazy. So instead of dropping a video during the holidays, I decided to wait until we all got our brains back online to hit you with a proper review. :)
I’ll also be doing a special article just on the 12 Days of OpenAI. It will be published on Thursday, so stay tuned for that.
We’re also finally getting back to a normal cadence, and I can’t wait to bore you all with my horrendous jokes.
Advancement #1: Google’s New Quantum Chip Willow
Google launched its new quantum chip, “Willow,” and it’s blowing minds because it solved a problem in five minutes that would take today’s fastest supercomputers 10 septillion years. For context, the universe is only about 13.8 billion years old, so that’s longer than the universe has existed, by an almost comical margin.
This level of performance isn’t just fast; it’s almost impossible to wrap your head around. It’s also why some researchers from top universities like Harvard are pointing back to theories of the Multiverse, or the Many-Worlds Interpretation of quantum mechanics, which suggest quantum computers could be operating across multiple realities to perform these unimaginable calculations. In other words, Willow’s capabilities are making scientists take a fresh look at questions that could redefine how we understand the universe itself.
The key to Willow’s power lies in qubits, the building blocks of quantum computers. Unlike regular computer bits, which can only be a 0 or a 1, qubits can be both at the same time, thanks to a property called superposition. This means quantum computers can explore multiple solutions at once, making them exponentially faster for certain problems. But qubits are also fragile: they’re easily disrupted by noise, which leads to errors. That’s where Willow makes a big leap. Google scaled its encoded qubit arrays up from a 3x3 grid to a 5x5 and then a 7x7 grid, and managed to cut the error rate roughly in half at each step using advanced quantum error correction. This is a huge deal in quantum computing because it means we’re finally figuring out how to make these systems reliable.
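To make that error-correction claim concrete, here’s a tiny back-of-the-envelope Python sketch. The starting error rate is a made-up placeholder, not Google’s actual figure; it just shows why halving the error rate at every grid size is such a big deal as you add qubits.

```python
# Back-of-the-envelope illustration of "below threshold" error correction:
# if each step up in grid size (3x3 -> 5x5 -> 7x7) cuts the logical error
# rate in half, errors shrink exponentially as you add qubits.
# The starting rate below is a hypothetical placeholder, not Google's number.

starting_error_rate = 0.003  # assumed logical error rate for the 3x3 grid

for grid in (3, 5, 7):
    steps = (grid - 3) // 2                      # how many scale-ups from 3x3
    error_rate = starting_error_rate / (2 ** steps)
    print(f"{grid}x{grid} grid ({grid * grid} qubits): ~{error_rate:.5f} errors per cycle")
```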
My Initial Thoughts: The fact that it solved a problem in five minutes that would take a supercomputer longer than the age of the universe feels like something out of a sci-fi movie where I’m about to jump between worlds. The whole parallel universe talk? It sounds crazy, but it’s not just fun speculation; some of the smartest minds in quantum physics are actually taking it seriously again because of breakthroughs like this.
If Google can keep this momentum, we’re looking at a future where quantum computing isn’t just theoretical anymore. Insanity. Don’t even know what to say.
Advancement #2: The 12 Days of Shipmas for OpenAI
For 12 days straight, OpenAI dropped 30-minute videos announcing shiny new features, tools, and models, turning December into the ultimate tech binge-watch. It was like an AI advent calendar, with each day bringing something fresh. Some updates were huge; others were... well, filler. Either way, I don’t know about you, but I was too checked out over the Christmas holiday to keep up.
Out of the 12 days, the biggest standouts you should know about were three main releases: a preview of the new o3 models, the public launch of Sora, and a new feature called Projects.
The o3 preview showed off seriously impressive reasoning, setting records on math and programming benchmarks while using a new safety method called "deliberative alignment."
Sora, OpenAI’s text-to-video model, finally made its debut with a Midjourney-style interface. It lets you create 20-second 1080p videos and even includes cool tools like storyboard editing.
And Projects? It’s basically ChatGPT’s new productivity hub for organizing conversations, files, and tasks. It’s simple, but I love it.
There is way too much to unpack here for my normal videos, so I am making a special episode with the full breakdown of the 12 days of shipmas and everything that was launched, so subscribe to my channel to be notified when it’s dropped in a few days.
My Initial Thoughts: Because I’m going to give you so many of my personal thoughts on each launch in my next video, I’ll withhold all of my hilarious jokes on this one. There’s a little bit of singing involved. I couldn’t help it. I’ve still got the Christmas spirit.
Advancement #3: OpenAI Copyright Whistleblower Found Un-Alive in San Francisco Apartment
Suchir Balaji, a 26-year-old former OpenAI researcher, was found un-alive in his San Francisco apartment on November 26. The cause of death has been reported as suicide. Balaji worked at OpenAI for nearly four years before leaving over ethical concerns about the company’s use of copyrighted data to train its models. Known as a whistleblower, he publicly raised concerns in major news outlets like the New York Times that generative AI violates copyright law. His death has reignited discussions about the ethical and emotional challenges researchers face in the AI industry.
So why am I reporting this? Balaji’s perspective stands out because he worked directly on ChatGPT and conducted extensive research into the copyright issues generative AI raises.
For example, I came across his website, where in an October 2024 blog post he critically examined whether generative AI models like ChatGPT qualify for fair use exemptions under US copyright law when trained on copyrighted material.
He analyzed the four factors of fair use and argued that the commercial nature of these models and their potential to replace original works could weigh against a fair use defense. Balaji concluded that products like ChatGPT may not meet fair use criteria, highlighting the broader implications of generative AI.
He also gave some pretty compelling words on his final X post: “I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them. I initially didn't know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they're trained on.
That being said, I don't want this to read as a critique of ChatGPT or OpenAI per se, because fair use and generative AI is a much broader issue than any one product or company. I highly encourage ML researchers to learn more about copyright -- it's a really important topic.”
My Initial Thoughts: I talk about copyright a lot; I used to work directly with copyright in the entertainment industry, and I’ve always positioned myself somewhere in the middle of this debate. I saw the loophole in copyright law that these AI companies are utilizing, but I also never saw that argument as foolproof.
But the bottom line is that what stopped me from giving a more formal opinion is that these machines are like black boxes. We don't fully understand how they work or what data was actually used, and therefore it's hard to reverse engineer what would be considered copyright infringement. So when an ex-OpenAI researcher shows up un-alive after being very vocal about his belief that it's not protected, even with this legal loophole, that's really hard to swallow.
Advancement #4: Three Main Model Launches
Amazon Nova
Amazon launched its own family of foundation models called Nova, designed to handle text, image, and video inputs. The lineup includes models of different sizes, with options like Nova Micro for fast, low-cost text processing, Nova Pro for balanced accuracy and speed, and Nova Premier for complex reasoning tasks. They are available through Amazon Bedrock.
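If you want to poke at Nova yourself, here’s a rough sketch of what a call through Bedrock looks like using boto3’s Converse API. Treat the model ID and region as placeholders you’d confirm in your own AWS account.

```python
# Rough sketch of calling a Nova model through Amazon Bedrock with boto3.
# The model ID and region are assumptions; check what's enabled in your account.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-micro-v1:0",  # placeholder Nova Micro ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize this week's AI news in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.5},
)

print(response["output"]["message"]["content"][0]["text"])
```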
Google
Google released its own reasoning AI model called Gemini 2.0 Flash Thinking Experimental. I wish I was kidding. It uses Chain-of-Thought (CoT) techniques to mimic human-like reasoning and is available in Google’s AI Studio. If you’re curious about how CoT works, check out my video breaking down these techniques in detail.
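And if you want a taste before watching that video, here’s a minimal sketch of a chain-of-thought style prompt using Google’s generativeai Python SDK. The model name is a placeholder for whatever thinking/experimental model AI Studio exposes when you try it, and a true "thinking" model reasons through steps on its own; the same prompt pattern works on ordinary models too.

```python
# Minimal chain-of-thought style prompt via the google-generativeai SDK.
# The model name below is a placeholder, not a guaranteed product name.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # placeholder

prompt = (
    "A train leaves at 9:40 and arrives at 13:05. How long is the trip?\n"
    "Think through it step by step before giving the final answer."
)

print(model.generate_content(prompt).text)
```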
Meta Llama 3.3
Meta launched another Llama model, Llama 3.3 70B, which it claims outperforms competitors like OpenAI’s GPT-4o and Amazon Nova Pro on certain benchmarks such as MMLU (which measures language understanding).
My Initial Thoughts: Amazon Nova is Amazon’s first big model family built to compete with the top players, so it’s one to watch. Meta’s Llama 3.3 feels mainly like a solid performance upgrade. And seriously, whoever at Google came up with the name “Gemini 2.0 Flash Thinking Experimental” needs to find a new job.
Advancement #5: Google Launches Veo 2
Google DeepMind has launched Veo 2, its next-generation video-generation AI. It can create videos over two minutes long in 4K resolution (4096 x 2160 pixels), which is over 6x the duration and 4x the resolution of OpenAI’s Sora.
My Initial Thoughts: This one is a straightforward update, so there’s not much to say about it. Clearly Google wants to dominate the video generation game, and with those stats they definitely will, at least for now.
Well, I know this was a heavier episode than usual. Honestly, it was tough to distill all the information this time because there was so much of it from the holidays—but hey, I still managed to keep it under 10 minutes, so I’m calling that a win. Just a reminder, I’ll be publishing a more extensive breakdown of the 12 Days of OpenAI Shipmas in the next few days, so make sure to subscribe to the channel so you don’t miss it.
And, as always, every comment, like, and share means the world to me. If you haven’t done it yet, please take a second to do so—it really helps. Thanks for watching!