SEO Was Yesterday... Meet Generative Engine Optimization (GEO) and How It's Changing the Rules of Search in the AI Era
This week in AI: Introducing Generative Engine Optimization (GEO), Elon Musk sues OpenAI, Figure AI's humanoid robots will work at BMW factories, and Anthropic launches Claude 3.
Don’t have time to read? Watch the video briefing on YouTube.
Advancement #1: Introducing Generative Engine Optimization (GEO), the SEO replacement in the Era of AI-Powered Searches
During a recent interview with Multilingual Magazine covering a win in an AI innovation competition, I was asked to share my thoughts on which aspect of generative AI might become a significant disruptor. I said I believed it would be SEO (Search Engine Optimization) and international marketing, as how we obtain information from the internet is about to change significantly.
This week, I came across a study conducted by researchers at Princeton and MIT that confirmed this, coining a new term for optimizing search results in the age of AI: Generative Engine Optimization (GEO).
If you are not interested in reading how it works (and want to skip to the next section), the biggest takeaway was this: keyword stuffing, the most widely used SEO technique, actually reduced visibility by 10% in GEO, marking a serious shift from traditional SEO practice.
Since I am going to be breaking down a complex research paper, I will temporarily stray from my traditional format and cover the topic over 5 sections:
What are generative engines;
GEO vs. SEO explained;
Utilizing GEO-Bench and the 2 metrics used to measure performance;
The 9 SEO/GEO techniques that were evaluated;
Results.
1. What are Generative Engines
A generative engine is a type of AI technology that synthesizes information from multiple sources to generate personalized and accurate responses to user queries. Unlike traditional search engines, which retrieve and rank existing content, generative engines use large language models (LLMs) to understand and compile data into coherent, contextually relevant answers. This capability allows for more dynamic interaction with information, as these engines can produce responses, summaries, and content that weren't explicitly written but are generated in real time based on the input query. Real-life examples include Bing search, Gemini, Perplexity.ai, and even ChatGPT in some cases.
Generative engines and traditional search engines differ fundamentally in their approach to handling queries. Traditional search engines index and retrieve existing web content, ranking it based on relevance and authority to present a list of links. Generative engines, however, use advanced AI to synthesize and generate responses in real-time, drawing from vast datasets to provide comprehensive, contextually relevant answers without necessarily directing users to specific web pages.
A Generative Engine processes a user's query and personalized information, such as preferences and history, to produce a natural language response. It functions by integrating a set of generative models with a search engine to find relevant documents, then synthesizes these findings into a coherent response. This response not only answers the query but also includes inline attributions to the sources, ensuring that the information provided is verifiable and grounded in the retrieved documents.
In the form of an equation (at least, according to the study): a Generative Engine (GE) takes as input a user query q_u and personalized user information P_U, such as preferences and history, and returns a natural language response r. The GE can be represented as the function f_GE(q_u, P_U) → r.
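The retrieve-then-synthesize pipeline described above can be sketched roughly as follows. Everything here is illustrative: the function names, the tiny in-memory corpus, and the string-based "LLM" are stand-ins I made up, not the paper's implementation or any real search/LLM API.

```python
# Illustrative sketch of the generative-engine pipeline f_GE(q_u, P_U) -> r.
# The retriever and synthesizer below are toy stand-ins for demonstration.

def retrieve(query: str, top_k: int = 3) -> list[dict]:
    """Stand-in retriever: would normally find documents relevant to the query."""
    corpus = [
        {"id": 1, "text": "GEO optimizes content for AI-generated answers."},
        {"id": 2, "text": "SEO ranks pages on traditional result lists."},
        {"id": 3, "text": "Generative engines cite sources inline."},
    ]
    return corpus[:top_k]

def synthesize(query: str, profile: dict, docs: list[dict]) -> str:
    """Stand-in for the LLM: compiles retrieved docs into one coherent
    response with inline attributions [doc id], grounding each claim."""
    cited = " ".join(f"{d['text']} [{d['id']}]" for d in docs)
    return f"Answer for '{query}' (tone: {profile.get('tone', 'neutral')}): {cited}"

def generative_engine(query: str, profile: dict) -> str:
    """f_GE: (user query q_u, personalized user info P_U) -> response r."""
    docs = retrieve(query)
    return synthesize(query, profile, docs)

response = generative_engine("What is GEO?", {"tone": "concise"})
```

The key structural point the sketch captures is that the response is synthesized from retrieved documents and carries inline attributions back to them, rather than being a ranked list of links.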
2. GEO vs. SEO Explained
SEO and GEO differ fundamentally in their focus and application, as the way users obtain information is significantly different.
SEO optimizes content to rank higher on traditional search engine results pages (SERPs), using techniques like keyword optimization (aka keyword stuffing), backlinks, and site accessibility. GEO, on the other hand, optimizes content for visibility within AI-driven generative engines, emphasizing content structuring and strategic modifications to align with these engines' unique content generation patterns. Because of this, traditional SEO techniques cannot be applied directly to optimize results in a generative engine environment.
3. Utilizing GEO-BENCH and the Key Metrics
The study introduces GEO-BENCH as the first benchmark specifically designed for assessing Generative Engine Optimization (GEO). Recognizing the complexity of measuring content visibility in generative engines, which integrate and reference information in varied and nuanced ways, they crafted a multi-dimensional visibility metric.
GEO-BENCH, comprising 10K diverse queries from multiple domains, draws on datasets like MS MARCO, ORCAS-1, and Natural Questions, alongside others such as AllSouls, LIMA, ELI-5, and GPT-4, to test GEO strategies. This benchmark, the first of its kind, is used to systematically assess GEO strategies, which the study shows can enhance content visibility by up to 40% in generative engine responses, paving the way for further research and optimization in this emerging field. However, it's important to note that it is far from perfect and only a first iteration.
Two main metrics for optimizing content visibility within AI-generated responses were established while utilizing GEO-BENCH:
Position Adjusted Word Count: Assesses a source's impact by considering both the amount of content and its position in the response; a higher word count at a prominent position indicates greater visibility.
Subjective Impression: Involves the perceived value of the content, including its relevance to the query, the uniqueness of information, and the likelihood of user engagement with the citation.
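To make the first metric concrete, here is a simplified illustration of a position-adjusted word count. The paper defines its own exact normalization; the linear weighting scheme below (earlier sentences count more) and the per-sentence citation mapping are my assumptions for demonstration only.

```python
# Simplified illustration of Position-Adjusted Word Count.
# Assumption: each sentence in the response cites exactly one source,
# and weight decays linearly with position (the paper's exact formula differs).

def position_adjusted_word_count(response_sentences, citations):
    """response_sentences: sentences of the engine's answer, in order.
    citations: parallel list of source ids (one per sentence).
    Returns {source_id: score}; words appearing earlier in the
    response are weighted more heavily, so a source cited up front
    scores higher than one buried at the end."""
    n = len(response_sentences)
    scores = {}
    for pos, (sentence, src) in enumerate(zip(response_sentences, citations)):
        weight = (n - pos) / n          # position 0 gets weight 1.0
        words = len(sentence.split())
        scores[src] = scores.get(src, 0.0) + weight * words
    return scores

sentences = [
    "GEO restructures content for generative engines.",  # cites source A
    "Keyword stuffing can hurt visibility.",             # cites source B
    "Citations improve credibility.",                    # cites source A
]
scores = position_adjusted_word_count(sentences, ["A", "B", "A"])
```

Source A ends up with the higher score here both because it contributes more words and because it is cited first, which is exactly the intuition behind the metric.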
4. The 9 GEO Techniques Evaluated
The study explored nine GEO techniques to tailor content for generative engines, blending traditional SEO with novel approaches.
Authoritative: Enhances text to appear more confident and persuasive.
Keyword Stuffing: Increases keyword presence, aligning with SEO practices.
Statistics Addition: Introduces quantitative data for evidence-based discussion.
Cite Sources: Adds citations from reputable sources for credibility.
Quotation Addition: Includes relevant quotations to deepen content authenticity.
Easy-to-Understand: Simplifies content for better accessibility and engagement.
Fluency Optimization: Boosts text fluency for smoother reading.
Unique Words: Introduces novel vocabulary to make content distinctive.
Technical Terms: Integrates domain-specific jargon to showcase expertise.
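In the study, these rewrites are performed by an LLM over full source documents. As a toy illustration of what two of the top-performing techniques do to a piece of text, here is a pair of naive string transforms; the statistic and the source name are both invented for the example.

```python
# Toy illustration of two GEO techniques as text rewrites.
# In the study an LLM performs these transformations over whole documents;
# the string edits below only demonstrate the idea. The survey and the
# source named here are hypothetical.

def cite_sources(sentence: str, source: str) -> str:
    """Appends an attribution, mirroring the 'Cite Sources' technique."""
    return f"{sentence} (Source: {source})"

def add_statistic(sentence: str, stat: str) -> str:
    """Prepends quantitative evidence, mirroring 'Statistics Addition'."""
    return f"{stat}, {sentence[0].lower()}{sentence[1:]}"

original = "Generative engines are changing how users find information."
optimized = add_statistic(
    cite_sources(original, "example.com study"),  # hypothetical source
    "According to one 2024 survey",               # hypothetical statistic
)
```

The point is that both techniques add credibility signals while leaving the underlying claim untouched, which, as the results below show, is what generative engines appear to reward.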
5. Results
The research identified Cite Sources, Quotation Addition, and Statistics Addition as the leading strategies for Generative Engine Optimization (GEO), marking a 30-40% increase on the Position-Adjusted Word Count metric and a 15-30% improvement on the Subjective Impression metric compared to the baseline. These approaches, which focus on enhancing content credibility and richness with minimal alterations to the original sources, were found to significantly outperform all others.
Stylistic improvements like Fluency Optimization and Easy-to-Understand significantly increased visibility, underscoring the value generative engines place on not just the content but its presentation and ease of reading.
Surprisingly, an authoritative tone showed no notable benefit other than in areas of historical debate, highlighting generative engines' ability to navigate personal biases.
Contrary to traditional SEO practices, keyword stuffing proved detrimental, often reducing visibility by 10%, suggesting a pivot from conventional SEO strategies to those prioritizing content credibility and user engagement in the GEO context.
My Initial Thoughts: As the first study introducing GEO and a way to quantify results, this research marks a significant shift in how information retrieval on the internet is conceptualized, aligning with my long-held and preached view of impending change. The detrimental impact of keyword stuffing (-10%) and the substantial improvement from modest text adjustments (+30-40%) were particularly striking findings.
However, many SEO experts are skeptical about how robust this study is, so take everything said here with a grain of salt.
Advancement #2: Elon Musk Sues OpenAI for Allegedly Breaching the Company's Non-Profit Founding Agreement
Elon Musk has initiated a lawsuit against OpenAI (including its CEO Sam Altman and other associated figures) for allegedly breaching the foundational agreement of keeping the organization a nonprofit dedicated to advancing AI for the betterment of humanity.
Musk, a co-founder and initial major financier of OpenAI, contributed over $44 million to the organization from 2016 to September 2020 under the belief that it would operate as a nonprofit in order to advance AI for the public good and counterbalance Google's dominance. In his legal filing, Musk states that OpenAI has significantly deviated from the core objectives that guided his initial investments. He argues that despite OpenAI's original commitment to openly share AI advancements with the world, it has since become one of the world's most clandestine organizations, focused on profit maximization.
This shift toward prioritizing profits became significantly more pronounced after OpenAI partnered with Microsoft, which has made considerable investments in the company.
The lawsuit suggests that OpenAI functions as a significant arm of Microsoft, directing its efforts toward refining its artificial general intelligence (AGI) technology to raise Microsoft's profits rather than for the betterment of humanity.
Microsoft's influence over OpenAI was further displayed by comments CEO Satya Nadella made in a recent interview, where he hinted at Microsoft's comprehensive access to OpenAI's intellectual property and resources. "If OpenAI disappeared tomorrow," he said, "we have all the IP rights and all the capability. We have the people, we have the compute, we have the data, we have everything."
In addition, the lawsuit argues that by licensing GPT-4 to Microsoft -- which is considered by some, including Elon Musk, to be an AGI with intelligence comparable to (or surpassing) human levels -- OpenAI should be held to its commitment to use it for the welfare of humanity instead of for commercial gain.
While Musk has repeatedly been offered a stake in the for-profit arm of the startup, he has refused to accept it as a matter of principle. He seeks to realign OpenAI with its original, humanitarian-focused mission, emphasizing the need to prevent the commercialization of its technologies for the exclusive benefit of corporate entities like Microsoft.
My Initial Thoughts:
The lawsuit Elon Musk has brought against OpenAI seems to have a solid foundation, pointing out potential deviations from the company's original mission that could put “the protection of humanity” in jeopardy (a mission that is still active on the OpenAI website, by the way).
Yet there's an element of strategic maneuvering that cannot be overlooked. Musk's critique of OpenAI for moving away from its altruistic roots stands in stark contrast to his own handling of Grok (an AI developed by his company X), which remains proprietary rather than open-sourced.
This contradiction implies that Musk's push for open-source AI, while ideologically appealing, doesn't fully align with his actions.
It seems conceivable that Musk's actions are motivated, at least in part, by a desire to temper OpenAI's progress. Whether for his own gain or because, with Microsoft behind it, OpenAI is moving dangerously fast for anyone else to catch up, I believe this lawsuit could be part of a broader strategy to influence the AI landscape in favor of his interests rather than a pure commitment to AI for the greater good.
Despite my reservations, I am inclined to think that Musk's initial investments were aimed at leveling the playing field for "humanity" against the major players in the tech industry. If I had invested $44 million and dedicated years on the board of a nonprofit AI organization, only to watch it essentially morph into a subsidiary of Microsoft, I too would see substantial grounds for legal action. It appears that they used nonprofit contributions to make major advancements in AI technology, only to then hand these innovations over to one of the very industry giants they initially aimed to guard humanity against.
Advancement #3: Figure AI Secures Significant Investment for Humanoid Robots Amid BMW Partnership
Figure AI, a pioneering startup in the development of humanoid robots powered by AI, has recently secured a significant investment of approximately $675 million. This investment round saw substantial contributions from major tech entities and investors, including Amazon founder Jeff Bezos, Nvidia, Microsoft, Intel's venture capital division, and Explore Investments.
The investment spree followed the company's announcement of a partnership with BMW to introduce humanoid robots into manufacturing roles, highlighting the industry's recognition of the potential for humanoid robots to address labor shortages and enhance safety in manufacturing environments. As part of the deal announced Thursday, Figure also said it's partnering with OpenAI to "develop next generation AI models for humanoid robots" and will use Microsoft's Azure cloud services for AI infrastructure, training, and storage.
Figure AI has taken a significant leap in robotics with the development of their general-purpose robot, Figure 01, designed to emulate human appearance and movements. This innovation is particularly targeted towards industries grappling with severe labor shortages, such as manufacturing, shipping and logistics, warehousing, and retail. The company is clear in its intention that Figure 01 is not to be utilized for military or defense purposes.
My Initial Thoughts
Humanoid technology pushes are nothing new, but BMW's announcement that it will use these robots in a factory setting reflects a significant turning point in how we perceive and integrate such technologies into practical environments. This collaboration demonstrates a leap from theoretical applications to tangible utility.
Figure AI's demo video of Figure 01 showcases the robot's fluid, yet notably slow movements. This observation brings to light a potential trade-off between precision and speed, although the robot's ability to operate continuously without the need for rest could compensate for its slower pace, offering a unique solution to round-the-clock productivity challenges.
What I noticed more, however, is that one can't help but ponder the implications of yet another significant tech collaboration involving Microsoft.
This repeated pattern of AI partnerships with Microsoft demonstrates its expanding influence and might lend credence to the concerns manifested in Elon Musk's lawsuit. Musk's effort to slow Microsoft's rapid expansion could be seen not just as a competitive maneuver but as a necessary check on the accelerating consolidation of power within the tech industry. Developments such as this latest partnership between Figure AI and Microsoft could indeed validate the underlying rationale of Musk's lawsuit, suggesting it may serve a broader purpose: advocating for a more balanced competitive landscape and preventing potential monopolies that could stifle innovation and dominate critical future technologies.
Advancement #4: Anthropic Launches Claude 3 Family that Outperforms GPT-4
Anthropic announced on Monday the launch of the Claude 3 model family, marking an AI race milestone as the first model to outperform GPT-4 across a spectrum of cognitive tasks. This groundbreaking family consists of three models: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, each tailored to offer a unique balance between intelligence, speed, and cost-effectiveness to meet diverse application needs. From Haiku's unparalleled efficiency in its class to Opus's near-human levels of understanding and reasoning across complex domains, the Claude 3 models redefine the benchmarks for artificial intelligence capabilities.
Claude 3 Opus stands out for its exceptional performance on rigorous AI benchmarks, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It's designed to facilitate high-level analytical tasks, nuanced content creation, and efficient code generation in multiple languages, including non-English options like Spanish, Japanese, and French. The Claude 3 model family not only surpasses previous generations in textual comprehension and generation but also introduces advanced vision capabilities, equipping it to interpret a wide range of visual formats with precision.
Despite the Claude 3 model family's significant advancements over prior models in areas such as biological knowledge, cyber-related expertise, and autonomous capabilities, Anthropic continued to adhere to AI Safety Level 2 (ASL-2) and engaged red teamers during its testing.
My Initial Thoughts: While this may just seem like another model announcement (at least you're on top of all of them thanks to the AI Almanac, right? 😉), Claude 3 Opus really does feel stronger than GPT-4, marking a landmark moment: a model has finally surpassed OpenAI (for now). There have been reports that it is much slower, but it's important to note that GPT-4 was also slow when it was first released, so I'm sure Anthropic will speed it up.
More importantly, I find myself particularly impressed by their proactive stance in an era largely devoid of standardized safety guidelines.
It's commendable that Anthropic has not only recognized the need for such measures but has also taken the lead in establishing a framework, with complete transparency, through their Responsible Scaling Policy. By establishing four current tiers of AI safety guidelines (with the highest still hypothetical), it signals a significant step forward in the responsible development and deployment of AI technologies.
It's especially encouraging to see a company not just adhering to but also shaping the conversation around AI safety and potentially influencing future legislation and cybersecurity protocols.
Bonus Round
Although I didn’t have the time to write about it, here are a few other cool finds for the week.
Google brings Stack Overflow repository to Gemini
Apple says they will launch groundbreaking Generative AI this year
Brain AI, the software where the AI is the operating system
Former Twitter employees launching AI newsreader named Particle AI
Adobe announces GenAI app for music generation
Thank you for your support!
Please like and subscribe to my YouTube channel and Substack. It means the world and keeps me going!