In our previous post, we introduced the three-layer GenAI stack to make sense of where value is being created—and where investor interest is flowing.
Let’s briefly recap:
So far, the bulk of investment has flowed into infrastructure and models, each receiving >$40B in investment in 2024, double 2023’s figures. We believe the next wave of value (and competition) will center on the application layer and are not alone in this view.
Why the Application Layer Matters Now
Applications are where models create value for the majority of end users. While the early focus in AI has been on B2B use cases (cost savings, productivity, developer tools), consumer-facing AI is next.
The consumer AI space is hard to grasp. New tools launch weekly, categories blur, and it's difficult to tell which propositions are gaining traction or where innovation is clustering. And, as we know, even the best technical innovations take time to shape new behaviours.
Source: Image created with Midjourney with a simple prompt
The current (and evolving) consumer GenAI landscape:
Several VCs are helping map and make sense of the space. Firms like Andreessen Horowitz (a16z), Redpoint, Sequoia, and Forerunner have published rankings, breakdowns, and reports to help understand where things are going.
In Sequoia's case, the majority of the "AI Killer Apps" they identified for 2025 are in the B2B space:
Source: sequoiacap.com
Harvey – AI-powered legal research
Abridge – AI for medical conversations
Sierra – conversational AI agents for customer experience
Glean – Enterprise search tool for internal company data
In the a16z report on the Top 100 GenAI Consumer Apps, the majority of the apps and websites identified are either content editing/creation tools (52%) or LLM-based (36%). The remaining 12% are a mixed bag of consumer apps with specific utility beyond content creation or LLM interaction.
This is not surprising when we look at the usage data. According to Similarweb, LLM-based applications ("General") dominate the space with 80-85% of total traffic.
Source: Similarweb (25/4/25 Global tracker report)
The main categories after “General” are:
Character & Chat – AI companions (e.g. Character.AI, Replika)
Music Generation – tools to create songs (e.g. Suno)
Design & Image Generation – generating images from text prompts (e.g. Midjourney, Leonardo)
Code completion – tools that help developers write, test, and debug code (e.g. Lovable, Cursor, Replit)
However, the landscape is changing fast. For instance, Lovable, one of Europe's fastest-growing consumer AI applications, reached $50M in ARR within six months (according to founder Anton Osika), yet wasn't even mentioned in the a16z ranking from March 2025.
Source: Similarweb (25/4/25 Global tracker report)
Forerunner's 2025 report on consumer trends provides a good overview of what they see in the GenAI space:
The Critical Transition: From Passive Tools to Autonomous Agents
When we map consumer AI applications, a pattern emerges that points directly to the next frontier: the most successful consumer AI applications are beginning to incorporate the capability to perform autonomous actions on behalf of users. This is the bridge to the agent revolution.
At a high level, we estimate that around 15% of the top 100 apps in the a16z report incorporate some degree of agentic capability.
We believe that agents represent the natural evolution of consumer AI because they address three key limitations of the current landscape:
Limited utility: Most current apps are either content tools or LLM interfaces, not solving specific consumer problems (as mentioned above)
Passive interaction: Users must actively prompt and guide most current AI tools
Fragmented experience: Different tools for different tasks create friction in the user experience
Agents can solve these problems by:
Taking Action: Moving beyond generating content to completing tasks
Working Autonomously: Planning and executing with minimal user intervention
Integrating Services: Connecting multiple tools and data sources seamlessly
What is an Agent anyway?
An AI agent is software layered on top of an LLM that can:
✅ Plan tasks
✅ Make decisions
✅ Take actions autonomously
✅ Remember and recall relevant information over time (it has memory!)
Examples of current and future use cases:
🔹 "Browse this site and summarize for me" (Arc)
🔹 "Book me a flight and file my expenses" (Future Copilot)
🔹 "Coordinate my week based on priorities and constraints" (Agent planning)
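To make the definition above concrete, here is a minimal sketch of an agent loop in Python. Everything in it is a hypothetical stand-in: the `plan` method, the tool registry, and the toy "summarize" tool are placeholders for what a real agent would delegate to an actual LLM and real services.

```python
# Minimal illustrative agent loop: plan -> act -> remember.
# All names here (Agent, plan, the tools dict) are hypothetical stand-ins;
# a real agent would ask an LLM to plan and decide at each step.

from dataclasses import dataclass, field


@dataclass
class Agent:
    tools: dict                                  # name -> callable: the actions the agent can take
    memory: list = field(default_factory=list)   # running record of past steps (it has memory!)

    def plan(self, goal: str) -> list:
        # Stand-in planner: a real agent would have an LLM decompose the goal.
        if "summarize" in goal:
            return ["fetch", "summarize"]
        return []

    def run(self, goal: str) -> str:
        result = goal
        for step in self.plan(goal):
            result = self.tools[step](result)    # take an action autonomously
            self.memory.append((step, result))   # remember what happened
        return result


# Hypothetical tools the agent can invoke.
tools = {
    "fetch": lambda url: "page text about agents",
    "summarize": lambda text: text.split()[-1],  # toy "summary": keep the last word
}

agent = Agent(tools=tools)
print(agent.run("summarize this site"))  # -> agents
```

The "Browse this site and summarize for me" example above maps onto this loop directly: the goal is decomposed into steps, each step calls a tool, and each outcome is stored in memory for later recall.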
Closing Thoughts
The first wave of GenAI breakthroughs was built on infrastructure and foundation models that powered core capabilities. From a consumer perspective, the real excitement is now at the application layer, where AI shifts from passive tools to active agents that execute tasks, make decisions, and streamline workflows autonomously.
This transition raises fundamental questions about the future of the digital ecosystem:
🔹 What happens to SEO when AI agents navigate the web independently, bypassing traditional search behaviors?
🔹 What will be the impact on online advertising, especially models that rely on influencing human clicks?
🔹 How will businesses adapt when AI-driven interactions reshape user engagement?
🚀 The race to define this next phase is already underway. Who will lead?
📍 Incumbents like Google, Microsoft, and Amazon leveraging their vast ecosystems and distribution advantage?
📍 LLM players moving up the stack. OpenAI's acquisition of Windsurf and partnership with Jony Ive hint at ambitions beyond models, potentially integrating AI into hardware and consumer devices.
📍 New entrants yet to emerge from a host of startups prioritizing direct consumer needs that don’t just assist but act independently
In our following posts, we'll explore how AI agents actually work, from their underlying architecture to their ability to interact with one another. How do you build on them? And what are the risks and challenges of deploying agents at scale?
📌 Definitely worth staying tuned
And if you have any questions, insights, or fascinating research to share, we’d love to hear from you. After all, this is a learning journey we’re on together.
Tools used for this post:
- Gemini, ChatGPT, Copilot, Manus for editing and research
- Canva & Midjourney for charts and images