hypercapitalism and the AI talent wars
the AI talent wars challenge the shared trust and mission that aligned founders, employees, and investors
Meta’s multi-hundred-million-dollar comp offers and Google’s multi-billion-dollar Character.AI and Windsurf deals signal that we are in a crazy AI talent bubble.
The talent mania could fizzle out as the winners and losers of the AI war emerge, but it represents a new normal for the foreseeable future. If the top 1% of companies drive the majority of VC returns, why shouldn’t the same apply to talent? Our natural egalitarian bias makes this hard to accept, but the 10x engineer meme doesn’t go far enough – there are clearly people who deliver 1,000x the baseline impact.
This inequality certainly manifests at the founder level (Founders Fund exists for a reason), but applies to employees too. Key people have driven billions of dollars in value – look at Jony Ive’s contribution to the iPhone, or Jeff Dean’s implementation of distributed systems at Google, or Andy Jassy’s incubation of AWS.
The tech industry gradually scaled capital deployment, compounding for decades to reach trillions in market cap. The impact on the labor force has been inflationary, but predictable. But in the two and a half years post-ChatGPT, AI catch-up investment has gone parabolic, initially towards GPUs and mega training runs. As some labs learned that GPUs alone don't guarantee good models, the capital cannon is shifting towards talent.
Silicon Valley built up decades of trust – a combination of social contracts and faith in the mission. But the step-up in capital deployment is what Deleuze would call a deterritorializing force, for both companies and talent pools. It breaks down the existing rules of engagement, from the social contract of company formation, to the loyalty of labor, to the duty to sustain an already-working product, to the conflict rules that investors used to follow.
Trust can no longer be assumed as an industry baseline. The social contracts between employees, startups, and investors must be rewritten. In the age-old tension between mission and money, missionary founders must prepare themselves for the step-function increase in mercenary firepower.
Hypercapitalist AI talent wars will rewrite employment contracts and investment norms, concentrate returns, and raise the bar for mission and capital required to create great new companies.
Talent
As a thought exercise, how much should Google have paid for DeepMind? In 2014, a $400M acquisition of a pre-revenue company seemed nonsensical. But with the leverage that comes with Google scale, the DCF value could be quite high – a few percentage points in net savings on their datacenter costs could make it a 100x+ return over a decade, and that’s in a pre-LLM world! In the context of Google paying $3B for Noam, they’ve probably already earned back that investment through his help getting Gemini training runs unstuck; the deal even looks modest with a year of hindsight.
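To make that back-of-envelope concrete, here is a minimal sketch of the DCF logic. Every figure below – the cost base, its growth rate, and the savings rate – is an assumption chosen for illustration, not Google’s actual financials:

```python
# Back-of-envelope sketch of the DCF argument; every figure is an assumption
# chosen for illustration, not Google's actual financials.
acquisition_price = 0.4e9     # ~$400M reported DeepMind price, 2014
savings_rate = 0.03           # "a few percentage points" of net datacenter savings
datacenter_spend = 15e9       # assumed year-1 datacenter cost base
growth = 1.25                 # assumed annual growth in that cost base

total_savings = 0.0
for _ in range(10):
    total_savings += datacenter_spend * savings_rate
    datacenter_spend *= growth

multiple = total_savings / acquisition_price
print(f"10-year savings: ${total_savings/1e9:.0f}B (~{multiple:.0f}x the purchase price)")
# Under these assumptions the savings alone repay the purchase many times over;
# a larger cost base or savings rate pushes toward the 100x figure cited above.
```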
From the Big Tech point of view, if AI is a $10T+ revenue opportunity, and your research team size scales sublinearly with revenue, capping out at a few hundred researchers, is the difference between spending $5M, $10M, or $20M per researcher per year enough to stop you? $10B per year in researcher comp is less than a quarter of Meta’s annual capex. No matter the odds of ultimate product-market fit, the sunk cost is too large to turn back now.
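A quick sanity check of that arithmetic – the researcher count and comp levels are assumptions in the range discussed above, and the capex figure is a rough order of magnitude rather than an exact number:

```python
# Sanity check of the researcher-comp arithmetic; the researcher count and comp
# levels are assumptions, and the capex figure is a rough order of magnitude.
researchers = 500             # "a few hundred researchers"
annual_capex = 65e9           # assumed annual capex for comparison

for comp_per_researcher in (5e6, 10e6, 20e6):
    total_comp = researchers * comp_per_researcher
    print(f"${comp_per_researcher/1e6:.0f}M/researcher/yr -> "
          f"${total_comp/1e9:.1f}B/yr ({total_comp/annual_capex:.0%} of capex)")
# Even the top end (~$10B/yr at $20M per researcher) stays well under a quarter of capex.
```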
Even in 2014, the AI talent wars existed: Meta was reportedly the other bidder in the Google-DeepMind deal. But why didn’t pricing for top talent run up sooner? The confluence of compute leverage, demand urgency, and supply constraint means the labor share of value is higher than in prior technological waves:
Compute leverage: Large labs have spent tens of billions of dollars on compute clusters, and are ramping to hundreds of billions. If the utility of compute is a function of compute × research efficiency, and well-used compute drives value across a massive revenue base, the willingness to pay for top researchers scales with the size of the compute bet (see the sketch after this list).
Demand urgency: AI products distribute faster than software or internet products – chatbots and codegen being two early examples – so the urgency to get ahead is extreme as the market pecking order is established.
We’re still in an indefinite R&D phase of AI, where research quality determines frontier product status. If the codegen market is representative of frontier pricing power, the delta in enterprise value capture between being 2 months ahead and 2 months behind has never been larger.
Supply constraint: If you think the key end markets will be decided in the next 1-2 years across the product categories that matter, there’s little time to train up new talent, and only a few hundred people currently have the skills to harness frontier model capabilities.
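To see why the willingness to pay tracks the compute bet (per the compute leverage point above), here is a minimal sketch under the utility = compute × research efficiency framing. The efficiency lift and spend levels are illustrative assumptions, not any lab’s actual numbers:

```python
# Minimal sketch of the compute-leverage point; the efficiency lift and spend
# levels are illustrative assumptions, not any lab's actual numbers.
def researcher_value(compute_spend: float, efficiency_lift: float) -> float:
    """If utility ~ compute x research efficiency, a researcher's lift in
    efficiency is worth that fraction of the compute-driven value."""
    return compute_spend * efficiency_lift

lift = 0.02  # assume a top researcher improves research efficiency by 2%
for spend in (10e9, 100e9, 300e9):  # cluster spend ramping over time
    print(f"${spend/1e9:.0f}B of compute -> a {lift:.0%} lift is worth "
          f"${researcher_value(spend, lift)/1e9:.1f}B")
# The same researcher is worth ~10x more as the compute bet grows 10x.
```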
No single analogy is perfect, but we can learn a lot from athletes, actors, and traders, where the best are worth 10x or 100x the average. All three categories have tremendous capital magnification – a superstar needs expensive infrastructure (compute clusters, risk systems, studio marketing, training facilities). Managing these superstars has a few unique properties:
Discovery: For non-urgent hires, athlete scouting could translate nicely to AI labor: spot young talent before its price runs up. In Hollywood, spotting early talent is, much like evaluating AI researchers, more art than science; until their “break” performance, actors are hard to value.
Negotiation / pricing: CAA emerged in the 1970s to build up acting talent, package players together as teams, and increase the protections for stars. Like in sports and acting, researchers are already starting to use talent agents to represent themselves. For lower-priced talent, Hollywood guilds emerged to set day-rates as a collective bargaining mechanism to protect the labor.
In sports, moneyball shifted talent pricing from intuition to statistics. But unlike in sports, researcher impact is hard to quantify, especially ahead of time. Even in hindsight, the corporate history gets rewritten by the victors, and it is often challenging to attribute research breakthroughs to any single person.
Retention: On Wall Street, losing a star trader is costly in alpha leakage, and financial institutions fight hard to lock people in with non-competes and garden leave policies. In tech, it’s harder to prevent information diffusion when researchers are in the same social circles, but the labs are becoming more locked down from an infosec perspective. Most secrets are held across many labs, but a small fraction are proprietary; and in many submarkets, marginal model edge determines pricing power.
Hypercapitalism erodes Silicon Valley’s trust culture. Industry-level trust alone no longer guarantees loyalty between companies and talent. With trade secret leakage risk and money big enough to tear teams apart, vanilla at-will employment contracts don’t protect either side.
The industry needs a SAFE equivalent for tech talent. New employment contracts must satisfy demands from both companies and talent:
Company side: I expect companies to push for more aggressive trade secret protection, stricter NDAs (SSI-esque secrecy), more exclusivity (named-competitor noncompetes), and garden leaves to safeguard against flighty talent. I’m not an employment lawyer, and non-competes get tricky in California, but I wouldn’t be surprised to see heavier measures to approximate exclusive contracts, like deferred compensation or equity clawback agreements for defection to a named list of competitors.
Talent side: AI researchers are one of the only super-earning groups outside of finance without negotiating help, and seem poised for professional representation or collective bargaining. To match public company alternatives, startups may need to offer liquidity guarantees or early IPOs. Rank-and-file employees also need new mechanisms to ensure that founders and execs won’t walk away without them.
We’re in the early days of labor repricing – big tech’s AI capex investments are so large that they already have the sunk cost, and the labor as a percentage of total investment is still low. Companies must re-think their recruiting and retention strategies.
Companies
The talent war is a net-consolidating force on the AI research frontier. At the research labs, big dollars for researchers make it nearly impossible for new entrants to play. For the same reasons, it’s nearly impossible to start a new quant fund – you can’t get the same leverage out of the talent that big players can.
In the tradeoff between money and mission, the money has gone parabolic. Founders use both to magnetize top talent to their companies, but as the capital opportunity cost increases, only the strongest missions can justify the economic sacrifice that candidates make. To the credit of both OpenAI and Anthropic, money alone has not been enough for the best researchers to defect – their cult status effectively creates a multiplier on their R&D budgets.
The labs feel the talent war most directly, but all startups now require extreme resource aggregation to make AI R&D bets. When the opportunity cost for top talent – both founders and engineers – is higher, it becomes harder to coordinate that talent around an early-stage bet. SSI, Thinking Machines, and Physical Intelligence all required massive funding rounds for a shot on goal. A single research hire can cost the entirety of a Series A fundraise, making AI R&D far more expensive for startups and pushing most to live above the APIs.
A startup industrial complex around the Seed → Series A → Series B progression emerged in the 2010s to support the growth of software companies. Some companies still follow this pattern successfully: Harvey, Abridge, Glean, and others. But I believe that going forward, an increasing share of startup successes will have a “fat pitch” founding story: incubations with stacked founding teams, high institutional credibility on day one, and uniquely powerful missions.
Modern successes like SpaceX, Anduril, and OpenAI could not be built as lean startups. They are too long-horizon and capital intensive to work through the traditional apparatus. The most promising tech frontiers often have high activation energy – foundation models, robotics, biology – where mega-rounds are the only way to bridge to the future. The AI capital influx means that mega-projects no longer seem outlandishly expensive. This is good for the world!
On the big tech side, the talent wars thin the playing field to companies with 1) tens of billions in net income that they can cut into, and 2) leaders with founder-like agency that will heavily sacrifice earnings for a seat at the AI table. A sharper power law will create a new “giga-cap” class, with multiple $10T companies by 2035.
Some subset of startup winners will have a similar formula to what worked in the 2010s: small, scrappy team, building iteratively until they crack product-market fit. An increasing percentage of the new winners will have large war chests and strong missions from day one. AI talent wars will be a net-consolidating force.
Investors
Being a rigid seed or Series A-only investor in 2025 is anachronistic. Should you simply ignore the most important tech companies of this generation?
Investors must be more flexible than prior generations. The best companies won’t map to the predictable fundraise sequence of the past 20 years. Rapid product adoption will require investors to swallow their pride and admit misses much more quickly. For some companies investors passed on 6 months ago, the right decision is to invest today at 2-3x the valuation.
At the early stage, a new deal consideration has emerged: investors evaluate companies with the team quality constituting their downside case. Character and other “talent deals” make investors think that they can’t lose money investing in top tier research teams – it’s almost like investing in an AI researcher labor union. As long as the company exits for more than the cumulative fundraising amount, investors get paid back first. Investors then justify putting more dollars in at an earlier stage than they would otherwise.
SSI and Thinking Machines both follow this pattern. Investors don’t even need to scrutinize the exact technical approach, because the upside of AGI is infinity (1% chance of breakthrough → $10T+ company). If you believe you can’t lose money given the team quality, the upside case is almost like a “free” call option.
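As a toy model of that payoff asymmetry – every probability, valuation, and ownership figure below is an illustrative assumption, not an estimate for any specific company or round:

```python
# Toy model of the "team quality as downside case" logic; all inputs are
# illustrative assumptions, not estimates for any real company or round.
invested = 1e9              # assumed capital into a stacked research team
ownership = 0.10            # assumed stake bought with that capital
p_breakthrough = 0.01       # the text's "1% chance of breakthrough"
breakthrough_value = 10e12  # "$10T+ company"
acquihire_exit = 1.5e9      # assumed "talent deal" exit if the research stalls

# With a 1x liquidation preference, investors are made whole in the downside
# as long as the exit covers cumulative fundraising.
downside_payoff = min(invested, acquihire_exit)
expected_value = (p_breakthrough * breakthrough_value * ownership
                  + (1 - p_breakthrough) * downside_payoff)
print(f"EV on ${invested/1e9:.0f}B invested: ${expected_value/1e9:.1f}B "
      f"(downside ~money back, upside ~{breakthrough_value * ownership / invested:.0f}x)")
```

The shape is the point: if the downside really is money back, the upside term dominates no matter how small the probability – which is why mis-judging the team, and hence the downside floor, is the costly error.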
But if VCs assess the talent wrong – over-estimate the talent, or over-estimate the talent’s commitment to the company – they can get nuked on big slugs of capital. Even if a team gets to a technical breakthrough, capturing the value isn’t guaranteed. Research teams that achieve technical breakthroughs are not necessarily the ones that get product and sales right.
Historically, the social contract of starting a company meant the founders would see it through to an exit. But how big do the numbers need to get before that breaks? People didn’t use to leave companies they founded, especially while those companies were nascent and/or highly valued, but the AI talent war is deterritorializing. This fragility enables a CEO or key execs to leave their company with minimal recourse.
Like the founder <> researcher social contract, the founder <> investor social contract also needs to be reconfigured for this new world, particularly for research-heavy teams:
Key man clauses: Just as VC funds themselves often have key man clauses on key investors to protect LPs, company investors need protection against extreme instances of talent departure, particularly when it involves the founders. One instantiation: founders taking new jobs would qualify as an M&A event, or would allow investors to redeem their capital.
Dilution: For any company participating in the AI talent war, investors need to factor in more option pool dilution than they would otherwise. The share of value captured by employees vs. the capital layer may look fundamentally different for this type of company. Some may need fewer employees to win, but for many categories, AI companies will be structurally less profitable and more dilutive than internet companies.
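As a simple illustration of how a bigger option pool shifts value away from the capital layer – the stake and pool sizes below are assumptions for illustration, not a view on actual terms:

```python
# Simplified dilution sketch; the stake and pool sizes are illustrative assumptions.
def effective_stake(stake_at_close: float, later_pool_expansion: float) -> float:
    """Investor ownership after a post-round option pool expansion dilutes all
    existing holders pro rata (ignoring refreshes, follow-on rounds, etc.)."""
    return stake_at_close * (1 - later_pool_expansion)

stake = 0.20  # assume 20% bought in the round
for pool in (0.10, 0.20, 0.30):
    print(f"{pool:.0%} later pool expansion -> effective stake {effective_stake(stake, pool):.1%}")
```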
Only lead checkwriters of large rounds have enough leverage to command these terms. This makes it harder than ever to be a pure early-stage investor in AI research-driven companies.
As an investor, you need the founders you back to have an answer to the talent war: either a cult-like missionary following, or a clear path to winning the mercenary game with stakes higher than ever.
Conclusion
In the 2010s software bull market, success in the startup world was broad, and it felt like everyone could win (anyone could start a software company, or at least join / invest in one). That’s still somewhat true; lots of people have built multi-$m ARR AI businesses in short order. The one-person unicorn concept suggests that anyone can start a big company using AI.
But in the new world, the concentration of outcomes will be different, both at the talent and company levels. Fewer companies getting more funding and revenue, fewer employees getting paid more. Only the fiercest founders and strongest missions can offset the inflection in mercenary market forces.
High earners usually avoid attention, but splashy 9-figure researcher offers draw significant public interest. There is a human bias against accepting singular winners (of talent, of companies); it doesn’t feel fair to have a few people run away with big markets. There is something uniquely unstable about a more uneven distribution of success. The French had an unusually high Gini coefficient before the Revolution.
The M&A talent war is just beginning, raising compensation baselines and increasing labor promiscuity. To protect against the deterritorialization, I expect new labor dynamics to emerge on both sides of the table: agents, unions, aggressive non-compete tactics. As the numbers get bigger for talent and companies, all sides need to reimagine the social contract. As the glue holding teams together, company mission matters more than ever.
The AI talent wars will rewire Silicon Valley.
Thanks to Axel Ericsson, Philip Clark, Melisa Tokmak, Joey Krug, Cat Wu, Will Manidis, Robert Windesheim, Lachy Groom, and Will Depue for their thoughts and feedback on this article.