- By Fox News
- 22 Nov 2024
If you want to raise ungodly amounts of money, you better have some godly reasons.
That's what Anthropic CEO Dario Amodei laid out for us on Friday in more than 14,000 words: otherworldly ways in which artificial general intelligence (AGI, though he prefers to call it "powerful AI") will change our lives. In the blog, titled "Machines of Loving Grace," he envisions a future where AI could compress 100 years of medical progress into a decade, cure mental illnesses like PTSD and depression, upload your mind to the cloud, and alleviate poverty. At the same time, Anthropic is reportedly hoping to raise fresh funds at a $40 billion valuation.
Today's AI can do exactly none of what Amodei imagines. By his own admission, it will take hundreds of billions of dollars' worth of compute to train AGI models, built with trillions of dollars' worth of data centers and drawing enough energy from local power grids to keep the lights on for millions of homes. Not to mention that no one is 100 percent sure it's possible. Amodei himself says: "Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses."
AI execs have mastered the art of making grand promises before massive fundraising rounds. Take OpenAI's Sam Altman, whose "The Intelligence Age" blog preceded a staggering $6.6 billion round. In it, Altman stated that the world will have superintelligence in "a few thousand days" and that it will lead to "massive prosperity." It's a persuasive performance: paint a utopian future, hint at solutions to humanity's deepest fears - death, hunger, poverty - then argue that only by removing some redundant guardrails and pouring in unprecedented capital can we achieve this techno-paradise. It's brilliant marketing, leveraging our greatest hopes and anxieties while conveniently sidestepping the need for concrete proof.
The timing of this blog also highlights just how fierce the competition is. As Amodei points out, a 14,000-word utopian manifesto is pretty out of character for Anthropic. The company was founded after Amodei and others left OpenAI over safety concerns, and it has cultivated a reputation for sober risk assessment rather than starry-eyed futurism. That reputation is why it continues to poach safety researchers from OpenAI. Even in last week's post, Amodei insists Anthropic will prioritize candid discussions of AI risks over seductive visions of a techno-utopia.
But safety isn't a thrilling pitch deck subject, and new challengers are emerging daily. OpenAI cofounder Ilya Sutskever has launched his own AI lab, and departing OpenAI CTO Mira Murati is rumored to be starting her own venture. Amodei's blog isn't just about selling his vision - it's about staying competitive.
Despite his stated aversion to grandiose claims, he spends thousands of words painting a future where AI could reshape humanity's destiny (a register that closely matches Altman's silver-tongued pitches, which are legendary in tech circles). The post makes only one mention of alignment, aka ensuring AI systems operate in accordance with human values, and no mention of safety. This contradiction exposes a harsh truth: in the high-stakes world of AI development, even those wary of hype find themselves forced to play the game of grand promises to secure crucial funding.
Right now, there is an incredible gap between what this technology is capable of and the utopia that AI leaders imagine. This has led to a lot of fair criticism: how is AI going to change the world in just a few years if it can't even reliably count the number of R's in the word "strawberry"? What AI excels at now is automating routine tasks and analyzing massive datasets, identifying patterns with increasing precision. Those strengths are already aiding fields like finance and medicine, as well as autonomous vehicles. But while these advances are impressive, claims like Amodei's that AI might "structurally favor democracy" feel premature and overly optimistic. It's worth noting that AGI believers consider his blog post a bit too tame (which Amodei anticipated, writing in the footnotes that they should "touch grass").
Amodei's blog post isn't for the average AI-curious reader, nor is it a convincing roadmap for the future of AGI - again, a thing that doesn't and may never exist in the form the AI industry envisions. It's a pitch for investors: back Anthropic, and you're not just funding a company; you're buying a stake in humanity's radiant future. With varying levels of subtlety, that's what every AI executive - Altman at OpenAI, Elon Musk at xAI, Sergey Brin at Google - is promising. Get on board with their vision of the future or be left behind.
For all of AI's novelty, tech titans have long peddled their innovations as world-saving solutions. Mark Zuckerberg pitched universal internet access as a poverty cure. Brin once stated that Google could "cure death." Musk framed SpaceX's interplanetary ambitions as the ultimate backup plan for our species (and more recently, an imperative to vote for Donald Trump). But when there's a finite amount of investor money going around, altruism is a zero-sum game. As a tech mogul in Mike Judge's Silicon Valley famously put it, "I don't want to live in a world where someone else makes the world a better place better than we do."
Amodei is clearly aware of the risk of sounding hyperbolic. "AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they're attempting to distract from downsides," he writes. That hasn't stopped him from playing his part.
"It is a thing of transcendent beauty," Amodei caps his blog. "We have the opportunity to play some small role in making it real."