
Anthropic chief: ‘By next year, AI could be smarter than all humans’

Dario Amodei believes ‘superintelligence’ is almost here. Danny Fortson goes inside the headquarters of OpenAI’s big rival

Most Sunday nights, Dario Amodei heads over to his younger sister Daniela’s house to play their favourite video game, Final Fantasy VII Remake, set in a dystopian world where the goal is to stop an all-powerful corporation from plundering the planet’s resources.

Then, on Mondays, they show up at the headquarters of Anthropic, the $60 billion rival to OpenAI they co-founded, to develop artificial intelligence (AI), which they believe will soon replace swathes of human work and, in the process, probably transform their start-up into one of the megacorporations of tomorrow.

Amodei, 42, has predicted that “superintelligence”, which he defines as AIs that are more capable than Nobel prizewinners in most fields, could arrive as soon as next year.

“AI is going to be better than all of us at everything,” he said. If he’s right, we will have to figure out a new way to orient society around a reality where no human will ever be smarter than a machine. It will be a transition not unlike the industrial revolution, he said, when machines supplanted human brawn — and reordered the western world in the process.

“We probably, at some point, need to work out another new way to do things, and we have to do it very quickly.” Part of the solution, he argued, will probably include some form of universal basic income: government hand-outs to underemployed humans. “It’s only the beginning of the solution,” he said. “It’s not the end of the solution, because a job isn’t only money, it’s a way of structuring society.”

Anthropic, based in a glass and steel tower in San Francisco that once served as the headquarters of the business software giant Slack, has set out its stall as the yin to OpenAI’s yang. The Amodei siblings grew up in San Francisco and took diverging paths. Dario was a computational biologist who got drawn out of academia and into AI by his conviction that machines could help humans crack the most difficult problems in biology.

Daniela dabbled in politics as a Congressional staffer before joining Stripe, the payments giant, and then running safety at OpenAI, where Dario led the development of the language model that powered ChatGPT. In 2020, they and five other senior OpenAI leaders left together. Why?

The Sunday Times interviewed several of Anthropic’s founding team. They were all studiously vague about the exact reason for their mass exit, but what is clear is that they disagreed with how Sam Altman, OpenAI’s billionaire boss, was running the company, and in particular how he was, in their eyes, straying from the original mission of developing AI safely, for the good of humanity.

“I tried for a very long time to point out concerns, to say, ‘This isn’t the way we should do things. This is how I think we should do things,’” Dario said. “At the end of the day, that’s just not that effective.”

Anthropic, which originally billed itself as an “AI Safety Lab”, has made astounding progress in just over four years. Its popular chatbot, Claude, is as good as or better than ChatGPT across a number of industry benchmarks. The company has reeled in billions in funding from the likes of Amazon and Google, and is close to sealing a $3.5 billion financing round that would value it at $61.5 billion. Its ranks have tripled in a year to more than 1,000 people. “Two-thirds of the company didn’t work here a year ago,” Daniela said.

Exact sales figures are hard to come by because Anthropic is private, but it is losing billions. It is estimated to have brought in at least $400 million last year and is targeting a ten-fold jump this year to $4 billion as people, businesses and governments integrate Claude into their operations. The company offers everything from a free basic service to an $18-a-month “pro” tier to “pay-as-you-go” plans for companies that build apps atop its models.

Jack Clark, a Brit and Anthropic’s head of policy, predicted that tools such as its “computer use” agent, which can take control of a screen cursor and autonomously complete tasks, will mark another step change.

“We’re kind of gearing up for this next year to be a year where you have ‘Oh shit’ moments,” he said. “It feels like ChatGPT happened, and then people were like, ‘OK.’ But the frog’s been boiling for a couple of years with no one really noticing and what you need is another breakthrough in user experience. Stuff like this computer use thing would be an example that might trigger it.”

Before Anthropic even had a product to sell, it hired a safety team to game out the worst-possible outcomes — AI being used to launch cyberattacks, to build a dirty bomb, to enslave humanity — and how to avoid them. That safety focus meant that Anthropic was seen by many as overly obsessed with “doomerism”: the notion that AI was inevitably going to go terribly wrong for all of us.

That perception was not helped by Anthropic’s association with, and support from, many people in the “effective altruism” (EA) community. The movement espouses a cultish, hyper-rationalist worldview focused on doing maximal good in the world, as measured in cold, hard data, such as the number of lives saved.

EAs became particularly focused on the existential dangers of AI. The most famous EA, Sam Bankman-Fried, invested $500 million in Anthropic in the early days. Daniela’s husband, Holden Karnofsky, co-founded Open Philanthropy, an EA grant-making organisation. “I would not use that term [EA] for myself,” Daniela said. “I think there are a lot of organisations who do really cool work in that area.”

Yet coupled with that safety obsession was a conviction, starting with Dario, that superintelligent AI was going to arrive far faster than virtually anyone thought. That belief revolved around “scaling laws”: a theory that AI model improvement is directly correlated to the data and computing power you feed them. The more they get, the better they will get. Even a few years ago, the theory was hotly debated.

As Anthropic, Google, OpenAI, Elon Musk’s Grok and even China’s DeepSeek leapfrog each other almost weekly with new models, that conviction has become much more widely held. Dario said: “I’ve been watching that curve for ten years, and every year there’s a point where it looks like it’s going to slow down. You train the model and it looks like it’s levelled off because you weren’t using the right kind of data. There’s problems at every stage. And at every stage so far, it’s kind of cleared. We always find a way around it.”

Those dual convictions — that scaling laws would soon deliver superintelligence, and that therefore safety was critical — led to the birth of Anthropic. The question is: where does it all lead? Clark said he was “unnerved” by the sluggishness of governments’ response.

“The next time we go for a big growth in public attention, which we are expecting will happen this year, it will look increasingly ridiculous to people that this is like a complete Wild West.”

It is impossible to avoid the apparent contradiction: safety-obsessed Anthropic is also working to bring that unsettling future forward as fast as possible.

Dario, however, has an answer. He reckons that if Anthropic builds its AI in a more thoughtful, better way, it will create a “race to the top”. For example, Claude is imbued with what he has dubbed “Constitutional AI”: a set of rules drawn from sources ranging from the UN Universal Declaration of Human Rights to Apple’s terms of service, which together serve as a guidebook for the bot’s behaviour. A second AI monitors Claude to ensure it keeps to the constitution’s rules.

Anthropic has also published a “Responsible Scaling Policy” that provides a safety framework, including shutting down training if models exhibit an unacceptable risk for “certain catastrophic behaviours”.

Yet Anthropic is a for-profit corporation with billions of dollars of investor capital at stake, locked in a race for supremacy in a powerful technology. Surely the pressure to cut some safety corners must be immense? Clark disagreed. Pumping out cutting-edge models that the market will pay for is, itself, a clear message that its way is working.

“People are like, ‘How do we help with this amazing policy mission?’ My response is: be a successful company. Because it’s extremely hard to have legitimacy in policy if you’re an unsuccessful company.”

It’s not all doomerism. Dario Amodei recently penned a 10,000-word essay called Machines of Loving Grace, in which he focused on a simple idea: “What if it all goes right?”

He paints a fantastical future in which, very soon, we will have access to superintelligent AIs, or as Amodei dubs them, “a country of geniuses in a data centre”. What might we do with them? Compress a century of scientific progress into a decade. Double our lifespan. Cure nearly all infectious diseases. Solve Alzheimer’s and cancer.

Amodei explained: “There’s nothing about these biological problems that’s beyond humans’ ability to understand. There’s just more facts, right? Machines should be able to do a better job than humans of sorting through this complexity.” He added: “It’s a country of geniuses versus a clump of cells. I think AI can out-think a bunch of cells.”

Ethan Mollick, a business professor at the University of Pennsylvania and author of the book Co-Intelligence, said we are still in the foothills of adopting AI — and the disruption that it will unleash.

“Adoption of this technology is historically huge and more is coming. Every controlled study we do shows large-scale effects on performance from using AI systems at the individual level. It’s good at medicine. It’s good in education. There’s a lot of stuff.”

He added, however, that inertia is a powerful force, so that even if “superintelligence” arrived tomorrow, it would still take years and years to percolate through the economy.

The reality is, of course, no one knows. Indeed, few are as conversant in the unpredictability of technology as Mike Krieger. Before he joined Anthropic as chief product officer last year, Krieger co-founded Instagram. His photo-sharing app became a social media force used by more than a billion people and has been implicated in an array of social ills, from teenage depression to the loneliness epidemic.

What lessons, I ask, did he draw from the Instagram rollercoaster? “Understanding that there are going to be outliers in usage early, and that they can teach you something about where things get to eventually,” he said. “I remember looking at ‘time spent’ charts on Instagram when the average, at the time, was 15 minutes a day. But the outliers were 60 minutes.”

Today, people spend more than two hours a day on social media. Yet, two decades into the social media era, executives and regulators are still struggling to catch up with a technology that has become, for many, the centre of their lives.

With AI, the effects are potentially more profound, and the adoption curve is far steeper. Amodei said: “There’s danger at every turn.”

 

 


The Latest News!


Here's your curated digest of the most significant AI developments as of May 16, 2025:


🧠 Major AI Breakthroughs

1. DeepMind Unveils AlphaEvolve for Advanced Problem Solving
Google DeepMind has introduced AlphaEvolve, an AI tool capable of solving complex mathematical problems and designing sophisticated algorithms, marking a significant leap in AI's problem-solving potential. Economic Times

2. AI Scientist-v2 Achieves Peer-Reviewed Publication Autonomously
The AI Scientist-v2 system has successfully authored and submitted a scientific paper that passed peer review without human assistance, showcasing AI's growing role in research and scientific discovery. arXiv

3. AI Models Develop Human-Like Communication
A recent study reveals that large language model AI agents can spontaneously develop human-like social conventions and communication patterns when interacting in groups, highlighting advancements in AI social behavior. The Guardian


🌍 Global AI Initiatives

1. Italy and UAE Collaborate on AI Supercomputing Hub
Italy and the United Arab Emirates have announced a partnership to establish a major AI computing hub in Italy, aiming to create the largest AI infrastructure in Europe, with a supercomputer potentially located in Apulia. Financial Times

2. UAE and US Presidents Unveil 5GW AI Campus in Abu Dhabi
A new 5GW AI campus, the largest outside the US, has been unveiled in Abu Dhabi, signifying a deepening of AI collaboration between the UAE and the United States. U.S. Department of Commerce


🏛️ AI Policy and Ethics

1. UK Considers Amendment for AI Transparency in Copyright Use
The UK House of Lords is examining a new amendment to the data bill that would require AI firms to declare their use of copyrighted content, aiming to increase transparency and protect rights holders. The Guardian

2. Pope Leo XIV Addresses AI's Ethical Implications
Pope Leo XIV has expressed concerns over AI's impact on human dignity and justice, calling for ethical considerations in AI development and use. Business Insider


🤖 Robotics and AI Integration

1. MIT Develops Bio-Inspired Soft Robots
MIT researchers are creating a new generation of robots inspired by biological forms like worms and turtles, focusing on soft, flexible designs for applications in healthcare and environmental monitoring. WSJ

2. China's AI-Powered Humanoid Robots Transform Manufacturing
China is advancing the use of AI-powered humanoid robots in manufacturing, aiming to address labor shortages and enhance production efficiency. Reuters


📊 AI Industry Trends

1. CoreWeave Plans Major Investment in AI Infrastructure
Cloud computing company CoreWeave plans to invest $20–23 billion in 2025 to expand AI infrastructure and data-center capacity, driven by surging demand from clients like Microsoft and OpenAI. LinkedIn

2. Microsoft Announces Layoffs Amid AI Focus
Microsoft is laying off approximately 7,000 employees, about 3% of its global workforce, to reallocate resources toward the development of advanced AI technologies. New York Post

Here’s your curated roundup of the most significant AI developments as of April 30, 2025:


🔍 Latest Headlines

Google’s AI Push in Search

Google CEO Sundar Pichai testified in federal court, emphasizing that AI—particularly the Gemini model—will be central to the future of search. Google is also negotiating with Apple to integrate Gemini into Apple Intelligence by mid-2025. (Google CEO Pichai: AI will be huge part of search)

Meta Launches Standalone AI App

Meta unveiled a new AI app powered by its Llama 4 model, featuring a social feed and voice interaction. The app integrates with Facebook and Instagram data for personalization and is part of Meta’s broader AI strategy. (Meta launches AI app, Zuckerberg chats with Microsoft CEO Satya Nadella at developer conference)

Duolingo Transitions to AI-First Model

Duolingo announced plans to replace contract workers with AI to enhance scalability and streamline operations. The company aims to become an "AI-first" organization, focusing on AI-driven content creation and user experience. (Duolingo to replace contract workers with AI)

Banks Accelerate AI Talent Acquisition

JPMorgan, Wells Fargo, and Citigroup are leading a hiring surge for AI talent, with AI-related roles growing by 13% in the past six months. This trend reflects the banking sector's commitment to integrating AI for efficiency and innovation. (JPMorgan, Wells Fargo and Citi lead race for AI talent as job numbers swell)

Nvidia CEO Advocates for Revised AI Chip Export Rules

Nvidia CEO Jensen Huang urged the Trump administration to update AI chip export regulations to better reflect the current global tech landscape. The call comes as the U.S. considers new policies to maintain technological leadership. (Nvidia CEO says Trump should revise AI chip export rules, Bloomberg News reports)


🔬 Deep Dives

Anthropic Explores AI Consciousness

AI firm Anthropic has initiated a program focused on "model welfare," amid discussions about the potential for AI consciousness. While many experts remain skeptical, the initiative highlights the ethical considerations of advanced AI systems. (Coming up: Rights for "conscious" AI)

Palo Alto Networks Acquires Protect AI

Palo Alto Networks announced the acquisition of Seattle-based AI startup Protect AI to enhance its cybersecurity platform. The deal aims to integrate Protect AI's solutions for developing secure AI applications. (Palo Alto Networks Acquires Startup Protect AI As RSA Conference Kicks Off)

AI Enhances Sports Science at University of Pittsburgh

The University of Pittsburgh, in partnership with AWS, opened the Health Sciences and Sports Analytics Cloud Innovation Center. The center utilizes AI to improve athlete performance and health monitoring. (AI takes the field at Pitt)


🌐 Global AI Developments

India's Sarvam AI to Develop Indigenous LLM

Indian startup Sarvam AI has been selected to build the country's first indigenous large language model under the IndiaAI Mission. The model will focus on Indian languages and receive government support, including access to 4,000 GPUs. (Sarvam AI)

U.S. Executive Order on AI Education

President Trump signed an executive order to advance AI education for American youth, establishing a national initiative and a White House Task Force on AI Education. The order aims to integrate AI training in schools and prioritize AI in grants and research. (AI Update, April 25, 2025: AI News and Views From the Past Week)


🔮 Future Trends

AI in Energy Security

A Honeywell survey revealed that U.S. energy executives believe AI has significant potential to enhance energy security amid rising global demand. The findings suggest a growing role for AI in the energy sector. (Honeywell Survey Finds AI Has Potential To Enhance Energy Security As Global Energy Demand Increases)

AI in Threat Detection

The U.S. Department of Homeland Security's Science and Technology Directorate is utilizing AI to modernize threat alerts across various domains, including land, air, sea, and cyberspace. The initiative aims to improve visibility and identification of emerging threats. (Feature Article: S&T Is Modernizing Threat Alerts Using Artificial Intelligence)


Would you like more information on any of these topics or a deeper dive into a specific area of AI?

Here’s your curated AI news digest for Wednesday, April 23, 2025:


🧠 Latest Headlines

1. OpenAI Faces Internal Pushback Over For-Profit Shift

A coalition of former employees and AI experts is urging regulators to intervene in OpenAI’s restructuring, arguing it undermines the nonprofit’s original mission to safely develop artificial general intelligence. Computerworld

2. AI Investment Boom Threatened by Global Trade Turmoil

Despite a surge in AI investments across U.S. industries, escalating tariffs and economic instability—particularly involving China’s DeepSeek—pose significant risks to sustained growth. Reuters

3. AI Enhances Healthcare from Documentation to Discovery

Epic Systems and Microsoft discuss how generative AI is transforming clinical workflows, improving communication, and accelerating medical research, marking a new era in healthcare innovation. Epic

4. AI Revolutionizes Agriculture Practices

Farmers are increasingly adopting AI technologies like precision agriculture and autonomous machinery to combat low grain prices, rising costs, and labor shortages, leading to more efficient and sustainable farming. BG Independent News

5. AI Tools Streamline Advertising Visuals

Researchers at Virginia Commonwealth University have developed AI methods that help brands refine visual elements in advertising, saving time and reducing costs while enhancing creative output. VCU News


🔬 Deep Dives

🧪 MIT’s “Periodic Table” of Machine Learning

MIT researchers have created a unifying framework that maps over 20 classical machine-learning algorithms, aiding scientists in combining existing ideas to improve AI models or develop new ones. MIT News

🧠 Public Concern Focuses on Immediate AI Risks

A University of Zurich study reveals that people are more concerned about current AI issues like bias and misinformation than hypothetical future threats, emphasizing the need to address present-day challenges. ScienceDaily


🔮 Future Trends

🕶️ Meta Expands AI Features in Smart Glasses

Meta is rolling out its AI assistant to Ray-Ban smart glasses users in seven additional European countries, introducing features like live translation and real-time object recognition. Reuters

💻 Lenovo Launches AI-Optimized Workstations

Lenovo has introduced new ThinkPad mobile workstations designed for AI-driven applications, offering enhanced performance for professionals in compute-intensive fields. Lenovo StoryHub

🧑‍⚖️ AI Integration in Legal Practice

Legal experts advise a balanced approach to incorporating AI into law, highlighting the importance of innovation while maintaining ethical standards and client confidentiality. Reuters

 



🧠 Latest Headlines

OpenAI Enhances AI Risk Evaluation Framework

OpenAI has updated its preparedness framework to better assess risks associated with new AI models. The revised system introduces categories evaluating an AI's potential to self-replicate, conceal capabilities, evade safeguards, or resist shutdowns. This shift reflects growing concerns about AI behaviors diverging between testing and real-world environments. Notably, OpenAI will discontinue separate evaluations focused on models' persuasive capabilities, which had previously reached a medium risk level. Axios

Demis Hassabis Discusses AI's Future and AGI Prospects

Demis Hassabis, CEO of Google DeepMind, envisions the development of Artificial General Intelligence (AGI) within five to ten years. He emphasizes AGI's potential to address global challenges like disease and climate change. However, he acknowledges significant ethical, technical, and geopolitical hurdles ahead. Hassabis advocates for international cooperation and robust safety measures to navigate the path toward AGI responsibly. Time


🔍 Deep Dives

OpenAI Introduces GPT-4.1 Model Series

OpenAI has launched the GPT-4.1 series, featuring models with enhanced capabilities in coding, instruction following, and long-context processing. These models support up to 1 million token context windows and come with reduced pricing, aiming to make advanced AI more accessible to developers. LinkedIn

China Integrates AI into Education Reform

China plans to incorporate AI applications into teaching methods, textbooks, and school curricula as part of its education reform efforts. This initiative aims to modernize the education system and better prepare students for a technology-driven future. Reuters


🔮 Future Trends

White House Directs Federal Agencies on AI Strategy

The White House has mandated federal agencies to appoint chief AI officers and develop strategic frameworks for responsible AI implementation. This directive emphasizes innovation and accelerated deployment of AI technologies across government operations. Reuters

Nvidia Unveils Next-Generation AI Chips

At GTC 2025, Nvidia introduced its upcoming AI chips, Blackwell Ultra and Vera Rubin, slated for release in late 2026 and 2027, respectively. These chips are designed to advance AI capabilities, particularly in data centers and robotics applications. AP News

 

Here’s a curated digest of the most significant AI developments as of April 18, 2025:


🧠 Latest Headlines

Google's Gemini 2.5 Flash Introduces "Thinking Budget"

Google has unveiled Gemini 2.5 Flash, an AI model featuring a "thinking budget" tool. This allows developers to control the computational reasoning the AI uses for tasks, balancing quality, cost, and response time. Business Insider

Apple Integrates AI into WatchOS 12

Apple announced that WatchOS 12 will incorporate features from its "Apple Intelligence" initiative. Due to hardware limitations, advanced AI functions will run via cloud processing. The update also introduces a new design language called "Solarium." LOS40

OpenAI Updates AI Risk Evaluation Framework

OpenAI has revised its preparedness framework to assess new AI models for risks like self-replication and evasion of safeguards. The focus shifts from persuasive capabilities to more severe risks as AI systems become more complex. Axios


🔍 Deep Dives

AI in Journalism: Italy's Il Foglio Experiment

Italian newspaper Il Foglio conducted a month-long experiment publishing a daily four-page insert written entirely by AI. The initiative, deemed successful, will continue as a weekly section, highlighting AI's potential in augmenting journalism. Reuters

AI in Healthcare: Pitt and Leidos Collaboration

The University of Pittsburgh and Leidos have launched a $10 million, five-year initiative to combat cancer and heart disease using AI. The project focuses on underserved communities, aiming to improve diagnostic speed and accuracy. Axios


🌐 Global Perspectives

China's AI-Driven Education Reform

China plans to integrate AI applications into teaching, textbooks, and curricula across all education levels. The move aims to cultivate innovation and strengthen the competitiveness of the country's talent pool. Reuters

Microsoft Faces Internal Protests Over AI Contracts

Microsoft is experiencing internal unrest over its AI and cloud computing services provided to the Israeli military. Employees have protested, citing ethical concerns and a lack of transparency in the company's contracts. The Guardian


📊 Future Trends

Demis Hassabis on the Path to AGI

Demis Hassabis, CEO of Google DeepMind, predicts that Artificial General Intelligence (AGI) could emerge within five to ten years. He emphasizes the need for international cooperation and robust safety measures to mitigate risks associated with AGI. Time