- Anthropic revealed 'dreaming' for AI agents, meant to refine their working memory and reduce errors.
- The company pushed related tools into public hands at its annual developer conference.
- Anthropic co-founder Jack Clark predicted this week that new tools will help AI self-improve.
Anthropic's bid to build self-improving AI agents just spawned a new, anthropomorphic technique dubbed "dreaming."
The high-flying AI lab announced the new process at its developer conference on Wednesday. "Dreaming" is part of Anthropic's push to overhaul software engineering and other knowledge work with increasingly autonomous tools. The new capability will plug into its nascent Claude Managed Agents product.
The technique is meant to refine a system's memory by running evaluations between sessions: it reviews past behavior, looks for patterns, and helps agents establish better ways of working and cut down on mistakes.
For now, the "dreaming" process is launching as a research preview, where developers must apply for access.
Anthropic's revenue has surged in recent months as software engineers embrace its Claude Code service and related offerings that help developers fire up agents to pursue long-running coding projects. The startup is trying to expand these capabilities beyond software engineering to sectors like finance and law.
Getting agents to remember and learn from their previous work could make Anthropic agents more accurate and productive over time, increasing their value to paying customers.
The new release also sheds light on a splashy essay published Monday by Anthropic co-founder and policy head Jack Clark. Writing on his Import AI newsletter, Clark posited that there's a 60% chance frontier AI models will be able to autonomously train their successors by the end of 2028.
He cited the current wave of research, as well as the rollout of new products from frontier labs. Clark didn't mention "dreaming," but it falls under the company's effort to turn its AI models into agentic workhorses that partially manage themselves.
"This significant rise in the length of time that AI systems can work independently correlates neatly with the explosion in agentic coding tools," Clark wrote in the essay. "This is the productization of AI systems which do work on behalf of people, acting independently for significant periods of time."
Also on Wednesday, Anthropic moved two Managed Agents tools out of research preview and into public beta. One uses rubric-style outcomes to guide agents; the other can delegate work to multiple sub-agents.