The scientific community has reported two breakthrough developments that bring us closer to creating truly autonomous AI agents. The key trend is a move away from dependence on external training datasets (structured data used for AI training) and toward building long-term, cumulative memory.
Researchers from Salesforce Research and Stanford introduced the Agent0 system, which eliminates the need for external training data entirely. Two agents derived from a single base model undergo a co-evolution cycle: one generates increasingly complex tasks (the Curriculum agent), while the other learns to solve them using integrated tools (the Executor agent).
This self-sustaining process enables AI to develop skills beyond human-provided knowledge. In tests, the Qwen3-8B-Base model showed significant gains: an 18% improvement in mathematical reasoning and a 24% improvement in general reasoning.
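To make the co-evolution cycle concrete, here is a minimal Python sketch of how such a loop could be organized. All names (BaseModel, generate_task, solve_with_tools, co_evolution_round) are illustrative placeholders rather than the authors' actual code, and the reward scheme is a rough simplification of the idea that the Curriculum agent should propose tasks at the frontier of the Executor's ability.

```python
# Minimal sketch of a curriculum/executor co-evolution loop.
# All class and method names are hypothetical placeholders, not the Agent0 API.

import random


class BaseModel:
    """Stand-in for a shared base LLM; both agents start from the same weights."""

    def generate_task(self, difficulty_hint: str) -> str:
        # In the real system this would prompt the model to write a new problem.
        return f"Solve a {difficulty_hint} math problem #{random.randint(0, 999)}"

    def solve_with_tools(self, task: str) -> tuple[str, bool]:
        # The executor would call tools (e.g. a code interpreter) here.
        answer = f"answer to: {task}"
        solved = random.random() > 0.5  # placeholder success signal
        return answer, solved

    def update(self, reward: float) -> None:
        # Placeholder for a reinforcement-learning update of the agent's policy.
        pass


def co_evolution_round(curriculum: BaseModel, executor: BaseModel, n_tasks: int = 8) -> float:
    """One round: the curriculum agent proposes tasks, the executor tries to solve them."""
    solved_count = 0
    for _ in range(n_tasks):
        task = curriculum.generate_task(difficulty_hint="challenging")
        _, solved = executor.solve_with_tools(task)
        solved_count += solved

        # Executor is rewarded for solving; the curriculum agent is rewarded for
        # tasks the executor cannot yet solve (a crude stand-in for "frontier" tasks).
        executor.update(reward=1.0 if solved else 0.0)
        curriculum.update(reward=1.0 if not solved else 0.2)
    return solved_count / n_tasks


if __name__ == "__main__":
    # Both agents are cloned from the same base model, as the paper describes.
    curriculum_agent, executor_agent = BaseModel(), BaseModel()
    for round_idx in range(3):
        solve_rate = co_evolution_round(curriculum_agent, executor_agent)
        print(f"round {round_idx}: executor solve rate = {solve_rate:.2f}")
```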
Meanwhile, BAAI, Beijing Polytechnic University, and the University of Hong Kong presented a new approach to memory formation called General Agentic Memory (GAM). Unlike most current systems, which rely on surface-level retrieval, GAM builds memory through deep exploration.
The system uses two components: a Memorizer, which compresses information, and a Researcher, which plans exploration and checks data for relevance. Because memory is distilled and verified rather than simply retrieved, knowledge acquired for one task can carry over to future tasks, forming domain expertise rather than serving as one-time context. The technology has already reached over 90% accuracy on complex multi-step problems.
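A rough illustration of how a Memorizer/Researcher pair could work together is sketched below. The classes (MemoryStore, Memorizer, Researcher) and their methods are hypothetical placeholders under simplified assumptions (keyword matching instead of real retrieval, truncation instead of LLM summarization), not GAM's actual implementation.

```python
# Minimal sketch of a memorizer/researcher memory loop.
# Class and method names are illustrative placeholders, not GAM's interfaces.

from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Long-lived store of compressed notes that persists across tasks."""
    notes: list[str] = field(default_factory=list)

    def add(self, note: str) -> None:
        self.notes.append(note)

    def search(self, query: str) -> list[str]:
        # Placeholder relevance check: keyword overlap instead of embeddings.
        return [n for n in self.notes
                if any(w in n.lower() for w in query.lower().split())]


class Memorizer:
    """Compresses raw context into short, reusable notes."""

    def compress(self, raw_context: str) -> str:
        # A real system would summarize with an LLM; here we simply truncate.
        return raw_context[:200]


class Researcher:
    """Plans what to look up and filters retrieved notes for relevance."""

    def __init__(self, memory: MemoryStore):
        self.memory = memory

    def explore(self, task: str) -> list[str]:
        candidates = self.memory.search(task)
        # A real researcher would iteratively refine queries and verify usefulness.
        return [note for note in candidates if note]


if __name__ == "__main__":
    memory = MemoryStore()
    memorizer, researcher = Memorizer(), Researcher(memory)

    # Task 1: the agent reads documentation and stores a compressed note.
    memory.add(memorizer.compress(
        "The billing API requires an OAuth token with the 'invoices:read' scope ..."))

    # Task 2 (later): the researcher reuses that note instead of re-reading the source.
    print(researcher.explore("how do I read invoices from the billing API?"))
```

The design point the sketch tries to capture is that the Memorizer writes durable, compressed notes while the Researcher decides what is worth retrieving for the task at hand, so memory accumulates across tasks instead of being rebuilt each time.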
Together, these developments show that AI agents are moving toward full autonomy and the ability to build their own unique intellectual capital.
