New Framework Classifies AI Agents by How They Think and Work
Researchers proposed a two-dimensional system to categorize AI agents. It considers both the cognitive role an agent plays and how data flows through the system as it executes, helping designers build better tools. This could make AI systems more reliable and easier to understand.

A team of researchers published a paper on arXiv introducing a new way to classify AI agents. The framework looks at two key aspects: cognitive function (what the agent does) and execution topology (how data flows through the system). Existing guides from companies like Anthropic and Google focus on only one of these dimensions, leaving gaps in understanding.
This matters because it helps people build and use AI tools more effectively. For example, an agent designed for planning tasks behaves differently from one meant for verification. The framework makes these differences explicit, so developers can choose the right approach for their needs. It also helps users understand why an AI might succeed or fail at certain tasks.
To put this into practice, you can start by thinking about the AI tools you use. If you're working with an AI assistant, ask yourself: Is it primarily planning, delegating, or verifying information? For instance, if you use an AI coding helper, notice whether it's more focused on writing code (planning) or checking for errors (verification). This simple exercise can help you understand the framework in action.
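The classification exercise above can be sketched as a small data structure. This is a minimal illustration, not the paper's actual taxonomy: the cognitive functions (planning, delegation, verification) come from the article, while the execution topology values (single-agent, pipeline, orchestrator) are hypothetical placeholders I've chosen for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class CognitiveFunction(Enum):
    """Cognitive roles mentioned in the article."""
    PLANNING = "planning"
    DELEGATION = "delegation"
    VERIFICATION = "verification"


class ExecutionTopology(Enum):
    """Hypothetical topology values; the paper's actual categories may differ."""
    SINGLE_AGENT = "single-agent"
    PIPELINE = "pipeline"
    ORCHESTRATOR = "orchestrator"


@dataclass(frozen=True)
class AgentProfile:
    """An agent classified along both dimensions of the framework."""
    name: str
    function: CognitiveFunction
    topology: ExecutionTopology

    def describe(self) -> str:
        return (f"{self.name}: {self.function.value} agent "
                f"in a {self.topology.value} topology")


# Example: an AI coding helper that checks for errors is a verification
# agent; here we (hypothetically) place it in a pipeline topology.
lint_bot = AgentProfile("lint-bot", CognitiveFunction.VERIFICATION,
                        ExecutionTopology.PIPELINE)
print(lint_bot.describe())
```

Classifying a tool along both axes at once, rather than just by what it does, is the framework's core idea: two agents with the same cognitive function can still differ sharply in how their work is executed.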