AWS has announced the launch of Strands Agents, an open source SDK that takes a model-driven approach to building and running AI agents in just a few lines of code. The technology giant has claimed that Strands can scale from simple to complex agent use cases, and from local development to deployment in production. “Strands simplifies agent development by embracing the capabilities of state-of-the-art models to plan, chain thoughts, call tools, and reflect. With Strands, developers can simply define a prompt and a list of tools in code to build an agent, then test it locally and deploy it to the cloud. Like the two strands of DNA, Strands connects two core pieces of the agent together: the model and the tools,” says Amazon in its official blog post.
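To make the "prompt plus a list of tools" idea concrete, here is a minimal sketch using the SDK's Python package. It assumes the `strands-agents` package and its default Amazon Bedrock model configuration as shown in the project's public examples; the query string is only an example.

```python
# pip install strands-agents
from strands import Agent

# An agent with the SDK's defaults: no custom tools, default model provider.
# (Model access and credentials, e.g. for Amazon Bedrock, are configured separately.)
agent = Agent()

# Invoke the agent locally with a prompt; the SDK handles the model call
# and returns/streams the response.
agent("Tell me about agentic AI")
```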
AWS announced that Strands Agents is an open community initiative, welcoming contributions and support from various companies, including Accenture, Anthropic, Langfuse, mem0.ai, Meta, PwC, Ragas.io, and Tavily. Notable contributions include Anthropic adding support for models served via the Anthropic API and Meta adding support for Llama models through the Llama API. The company adds that the community is actively inviting developers to explore and contribute through the project's GitHub repository.
Core concepts of Strands Agents
AWS adds in its blog that the simplest definition of an agent is a combination of three things: 1) a model, 2) tools, and 3) a prompt. The agent uses these three components to complete a task, often autonomously. The agent’s task could be to answer a question, generate code, plan a vacation, or optimize your financial portfolio. In a model-driven approach, the agent uses the model to dynamically direct its own steps and to use tools in order to accomplish the specified task.
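These three components map onto the SDK's Agent constructor roughly as sketched below. This is a hedged sketch assuming the `model`, `system_prompt`, and `tools` parameters and the `@tool` decorator from the SDK's public examples; the Bedrock model ID and the word-count tool are illustrative.

```python
from strands import Agent, tool

# 2) A tool: a decorated Python function whose signature and docstring
#    describe it to the model.
@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

agent = Agent(
    model="us.anthropic.claude-3-7-sonnet-20250219-v1:0",  # 1) the model (illustrative Bedrock model ID)
    system_prompt="You are a concise writing assistant.",  # 3) the prompt
    tools=[word_count],                                    # 2) the tools
)

# The agent decides on its own whether calling word_count helps answer this.
agent("How many words are in the phrase 'the model and the tools'?")
```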
“An agent interacts with its model and tools in a loop until it completes the task provided by the prompt. This agentic loop is at the core of Strands’ capabilities. The Strands agentic loop takes full advantage of how powerful LLMs have become and how well they can natively reason, plan, and select tools. In each loop, Strands invokes the LLM with the prompt and agent context, along with a description of your agent’s tools. The LLM can choose to respond in natural language for the agent’s end user, plan out a series of steps, reflect on the agent’s previous steps, and/or select one or more tools to use. When the LLM selects a tool, Strands takes care of executing the tool and providing the result back to the LLM. When the LLM completes its task, Strands returns the agent’s final result,” adds the company in the blog.
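The loop the blog describes can be pictured with a short, self-contained sketch. This is an illustration of the pattern only, not Strands' actual implementation; the fake_model function is a stand-in for a real LLM call.

```python
# Illustrative sketch of the agentic loop described above.

def fake_model(prompt, context, tool_descriptions):
    """Stand-in for an LLM call: returns either a tool request or a final answer."""
    if not context:  # first pass: the "model" decides to use a tool
        return {"type": "tool_use", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "text": f"The answer is {context[-1]['result']}."}

tools = {"add": lambda a, b: a + b}
tool_descriptions = {"add": "Add two numbers."}

def agentic_loop(prompt):
    context = []
    while True:
        # Each iteration sends the prompt, accumulated context, and tool
        # descriptions to the model.
        decision = fake_model(prompt, context, tool_descriptions)
        if decision["type"] == "tool_use":
            # The framework executes the selected tool and feeds the result back.
            result = tools[decision["tool"]](**decision["args"])
            context.append({"tool": decision["tool"], "result": result})
        else:
            # The model has completed the task; return its final response.
            return decision["text"]

print(agentic_loop("What is 2 + 3?"))
```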