ai · agents · llm

Building AI Agents: From Prompt Chains to Autonomous Systems

Feb 18, 2025·8 min read·Alfito Febriansyah

The term "AI agent" gets thrown around a lot, but what does it actually mean to build one? At its core, an agent is a system that uses an LLM not just to generate text, but to reason, plan, and take actions in the world — calling tools, browsing the web, writing and running code.


The Anatomy of an Agent

A basic agent has four components: a language model (the brain), a set of tools it can call, a memory system (short-term context + optionally long-term vector storage), and an orchestration loop that runs until the task is complete.

The loop looks like this: the model receives a task, decides which tool to use, calls the tool, gets the result back, and decides what to do next — repeat until done.
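That loop can be sketched in a few lines of Python. This is a minimal illustration with a stubbed model, not any framework's API; names like `run_agent` and the tool registry are invented for the example.

```python
def run_agent(task, model, tools, max_steps=10):
    """Loop: ask the model for the next step, run tools until done."""
    context = [task]
    for _ in range(max_steps):
        decision = model(context)            # model picks an action
        if decision["action"] == "finish":
            return decision["answer"]
        tool = tools[decision["action"]]     # look up the named tool
        result = tool(decision["input"])     # call it
        context.append(result)               # feed the result back in
    raise RuntimeError("agent did not finish within max_steps")

# Stub model: search once, then answer with whatever came back.
def stub_model(context):
    if len(context) == 1:
        return {"action": "search", "input": context[0]}
    return {"action": "finish", "answer": context[-1]}

tools = {"search": lambda q: f"results for: {q}"}
print(run_agent("BTC price", stub_model, tools))
# → results for: BTC price
```

A real implementation swaps `stub_model` for an LLM call and adds error handling around the tool invocation, but the shape of the loop stays the same.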


ReAct: The Most Common Agent Pattern

The ReAct (Reasoning + Acting) pattern prompts the model to alternate between thinking out loud and taking action. Each step looks like:

Thought: I need to find the current price of BTC.
Action: web_search("BTC price today")
Observation: Bitcoin is trading at $67,420.
Thought: I have the data. I can now answer the question.
Answer: The current price of Bitcoin is $67,420.

This pattern dramatically improves reliability by forcing the model to reason before acting.
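On the implementation side, the orchestrator has to parse the model's `Action:` line to know which tool to call. A small sketch of that parsing step, using the trace format above (the regex and function name are illustrative, though real frameworks do something very similar):

```python
import re

# Matches lines like: Action: web_search("BTC price today")
ACTION_RE = re.compile(r'Action:\s*(\w+)\("?(.*?)"?\)\s*$', re.MULTILINE)

def parse_action(completion: str):
    """Return (tool_name, tool_input) from a ReAct-style completion,
    or None if the model produced a final answer instead."""
    match = ACTION_RE.search(completion)
    if match is None:
        return None
    return match.group(1), match.group(2)

step = 'Thought: I need the current price of BTC.\nAction: web_search("BTC price today")'
print(parse_action(step))
# → ('web_search', 'BTC price today')
```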


Building One with LangChain

LangChain makes it straightforward to wire up an agent with tools:

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
tools = [DuckDuckGoSearchRun()]
prompt = hub.pull("hwchase17/react")  # standard ReAct prompt template
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

result = executor.invoke({"input": "What is the current price of BTC?"})

Multi-Agent Systems

Single agents hit limits quickly — context windows fill up, tasks get too complex. The next step is multi-agent systems, where specialized agents collaborate: a planner agent breaks down the task, worker agents execute subtasks, and a critic agent reviews the output.
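The planner/worker/critic split can be illustrated with plain functions standing in for LLM-backed agents. The three roles and their signatures here are a hedged sketch, not LangGraph or CrewAI APIs; each function body would be an LLM call in a real system.

```python
def planner(task):
    """Break the task into subtasks (an LLM call in practice)."""
    return [f"{task}: step {i}" for i in (1, 2)]

def worker(subtask):
    """Execute one subtask."""
    return f"done({subtask})"

def critic(results):
    """Review the combined output; pass it through if acceptable."""
    assert all(r.startswith("done(") for r in results), "rework needed"
    return results

task = "write report"
results = critic([worker(s) for s in planner(task)])
print(results)
# → ['done(write report: step 1)', 'done(write report: step 2)']
```

The value of the frameworks is everything this sketch omits: routing results between agents, retrying when the critic rejects output, and keeping each agent's context window small.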

Frameworks like LangGraph and CrewAI make this much easier to implement than rolling your own coordination logic.


The Hard Problems

The technical setup is actually the easy part. The hard problems are reliability (agents can loop or hallucinate tool calls), cost management (long agentic runs burn tokens fast), and knowing when to stop. Building robust agents means investing heavily in evals, guardrails, and human-in-the-loop checkpoints.
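Two of the simplest guardrails mentioned above, an iteration cap (against infinite loops) and a token budget (against runaway cost), can be sketched directly. The `step_fn` contract and token accounting here are assumptions for illustration, not a real framework's interface.

```python
class BudgetExceeded(Exception):
    pass

def guarded_run(step_fn, max_steps=15, max_tokens=50_000):
    """Run agent steps until done, enforcing step and token limits."""
    tokens_used = 0
    for _ in range(max_steps):
        result = step_fn()                   # one agent step
        tokens_used += result["tokens"]      # track spend
        if tokens_used > max_tokens:
            raise BudgetExceeded(f"used {tokens_used} tokens")
        if result["done"]:
            return result["output"]
    raise BudgetExceeded(f"no answer after {max_steps} steps")

# Stub step: finishes on the third call, 1000 tokens per call.
calls = {"n": 0}
def stub_step():
    calls["n"] += 1
    return {"tokens": 1000, "done": calls["n"] >= 3, "output": "answer"}

print(guarded_run(stub_step))
# → answer
```

Production systems layer more on top: schema validation of tool calls, eval suites run on every prompt change, and human approval gates before irreversible actions.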
