Building AI agents and bots with AWS Bedrock and Pydantic AI

Like many Product Managers, I’ve been diving deep into AI agents and bots to understand how they shape new products. However, my role requires more than just understanding what they are; I also need to know the technical “how” so I can help shape the systems our engineering teams will build.

As I started to dig into how to build an agent, I realised that, with the libraries and support out there, there really isn’t much to them. Pydantic AI, for example, only requires a prompt to create an “agent”.

For the purposes of this post, we’ll use the term “agent” to align with the Pydantic AI library’s definition (an interface to an LLM); the end result is what Google would define as a “bot”.

I’d also say that for a personal project, using AWS Bedrock is a bit overkill. It’s a complex system designed for organisations, and you are better off going directly to your LLM provider of choice. So why am I explaining it here? Because we use Bedrock at work, and I wanted to learn how it works so I could understand the ecosystem our teams operate in. This is a learning experiment; there are easier ways to get this done.

There is a perk to using AWS – you get US$100 of free credits to use within six months, and as you try more features you can earn another US$100. This means you can play around with all of this without spending a cent.

Prepping your dev environment

First, let’s set up the desktop. I’m using Windows 11 with WSL 2.0 running Ubuntu.

The Python ecosystem has become much easier to work with of late thanks to uv, which combines a number of individual Python dev tools into a single command-line tool. No more fighting with venv.

I’m not going to go into detail on how to set each of these things up, but I recommend checking out uv and installing it.

Getting those AWS credits

Go to the AWS homepage and create an account. Follow the instructions and make sure you add a payment method (it’s the only way to get the free credits, and it won’t be charged until you run out).

You will now have a “root account” that you use to sign in.

AWS features are broken up by region, and when you’re logged in you’ll see your current region in the upper right. Generally the default should be fine; I’m playing around in us-east-1 (the famous one).

The good news is that since I started this post, AWS has activated all the models by default, with no manual intervention. It used to be that you had to go through a bit of a process to enable each model you wanted.

However, Anthropic models still need some details. In the left-hand nav, go to “Chat / Text playground”, click “Select model”, choose Anthropic, and then your model of choice (for this post I’ll just play with Claude Haiku 4.5). Anthropic models need an inference region; just select one.

You should be able to play around with the model easily in here. In classic AWS fashion, the activation flow for Anthropic models isn’t well explained, and since I’ve already been through it once I can’t retrace every step. If you do get asked for company information, I just said it was for personal projects and was given access almost right away.

While you’re here, let’s set up your API key. These let you access Bedrock models much more easily than managing access through AWS IAM roles. Go to “API keys” in the left-hand nav, choose “Long-term API keys”, and select “Generate long-term API keys”. Select an appropriate expiry date, then hit “Generate”. Copy this API key immediately.

Coding your first agent

Right, time to do some coding. I open up a Terminal in my Ubuntu instance and create a new folder.

First we initialise our project,

$ uv init

This sets the project up for us. Then we add some dependencies,

$ uv add pydantic-ai python-dotenv

The two dependencies are:

  • Pydantic AI, which we will use to build our agents
  • python-dotenv, which is used to set up our environment variables through a .env file
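For intuition, here’s a rough, stdlib-only sketch of what python-dotenv’s load_dotenv() does – read KEY=value pairs from a file and push them into the process environment. This is my simplified illustration, not the library’s actual code; the real thing handles comments, quoting, and variable interpolation far more carefully.

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Simplified stand-in for dotenv.load_dotenv(): parse KEY="value" lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and anything that isn't KEY=value
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Demo with a throwaway file
with open("demo.env", "w") as f:
    f.write('MODEL_NAME="bedrock:example-model"\n')
load_env_file("demo.env")
print(os.environ["MODEL_NAME"])  # bedrock:example-model
```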

Now let’s open this folder in our code editor. I’m going to use VSCode for this, so I just type code . to open a VSCode editor in this folder.

We’ll just keep everything in one file for our purposes. Open up the main.py file in this folder and replace its contents with the following.

# Import the dotenv library to load environment variables from a .env file
import dotenv
dotenv.load_dotenv()

# Import os module to access environment variables, and asyncio for async support
import os
import asyncio

# Import the Agent class from Pydantic AI, which is the core component for creating AI agents
from pydantic_ai import Agent


# Retrieve the model name from environment variables
model_name = os.environ["MODEL_NAME"]


# Create an AI agent instance with a specific behavior and personality
helpful_agent = Agent(
  # Specify which AI model to use for responses, this will be the environment variable we set earlier
  model=model_name,
  # Define the agent's role and behavior through a system prompt
  system_prompt="You are an expert translator that helps translate from English to Spanish. You always respond to the user with the Spanish translation only."
)


# Define the main async function that will run the agent
async def main():
  # Start an interactive command-line interface for the agent
  # This allows users to chat with the agent in the terminal
  await helpful_agent.to_cli()


# Standard Python entry point - runs when the script is executed directly
if __name__ == "__main__":
  asyncio.run(main())

The await helpful_agent.to_cli() call sets up a simple chat interface: it waits for us to input a prompt and prints the model’s response. We don’t need to do anything else.

Finally, create another file called .env in the root directory. We will have two variables in here,

  • AWS_BEARER_TOKEN_BEDROCK, paste in your key from earlier as it’s used for authentication (if you lost it, you’ll have to create a new one)
  • MODEL_NAME, the ID of the model we’re using. For Anthropic models this must be an inference profile, which you can find by going to the AWS Bedrock console, selecting “Cross-region inference” in the left-hand nav, and copying the “Inference profile ID” value. Prefix it with “bedrock:” in your .env file, as shown below

The file should look like this,

AWS_BEARER_TOKEN_BEDROCK="{key}"
MODEL_NAME="bedrock:global.anthropic.claude-haiku-4-5-20251001-v1:0"
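As an optional extra (my own addition, not required by the tutorial), a tiny fail-fast check near the top of main.py can save you a confusing stack trace when the .env file didn’t load:

```python
import os

def missing_env(required=("AWS_BEARER_TOKEN_BEDROCK", "MODEL_NAME")):
    """Return the names of any required environment variables that are unset."""
    return [name for name in required if not os.environ.get(name)]

missing = missing_env()
if missing:
    print(f"Add these to your .env file: {', '.join(missing)}")
```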

So, to recap what we’ve done so far,

  1. Created and initialised an AWS account to use the LLM models available through AWS Bedrock
  2. Initialised a new Python project with uv, and added our dependencies
  3. Created a simple agent (this one translates text into Spanish), and told it to run as a CLI
  4. Set up the configuration parameters so it all works

With all this, we should just be able to run

uv run main.py

And you should just be greeted with,

pydantic-ai ➤ 

Type in your prompt and hit Enter.

(first-agent) $ uv run main.py
pydantic-ai ➤ Hello, how are you?
Hola, ¿cómo estás?                                                                                          
pydantic-ai ➤

By the way, if you want to change the prompt’s name, you can just set the prog_name value in the to_cli call like below,

await helpful_agent.to_cli(prog_name="Spanish Translator")

Congratulations! You’ve made an AI bot!

What next?

This is a very simple example of a bot. The Pydantic AI API is quite rich and I recommend reading through it, especially its agent concept primer.

It’s honestly surprising how small the gap between concept and execution has become. The tooling for building agents and bots has come a long way – we just built a translation bot in minutes.

But this only scratches the surface of what agents and bots can really do. You can get them to talk to other agents, call other tools, and even return data in a format other APIs can understand.

So, what would you do with the agents you create?