How to Build Your Own Autonomous AI Agent From Scratch!

Techiral

Learn how to create your own self-operating AI agent using Python, the OpenAI API, and ReAct prompts. This step-by-step, beginner-friendly guide covers real-time data handling and practical code examples for building an autonomous AI assistant.

Hey there, tech fans! Today, we’re exploring how to build an autonomous AI agent using plain Python. That means no heavyweight third-party frameworks like LangChain or CrewAI; we’re sticking to a straightforward, hands-on approach so you can fully understand the core principles.

Quick mention: If you’re excited about this guide, be sure to check out my GitHub repo for the complete AI Agent code. And if you enjoy this tutorial, please give it a like and visit my LinkedIn page for more projects and updates.

What’s an AI Agent, Anyway?

Imagine asking ChatGPT, “What’s the response time for 10x-ship.vercel.app/?” If it can’t tell you, that’s because it relies only on its pre-trained data and has no way to check the live site. That’s where our autonomous AI agent comes in.

An Autonomous AI Agent takes a large language model (LLM) and enhances it with external functions and intelligent prompting. In simple terms, it’s the extra layer that lets the LLM fetch real-time data and use it to answer your question.

How It Works:

  1. Query Input: You submit your question.
  2. Processing with ReAct System Prompt: The LLM goes through a thought process to decide its next step.
  3. External Function Execution: It selects an action — like calling a function to check website response times.
  4. Response Generation: With the latest data, it produces a quick answer.

Getting Started: Setting Up Your Python Project

Let’s get started on building this AI agent step by step. Open your preferred IDE (I personally recommend Visual Studio Code) and let’s begin coding!

1. Create and Activate a Virtual Environment

Open your terminal and run these commands to create and activate your virtual environment. This keeps your project’s dependencies organized:

# Create a new virtual environment
python -m venv myenv

# Activate the environment (Windows)
myenv\Scripts\activate

# Activate the environment (Mac/Linux)
source myenv/bin/activate

2. Install the Required Packages

We’re using the OpenAI API as our core model. Make sure you have your API key ready, then create a .env file in your project folder with:

OPENAI_API_KEY="sk-XX"

Now, install the OpenAI and python-dotenv packages:

pip install openai python-dotenv

Great — let’s continue!

3. Set Up Your Project Files

Create these three files in your project:

  • actions.py
  • prompts.py
  • main.py

This file structure will help keep everything organized.

Generating Text with the OpenAI API

In your main.py, write a simple function to call the OpenAI API. This function will be the heart of our AI agent:

from openai import OpenAI
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Create an instance of the OpenAI class
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


def generate_text_with_conversation(messages, model="gpt-4o-mini"):
    response = openai_client.chat.completions.create(
        model=model,
        messages=messages
    )
    return response.choices[0].message.content

Run a quick test to make sure everything is working. If you see a response printed, you’re all set! ✅
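The code above only defines the function, so nothing visible happens when you run the file by itself. For a quick sanity check, you can temporarily append a couple of lines like these to the bottom of main.py (the test prompt here is just an arbitrary example):

# Quick sanity check: append to the bottom of main.py, run it once, then remove it.
# Requires a valid OPENAI_API_KEY in your .env file.
test_messages = [
    {"role": "user", "content": "Say hello in one short sentence."}
]
print(generate_text_with_conversation(test_messages))

If a short greeting appears in your terminal, your API key and setup are working.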

Defining the Agent’s Functions

Next, let’s create some basic functions that our agent will use. In actions.py, add this code to simulate checking a website’s response time:

def get_response_time(url):
    if url == "google.com":
        return 0.5
    if url == "openai.com":
        return 0.3
    if url == "medium.com":
        return 0.4
    return None  # For any other website

This simple function returns hard-coded response times, which is enough to demonstrate how our agent calls external functions.
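When you’re ready for live data instead of the stub, here’s a minimal sketch of what a real version could look like. It assumes the requests package is installed (pip install requests), which isn’t part of the original setup; the function name and return convention stay the same, so nothing else in the agent needs to change.

# A live version of get_response_time using the requests library (assumes requests is installed).
# Swap it in for the stub above when you want real measurements.
import requests

def get_response_time(url):
    """Return how long a GET request to the URL takes, in seconds (None on failure)."""
    # The model may pass a bare domain like "google.com", so add a scheme if needed
    if not url.startswith(("http://", "https://")):
        url = "https://" + url
    try:
        response = requests.get(url, timeout=10)
        return round(response.elapsed.total_seconds(), 2)
    except requests.RequestException:
        return None  # Unreachable, timed out, or otherwise failed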

Crafting the ReAct Prompt

The key behind our agent’s effective behavior is the ReAct Prompt. It’s a set of instructions that guides the AI through a loop of Thought, Action, and Response — almost like it’s having a brief planning session with itself.

In prompts.py, set up your system prompt as follows:

# ReAct system prompt
system_prompt = """

You run in a loop of Thought, Action, PAUSE, Action_Response.
At the end of the loop you output an Answer.

Use Thought to understand the question you have been asked.
Use Action to run one of the actions available to you - then return PAUSE.
Action_Response will be the result of running those actions.

Your available actions are:

get_response_time:
e.g. get_response_time: google.com
Returns the response time of a website

Example session:

Question: what is the response time for google.com?
Thought: I should check the response time for the web page first.
Action:

{
  "function_name": "get_response_time",
  "function_parms": {
    "url": "google.com"
  }
}

PAUSE

You will be called again with this:

Action_Response: 0.5

You then output:

Answer: The response time for google.com is 0.5 seconds.


"""

This prompt directs the LLM on exactly what to do — from understanding the question to choosing a function and finally providing the answer.

Bringing It All Together

Now, integrate everything in main.py. Import your pieces, register the available actions, and set up the conversation:

from openai import OpenAI
import os
from dotenv import load_dotenv
from actions import get_response_time
from prompts import system_prompt
from json_helpers import extract_json

# Load environment variables
load_dotenv()

# Create an instance of the OpenAI class
openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


def generate_text_with_conversation(messages, model="gpt-4o-mini"):
    response = openai_client.chat.completions.create(
        model=model,
        messages=messages
    )
    return response.choices[0].message.content


# Available actions
available_actions = {
    "get_response_time": get_response_time
}

user_prompt = "what is the response time of amazon.com?"

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]

turn_count = 1
max_turns = 5


while turn_count < max_turns:
    print(f"Loop: {turn_count}")
    print("----------------------")
    turn_count += 1

    response = generate_text_with_conversation(messages, model="gpt-4o-mini")

    print(response)

    # Keep the model's own Thought/Action in the history so the next turn has full context
    messages.append({"role": "assistant", "content": response})

    json_function = extract_json(response)

    if json_function:
        function_name = json_function[0]['function_name']
        function_parms = json_function[0]['function_parms']
        if function_name not in available_actions:
            raise Exception(f"Unknown action: {function_name}: {function_parms}")
        print(f" -- running {function_name} {function_parms}")
        action_function = available_actions[function_name]
        # Call the chosen function with the parameters the model provided
        result = action_function(**function_parms)
        function_result_message = f"Action_Response: {result}"
        messages.append({"role": "user", "content": function_result_message})
        print(function_result_message)
    else:
        # No JSON action in the reply means the model has produced its final Answer
        break

This loop is the heart of the ReAct cycle. It continuously processes the input, checks whether a function call is needed, executes it, and then feeds the result back into the conversation.

Note: I created a helper function called extract_json to easily extract the JSON block from the LLM response. You can implement it in a way that suits your needs.
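If you’d like a starting point, here’s a minimal sketch of what json_helpers.py could contain. It’s a hypothetical implementation, not the exact helper from my repo: it scans the response for JSON objects and returns them as a list, which matches how the loop above reads json_function[0].

# json_helpers.py - a minimal sketch of an extract_json helper (hypothetical implementation).
# It scans the text for embedded JSON objects and returns them as a list of dicts.
import json

def extract_json(text):
    """Return a list of JSON objects found in text (empty list if none are found)."""
    decoder = json.JSONDecoder()
    results = []
    index = 0
    while True:
        start = text.find("{", index)
        if start == -1:
            break
        try:
            # raw_decode parses one JSON value starting at `start` and reports where it ended
            obj, end = decoder.raw_decode(text, start)
            results.append(obj)
            index = end
        except json.JSONDecodeError:
            # Not valid JSON at this brace; keep scanning from the next character
            index = start + 1
    return results

Using json.JSONDecoder().raw_decode rather than a simple regular expression keeps nested objects like function_parms intact.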

Testing Your Agent in Action

It’s time to see your agent in action. Run your project and watch as your AI agent cycles through Thought, Action, and Response until it provides you with that final answer.

If you encounter any issues, don’t worry — I’m available almost every day on the forum (for free!) to help you out.

Final Touches & Additional Resources

Before you start experimenting on your own, here are some additional resources:

  • Download the Complete Code: Access the full codebase from my GitHub repo. It’s all set up and ready to run.
  • Further Learning: For a deeper understanding and real-world examples, check out the free video tutorial that walks you through the process of building this AI agent step by step.
  • Show Your Support: If you enjoyed this tutorial, please give it a like and share it on social media. Also, connect with me on LinkedIn to stay updated on more tech projects and tutorials.

Watch the Full Video Tutorial

For those who prefer video learning, click here to watch a detailed walkthrough on how this AI agent works and how you can build it yourself.

Thank you for reading! Remember, coding can be both enjoyable and rewarding — even with a limited budget. Keep experimenting, share your success, and most importantly, continue to improve your skills.

Happy coding, and see you next time!

— Your friendly AI builder
