A practical guide for building effective LLM-powered systems using pure Python—no heavy frameworks, just proven patterns and reusable code snippets.
This repository provides real-world examples and design patterns for building reliable, composable LLM workflows. Whether you're working on agents, assistants, or task-specific pipelines, you'll find ready-to-use patterns like:
- 🔗 Prompt Chaining
- 🔀 Request Routing
- ⚡ Parallel Execution
- 🤖 Orchestrator–Worker Architecture
💡 Inspired by:
- Building AI Agents in Pure Python - by Dave Ebbelaar
🔗 Learn more about the theory and practice behind these patterns:
- Building Effective Agents - Anthropic's blog post
📚 Topics covered:
- Basic LLM calls
- Structured outputs
- Tool usage
- Retrieval integration
- Prompt chaining
- Routing
- Parallel guardrails
- Orchestrator–Worker design
🔧 Requirements: Python basics, the OpenAI SDK, and an OpenAI API key
Includes implementations using the above patterns to build a functional calendar assistant:
**Prompt Chaining**: Break down complex tasks into structured steps for better reliability.
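As a rough illustration, a chain can be a few plain functions where the output of one prompt feeds the next. The `call_llm` helper below is a hypothetical stand-in for a real OpenAI SDK call, so the sketch runs without an API key:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g. via the OpenAI SDK);
    # here it simply echoes the prompt for illustration.
    return f"[response to: {prompt}]"

def extract_event(text: str) -> str:
    # Step 1: pull raw event details out of the user's message.
    return call_llm(f"Extract the event details from: {text}")

def draft_invite(details: str) -> str:
    # Step 2: feed step 1's output into the next prompt.
    return call_llm(f"Write a calendar invite for: {details}")

def chain(user_input: str) -> str:
    # Each step's output becomes the next step's input.
    return draft_invite(extract_event(user_input))
```

Splitting the task this way makes each step small enough to validate on its own before passing its result onward.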
**Routing**: Direct user input to specialized logic based on intent.
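One way to sketch routing: a classifier picks a handler from a dispatch table. The keyword check below is a hypothetical stand-in for an LLM intent-classification call:

```python
def classify_intent(text: str) -> str:
    # Stand-in for an LLM intent classifier; a real version would
    # prompt the model and parse a structured label.
    return "new_event" if "schedule" in text.lower() else "other"

def handle_new_event(text: str) -> str:
    return f"creating event from: {text}"

def handle_other(text: str) -> str:
    return f"no calendar action for: {text}"

# Dispatch table: each intent maps to its specialized logic.
HANDLERS = {"new_event": handle_new_event, "other": handle_other}

def route(text: str) -> str:
    return HANDLERS[classify_intent(text)](text)
```

Adding a new intent then only means adding one handler and one table entry, without touching the routing logic.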
**Parallelization**: Run multiple LLM checks (e.g., security + calendar validation) concurrently.
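Concurrent checks can be sketched with `asyncio.gather`. Both guardrails below are hypothetical keyword checks standing in for model calls; the structure is what matters:

```python
import asyncio

async def security_check(text: str) -> bool:
    # Hypothetical guardrail; a real version would ask the model
    # to flag prompt-injection attempts.
    return "ignore previous instructions" not in text.lower()

async def calendar_check(text: str) -> bool:
    # Hypothetical relevance check for calendar-related requests.
    return any(w in text.lower() for w in ("meeting", "event", "schedule"))

async def validate(text: str) -> bool:
    # Run both checks concurrently; the request proceeds only if all pass.
    results = await asyncio.gather(security_check(text), calendar_check(text))
    return all(results)
```

With real model calls, running the checks concurrently means total latency is roughly the slowest check rather than the sum of all of them.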
**Orchestrator–Worker**: Coordinate planning, writing, and review stages with specialized agents.
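A minimal sketch of the orchestrator–worker shape: a planner splits the task, workers handle subtasks, and a reviewer merges the results. All three functions here are hypothetical placeholders for LLM calls:

```python
def plan(task: str) -> list[str]:
    # Orchestrator: split the task into subtasks
    # (a real planner would be an LLM call returning structured steps).
    return [f"outline {task}", f"draft {task}"]

def work(subtask: str) -> str:
    # Worker: each subtask goes to a specialized agent.
    return f"completed: {subtask}"

def review(results: list[str]) -> str:
    # Reviewer: merge worker output into a final answer.
    return "; ".join(results)

def orchestrate(task: str) -> str:
    return review([work(s) for s in plan(task)])
```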
Follow these steps to set up the project locally:
- Clone the repository:
```bash
git clone <repository-url>
cd llm-workflow-cookbook
```
- Create and activate a virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`
```
- Install dependencies:
```bash
pip install -r requirements.txt
```
- Set up environment variables:
Create a `.env` file in the root directory and add your OpenAI API key:

```
OPENAI_API_KEY=your-api-key-here
```
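As a small sketch of how the key might be read at runtime, `load_api_key` below is a hypothetical helper; a loader such as python-dotenv can populate the environment from the `.env` file:

```python
import os

def load_api_key() -> str:
    # Expects OPENAI_API_KEY in the environment (e.g. loaded from .env).
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file")
    return key
```

Failing fast here gives a clear error at startup instead of an authentication failure on the first model call.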