# Agentic Framework Comparison Repository
This repository provides a comprehensive comparison of different agentic AI orchestration frameworks through working implementations. Each branch contains the same core application built with a different framework, allowing you to explore and compare approaches to building AI agents.
## Table of Contents - Framework Implementations

| Framework | Branch | Description | Key Features |
|---|---|---|---|
| OpenAI Direct | `openai` | Direct OpenAI API integration | Simple, minimal setup with raw OpenAI calls |
| CrewAI | `crewai` | Multi-agent collaboration framework | Agent crews, role-based workflows, task delegation |
| LangGraph | `langgraph` | Graph-based agent workflows | State management, conditional routing, complex workflows |
| Pydantic AI | `pydantic` | Type-safe AI agent framework | Built-in validation, structured outputs, type safety |
| LiteLLM | `litellm` | Multi-provider LLM gateway | Unified API for 100+ LLM providers, easy provider switching |
| DSPy | `dspy` | Declarative language model programming | Signatures, modules, prompt optimization, composability |
| Groq | `groq` | High-performance inference API | Lightning-fast responses, open-source models, OpenAI compatibility |
## What This Repository Demonstrates
Each implementation provides the same core functionality:
- Interactive chat interface with a web-based demo
- Phoenix observability for tracing and monitoring agent behavior
- Docker containerization for consistent deployment
- REST API for programmatic agent interaction
- Conversation memory and context management
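As a sketch of what programmatic interaction might look like, the snippet below builds a JSON chat request body. The endpoint path and field names (`/chat`, `message`, `conversation_id`) are assumptions for illustration; the actual contract is defined in each branch's `schema.py`.

```python
import json

# Hypothetical request shape -- the real field names live in each
# branch's schema.py and may differ.
def build_chat_request(message: str, conversation_id: str) -> str:
    """Serialize a chat request body for the agent's REST API."""
    return json.dumps({"message": message, "conversation_id": conversation_id})

body = build_chat_request("Hello, agent!", "demo-1")
# Send it with, e.g.:
#   curl -X POST localhost:8080/chat -H 'Content-Type: application/json' -d "$body"
```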
## Common Infrastructure
All implementations share the same foundational components:
### Core Features
- FastAPI server for HTTP endpoints
- Flask demo interface for interactive testing
- Phoenix integration for comprehensive observability and tracing
- Docker containerization with Python 3.12 and uv package management
- LRU caching for conversation state management
- Pydantic schemas for request/response validation
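To make the LRU idea concrete, here is a minimal stdlib-only sketch of a conversation-state cache. It is not the repository's `caching.py`, just an illustration of the eviction behavior; the class and field names are invented for this example.

```python
from collections import OrderedDict

class ConversationCache:
    """LRU cache mapping conversation IDs to message histories (illustrative)."""

    def __init__(self, max_conversations: int = 128):
        self.max_conversations = max_conversations
        self._store: OrderedDict[str, list] = OrderedDict()

    def append(self, conversation_id: str, role: str, content: str) -> None:
        history = self._store.setdefault(conversation_id, [])
        history.append({"role": role, "content": content})
        self._store.move_to_end(conversation_id)   # mark as most recently used
        while len(self._store) > self.max_conversations:
            self._store.popitem(last=False)        # evict least recently used

    def history(self, conversation_id: str) -> list:
        return self._store.get(conversation_id, [])
```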
### Observability & Monitoring
- Phoenix dashboard at `localhost:6006` for trace visualization
- OpenInference instrumentation specific to each framework
- Request/response tracing with conversation context
- Performance metrics and error tracking
### Development Tools
- Automatic environment setup with `./bin/bootstrap.sh`
- Hot reload for development iterations
- Comprehensive logging for debugging
- Standardized project structure across all implementations
## Quick Start (Any Branch)

1. **Choose your framework** - switch to the branch you want to explore
2. **Set up environment** - run `./bin/bootstrap.sh` (installs Python 3.12 + uv automatically)
3. **Configure API keys** - create a `.env` file with your OpenAI API key
4. **Launch the stack** - run `./bin/run_agent.sh --build`
5. **Explore the demo** - visit `localhost:8080` for the chat interface
6. **Monitor with Phoenix** - visit `localhost:6006` for observability
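The `.env` file from the API-keys step can be as small as one line. `OPENAI_API_KEY` is the conventional variable name for the OpenAI SDK, but check your branch's README for the exact keys it expects:

```
# .env -- loaded when the stack starts
OPENAI_API_KEY=your-key-here
```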
## Framework Comparison

### Complexity vs. Capability
| Framework | Setup Complexity | Learning Curve | Capability | Best For |
|---|---|---|---|---|
| OpenAI Direct | Low | Low | Basic | Simple chatbots, prototyping |
| CrewAI | Medium | Medium | High | Multi-agent workflows, team collaboration |
| LangGraph | High | High | Very High | Complex state machines, conditional logic |
| Pydantic AI | Low | Low | Medium | Type-safe applications, structured data |
| LiteLLM | Low | Low | Medium | Multi-provider applications, vendor flexibility |
| DSPy | Medium | Medium | High | Declarative prompting, optimization, research |
| Groq | Low | Low | Medium | High-speed inference, open models, performance-critical apps |
### Key Differences

#### OpenAI Direct
- Minimal abstraction over the OpenAI API
- Direct control over all parameters
- Simplest to understand and debug

#### CrewAI
- Multi-agent orchestration
- Role-based agent definitions
- Built-in task delegation and collaboration

#### LangGraph
- Graph-based workflow definition
- Advanced state management
- Conditional routing and complex logic flows

#### Pydantic AI
- Type-safe agent interactions
- Built-in validation and structured outputs
- Clean, Pythonic API design

#### LiteLLM
- Unified API for 100+ LLM providers
- Easy provider switching without code changes
- Built-in cost tracking and fallback mechanisms

#### DSPy
- Declarative approach to LM programming
- Automatic prompt optimization
- Modular components (ChainOfThought, ReAct, etc.)
- Research-focused with composability

#### Groq
- Ultra-fast inference on custom silicon
- OpenAI-compatible API for easy integration
- Access to open-source models (Llama, Mixtral)
- Cost-effective high-performance inference
## Switching Between Implementations
Each branch is fully self-contained. To explore a different framework:
```bash
# Switch to the desired framework branch
git checkout <framework-branch>

# Set up and activate the environment (if you want to make changes)
./bin/bootstrap.sh && source .venv-{framework}/bin/activate

# Launch the application
./bin/run_agent.sh --build
```
## Project Structure
All implementations follow this consistent structure:
```
├── agent/
│   ├── agent.py           # Core agent implementation (framework-specific)
│   ├── server.py          # FastAPI server with observability
│   ├── prompts.py         # Prompt templates and formatting
│   ├── schema.py          # Pydantic models for validation
│   ├── caching.py         # Conversation state management
│   └── demo_code/         # Flask demo interface
├── bin/
│   ├── bootstrap.sh       # Environment setup script
│   └── run_agent.sh       # Docker launch script
├── Dockerfile             # Python 3.12 + uv container
├── docker-compose.yml     # Multi-service orchestration
├── requirements.txt       # Framework-specific dependencies
└── README.md              # Framework-specific documentation
```
## Observability Features
All implementations include comprehensive observability through Phoenix:
- Trace Visualization - See complete request flows
- Performance Monitoring - Track response times and errors
- Conversation Context - View full conversation history
- Framework-Specific Metrics - Understand framework internals
- Real-time Dashboards - Monitor live agent interactions
## Learning Path Recommendations

1. **Start with OpenAI Direct** - understand the basics without framework abstractions
2. **Explore Pydantic AI** - learn type-safe agent development
3. **Try Groq** - experience ultra-fast inference with open-source models
4. **Try LiteLLM** - experience multi-provider flexibility and cost optimization
5. **Experiment with DSPy** - try declarative prompting and optimization
6. **Experience CrewAI** - build multi-agent orchestration systems
7. **Master LangGraph** - create complex, stateful agent workflows
## Contributing
Each branch maintains the same application interface while showcasing different framework approaches. When contributing:
- Keep the core API consistent across implementations
- Update framework-specific documentation in each branch
- Ensure observability features work across all frameworks
- Maintain the same development experience (bootstrap, run scripts, etc.)
## Further Reading
- Phoenix Observability: Phoenix Documentation
- OpenInference Standards: OpenInference Specification
- Framework Documentation: See individual branch READMEs for framework-specific guides
Choose a branch above to start exploring different approaches to building AI agents! Each implementation provides the same functionality with different architectural patterns and capabilities.