
openinference_frameworks

Multi-framework LLM implementation comparison

Python Β· LangGraph Β· CrewAI Β· DSPy Β· Phoenix

Agentic Framework Comparison Repository

This repository provides a comprehensive comparison of different agentic AI orchestration frameworks through working implementations. Each branch contains the same core application built with a different framework, allowing you to explore and compare approaches to building AI agents.

πŸ“‹ Framework Implementations

| Framework | Branch | Description | Key Features |
| --- | --- | --- | --- |
| OpenAI Direct | `openai` | Direct OpenAI API integration | Simple, minimal setup with raw OpenAI calls |
| CrewAI | `crewai` | Multi-agent collaboration framework | Agent crews, role-based workflows, task delegation |
| LangGraph | `langgraph` | Graph-based agent workflows | State management, conditional routing, complex workflows |
| Pydantic AI | `pydantic` | Type-safe AI agent framework | Built-in validation, structured outputs, type safety |
| LiteLLM | `litellm` | Multi-provider LLM gateway | Unified API for 100+ LLM providers, easy provider switching |
| DSPy | `dspy` | Declarative language model programming | Signatures, modules, prompt optimization, composability |
| Groq | `groq` | High-performance inference API | Lightning-fast responses, open-source models, OpenAI compatibility |

πŸš€ What This Repository Demonstrates

Each implementation provides the same core functionality:

  • Interactive chat interface with a web-based demo
  • Phoenix observability for tracing and monitoring agent behavior
  • Docker containerization for consistent deployment
  • REST API for programmatic agent interaction
  • Conversation memory and context management

πŸ—οΈ Common Infrastructure

All implementations share the same foundational components:

Core Features

  • FastAPI server for HTTP endpoints
  • Flask demo interface for interactive testing
  • Phoenix integration for comprehensive observability and tracing
  • Docker containerization with Python 3.12 and uv package management
  • LRU caching for conversation state management
  • Pydantic schemas for request/response validation

Observability & Monitoring

  • Phoenix dashboard at localhost:6006 for trace visualization
  • OpenInference instrumentation specific to each framework
  • Request/response tracing with conversation context
  • Performance metrics and error tracking

Development Tools

  • Automatic environment setup with ./bin/bootstrap.sh
  • Hot reload for development iterations
  • Comprehensive logging for debugging
  • Standardized project structure across all implementations

πŸ”§ Quick Start (Any Branch)

  1. Choose your framework - Switch to the branch you want to explore
  2. Set up environment - Run ./bin/bootstrap.sh (installs Python 3.12 + uv automatically)
  3. Configure API keys - Create .env file with your OpenAI API key
  4. Launch the stack - Run ./bin/run_agent.sh --build
  5. Explore the demo - Visit localhost:8080 for the chat interface
  6. Monitor with Phoenix - Visit localhost:6006 for observability
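Step 3 can be as small as writing one line to `.env`. The variable name `OPENAI_API_KEY` is an assumption here (it is the standard name the OpenAI SDK reads), so check the branch README for the exact keys it expects.

```shell
# Create .env in the repo root; the Docker stack presumably loads it.
cat > .env <<'EOF'
OPENAI_API_KEY=sk-...your-key-here...
EOF
```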

πŸ“Š Framework Comparison

Complexity vs. Capability

| Framework | Setup Complexity | Learning Curve | Capability | Best For |
| --- | --- | --- | --- | --- |
| OpenAI Direct | Low | Low | Basic | Simple chatbots, prototyping |
| CrewAI | Medium | Medium | High | Multi-agent workflows, team collaboration |
| LangGraph | High | High | Very High | Complex state machines, conditional logic |
| Pydantic AI | Low | Low | Medium | Type-safe applications, structured data |
| LiteLLM | Low | Low | Medium | Multi-provider applications, vendor flexibility |
| DSPy | Medium | Medium | High | Declarative prompting, optimization, research |
| Groq | Low | Low | Medium | High-speed inference, open models, performance-critical apps |

Key Differences

OpenAI Direct

  • Minimal abstraction over OpenAI API
  • Direct control over all parameters
  • Simplest to understand and debug

CrewAI

  • Multi-agent orchestration
  • Role-based agent definitions
  • Built-in task delegation and collaboration

LangGraph

  • Graph-based workflow definition
  • Advanced state management
  • Conditional routing and complex logic flows

Pydantic AI

  • Type-safe agent interactions
  • Built-in validation and structured outputs
  • Clean, pythonic API design

LiteLLM

  • Unified API for 100+ LLM providers
  • Easy provider switching without code changes
  • Built-in cost tracking and fallback mechanisms

DSPy

  • Declarative approach to LM programming
  • Automatic prompt optimization
  • Modular components (ChainOfThought, ReAct, etc.)
  • Research-focused with composability

Groq

  • Ultra-fast inference with custom silicon
  • OpenAI-compatible API for easy integration
  • Access to open-source models (Llama, Mixtral)
  • Cost-effective high-performance inference

πŸ› οΈ Switching Between Implementations

Each branch is fully self-contained. To explore a different framework:

# Switch to desired framework branch
git checkout <framework-branch>

# Set up and activate the environment (if you want to make changes)
./bin/bootstrap.sh && source .venv-{framework}/bin/activate

# Launch the application
./bin/run_agent.sh --build

πŸ“ Project Structure

All implementations follow this consistent structure:

β”œβ”€β”€ agent/
β”‚   β”œβ”€β”€ agent.py          # Core agent implementation (framework-specific)
β”‚   β”œβ”€β”€ server.py         # FastAPI server with observability
β”‚   β”œβ”€β”€ prompts.py        # Prompt templates and formatting
β”‚   β”œβ”€β”€ schema.py         # Pydantic models for validation
β”‚   β”œβ”€β”€ caching.py        # Conversation state management
β”‚   └── demo_code/        # Flask demo interface
β”œβ”€β”€ bin/
β”‚   β”œβ”€β”€ bootstrap.sh      # Environment setup script
β”‚   └── run_agent.sh      # Docker launch script
β”œβ”€β”€ Dockerfile            # Python 3.12 + uv container
β”œβ”€β”€ docker-compose.yml    # Multi-service orchestration
β”œβ”€β”€ requirements.txt      # Framework-specific dependencies
└── README.md            # Framework-specific documentation
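Given that layout, the docker-compose.yml plausibly wires two services together. The fragment below is a hypothetical sketch, not copied from the repo: the service names, env handling, and use of the public `arizephoenix/phoenix` image are assumptions.

```yaml
services:
  agent:
    build: .                 # Dockerfile: Python 3.12 + uv
    ports:
      - "8080:8080"          # FastAPI server + Flask demo
    env_file:
      - .env                 # API keys
    depends_on:
      - phoenix
  phoenix:
    image: arizephoenix/phoenix:latest
    ports:
      - "6006:6006"          # Phoenix dashboard
```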

πŸ” Observability Features

All implementations include comprehensive observability through Phoenix:

  • Trace Visualization - See complete request flows
  • Performance Monitoring - Track response times and errors
  • Conversation Context - View full conversation history
  • Framework-Specific Metrics - Understand framework internals
  • Real-time Dashboards - Monitor live agent interactions

🎯 Learning Path Recommendations

  1. Start with OpenAI Direct - Understand the basics without framework abstractions
  2. Explore Pydantic AI - Learn type-safe agent development
  3. Try Groq - Experience ultra-fast inference with open-source models
  4. Try LiteLLM - Experience multi-provider flexibility and cost optimization
  5. Experiment with DSPy - Try declarative prompting and optimization
  6. Experience CrewAI - Build multi-agent orchestration systems
  7. Master LangGraph - Create complex, stateful agent workflows

🀝 Contributing

Each branch maintains the same application interface while showcasing different framework approaches. When contributing:

  • Keep the core API consistent across implementations
  • Update framework-specific documentation in each branch
  • Ensure observability features work across all frameworks
  • Maintain the same development experience (bootstrap, run scripts, etc.)

Choose a branch above to start exploring different approaches to building AI agents! Each implementation provides the same functionality with different architectural patterns and capabilities.