A comprehensive SDK for creating and managing AI agents powered by Microsoft Semantic Kernel with MCP (Model Context Protocol) server integration. Build sophisticated conversational agents with tool integration, deploy them as web services, or interact with them through a rich terminal interface.
- **Agent Factory**: Create and manage multiple Semantic Kernel-based agents with different configurations
- **MCP Integration**: Connect agents to external tools via Model Context Protocol (stdio and streamable HTTP servers)
- **Interactive Console**: Rich terminal-based chat interface with multi-agent support, powered by Textual
- **Web Service Factory**: Deploy agents as HTTP/REST APIs with A2A (Agent-to-Agent) protocol support
- **Streaming Support**: Real-time response streaming for both console and web interfaces
- **Health Monitoring**: Built-in MCP server health checks and status monitoring
- **Flexible Configuration**: YAML-based configuration with environment variable support
- **Structured Outputs**: Support for JSON schema-based response formatting
Install from PyPI:

```bash
# Install core functionality only
pip install semantic-kernel-agent-factory

# For console/CLI interface
pip install semantic-kernel-agent-factory[console]

# For web service deployment
pip install semantic-kernel-agent-factory[service]

# For development (includes testing, linting, and type checking tools)
pip install semantic-kernel-agent-factory[dev]

# For documentation generation
pip install semantic-kernel-agent-factory[docs]

# Install all optional features
pip install semantic-kernel-agent-factory[all]
```
For local development:
```bash
# Clone the repository
git clone https://github.com/jhzhu89/semantic-kernel-agent-factory
cd semantic-kernel-agent-factory

# Install in editable mode with development dependencies only
pip install -e ".[dev]"

# OR install with all features for comprehensive development/testing
pip install -e ".[dev-all]"

# Use the Makefile for quick setup
make install-dev      # Basic development setup
make install-dev-all  # Development setup with all features
```
Create a configuration file `config.yaml`:
```yaml
agent_factory:
  agents:
    GeneralAssistant:
      name: "GeneralAssistant"
      instructions: |
        You are a helpful AI assistant.
        Answer questions clearly and concisely.
      model: "gpt-4"
      model_settings:
        temperature: 0.7
  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
```
Run the interactive console:
```bash
# Note: Requires console dependencies
# Install with: pip install semantic-kernel-agent-factory[console]
agent-factory -c config.yaml
```
Or use the factory directly from Python:

```python
import asyncio
from agent_factory import AgentFactory, AgentFactoryConfig, AgentConfig

async def main():
    # Create configuration
    config = AgentFactoryConfig(
        agents={
            "assistant": AgentConfig(
                name="assistant",
                instructions="You are a helpful AI assistant",
                model="gpt-4",
            )
        },
        openai_models={
            "gpt-4": {
                "model": "gpt-4",
                "api_key": "your-api-key",
                "endpoint": "your-endpoint",
            }
        },
    )

    # Create and use agents
    async with AgentFactory(config) as factory:
        agent = factory.get_agent("assistant")
        # Use the agent for conversations

asyncio.run(main())
```
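The placeholder comment above leaves the conversation itself open. Here is a minimal sketch of one exchange, assuming `get_agent()` returns a Semantic Kernel `ChatCompletionAgent`, whose `invoke()` is an async generator of response items (the exact signature varies across semantic-kernel releases):

```python
import asyncio
from agent_factory import AgentFactory, AgentFactoryConfig, AgentConfig

async def ask(question: str) -> None:
    config = AgentFactoryConfig(
        agents={
            "assistant": AgentConfig(
                name="assistant",
                instructions="You are a helpful AI assistant",
                model="gpt-4",
            )
        },
        openai_models={
            "gpt-4": {
                "model": "gpt-4",
                "api_key": "your-api-key",
                "endpoint": "your-endpoint",
            }
        },
    )
    async with AgentFactory(config) as factory:
        agent = factory.get_agent("assistant")
        # invoke() yields complete response messages; use invoke_stream()
        # (where available) for token-level streaming.
        async for item in agent.invoke(messages=question):
            print(item.content)

asyncio.run(ask("What can you do?"))
```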
Create a service configuration `service_config.yaml`:
```yaml
agent_factory:
  agents:
    ChatBot:
      name: "ChatBot"
      instructions: "You are a helpful chatbot"
      model: "gpt-4"
  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"

service_factory:
  services:
    ChatBot:
      card:
        name: "ChatBot"
        description: "AI-powered chatbot service"
      enable_token_streaming: true
```
Deploy it as a web service:
```python
# Note: Requires service dependencies
# Install with: pip install semantic-kernel-agent-factory[service]
import asyncio

import uvicorn

from agent_factory import AgentServiceFactory, AgentServiceFactoryConfig

async def main():
    config = AgentServiceFactoryConfig.from_file("service_config.yaml")
    async with AgentServiceFactory(config) as factory:
        app = await factory.create_application()
        # Serve inside the factory context so agent and MCP resources
        # stay open for the lifetime of the server.
        server = uvicorn.Server(uvicorn.Config(app, host="0.0.0.0", port=8000))
        await server.serve()

if __name__ == "__main__":
    asyncio.run(main())
```
Connect agents to external tools using Model Context Protocol servers:
```yaml
agent_factory:
  agents:
    ToolAgent:
      name: "ToolAgent"
      instructions: "You have access to various tools"
      model: "gpt-4"
      mcp_servers: ["time", "kubernetes"]

mcp:
  servers:
    time:
      type: "stdio"
      command: "python"
      args: ["-m", "mcp_server_time"]
    kubernetes:
      type: "streamable_http"
      url: "https://k8s-mcp-server.example.com/mcp"
      timeout: 10
```
For HTTP-based MCP servers that require authentication, the MCP Python client SDK has a limitation: while it is easy to obtain a user access token inside an HTTP service, the SDK's underlying HTTP client offers no straightforward way to attach that token to the headers of requests sent to MCP servers.

**Workaround: Filter-based Token Injection**

As a workaround, the system uses filters to inject access tokens before sending requests to MCP servers. See `filters.py` for details.
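The project's actual implementation lives in `filters.py`; the sketch below only illustrates the general pattern using Semantic Kernel's function-invocation filter API. The plugin-name check and the `resolve_user_token()` helper are hypothetical stand-ins:

```python
from semantic_kernel import Kernel
from semantic_kernel.filters import FilterTypes, FunctionInvocationContext

def resolve_user_token() -> str:
    # Hypothetical helper: fetch the current user's token from your auth layer.
    return "example-token"

kernel = Kernel()

@kernel.filter(FilterTypes.FUNCTION_INVOCATION)
async def inject_access_token(context: FunctionInvocationContext, next):
    # Only touch tool calls routed to MCP plugins (the name check is illustrative).
    if (context.function.plugin_name or "").startswith("mcp"):
        context.arguments["access_token"] = resolve_user_token()
    await next(context)
```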
**Important Notes:**

- The server should consume the `access_token` for authentication purposes
- Do not include `access_token` in the tool's input schema definition
- The token is automatically injected by the filter before the request reaches the MCP server
- This is a temporary workaround until the MCP Python client SDK provides better header customization support
The interactive console provides:
- **Multi-Agent Chat**: Switch between different agents in a tabbed interface
- **Real-time Streaming**: See responses as they're generated
- **MCP Status Monitoring**: Live health checks of connected MCP servers
- **Function Call Visibility**: See tool calls and results in real-time
- **Keyboard Shortcuts**:
  - `Ctrl+Enter`: Send message
  - `Ctrl+L`: Clear chat
  - `F1`: Toggle agent panel
  - `F2`: Toggle logs
  - `Ctrl+W`: Close tab
Agent configuration:

```yaml
agent_factory:
  agents:
    MyAgent:
      name: "MyAgent"
      instructions: "System prompt for the agent"
      model: "gpt-4"
      model_settings:
        temperature: 0.7
        max_tokens: 2000
      response_json_schema:  # Optional structured output
        type: "object"
        properties:
          answer:
            type: "string"
      mcp:
        servers: ["tool1", "tool2"]
```
Model configuration:

```yaml
agent_factory:
  openai_models:
    gpt-4:
      model: "gpt-4"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
    gpt-3.5-turbo:
      model: "gpt-3.5-turbo"
      api_key: "${OPENAI_API_KEY}"
      endpoint: "${AZURE_OPENAI_ENDPOINT}"
```
Stdio servers (local processes):
```yaml
mcp:
  servers:
    local_tool:
      type: "stdio"
      command: "python"
      args: ["-m", "my_mcp_server"]
      env:
        DEBUG: "true"
```
Streamable HTTP servers (HTTP-based):
```yaml
mcp:
  servers:
    remote_tool:
      type: "streamable_http"
      url: "https://api.example.com/mcp"
      timeout: 15
```
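You can verify connectivity to a streamable HTTP server independently of the factory using the official `mcp` Python SDK. A minimal sketch; the module paths follow recent SDK releases and may shift between versions:

```python
# List the tools exposed by a streamable HTTP MCP server using the
# official `mcp` SDK (module layout per recent releases).
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def list_tools(url: str) -> None:
    async with streamablehttp_client(url) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                print(tool.name)

asyncio.run(list_tools("https://api.example.com/mcp"))
```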
```bash
# Start interactive chat (requires console dependencies)
agent-factory -c config.yaml

# List configured agents
agent-factory list -c config.yaml

# Enable verbose logging
agent-factory -c config.yaml --verbose

# Custom log directory
agent-factory -c config.yaml --log-dir /path/to/logs
```
Configure using environment variables:
```bash
# OpenAI Configuration
export OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"

# Optional: Agent Factory Settings
export AGENT_FACTORY__MODEL_SELECTION="cost"         # first, cost, latency, quality
export AGENT_FACTORY__MCP_FAILURE_STRATEGY="lenient" # strict, lenient
```
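The double-underscore pattern in these names suggests pydantic-settings-style nested keys. As an illustration of how such variables typically map onto a settings model (the class below is hypothetical, not the SDK's actual settings class):

```python
# Hypothetical illustration of how AGENT_FACTORY__* variables map onto
# a pydantic-settings model; the SDK's real settings class may differ.
from pydantic_settings import BaseSettings, SettingsConfigDict

class FactorySettings(BaseSettings):
    model_config = SettingsConfigDict(
        env_prefix="AGENT_FACTORY__",
        protected_namespaces=(),  # allow the model_* field name below
    )

    model_selection: str = "first"        # first, cost, latency, quality
    mcp_failure_strategy: str = "strict"  # strict, lenient

print(FactorySettings())  # reads AGENT_FACTORY__MODEL_SELECTION, etc.
```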
See the `examples/` directory for:

- `cli_example.yaml` - Console application setup
- `agent_service_factory_config.yaml` - Web service configuration
- `web_service.py` - Web service deployment example
To set up a development environment:

```bash
# Clone repository
git clone https://github.com/jhzhu89/semantic-kernel-agent-factory
cd semantic-kernel-agent-factory

# Install in editable mode with development dependencies
pip install -e ".[dev]"
```
The `[dev]` extra includes:
- Testing: pytest, pytest-asyncio, pytest-cov, pytest-mock
- Code Formatting: black, isort, ruff
- Type Checking: mypy with type stubs
- Linting: flake8, ruff
- Coverage: pytest-cov for test coverage reports
```bash
# Run tests
pytest

# Run tests with coverage
pytest --cov=agent_factory --cov-report=html

# Format code
black .
isort .

# Lint code
ruff check .
flake8 .

# Type checking
mypy agent_factory

# Run all quality checks
make test-cov    # Runs tests with coverage
make format      # Formats code
make type-check  # Type checking
```
```bash
# Install with console dependencies for development
pip install -e ".[dev,console]"

# For web service development
pip install -e ".[dev,service]"

# Install all features for development
pip install -e ".[dev,all]"
```
The Semantic Kernel Agent Factory consists of several key components:
- **AgentFactory**: Core factory for creating and managing Semantic Kernel agents
- **AgentServiceFactory**: Web service wrapper that exposes agents as HTTP APIs (requires the `[service]` extra)
- **MCPProvider**: Manages connections to Model Context Protocol servers
- **Console Application**: Terminal-based interface for interactive agent chat (requires the `[console]` extra)
- **Configuration System**: YAML-based configuration with validation
Different installation options enable additional features:
- `[console]`: Interactive terminal interface with Textual UI, Click CLI commands
- `[service]`: A2A-based web services, Starlette server support
- `[docs]`: Sphinx-based documentation generation
- `[dev]`: Development tools for testing, linting, and type checking
- Python 3.10+
- Microsoft Semantic Kernel
- Azure OpenAI or OpenAI API access
- Optional: MCP-compatible tool servers
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Install development dependencies: `pip install -e ".[dev]"`
- Add tests for new functionality
- Run the test suite and ensure all checks pass:

  ```bash
  # Run tests
  pytest

  # Format code
  black .
  isort .

  # Lint code
  ruff check .
  flake8 .

  # Type checking
  mypy agent_factory
  ```

- Submit a pull request
Project layout:

- `agent_factory/` - Core library code
- `tests/` - Test suite
- `examples/` - Usage examples
- `docs/` - Documentation (when using the `[docs]` extra)
This project uses:
- Black for code formatting
- isort for import sorting
- Ruff for linting
- Flake8 for additional linting
- mypy for type checking
- pytest for testing, with a >80% coverage requirement
This project is licensed under the MIT License - see the LICENSE file for details.
- GitHub Repository
- Issue Tracker
- Discussions
- Examples
- Microsoft Semantic Kernel
- Model Context Protocol
- Textual - Powers the console interface