📄️ Quick Start
Quick start with the CLI, config, and Docker
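
A minimal sketch of starting the proxy from the CLI, assuming the `litellm[proxy]` pip extra and an `OPENAI_API_KEY` in your environment; exact flags and the default port may differ by version:

```shell
# install the proxy extras (assumption: pip-based install)
pip install 'litellm[proxy]'

# start the proxy against a single model; LiteLLM prints the local URL it listens on
export OPENAI_API_KEY=sk-...
litellm --model gpt-3.5-turbo
```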
📄️ Proxy Config.yaml
Set the model list, api_base, api_key, temperature & proxy server settings (master_key) in config.yaml.
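
As a sketch, a minimal config.yaml could look like the following; the model names, endpoint, and master key are placeholders:

```shell
# write a minimal config.yaml (placeholder model names / keys), then start the proxy with it
cat > config.yaml <<'EOF'
model_list:
  - model_name: gpt-3.5-turbo              # name clients will request
    litellm_params:
      model: azure/my-deployment           # placeholder deployment
      api_base: https://my-endpoint.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY    # read the key from the environment
      temperature: 0.2

general_settings:
  master_key: sk-1234                      # placeholder proxy master key
EOF

litellm --config config.yaml
```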
🔗 📖 All Endpoints
📄️ Use with Langchain, OpenAI SDK, LlamaIndex, Curl
Inputs, outputs, and exceptions are mapped to the OpenAI format for all supported models
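
For example, any OpenAI-compatible client can point at the proxy; a curl sketch, assuming the proxy is listening at http://0.0.0.0:4000 and a virtual key `sk-1234`:

```shell
# call the proxy exactly like the OpenAI /chat/completions endpoint
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```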
📄️ Virtual Keys, Users
Track spend, set budgets, and create virtual keys for the proxy
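
A sketch of generating a virtual key via the proxy's /key/generate endpoint, authenticated with the master key from config.yaml (values are placeholders):

```shell
# create a virtual key scoped to specific models, valid for 30 days
curl -X POST http://0.0.0.0:4000/key/generate \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"models": ["gpt-3.5-turbo"], "duration": "30d"}'
```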
📄️ 💰 Budgets, Rate Limits
Set budgets and rate limits per key, user, or model
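
As an illustrative sketch, budgets and rate limits can be attached when generating a key; field names like max_budget, rpm_limit, and tpm_limit follow the key-generation API, but check the linked page for the exact fields your version supports:

```shell
# key with a $10 budget and per-key rate limits (placeholder values)
curl -X POST http://0.0.0.0:4000/key/generate \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"max_budget": 10, "rpm_limit": 100, "tpm_limit": 100000}'
```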
📄️ 🔑 [BETA] Proxy UI
Create + delete keys through a UI
📄️ Model Management
Add new models + get model info without restarting the proxy.
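
A hedged sketch of adding a model at runtime via the proxy's /model/new endpoint, assuming master-key auth (model names and keys are placeholders):

```shell
# add a new deployment without restarting the proxy
curl -X POST http://0.0.0.0:4000/model/new \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{"model_name": "gpt-4", "litellm_params": {"model": "openai/gpt-4", "api_key": "os.environ/OPENAI_API_KEY"}}'
```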
📄️ Health Checks
Use this to health-check all LLMs defined in your config.yaml
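
For example, the /health endpoint reports per-deployment status; a sketch assuming master-key auth and the default local address:

```shell
# check health of every model defined in config.yaml
curl http://0.0.0.0:4000/health -H "Authorization: Bearer sk-1234"
```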
📄️ Debugging
Two levels of debugging are supported.
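
As a sketch, both levels are enabled via CLI flags (flag names per the linked page; verify against your version):

```shell
# standard debug logs
litellm --model gpt-3.5-turbo --debug

# verbose, request-level logs
litellm --model gpt-3.5-turbo --detailed_debug
```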
📄️ PII Masking
LiteLLM supports Microsoft Presidio for PII masking.
🗃️ 🔥 Load Balancing
2 items
📄️ Caching
Cache LLM Responses
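
A minimal caching sketch, assuming a reachable Redis instance; the host, port, and password below are placeholders:

```shell
# enable Redis-backed response caching by extending config.yaml
cat >> config.yaml <<'EOF'
litellm_settings:
  cache: True
  cache_params:
    type: redis
    host: "localhost"        # placeholder Redis host
    port: 6379
    password: "my-password"  # placeholder Redis password
EOF
```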
🗃️ Logging, Alerting, Caching
3 items
🗃️ Admin Controls
2 items
📄️ 🐳 Docker, Deploying LiteLLM Proxy
You can find the Dockerfile to build the LiteLLM proxy here
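
A sketch of running the published image with a mounted config; the image tag and port are the commonly documented defaults, so verify them against the linked page:

```shell
# pull and run the proxy image with your config.yaml mounted in
docker pull ghcr.io/berriai/litellm:main-latest
docker run \
  -v $(pwd)/config.yaml:/app/config.yaml \
  -e AZURE_API_KEY=my-azure-key \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml
```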
📄️ CLI Arguments
CLI arguments: --host, --port, --num_workers
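
For example, the flags named above can be combined like so (values are placeholders):

```shell
# bind address, port, and worker count for the proxy server
litellm --host 0.0.0.0 --port 4000 --num_workers 4 --config config.yaml
```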