Unified API across providers
LiteLLM provides a unified OpenAI-compatible interface for calling 100+ LLM providers, including Anthropic, Google, Azure, and self-hosted models, without changing your code. It can be used as a Python SDK or as a proxy server to standardize API calls, manage fallbacks between providers, and track spend across different models.
Install with `pip install litellm` and set the API keys for each provider you want to use (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY). Call `litellm.completion()` with a model string like `claude-3-opus-20240229` or `gpt-4`. For team use, run the proxy server with `litellm --model gpt-4` to get a local OpenAI-compatible endpoint.