Modify / Reject Incoming Requests

Modify request data just before the proxy makes the litellm completion call.

See a complete example with our parallel request rate limiter; a minimal sketch in that spirit appears after the Quick Start handler code below.

Quick Start

  1. In your custom handler, add an async_pre_call_hook function

This function is called just before a litellm completion call is made, and lets you modify the data going into that litellm call.

from litellm.integrations.custom_logger import CustomLogger
from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
from typing import Literal
import litellm

# This file includes the custom callbacks for LiteLLM Proxy
# Once defined, these can be passed in proxy_config.yaml
class MyCustomHandler(CustomLogger):  # https://docs.litellm.ai/docs/observability/custom_callback#callback-class
    # Class variables or attributes
    def __init__(self):
        pass

    #### ASYNC ####

    async def async_log_stream_event(self, kwargs, response_obj, start_time, end_time):
        pass

    async def async_log_pre_api_call(self, model, messages, kwargs):
        pass

    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        pass

    async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
        pass

    #### CALL HOOKS - proxy only ####

    async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal["completion", "embeddings"]):
        # Override the model for every request before it reaches litellm
        data["model"] = "my-new-model"
        return data

proxy_handler_instance = MyCustomHandler()
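
The same hook can also reject a request outright: raising an exception from async_pre_call_hook makes the proxy return an error to the client instead of forwarding the call to litellm. Here is a minimal per-key rate limiter in the spirit of the parallel request rate limiter linked above — a sketch, not LiteLLM's built-in limiter. It assumes DualCache's async_get_cache/async_set_cache accept key, value, and ttl keyword arguments, and uses a hypothetical limit of 10 requests per minute:

from fastapi import HTTPException

from litellm.integrations.custom_logger import CustomLogger
from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
from typing import Literal

class RateLimitHandler(CustomLogger):
    async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal["completion", "embeddings"]):
        # Track how many requests this API key has made in the current window
        key = f"request_count:{user_api_key_dict.api_key}"
        current = await cache.async_get_cache(key=key) or 0
        if current >= 10:  # hypothetical limit: 10 requests per minute
            # Raising here rejects the request; the proxy returns the error to the client
            raise HTTPException(status_code=429, detail={"error": "Max 10 requests/minute exceeded"})
        await cache.async_set_cache(key=key, value=current + 1, ttl=60)
        return data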
  2. Save this file as custom_callbacks.py and add it to your proxy config
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]
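
For the callbacks value to resolve, the proxy must be able to import the custom_callbacks module. The simplest layout (an assumption — any importable location works) is to keep both files in the directory you start the proxy from:

litellm_config/
├── config.yaml
└── custom_callbacks.py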
  3. Start the server and test the request
$ litellm --config /path/to/config.yaml
curl --location 'http://0.0.0.0:8000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "good morning good sir"
            }
        ],
        "user": "ishaan-app",
        "temperature": 0.2
    }'
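
You can also exercise the hook from the OpenAI Python SDK pointed at the proxy — a minimal sketch, assuming the proxy is running on port 8000 as above and openai>=1.0 is installed (the api_key value is arbitrary unless you have configured proxy auth):

from openai import OpenAI

# Point the OpenAI client at the LiteLLM proxy instead of api.openai.com
client = OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the Quick Start hook rewrites this to "my-new-model" before the litellm call
    messages=[{"role": "user", "content": "good morning good sir"}],
    user="ishaan-app",
    temperature=0.2,
)
print(response)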