Pipes
- class openwebui_token_tracking.pipes.AnthropicTrackedPipe[source]
Bases:
BaseTrackedPipe
Anthropic-specific implementation of the BaseTrackedPipe for handling API requests to Anthropic’s chat completion endpoints with token tracking.
This class handles authentication, request formatting, and response processing specific to the Anthropic API, including support for image processing and multi-modal messages.
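As an illustration of the kind of multimodal translation this pipe performs, the sketch below converts an OpenAI-style base64 image data URL into Anthropic's image content-block format. This is a simplified stand-in, not the library's actual implementation.

```python
# Illustrative sketch: translating a base64 data URL into Anthropic's
# image content block. Not the library's actual code.

def to_anthropic_image_block(image_url: str) -> dict:
    """Convert a base64 data URL into an Anthropic image content block."""
    # Data URLs look like: data:image/png;base64,iVBORw0KG...
    header, data = image_url.split(",", 1)
    media_type = header.split(":", 1)[1].split(";", 1)[0]
    return {
        "type": "image",
        "source": {"type": "base64", "media_type": media_type, "data": data},
    }

block = to_anthropic_image_block("data:image/png;base64,iVBORw0KGgo=")
```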
- class openwebui_token_tracking.pipes.BaseTrackedPipe(provider, url)[source]
Bases:
ABC
Base class for handling API requests to different AI model providers with token tracking.
This class provides a common interface for making requests to AI model APIs while tracking token usage. It handles both streaming and non-streaming responses, and manages token usage limits.
- Parameters:
provider – Name of the model provider
url – Base URL of the provider's API endpoint
- DATABASE_URL_ENV = 'DATABASE_URL'
- MODEL_ID_PREFIX = '.'
- get_models()[source]
Get a list of available models for this provider.
Retrieves models from the token tracker and formats them into a list of dictionaries containing model information.
- non_stream_response(headers, payload, model_id, user, sponsored_allowance_name=None)[source]
Handle non-streaming responses from the API.
Makes the request and ensures token usage is logged after receiving the response.
- Parameters:
headers – HTTP headers for the API request
payload – Request payload to send to the API
model_id – Identifier of the model to use
user – User making the request
sponsored_allowance_name – Name of the sponsored allowance to bill, if any
- Returns:
The API response
- Return type:
Any
- Raises:
RequestError – If the API request fails
- pipe(body, __user__, __metadata__)[source]
Process an incoming request through the appropriate model pipeline.
This method handles the high-level flow of processing a request:
1. Checks token limits
2. Prepares the request
3. Makes the API call
4. Handles the response
- Parameters:
body – The request body
__user__ – User information
__metadata__ – Request metadata
- Returns:
Either a string response or a generator for streaming responses
- Return type:
Union[str, Generator, Iterator]
- Raises:
TokenLimitExceededError – If user has exceeded their token limit
RequestError – If the API request fails
- stream_response(headers, payload, model_id, user, sponsored_allowance_name=None)[source]
Handle streaming responses from the API.
Makes the streaming request and ensures token usage is logged after the response is complete.
- Parameters:
headers – HTTP headers for the API request
payload – Request payload to send to the API
model_id – Identifier of the model to use
user – User making the request
sponsored_allowance_name – Name of the sponsored allowance to bill, if any
- Yield:
Response chunks from the API
- Raises:
RequestError – If the API request fails
- class openwebui_token_tracking.pipes.GoogleTrackedPipe[source]
Bases:
BaseTrackedPipe
Tracked pipe implementation for Google’s Gemini API.
This class handles API requests to Google’s Gemini models while tracking token usage. It supports both streaming and non-streaming responses, and handles multimodal inputs including text and images.
- class Valves(**data)[source]
Bases:
BaseModel
Configuration parameters for the Google Gemini pipe.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model. Should be a dictionary conforming to pydantic's ConfigDict (pydantic.config.ConfigDict).
- pipe(body, __user__, __metadata__)[source]
Process an incoming request through the appropriate model pipeline.
This method handles the high-level flow of processing a request:
1. Checks token limits
2. Prepares the request
3. Makes the API call
4. Handles the response
- Parameters:
body – The request body
__user__ – User information
__metadata__ – Request metadata
- Returns:
Either a string response or a generator for streaming responses
- Return type:
Union[str, Generator, Iterator]
- Raises:
TokenLimitExceededError – If user has exceeded their token limit
RequestError – If the API request fails
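The message translation the Google pipe performs can be illustrated with the sketch below, which converts OpenAI-style chat messages to the Gemini REST API's `contents`/`parts` shape. This is a simplified assumption for illustration, not the library's code:

```python
# Sketch: converting OpenAI-style chat messages to the Gemini REST API's
# "contents"/"parts" structure. Simplified; not the library's implementation.

def to_gemini_contents(messages: list[dict]) -> list[dict]:
    contents = []
    for msg in messages:
        # Gemini uses the role "model" where OpenAI uses "assistant".
        role = "model" if msg["role"] == "assistant" else "user"
        contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    return contents

contents = to_gemini_contents([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
])
```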
- class openwebui_token_tracking.pipes.MistralTrackedPipe[source]
Bases:
BaseTrackedPipe
Tracked pipe implementation for Mistral AI’s API.
This class handles API requests to Mistral AI models while tracking token usage. It supports both streaming and non-streaming responses, and implements rate limiting handling with automatic retries.
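The rate-limit handling with automatic retries mentioned above typically follows an exponential-backoff pattern; the helper below is an illustrative sketch of that pattern (the retry policy and function names are assumptions, not the pipe's actual logic):

```python
# Illustrative retry-with-backoff helper of the kind the Mistral pipe's
# rate-limit handling implies. Retry counts and delays are assumptions.
import time

def with_retries(call, max_retries: int = 3, base_delay: float = 0.0):
    """Retry `call` while it signals a rate limit (HTTP 429), backing off exponentially."""
    for attempt in range(max_retries + 1):
        status, result = call()
        if status != 429:
            return result
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate limited after retries")

# Simulated API that is rate limited twice, then succeeds:
responses = iter([(429, None), (429, None), (200, "ok")])
result = with_retries(lambda: next(responses), base_delay=0.0)
```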
- class openwebui_token_tracking.pipes.OpenAITrackedPipe[source]
Bases:
BaseTrackedPipe
OpenAI-specific implementation of the BaseTrackedPipe for handling API requests to OpenAI’s chat completion endpoints with token tracking.
Note that providers fully compliant with OpenAI's API specification (for both request and response structure) can also be used with this pipe by setting the corresponding values in the Valves.
This class handles authentication, request formatting, and response processing specific to the OpenAI API while leveraging the base class’s token tracking functionality.
- class Valves(**data)[source]
Bases:
BaseModel
Configuration parameters for OpenAI (compatible) API connections.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model. Should be a dictionary conforming to pydantic's ConfigDict (pydantic.config.ConfigDict).
- pipe(body, __user__, __metadata__)[source]
Process an incoming request through the appropriate model pipeline.
This method handles the high-level flow of processing a request:
1. Checks token limits
2. Prepares the request
3. Makes the API call
4. Handles the response
- Parameters:
body – The request body
__user__ – User information
__metadata__ – Request metadata
- Returns:
Either a string response or a generator for streaming responses
- Return type:
Union[str, Generator, Iterator]
- Raises:
TokenLimitExceededError – If user has exceeded their token limit
RequestError – If the API request fails
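Pointing this pipe at an OpenAI-compatible provider, as described above, amounts to overriding the endpoint and key in the Valves. The sketch below uses a stand-in dataclass; the field names `OPENAI_API_BASE_URL` and `OPENAI_API_KEY` are assumptions for illustration, so check the actual Valves fields before relying on them:

```python
# Stand-in for the Valves configuration pattern. Field names here are
# assumed for illustration, not taken from the library.
from dataclasses import dataclass

@dataclass
class ValvesSketch:
    OPENAI_API_BASE_URL: str = "https://api.openai.com/v1"
    OPENAI_API_KEY: str = ""

# Using an OpenAI-compatible provider only requires overriding the
# base URL and API key:
valves = ValvesSketch(
    OPENAI_API_BASE_URL="https://my-compatible-provider.example.com/v1",
    OPENAI_API_KEY="sk-placeholder",
)
endpoint = f"{valves.OPENAI_API_BASE_URL}/chat/completions"
```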