Pipes

class openwebui_token_tracking.pipes.AnthropicTrackedPipe[source]

Bases: BaseTrackedPipe

Anthropic-specific implementation of the BaseTrackedPipe for handling API requests to Anthropic’s chat completion endpoints with token tracking.

This class handles authentication, request formatting, and response processing specific to the Anthropic API, including support for image processing and multi-modal messages.
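As an illustration, a multimodal request body of the kind this pipe processes might look like the following. The field names follow the OpenAI-style message format that Open WebUI conventionally passes to pipes; treat them as an assumption rather than part of this class's documented contract:

```python
# Hypothetical example of a multimodal request body as Open WebUI
# typically passes it to a pipe; the field names follow the
# OpenAI-style message format and are an assumption here.
body = {
    "model": "anthropic.claude-sonnet",  # hypothetical model id
    "stream": False,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "data:image/png;base64,iVBORw0..."},
                },
            ],
        }
    ],
}

# A pipe implementation must detect multi-part content like this and
# convert each part into the provider's native format.
text_parts = [
    part["text"]
    for message in body["messages"]
    for part in message["content"]
    if part["type"] == "text"
]
```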

class Valves(**data)[source]

Bases: BaseModel

Configuration parameters for Anthropic API connections.

Parameters:
  • ANTHROPIC_API_KEY (str)

  • DEBUG (bool)

model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

ANTHROPIC_API_KEY: str
DEBUG: bool
class openwebui_token_tracking.pipes.BaseTrackedPipe(provider, url)[source]

Bases: ABC

Base class for handling API requests to different AI model providers with token tracking.

This class provides a common interface for making requests to AI model APIs while tracking token usage. It handles both streaming and non-streaming responses, and manages token usage limits.

Parameters:
  • provider (str) – The name of the AI provider.

  • url (str) – The base URL for the provider’s API.

DATABASE_URL_ENV = 'DATABASE_URL'
MODEL_ID_PREFIX = '.'
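The pattern this base class describes can be sketched as follows. This is a minimal illustration, not the library's implementation: the real BaseTrackedPipe adds token-limit checks, usage logging, and HTTP handling, and the `_headers` hook name below is hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Iterator, Union

# Minimal sketch of the BaseTrackedPipe pattern; the _headers hook
# name is hypothetical, and the responses are placeholders.
class TrackedPipeSketch(ABC):
    def __init__(self, provider: str, url: str):
        self.provider = provider  # e.g. "anthropic"
        self.url = url            # base URL of the provider's API

    @abstractmethod
    def _headers(self) -> dict:
        """Provider-specific authentication headers."""

    def pipe(
        self, body: dict, __user__: dict, __metadata__: dict
    ) -> Union[str, Iterator[str]]:
        # 1. check token limits, 2. prepare the request,
        # 3. make the API call, 4. handle the response.
        if body.get("stream"):
            return iter(["chunk-1", "chunk-2"])  # placeholder stream
        return "full response"  # placeholder non-streaming reply

class DemoPipe(TrackedPipeSketch):
    def _headers(self) -> dict:
        return {"Authorization": "Bearer sk-example"}

demo = DemoPipe("demo-provider", "https://api.example.com")
```

A concrete subclass only supplies the provider-specific pieces; the streaming/non-streaming dispatch lives in the base class.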
get_models()[source]

Get a list of available models for this provider.

Retrieves models from the token tracker and formats them into a list of dictionaries containing model information.

Returns:

List of dictionaries, each containing:
  • id – The model identifier

  • name – The display name of the model

Return type:

list[dict]
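For illustration, the return value has this shape (the model entries here are made up):

```python
# Sample of the structure get_models() returns; these entries
# are fabricated for illustration.
models = [
    {"id": "claude-sonnet", "name": "Claude Sonnet"},
    {"id": "claude-haiku", "name": "Claude Haiku"},
]

# Open WebUI uses these entries to populate its model selector.
names_by_id = {m["id"]: m["name"] for m in models}
```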

non_stream_response(headers, payload, model_id, user, sponsored_allowance_name=None)[source]

Handle non-streaming responses from the API.

Makes the request and ensures token usage is logged after receiving the response.

Parameters:
  • headers (dict) – HTTP headers for the request

  • payload (dict) – Request payload

  • model_id (str) – The ID of the model being accessed

  • user (dict) – User information for token tracking

  • sponsored_allowance_name (str, optional) – The name of the sponsored allowance

Returns:

The API response

Return type:

Any

Raises:

RequestError – If the API request fails

pipe(body, __user__, __metadata__)[source]

Process an incoming request through the appropriate model pipeline.

This method handles the high-level flow of processing a request:
  1. Checks token limits
  2. Prepares the request
  3. Makes the API call
  4. Handles the response

Parameters:
  • body (dict) – The request body containing the model selection and messages

  • __user__ (dict) – User information for token tracking

  • __metadata__ (dict) – Additional metadata for the request

Returns:

Either a string response or a generator for streaming responses

Return type:

Union[str, Generator, Iterator]

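Because pipe() may return either a complete string or an iterator of chunks, a caller that wants the full text has to handle both cases. A minimal sketch:

```python
from typing import Iterator, Union

def collect_response(result: Union[str, Iterator[str]]) -> str:
    """Join streamed chunks into one string; pass plain strings through."""
    if isinstance(result, str):
        return result
    return "".join(result)
```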
pipes()[source]

Alias for get_models().

Returns:

List of available models

Return type:

list[dict]

See:

get_models()

stream_response(headers, payload, model_id, user, sponsored_allowance_name=None)[source]

Handle streaming responses from the API.

Makes the streaming request and ensures token usage is logged after the response is complete.

Parameters:
  • headers (dict) – HTTP headers for the request

  • payload (dict) – Request payload

  • model_id (str) – The ID of the model being accessed

  • user (dict) – User information for token tracking

  • sponsored_allowance_name (str, optional) – The name of the sponsored allowance

Yield:

Response chunks from the API

Raises:

RequestError – If the API request fails

class openwebui_token_tracking.pipes.GoogleTrackedPipe[source]

Bases: BaseTrackedPipe

Tracked pipe implementation for Google’s Gemini API.

This class handles API requests to Google’s Gemini models while tracking token usage. It supports both streaming and non-streaming responses, and handles multimodal inputs including text and images.

Parameters:
  • provider (str) – The provider name, set to “google_genai”

  • url (str) – The base URL for the Gemini API

class Valves(**data)[source]

Bases: BaseModel

Configuration parameters for the Google Gemini pipe.

Parameters:
  • GOOGLE_API_KEY (str) – API key for authenticating with Google’s API

  • USE_PERMISSIVE_SAFETY (bool) – Whether to use permissive safety settings

  • DEBUG (bool) – Enable debug logging

model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

GOOGLE_API_KEY: str
USE_PERMISSIVE_SAFETY: bool
DEBUG: bool
pipe(body, __user__, __metadata__)[source]

Process an incoming request through the appropriate model pipeline.

This method handles the high-level flow of processing a request:
  1. Checks token limits
  2. Prepares the request
  3. Makes the API call
  4. Handles the response

Parameters:
  • body (dict) – The request body containing the model selection and messages

  • __user__ (dict) – User information for token tracking

  • __metadata__ (dict) – Additional metadata for the request

Returns:

Either a string response or a generator for streaming responses

Return type:

Union[str, Generator, Iterator]

class openwebui_token_tracking.pipes.MistralTrackedPipe[source]

Bases: BaseTrackedPipe

Tracked pipe implementation for Mistral AI’s API.

This class handles API requests to Mistral AI models while tracking token usage. It supports both streaming and non-streaming responses, and implements rate limiting handling with automatic retries.
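The retry behavior can be illustrated with a generic backoff loop. This is a sketch of the general technique, not this pipe's actual retry policy (the retry count, delays, and error handling here are invented):

```python
import time

# Illustrative retry-with-backoff loop for handling rate limiting
# (HTTP 429); the real pipe's retry policy may differ.
def call_with_retries(request, max_retries: int = 3, base_delay: float = 0.01):
    """Retry request() while it signals rate limiting (HTTP 429)."""
    for attempt in range(max_retries + 1):
        status, data = request()
        if status != 429:
            return data
        if attempt < max_retries:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    raise RuntimeError("rate limited after all retries")

# Stub request that fails twice with 429, then succeeds.
attempts = {"n": 0}
def stub_request():
    attempts["n"] += 1
    return (429, None) if attempts["n"] < 3 else (200, "ok")

result = call_with_retries(stub_request)
```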

class Valves(**data)[source]

Bases: BaseModel

Configuration parameters for the Mistral pipe.

Parameters:
  • MISTRAL_API_KEY (str) – API key for authenticating with Mistral’s API

  • DEBUG (bool) – Enable debug logging

model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

MISTRAL_API_KEY: str
DEBUG: bool
class openwebui_token_tracking.pipes.OpenAITrackedPipe[source]

Bases: BaseTrackedPipe

OpenAI-specific implementation of the BaseTrackedPipe for handling API requests to OpenAI’s chat completion endpoints with token tracking.

Note that providers that are fully compliant with OpenAI’s API specification (regarding both the request and the response structure) can also be used with this pipe by setting the respective values in the Valves.

This class handles authentication, request formatting, and response processing specific to the OpenAI API while leveraging the base class’s token tracking functionality.
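For example, an OpenAI-compatible endpoint could be targeted with valve values along these lines. The dataclass below is a stand-in for the real pydantic Valves model, and the URL and provider name are made-up placeholders:

```python
from dataclasses import dataclass

# Stand-in for the Valves model (the real class is a pydantic
# BaseModel); the key, URL, and provider name are placeholders.
@dataclass
class Valves:
    API_KEY: str
    API_BASE_URL: str
    PROVIDER: str
    DEBUG: bool = False

# Pointing the pipe at a hypothetical OpenAI-compatible server:
valves = Valves(
    API_KEY="sk-example",
    API_BASE_URL="https://llm.example.com/v1",
    PROVIDER="my-local-provider",
)
```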

class Valves(**data)[source]

Bases: BaseModel

Configuration parameters for OpenAI (compatible) API connections.

Parameters:
  • API_KEY (str) – API key for authenticating with the provider’s API

  • API_BASE_URL (str) – Base URL of the OpenAI (compatible) API

  • PROVIDER (str) – Name of the provider

  • DEBUG (bool) – Enable debug logging
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

API_KEY: str
API_BASE_URL: str
PROVIDER: str
DEBUG: bool
pipe(body, __user__, __metadata__)[source]

Process an incoming request through the appropriate model pipeline.

This method handles the high-level flow of processing a request:
  1. Checks token limits
  2. Prepares the request
  3. Makes the API call
  4. Handles the response

Parameters:
  • body (dict) – The request body containing the model selection and messages

  • __user__ (dict) – User information for token tracking

  • __metadata__ (dict) – Additional metadata for the request

Returns:

Either a string response or a generator for streaming responses

Return type:

Union[str, Generator, Iterator]

pipes()[source]

Alias for get_models().

Returns:

List of available models

Return type:

list[dict]

See:

get_models()