OpenRouter

LiteLLM supports all the text / chat / vision / embedding models from OpenRouter


Usage

```python
import os
from litellm import completion

os.environ["OPENROUTER_API_KEY"] = ""
os.environ["OPENROUTER_API_BASE"] = ""  # [OPTIONAL] defaults to https://openrouter.ai/api/v1
os.environ["OR_SITE_URL"] = ""  # [OPTIONAL]
os.environ["OR_APP_NAME"] = ""  # [OPTIONAL]

messages = [{"role": "user", "content": "Hello, how are you?"}]

response = completion(
    model="openrouter/google/palm-2-chat-bison",
    messages=messages,
)
```

Configuration with Environment Variables

For production environments, you can dynamically configure the base_url using environment variables:

```python
import os
from litellm import completion

# Read configuration from the environment
OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
OPENROUTER_BASE_URL = os.getenv("OPENROUTER_API_BASE", "https://openrouter.ai/api/v1")

if OPENROUTER_API_KEY is None:
    raise RuntimeError("OPENROUTER_API_KEY is not set")

# Set environment for LiteLLM
os.environ["OPENROUTER_API_KEY"] = OPENROUTER_API_KEY
os.environ["OPENROUTER_API_BASE"] = OPENROUTER_BASE_URL

messages = [{"role": "user", "content": "Hello, how are you?"}]

response = completion(
    model="openrouter/google/palm-2-chat-bison",
    messages=messages,
    base_url=OPENROUTER_BASE_URL,  # Explicitly pass base_url for clarity
)
```

This approach provides better flexibility for managing configurations across different environments (dev, staging, production) and makes it easier to switch between self-hosted and cloud endpoints.
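For example, a small helper could select the base URL per environment. This is an illustrative sketch only: the `APP_ENV` variable and the URL map are assumptions for this example, not part of LiteLLM.

```python
import os
from typing import Optional

# Hypothetical per-environment endpoint map; adjust to your deployments.
# Only the "production" entry is OpenRouter's real public endpoint.
BASE_URLS = {
    "dev": "http://localhost:8080/api/v1",
    "staging": "https://staging-gateway.example.com/api/v1",
    "production": "https://openrouter.ai/api/v1",
}

def resolve_base_url(env: Optional[str] = None) -> str:
    """Pick the OpenRouter base URL for the current environment,
    falling back to the public endpoint for unknown environments."""
    env = env or os.getenv("APP_ENV", "production")
    return BASE_URLS.get(env, BASE_URLS["production"])
```

The resolved URL can then be exported as `OPENROUTER_API_BASE` or passed as `base_url`, as shown above.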

OpenRouter Completion Models

🚨 LiteLLM supports ALL OpenRouter models. Pass model=openrouter/<your-openrouter-model> to route the request through OpenRouter. See the full list on the OpenRouter models page.

| Model Name | Function Call |
|---|---|
| openrouter/openai/gpt-3.5-turbo | `completion('openrouter/openai/gpt-3.5-turbo', messages)` |
| openrouter/openai/gpt-3.5-turbo-16k | `completion('openrouter/openai/gpt-3.5-turbo-16k', messages)` |
| openrouter/openai/gpt-4 | `completion('openrouter/openai/gpt-4', messages)` |
| openrouter/openai/gpt-4-32k | `completion('openrouter/openai/gpt-4-32k', messages)` |
| openrouter/anthropic/claude-2 | `completion('openrouter/anthropic/claude-2', messages)` |
| openrouter/anthropic/claude-instant-v1 | `completion('openrouter/anthropic/claude-instant-v1', messages)` |
| openrouter/google/palm-2-chat-bison | `completion('openrouter/google/palm-2-chat-bison', messages)` |
| openrouter/google/palm-2-codechat-bison | `completion('openrouter/google/palm-2-codechat-bison', messages)` |
| openrouter/meta-llama/llama-2-13b-chat | `completion('openrouter/meta-llama/llama-2-13b-chat', messages)` |
| openrouter/meta-llama/llama-2-70b-chat | `completion('openrouter/meta-llama/llama-2-70b-chat', messages)` |

Passing OpenRouter Params - transforms, models, route

Pass transforms, models, and route as arguments to litellm.completion():

```python
import os
from litellm import completion

os.environ["OPENROUTER_API_KEY"] = ""

messages = [{"role": "user", "content": "Hello, how are you?"}]

response = completion(
    model="openrouter/google/palm-2-chat-bison",
    messages=messages,
    transforms=[""],
    route="",
)
```

Embedding

```python
import os
from litellm import embedding

os.environ["OPENROUTER_API_KEY"] = "your-api-key"

response = embedding(
    model="openrouter/openai/text-embedding-3-small",
    input=["good morning from litellm", "this is another item"],
)
print(response)
```

Image Generation

OpenRouter supports image generation through select models, such as Google's Gemini image models. LiteLLM transforms standard image generation requests into OpenRouter's chat completion format.

Supported Parameters

  • size: Maps to OpenRouter's aspect_ratio format

    • 1024x1024 → 1:1 (square)
    • 1536x1024 → 3:2 (landscape)
    • 1024x1536 → 2:3 (portrait)
    • 1792x1024 → 16:9 (wide landscape)
    • 1024x1792 → 9:16 (tall portrait)
  • quality: Maps to OpenRouter's image_size format (Gemini models)

    • low or standard → 1K
    • medium → 2K
    • high or hd → 4K
  • n: Number of images to generate
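The mappings above can be sketched as simple lookup tables. This is an illustration of the documented behavior, not LiteLLM's internal code:

```python
# size -> OpenRouter aspect_ratio, per the mapping documented above
SIZE_TO_ASPECT_RATIO = {
    "1024x1024": "1:1",   # square
    "1536x1024": "3:2",   # landscape
    "1024x1536": "2:3",   # portrait
    "1792x1024": "16:9",  # wide landscape
    "1024x1792": "9:16",  # tall portrait
}

# quality -> OpenRouter image_size, per the mapping documented above (Gemini models)
QUALITY_TO_IMAGE_SIZE = {
    "low": "1K",
    "standard": "1K",
    "medium": "2K",
    "high": "4K",
    "hd": "4K",
}
```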

Usage

```python
import os
from litellm import image_generation

os.environ["OPENROUTER_API_KEY"] = "your-api-key"

# Basic image generation
response = image_generation(
    model="openrouter/google/gemini-2.5-flash-image",
    prompt="A beautiful sunset over a calm ocean",
)
print(response)
```

Advanced Usage with Parameters

```python
import os
from litellm import image_generation

os.environ["OPENROUTER_API_KEY"] = "your-api-key"

# Generate a high-quality landscape image
response = image_generation(
    model="openrouter/google/gemini-2.5-flash-image",
    prompt="A serene mountain landscape with a lake",
    size="1536x1024",  # Landscape format (3:2)
    quality="high",    # High quality (4K)
)

# Access the generated image
image_data = response.data[0]
if image_data.b64_json:
    # Base64-encoded image
    print(f"Generated base64 image: {image_data.b64_json[:50]}...")
elif image_data.url:
    # Image URL
    print(f"Generated image URL: {image_data.url}")
```

Using OpenRouter-Specific Parameters

You can also pass OpenRouter-specific parameters directly using image_config:

```python
import os
from litellm import image_generation

os.environ["OPENROUTER_API_KEY"] = "your-api-key"

response = image_generation(
    model="openrouter/google/gemini-2.5-flash-image",
    prompt="A futuristic cityscape at night",
    image_config={
        "aspect_ratio": "16:9",  # OpenRouter native format
        "image_size": "4K",      # OpenRouter native format
    },
)
print(response)
```

Response Format

The response follows the standard LiteLLM ImageResponse format:

```python
{
    "created": 1703658209,
    "data": [{
        "b64_json": "iVBORw0KGgoAAAANSUhEUgAA...",  # Base64-encoded image
        "url": None,
        "revised_prompt": None
    }],
    "usage": {
        "input_tokens": 10,
        "output_tokens": 1290,
        "total_tokens": 1300
    }
}
```

Cost Tracking

OpenRouter provides cost information in the response, which LiteLLM automatically tracks:

```python
response = image_generation(
    model="openrouter/google/gemini-2.5-flash-image",
    prompt="A cute baby sea otter",
)

# Cost is available in the response metadata
print(f"Request cost: ${response._hidden_params['additional_headers']['llm_provider-x-litellm-response-cost']}")
```
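Since `_hidden_params` may not always carry the cost header, a defensive lookup avoids a KeyError. This is a sketch: the header name is taken from the example above, and the helper is not part of LiteLLM.

```python
from typing import Optional

COST_HEADER = "llm_provider-x-litellm-response-cost"

def get_response_cost(hidden_params: Optional[dict]) -> Optional[float]:
    """Return the LiteLLM-reported cost from a response's _hidden_params,
    or None if the cost header is absent."""
    headers = (hidden_params or {}).get("additional_headers") or {}
    cost = headers.get(COST_HEADER)
    return float(cost) if cost is not None else None

# Example with a stubbed payload:
print(get_response_cost({"additional_headers": {COST_HEADER: "0.00042"}}))  # 0.00042
print(get_response_cost({}))  # None
```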