6 Using OpenRouter in Python
The goal of this brief tutorial is to demonstrate how to work with an OpenAI-compatible API in Python. For this example, we will use OpenRouter-provided models.
You will learn how to:
- Generate a free text response directly from an LLM.
- Control important parameters like temperature.
- Define an output schema and generate structured outputs from an LLM.
- Use “thinking” models and extract their thinking output.
There are two main ways to access OpenRouter from Python:
- Using the openai Python package with OpenRouter’s OpenAI-compatible API.
- Using direct HTTP requests with the requests package.
In this tutorial, we will use the openai Python package, which can be used for accessing any OpenAI-compatible API, including OpenRouter.
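For completeness, the requests route looks roughly like this. This is a sketch, not part of the tutorial's code: the /chat/completions endpoint path and payload shape follow OpenRouter's OpenAI-compatible API, the placeholder key is illustrative, and the actual POST is left commented out so the snippet runs without a network call:

```python
import json  # used in the commented-out POST below


def build_chat_request(api_key: str, model: str, prompt: str):
    """Assemble the URL, headers, and JSON payload for a raw HTTP chat completion."""
    url = "https://openrouter.ai/api/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # same key as with the openai client
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, payload


url, headers, payload = build_chat_request(
    "sk-or-...", "openai/gpt-oss-20b:free", "Hello!"
)
# import requests
# resp = requests.post(url, headers=headers, data=json.dumps(payload))
# print(resp.json()["choices"][0]["message"]["content"])
```

The response JSON has the same choices[0].message structure you will see below with the openai client.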
6.1 Requirements
- OpenRouter API key
- openai Python package
6.2 Setup
6.2.1 Install packages
If not already installed, use uv to install the required package:
uv add openai
6.2.2 Load packages
import json
import keyring
from openai import OpenAI
6.2.3 OpenRouter API endpoint
See the setup guide for instructions on how to securely store and retrieve your OpenRouter API key using the keyring package.
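If you have not stored the key yet, a one-time snippet along these lines puts it in the OS credential store. The service name "OPENROUTER_API_KEY" and username "api-key" are the pair assumed by the get_password call below; adjust both together if you use different names:

```python
import keyring


def store_api_key(key: str) -> None:
    # One-time setup: saves the key under the service/username pair that
    # keyring.get_password("OPENROUTER_API_KEY", "api-key") reads back.
    keyring.set_password("OPENROUTER_API_KEY", "api-key", key)
```

Call store_api_key("sk-or-...") once in an interactive session; afterwards the key never needs to appear in your source files.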
openrouter_url = "https://openrouter.ai/api/v1"
model_name = "openai/gpt-oss-20b:free"
api_key = keyring.get_password("OPENROUTER_API_KEY", "api-key")
6.3 Generate a free text response
For this example, we will use the openai/gpt-oss-20b:free model, an open-weight model available free of charge via OpenRouter.
6.3.1 Instantiate the OpenAI client
Instantiate the OpenAI client with the OpenRouter base URL and your API key:
client = OpenAI(
base_url=openrouter_url,
api_key=api_key,
)
6.3.2 Generate response
resp = client.chat.completions.create(
# extra_headers={
# "HTTP-Referer": "<YOUR_SITE_URL>", # Optional. Site URL for rankings on openrouter.ai.
# "X-Title": "<YOUR_SITE_NAME>", # Optional. Site title for rankings on openrouter.ai.
# },
model=model_name,
messages=[
{"role": "system", "content": "You are a meticulous research assistant."},
{"role": "user", "content": "What is your name and who made you?"},
],
temperature=0.2,
)
The response message can be accessed as follows:
message = resp.choices[0].message
6.3.3 Print response
if hasattr(message, "reasoning") and message.reasoning:
print("--- Reasoning ---")
print(message.reasoning + "\n")
print("--- Response ---")
print(message.content)
--- Reasoning ---
We need to answer: "What is your name and who made you?" The user is asking about the assistant's name and creator. According to policy, we can say "I am ChatGPT, a large language model trained by OpenAI." We should not mention policy. Just answer.
--- Response ---
I’m ChatGPT, a large language model created by OpenAI.
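The temperature parameter used above controls sampling randomness: values near 0 make the output nearly deterministic, while higher values make it more varied. A simple way to see the effect is to repeat one prompt at several temperatures. The helper below is a sketch that assumes the client and model_name defined earlier; the default temperature values are illustrative:

```python
def compare_temperatures(client, model, prompt, temps=(0.0, 0.7, 1.2)):
    """Send the same prompt at each temperature and collect the replies."""
    replies = {}
    for t in temps:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=t,
        )
        replies[t] = resp.choices[0].message.content
    return replies


# replies = compare_temperatures(client, model_name, "Describe the sky in one sentence.")
# for t, text in replies.items():
#     print(f"T={t}: {text}")
```

At temperature 0.0 repeated calls should give near-identical answers; at 1.2 the wording typically varies noticeably between runs.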
6.4 Generate structured output
There are many scenarios, especially in research, where we want a structured response instead of free text. A common approach is to add formatting instructions to the user prompt, and LLMs are increasingly good at following such instructions. However, the API natively supports defining an output schema, including the required field names and their descriptions. This is cleaner, easier, and more likely to succeed, without requiring laborious and extensive prompting.
We begin by defining the output schema as a Pydantic BaseModel, which we will then pass along with the request:
from pydantic import BaseModel, Field
class LLMInfoSchema(BaseModel):
name: str = Field(description="Your name")
manufacturer: str = Field(
description="The name of the person, group, or company that built you."
)
knowledge_cutoff: str = Field(description="Your knowledge cutoff date.")
Let’s rerun the previous query, but this time we will include the output schema in the request.
resp_structured = client.chat.completions.create(
model=model_name,
messages=[
{"role": "system", "content": "You are a meticulous research assistant."},
{"role": "user", "content": "What is your name and who made you?"},
],
temperature=0.2,
response_format={
"type": "json_schema",
"json_schema": {
"name": "Response", # Optional but recommended
"schema": LLMInfoSchema.model_json_schema(),
},
},
)
Access the message as before:
message_structured = resp_structured.choices[0].message
We can print the JSON string directly or use the json package to pretty-print it:
if hasattr(message_structured, "reasoning") and message_structured.reasoning:
print("--- Reasoning ---")
print(message_structured.reasoning + "\n")
print("--- Response ---")
print(json.dumps(json.loads(message_structured.content), indent=4))
--- Reasoning ---
We need to respond as a meticulous research assistant. The user asks: "What is your name and who made you?" We should answer: My name is ChatGPT, developed by OpenAI. Provide details. Also mention the team. Provide thorough answer.
--- Response ---
{
"name": "ChatGPT",
"manufacturer": "OpenAI",
"knowledge_cutoff": "2024-06"
}
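Because the content is guaranteed to match the schema, it can also be parsed straight into the Pydantic model rather than a plain dict. model_validate_json is standard Pydantic v2; the JSON string here simply mirrors the sample output above, and in the real flow you would pass message_structured.content instead:

```python
from pydantic import BaseModel, Field


class LLMInfoSchema(BaseModel):
    name: str = Field(description="Your name")
    manufacturer: str = Field(
        description="The name of the person, group, or company that built you."
    )
    knowledge_cutoff: str = Field(description="Your knowledge cutoff date.")


content = '{"name": "ChatGPT", "manufacturer": "OpenAI", "knowledge_cutoff": "2024-06"}'
info = LLMInfoSchema.model_validate_json(content)
print(info.name)              # typed attribute access instead of dict lookups
print(info.knowledge_cutoff)
```

Validation raises a pydantic.ValidationError if a required field is missing or has the wrong type, which makes downstream analysis code much safer than working with raw dicts.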