Instructor is the most popular Python library for working with structured outputs from large language models (LLMs), boasting over 1 million monthly downloads. Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows with the community's top choice!
If your company uses Instructor a lot, we'd love to have your logo on our website! Please fill out this form.
- Response Models: Specify Pydantic models to define the structure of your LLM outputs
- Retry Management: Easily configure the number of retry attempts for your requests
- Validation: Ensure LLM responses conform to your expectations with Pydantic validation
- Streaming Support: Work with Lists and Partial responses effortlessly
- Flexible Backends: Seamlessly integrate with various LLM providers beyond OpenAI
- Multi-language Support: Instructor is available in many languages, including Python, TypeScript, Ruby, Go, and Elixir
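The validation and retry features rest on plain Pydantic: when a response fails validation, Instructor feeds the error back to the model and retries. As a minimal sketch of the Pydantic side of that loop (the model and validator here are illustrative, not part of Instructor's API):

```python
from pydantic import BaseModel, ValidationError, field_validator


class UserInfo(BaseModel):
    name: str
    age: int

    @field_validator("age")
    @classmethod
    def age_must_be_non_negative(cls, v: int) -> int:
        # This is the kind of constraint Instructor enforces on LLM output
        if v < 0:
            raise ValueError("age must be non-negative")
        return v


# Valid data parses cleanly
user = UserInfo.model_validate({"name": "John Doe", "age": 30})
print(user.age)  # 30

# Invalid data raises a ValidationError; on retry, Instructor includes
# this error message in the next prompt so the model can self-correct.
try:
    UserInfo.model_validate({"name": "Jane", "age": -5})
except ValidationError as e:
    print("validation failed")
```

Because the error messages are structured, the model gets actionable feedback rather than a generic "try again".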
Install Instructor with a single command:
```bash
pip install -U instructor
```
Now, let's see Instructor in action with a simple example:
```python
import instructor
from pydantic import BaseModel
from openai import OpenAI


# Define your desired output structure
class UserInfo(BaseModel):
    name: str
    age: int


# Patch the OpenAI client
client = instructor.from_openai(OpenAI())

# Extract structured data from natural language
user_info = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)

print(user_info.name)
#> John Doe
print(user_info.age)
#> 30
```
Instructor provides a powerful hooks system that allows you to intercept and log various stages of the LLM interaction process. Here's a simple example demonstrating how to use hooks:
```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class UserInfo(BaseModel):
    name: str
    age: int


# Initialize the OpenAI client with Instructor
client = instructor.from_openai(OpenAI())


# Define hook functions
def log_kwargs(**kwargs):
    print(f"Function called with kwargs: {kwargs}")


def log_exception(exception: Exception):
    print(f"An exception occurred: {str(exception)}")


client.on("completion:kwargs", log_kwargs)
client.on("completion:error", log_exception)

user_info = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,
    messages=[
        {"role": "user", "content": "Extract the user name: 'John is 20 years old'"}
    ],
)
"""
{
    'args': (),
    'kwargs': {
        'messages': [
            {
                'role': 'user',
                'content': "Extract the user name: 'John is 20 years old'",
            }
        ],
        'model': 'gpt-4o-mini',
        'tools': [
            {
                'type': 'function',
                'function': {
                    'name': 'UserInfo',
                    'description': 'Correctly extracted `UserInfo` with all the required parameters with correct types',
                    'parameters': {
                        'properties': {
                            'name': {'title': 'Name', 'type': 'string'},
                            'age': {'title': 'Age', 'type': 'integer'},
                        },
                        'required': ['age', 'name'],
                        'type': 'object',
                    },
                },
            }
        ],
        'tool_choice': {'type': 'function', 'function': {'name': 'UserInfo'}},
    },
}
"""

print(f"Name: {user_info.name}, Age: {user_info.age}")
#> Name: John, Age: 20
```
This example demonstrates:
- A pre-execution hook that logs all kwargs passed to the function.
- An exception hook that logs any exceptions that occur during execution.
The hooks provide valuable insights into the function's inputs and any errors, enhancing debugging and monitoring capabilities.
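Under the hood this is an ordinary observer pattern: handlers are registered per event name and invoked when that event fires. The sketch below illustrates the general pattern only; it is not Instructor's actual implementation:

```python
from collections import defaultdict
from typing import Callable


class Hooks:
    """Minimal event-emitter sketch illustrating the hook pattern."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        # Register a handler for a named event
        self._handlers[event].append(handler)

    def emit(self, event: str, *args, **kwargs) -> None:
        # Invoke every handler registered for this event
        for handler in self._handlers[event]:
            handler(*args, **kwargs)


hooks = Hooks()
seen = []
hooks.on("completion:kwargs", lambda **kw: seen.append(kw))
hooks.emit("completion:kwargs", model="gpt-4o-mini")
print(seen)  # [{'model': 'gpt-4o-mini'}]
```

Because handlers are plain callables, the same mechanism works for logging, metrics, or tracing without touching the request path.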
```python
import instructor
from anthropic import Anthropic
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_anthropic(Anthropic())

# note that client.chat.completions.create will also work
resp = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system="You are a world class AI that excels at extracting user data from a sentence",
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```
Make sure to install cohere and set your system environment variable with `export CO_API_KEY=<YOUR_COHERE_API_KEY>`.

```bash
pip install cohere
```
```python
import instructor
import cohere
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_cohere(cohere.Client())

# note that client.chat.completions.create will also work
resp = client.chat.completions.create(
    model="command-r-plus",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```
Make sure you install the Google AI Python SDK and set a `GOOGLE_API_KEY` environment variable with your API key. Gemini tool calling also requires `jsonref` to be installed.

```bash
pip install google-generativeai jsonref
```
```python
import instructor
import google.generativeai as genai
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


# genai.configure(api_key=os.environ["API_KEY"])  # alternative API key configuration
client = instructor.from_gemini(
    client=genai.GenerativeModel(
        model_name="models/gemini-1.5-flash-latest",  # model defaults to "gemini-pro"
    ),
    mode=instructor.Mode.GEMINI_JSON,
)

# note that client.chat.completions.create will also work
resp = client.messages.create(
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```
Alternatively, you can call Gemini from the OpenAI client. You'll have to set up gcloud, get set up on Vertex AI, and install the Google Auth library.

```bash
pip install google-auth
```
```python
import google.auth
import google.auth.transport.requests
import instructor
from openai import OpenAI
from pydantic import BaseModel

creds, project = google.auth.default()
auth_req = google.auth.transport.requests.Request()
creds.refresh(auth_req)

# Pass the Vertex endpoint and authentication to the OpenAI SDK
PROJECT = 'PROJECT_ID'
LOCATION = (
    'LOCATION'  # https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations
)
base_url = f'https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT}/locations/{LOCATION}/endpoints/openapi'

# JSON mode is required
client = instructor.from_openai(
    OpenAI(base_url=base_url, api_key=creds.token), mode=instructor.Mode.JSON
)


class User(BaseModel):
    name: str
    age: int


resp = client.chat.completions.create(
    model="google/gemini-1.5-flash-001",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```
```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_perplexity(OpenAI(base_url="https://api.perplexity.ai"))

resp = client.chat.completions.create(
    model="sonar",
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```
```python
import instructor
from litellm import completion
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_litellm(completion)

resp = client.chat.completions.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Extract Jason is 25 years old.",
        }
    ],
    response_model=User,
)

assert isinstance(resp, User)
assert resp.name == "Jason"
assert resp.age == 25
```
This was always the dream for Instructor, but because we patched the OpenAI client, it wasn't possible to get typing to work well. Now, with the new client, typing works as expected! We've also added a few `create_*` methods to make it easier to create iterables and partials, and to access the original completion.
```python
import openai
import instructor
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_openai(openai.OpenAI())

user = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)
```
Now if you use an IDE, you can see the type is correctly inferred.
This will also work correctly with asynchronous clients.
```python
import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.AsyncOpenAI())


class User(BaseModel):
    name: str
    age: int


async def extract():
    return await client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "user", "content": "Create a user"},
        ],
        response_model=User,
    )
```
Notice that because we simply return the `create` call, the `extract()` function returns the correctly typed `User`.
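To actually execute the coroutine, wrap it in an event loop. A minimal sketch with a stubbed extractor (the stub stands in for the real awaited API call, so no network access is needed):

```python
import asyncio

from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


async def extract() -> User:
    # Stub standing in for: await client.chat.completions.create(...)
    return User(name="John Doe", age=30)


# asyncio.run drives the coroutine to completion and returns the typed result
user = asyncio.run(extract())
print(user)  # name='John Doe' age=30
```

The return type annotation on `extract()` is what lets your IDE and type checker see `user` as a `User` at the call site.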
You can also return the original completion object:
```python
import openai
import instructor
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


client = instructor.from_openai(openai.OpenAI())

user, completion = client.chat.completions.create_with_completion(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)
```
In order to handle streams, we still support `Iterable[T]` and `Partial[T]`, but to simplify type inference, we've added `create_iterable` and `create_partial` methods as well!
```python
import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.OpenAI())


class User(BaseModel):
    name: str
    age: int


user_stream = client.chat.completions.create_partial(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create a user"},
    ],
    response_model=User,
)

for user in user_stream:
    print(user)
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name=None age=None
    #> name='John Doe' age=None
    #> name='John Doe' age=None
    #> name='John Doe' age=None
    #> name='John Doe' age=30
    #> name='John Doe' age=30
```
Notice that the inferred type is now `Generator[User, None]`.
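The mechanics behind partial streaming can be approximated with a model whose fields are all optional: each streamed snapshot validates against it, so early chunks yield `None` fields that later chunks fill in. This is only an illustrative simulation; `Partial[T]` and `create_partial` handle this for you:

```python
from typing import Optional

from pydantic import BaseModel


class PartialUser(BaseModel):
    # Every field optional, so incomplete snapshots still validate
    name: Optional[str] = None
    age: Optional[int] = None


# Simulated snapshots of a response being streamed in
snapshots = [
    {},
    {"name": "John"},
    {"name": "John Doe"},
    {"name": "John Doe", "age": 30},
]

states = [PartialUser.model_validate(s) for s in snapshots]
for state in states:
    print(state)
#> name=None age=None
#> name='John' age=None
#> name='John Doe' age=None
#> name='John Doe' age=30
```

This is why you can render a UI progressively while the model is still generating: every intermediate state is a valid object.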
We get an iterable of objects when we want to extract multiple objects.
```python
import openai
import instructor
from pydantic import BaseModel

client = instructor.from_openai(openai.OpenAI())


class User(BaseModel):
    name: str
    age: int


users = client.chat.completions.create_iterable(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "user", "content": "Create 2 users"},
    ],
    response_model=User,
)

for user in users:
    print(user)
    #> name='John Doe' age=30
    #> name='Jane Doe' age=28
```
We invite you to contribute evals in pytest as a way to monitor the quality of the OpenAI models and the `instructor` library. To get started, check out the evals for Anthropic and OpenAI and contribute your own evals in the form of pytest tests. These evals will be run once a week and the results will be posted.
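An eval can be as simple as a parametrized pytest test. A rough sketch of the shape (here `extract_user` is a hypothetical stand-in for a real call through an Instructor-patched client, so the test runs offline):

```python
import pytest
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


def extract_user(text: str) -> User:
    # Hypothetical stub standing in for:
    #   client.chat.completions.create(..., response_model=User)
    name, _, rest = text.rpartition(" is ")
    return User(name=name, age=int(rest.split()[0]))


@pytest.mark.parametrize(
    "text, expected",
    [
        ("Jason is 25 years old.", User(name="Jason", age=25)),
        ("Elizabeth is 30 years old.", User(name="Elizabeth", age=30)),
    ],
)
def test_extraction(text: str, expected: User):
    assert extract_user(text) == expected
```

In a real eval, the stub is replaced by an actual model call, and the parametrized cases become the benchmark you track week over week.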
We welcome contributions to Instructor! Whether you're fixing bugs, adding features, improving documentation, or writing blog posts, your help is appreciated.
If you're new to the project, check out issues marked as `good-first-issue` or `help-wanted`. These could be anything from code improvements, a guest blog post, or a new cookbook.
- Fork and clone the repository:

  ```bash
  git clone https://github.com/YOUR-USERNAME/instructor.git
  cd instructor
  ```
- Set up the development environment:

  We use `uv` to manage dependencies, which provides faster package installation and dependency resolution than traditional tools. If you don't have `uv` installed, install it first.

  ```bash
  # Create and activate a virtual environment
  uv venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate

  # Install dependencies with all extras
  # You can specify specific groups if needed
  uv sync --all-extras --group dev
  # Or for a specific integration
  # uv sync --all-extras --group dev,anthropic
  ```
- Install pre-commit hooks:

  We use pre-commit hooks to ensure code quality:

  ```bash
  uv pip install pre-commit
  pre-commit install
  ```

  This will automatically run Ruff formatters and linting checks before each commit, ensuring your code meets our style guidelines.
Tests help ensure that your contributions don't break existing functionality:
```bash
# Run all tests
uv run pytest

# Run specific tests
uv run pytest tests/path/to/test_file.py

# Run tests with coverage reporting
uv run pytest --cov=instructor
```
When submitting a PR, make sure to write tests for any new functionality and verify that all tests pass locally.
We maintain high code quality standards to keep the codebase maintainable and consistent:
- Formatting and Linting: We use `ruff` for code formatting and linting, and `pyright` for type checking.

  ```bash
  # Check code formatting
  uv run ruff format --check

  # Apply formatting
  uv run ruff format

  # Run linter
  uv run ruff check

  # Fix auto-fixable linting issues
  uv run ruff check --fix
  ```
- Type Hints: All new code should include proper type hints.
- Documentation: Code should be well-documented with docstrings and comments where appropriate.
Make sure these checks pass when you submit a PR:
- Linting: `uv run ruff check`
- Formatting: `uv run ruff format`
- Type checking: `uv run pyright`
- Create a branch for your changes:

  ```bash
  git checkout -b feature/your-feature-name
  ```

- Make your changes and commit them:

  ```bash
  git add .
  git commit -m "Your descriptive commit message"
  ```

- Keep your branch updated with the main repository:

  ```bash
  git remote add upstream https://github.com/instructor-ai/instructor.git
  git fetch upstream
  git rebase upstream/main
  ```

- Push your changes:

  ```bash
  git push origin feature/your-feature-name
  ```
- Create a Pull Request from your fork to the main repository.
- Fill out the PR template with a description of your changes, relevant issue numbers, and any other information that would help reviewers understand your contribution.
- Address review feedback and make any requested changes.
- Wait for CI checks to pass. The PR will be reviewed by maintainers once all checks are green.
- Merge: Once approved, a maintainer will merge your PR.
We encourage contributions to our evaluation tests. See the Evals documentation for details on writing and running evaluation tests.
We use pre-commit hooks to ensure code quality. To set up pre-commit hooks:

- Install pre-commit:

  ```bash
  pip install pre-commit
  ```

- Set up the hooks:

  ```bash
  pre-commit install
  ```

This will automatically run Ruff formatters and linting checks before each commit, ensuring your code meets our style guidelines.
We also provide some added CLI functionality for easy convenience:
- `instructor jobs`: This helps with the creation of fine-tuning jobs with OpenAI. Simply use `instructor jobs create-from-file --help` to get started creating your first fine-tuned GPT-3.5 model.
- `instructor files`: Manage your uploaded files with ease. You'll be able to create, delete, and upload files all from the command line.
- `instructor usage`: Instead of heading to the OpenAI site each time, you can monitor your usage from the CLI and filter by date and time period. Note that usage often takes ~5-10 minutes to update on OpenAI's side.
This project is licensed under the terms of the MIT License.
If you use Instructor in your research, please cite it using the following BibTeX:
```bibtex
@software{liu2024instructor,
  author = {Jason Liu and Contributors},
  title = {Instructor: A library for structured outputs from large language models},
  url = {https://github.com/instructor-ai/instructor},
  year = {2024},
  month = {3}
}
```