Structured outputs from LLMs
Instructor extracts structured, validated data from LLM responses by using Pydantic models as output schemas. It retries on validation failure, injects the schema into the prompt automatically, and works with function calling or JSON mode to return typed Python objects from a wide range of LLM providers.
Install with `pip install instructor` and patch your OpenAI client with `client = instructor.from_openai(OpenAI())`. Define a Pydantic model for your desired output structure, then call `client.chat.completions.create()` with `response_model=YourModel`. The library handles schema injection and validation automatically.
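Putting those steps together, here is a minimal sketch of the workflow. The `UserInfo` fields, the prompt text, and the `gpt-4o-mini` model name are illustrative assumptions, not part of the library; swap in your own schema and model:

```python
from pydantic import BaseModel


class UserInfo(BaseModel):
    """The output schema Instructor will enforce on the LLM's reply."""
    name: str
    age: int


def extract_user(text: str) -> UserInfo:
    """Ask the LLM for a UserInfo object; Instructor injects the schema,
    validates the response, and retries on validation failure."""
    # Imported inside the function so the module loads without an API key.
    import instructor
    from openai import OpenAI

    client = instructor.from_openai(OpenAI())
    return client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for illustration
        response_model=UserInfo,
        messages=[{"role": "user", "content": f"Extract the user: {text}"}],
    )


# The same Pydantic validation Instructor relies on, demonstrated locally
# (no API call needed):
user = UserInfo.model_validate({"name": "Ada", "age": 36})
```

Calling `extract_user("Ada is 36 years old")` would return a `UserInfo` instance rather than raw JSON, which is the core payoff of the `response_model` parameter.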