OpenAI Policies & Instructions
OpenAI Policies & Instructions
OpenAI's system instructions are not made publicly available for multiple reasons, including valid concerns about security and misuse.
Open AI Usage Policies
Describes what is and isn’t allowed when using OpenAI models.
openai.com/policies/usage-policies
Safety & Alignment
High-level explanations of safety approach, red-teaming, and risk mitigation.
Model Spec
Describes intended model behavior at a high level.
Privacy Policy
Explains data handling, retention, and training use.
openai.com/policies/privacy-policy
Security & Compliance
Covers SOC 2, ISO certifications, and enterprise controls.
System Instructions
What is published by OpenAI
The main document related to system instructions that OpenAI in the Model Spec, linked above.
These are OpenAI’s own documents describing the Model Spec, which defines the internal instruction hierarchy and the rules that govern all GPT models (including root/system-level constraints):
- OpenAI Model Spec (Root & System Instruction Authority Levels) — Model Spec documentation outlining the hierarchy of instructions (root/system/developer/user). https://model-spec.openai.com/2025-12-18.html
- OpenAI Model Release Notes (Authority Level Clarification) — Notes updating how “Root” and “System” levels operate and cannot be overridden by lower levels. https://help.openai.com/en/articles/9624314-model-release-notes
Third-Party & Publicly Observed / Leaked Information
There is a long list of third-party and publicly observed or potentially leaked information about OpenAI’s system instructions that can be found online. As always, it is important to note that these can be hard to verify and can change at any time.