# OpenAI Node API Library

[![NPM version](https://img.shields.io/npm/v/openai.svg)](https://npmjs.org/package/openai)

This library provides convenient access to the OpenAI REST API from TypeScript or JavaScript.

It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).

To learn how to use the OpenAI API, check out our [API Reference](https://platform.openai.com/docs/api-reference) and [Documentation](https://platform.openai.com/docs).

## Installation

```sh
npm install openai
```

You can import in Deno via:

```ts
import OpenAI from 'https://deno.land/x/openai@v4.47.1/mod.ts';
```

## Usage

The full API of this library can be found in the [api.md](api.md) file along with many [code examples](https://github.com/openai/openai-node/tree/master/examples). The code below shows how to get started using the chat completions API.

```js
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  });
}

main();
```

## Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const stream = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

main();
```

If you need to cancel a stream, you can `break` from the loop or call `stream.controller.abort()`.

### Request & Response types

This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:

```ts
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const params: OpenAI.Chat.ChatCompletionCreateParams = {
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  };
  const chatCompletion: OpenAI.Chat.ChatCompletion = await openai.chat.completions.create(params);
}

main();
```

Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.

> [!IMPORTANT]
> Previous versions of this SDK used a `Configuration` class. See the [v3 to v4 migration guide](https://github.com/openai/openai-node/discussions/217).

### Polling Helpers

When interacting with the API, some actions such as starting a Run and adding files to vector stores are asynchronous and take time to complete. The SDK includes helper functions which will poll the status until it reaches a terminal state and then return the resulting object. If an API method results in an action which could benefit from polling, there will be a corresponding version of the method ending in 'AndPoll'.
For instance, to create a Run and poll until it reaches a terminal state, you can run:

```ts
const run = await openai.beta.threads.runs.createAndPoll(thread.id, {
  assistant_id: assistantId,
});
```

More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle).

### Bulk Upload Helpers

When creating and interacting with vector stores, you can use the polling helpers to monitor the status of operations. For convenience, we also provide a bulk upload helper that lets you upload several files at once.

```ts
import { createReadStream } from 'fs';

const fileList = [
  createReadStream('/home/data/example.pdf'),
  ...
];

const batch = await openai.vectorStores.fileBatches.uploadAndPoll(vectorStore.id, fileList);
```

### Streaming Helpers

The SDK also includes helpers to process streams and handle the incoming events.

```ts
const run = openai.beta.threads.runs
  .stream(thread.id, {
    assistant_id: assistant.id,
  })
  .on('textCreated', (text) => process.stdout.write('\nassistant > '))
  .on('textDelta', (textDelta, snapshot) => process.stdout.write(textDelta.value))
  .on('toolCallCreated', (toolCall) => process.stdout.write(`\nassistant > ${toolCall.type}\n\n`))
  .on('toolCallDelta', (toolCallDelta, snapshot) => {
    if (toolCallDelta.type === 'code_interpreter') {
      if (toolCallDelta.code_interpreter.input) {
        process.stdout.write(toolCallDelta.code_interpreter.input);
      }
      if (toolCallDelta.code_interpreter.outputs) {
        process.stdout.write('\noutput >\n');
        toolCallDelta.code_interpreter.outputs.forEach((output) => {
          if (output.type === 'logs') {
            process.stdout.write(`\n${output.logs}\n`);
          }
        });
      }
    }
  });
```

More information on streaming helpers can be found in the dedicated documentation: [helpers.md](helpers.md)

### Streaming responses

This library provides several conveniences for streaming chat completions, for example:

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const stream = await openai.beta.chat.completions.stream({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });

  stream.on('content', (delta, snapshot) => {
    process.stdout.write(delta);
  });

  // or, equivalently:
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }

  const chatCompletion = await stream.finalChatCompletion();
  console.log(chatCompletion); // {id: "…", choices: […], …}
}

main();
```

Streaming with `openai.beta.chat.completions.stream({…})` exposes [various helpers for your convenience](helpers.md#events) including event handlers and promises.

Alternatively, you can use `openai.chat.completions.create({ stream: true, … })`, which only returns an async iterable of the chunks in the stream and thus uses less memory (it does not build up a final chat completion object for you).

If you need to cancel a stream, you can `break` from a `for await` loop or call `stream.abort()`.

### Automated function calls

We provide the `openai.beta.chat.completions.runTools({…})` convenience helper for using function tool calls with the `/chat/completions` endpoint, which automatically calls the JavaScript functions you provide and sends their results back to the `/chat/completions` endpoint, looping as long as the model requests tool calls.

If you pass a `parse` function, it will automatically parse the `arguments` for you and return any parsing errors to the model to attempt auto-recovery. Otherwise, the args will be passed to the function you provide as a string.
If you pass `tool_choice: {function: {name: …}}` instead of `auto`, it returns immediately after calling that function (and only loops to auto-recover parsing errors).

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const runner = client.beta.chat.completions
    .runTools({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: 'How is the weather this week?' }],
      tools: [
        {
          type: 'function',
          function: {
            function: getCurrentLocation,
            parameters: { type: 'object', properties: {} },
          },
        },
        {
          type: 'function',
          function: {
            function: getWeather,
            parse: JSON.parse, // or use a validation library like zod for typesafe parsing.
            parameters: {
              type: 'object',
              properties: {
                location: { type: 'string' },
              },
            },
          },
        },
      ],
    })
    .on('message', (message) => console.log(message));

  const finalContent = await runner.finalContent();
  console.log();
  console.log('Final content:', finalContent);
}

async function getCurrentLocation() {
  return 'Boston'; // Simulate lookup
}

async function getWeather(args: { location: string }) {
  const { location } = args;
  // … do lookup …
  return { temperature: '50degF', precipitation: 'high' }; // Simulated lookup result
}

main();

// {role: "user", content: "How is the weather this week?"}
// {role: "assistant", tool_calls: [{type: "function", function: {name: "getCurrentLocation", arguments: "{}"}, id: "123"}]}
// {role: "tool", name: "getCurrentLocation", content: "Boston", tool_call_id: "123"}
// {role: "assistant", tool_calls: [{type: "function", function: {name: "getWeather", arguments: '{"location": "Boston"}'}, id: "1234"}]}
// {role: "tool", name: "getWeather", content: '{"temperature": "50degF", "precipitation": "high"}', tool_call_id: "1234"}
// {role: "assistant", content: "It's looking cold and rainy - you might want to wear a jacket!"}
//
// Final content: "It's looking cold and rainy - you might want to wear a jacket!"
```

Like with `.stream()`, we provide a variety of [helpers and events](helpers.md#events).

Note that `runFunctions` was previously available as well, but has been deprecated in favor of `runTools`.

Read more about various examples, such as integrating with [zod](helpers.md#integrate-with-zod), [next.js](helpers.md#integrate-wtih-next-js), and [proxying a stream to the browser](helpers.md#proxy-streaming-to-a-browser).
## File uploads

Request parameters that correspond to file uploads can be passed in many different forms:

- `File` (or an object with the same structure)
- a `fetch` `Response` (or an object with the same structure)
- an `fs.ReadStream`
- the return value of our `toFile` helper

```ts
import fs from 'fs';
import fetch from 'node-fetch';
import OpenAI, { toFile } from 'openai';

const openai = new OpenAI();

// If you have access to Node `fs` we recommend using `fs.createReadStream()`:
await openai.files.create({ file: fs.createReadStream('input.jsonl'), purpose: 'fine-tune' });

// Or if you have the web `File` API you can pass a `File` instance:
await openai.files.create({ file: new File(['my bytes'], 'input.jsonl'), purpose: 'fine-tune' });

// You can also pass a `fetch` `Response`:
await openai.files.create({ file: await fetch('https://somesite/input.jsonl'), purpose: 'fine-tune' });

// Finally, if none of the above are convenient, you can use our `toFile` helper:
await openai.files.create({
  file: await toFile(Buffer.from('my bytes'), 'input.jsonl'),
  purpose: 'fine-tune',
});
await openai.files.create({
  file: await toFile(new Uint8Array([0, 1, 2]), 'input.jsonl'),
  purpose: 'fine-tune',
});
```

## Handling errors

When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of `APIError` will be thrown:

```ts
async function main() {
  const job = await openai.fineTuning.jobs
    .create({ model: 'gpt-3.5-turbo', training_file: 'file-abc123' })
    .catch(async (err) => {
      if (err instanceof OpenAI.APIError) {
        console.log(err.status); // 400
        console.log(err.name); // BadRequestError
        console.log(err.headers); // {server: 'nginx', ...}
      } else {
        throw err;
      }
    });
}

main();
```

Error codes are as follows:

| Status Code | Error Type                 |
| ----------- | -------------------------- |
| 400         | `BadRequestError`          |
| 401         | `AuthenticationError`      |
| 403         | `PermissionDeniedError`    |
| 404         | `NotFoundError`            |
| 422         | `UnprocessableEntityError` |
| 429         | `RateLimitError`           |
| >=500       | `InternalServerError`      |
| N/A         | `APIConnectionError`       |

## Microsoft Azure OpenAI

To use this library with [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview), use the `AzureOpenAI` class instead of the `OpenAI` class.

> [!IMPORTANT]
> The Azure API shape differs from the core API shape, which means that the static types for responses / params
> won't always be correct.

```ts
import { AzureOpenAI } from 'openai';

const openai = new AzureOpenAI();

const result = await openai.chat.completions.create({
  model: 'gpt-4-1106-preview',
  messages: [{ role: 'user', content: 'Say hello!' }],
});

console.log(result.choices[0]!.message?.content);
```

### Retries

Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.

You can use the `maxRetries` option to configure or disable this:

```js
// Configure the default for all requests:
const openai = new OpenAI({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await openai.chat.completions.create({ messages: [{ role: 'user', content: 'How can I get the name of the current day in Node.js?' }], model: 'gpt-3.5-turbo' }, {
  maxRetries: 5,
});
```

### Timeouts

Requests time out after 10 minutes by default.
You can configure this with a `timeout` option:

```ts
// Configure the default for all requests:
const openai = new OpenAI({
  timeout: 20 * 1000, // 20 seconds (default is 10 minutes)
});

// Override per-request:
await openai.chat.completions.create({ messages: [{ role: 'user', content: 'How can I list all files in a directory using Python?' }], model: 'gpt-3.5-turbo' }, {
  timeout: 5 * 1000,
});
```

On timeout, an `APIConnectionTimeoutError` is thrown.

Note that requests which time out will be [retried twice by default](#retries).

## Auto-pagination

List methods in the OpenAI API are paginated. You can use the `for await … of` syntax to iterate through items across all pages:

```ts
async function fetchAllFineTuningJobs(params) {
  const allFineTuningJobs = [];
  // Automatically fetches more pages as needed.
  for await (const fineTuningJob of openai.fineTuning.jobs.list({ limit: 20 })) {
    allFineTuningJobs.push(fineTuningJob);
  }
  return allFineTuningJobs;
}
```

Alternatively, you can request a single page at a time:

```ts
let page = await openai.fineTuning.jobs.list({ limit: 20 });
for (const fineTuningJob of page.data) {
  console.log(fineTuningJob);
}

// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
  page = await page.getNextPage();
  // ...
}
```

## Advanced Usage

### Accessing raw Response data (e.g., headers)

The "raw" `Response` returned by `fetch()` can be accessed through the `.asResponse()` method on the `APIPromise` type that all methods return.

You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data.

```ts
const openai = new OpenAI();

const response = await openai.chat.completions
  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-3.5-turbo' })
  .asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object

const { data: chatCompletion, response: raw } = await openai.chat.completions
  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-3.5-turbo' })
  .withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(chatCompletion);
```

### Making custom/undocumented requests

This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used.

#### Undocumented endpoints

To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other HTTP verbs. Options on the client, such as retries, will be respected when making these requests.

```ts
await client.post('/some/path', {
  body: { some_prop: 'foo' },
  query: { some_query_arg: 'bar' },
});
```

#### Undocumented request params

To make requests using undocumented parameters, you may use `// @ts-expect-error` on the undocumented parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you send will be sent as-is.

```ts
client.foo.create({
  foo: 'my_param',
  bar: 12,
  // @ts-expect-error baz is not yet public
  baz: 'undocumented option',
});
```

For requests with the `GET` verb, any extra params will be in the query; all other requests will send the extra params in the body.

If you want to explicitly send an extra argument, you can do so with the `query`, `body`, and `headers` request options.

#### Undocumented response properties

To access undocumented response properties, you may access the response object with `// @ts-expect-error` on the response object, or cast the response object to the requisite type. Like the request params, we do not validate or strip extra properties from the response from the API.
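As a minimal sketch of both approaches (the `some_undocumented_field` property here is hypothetical, purely for illustration):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-3.5-turbo',
});

// Option 1: suppress the type error on the (hypothetical) extra property.
// @ts-expect-error some_undocumented_field is not part of the typed response
console.log(completion.some_undocumented_field);

// Option 2: cast to a widened type that includes the extra property.
const widened = completion as OpenAI.Chat.ChatCompletion & { some_undocumented_field?: string };
console.log(widened.some_undocumented_field);
```

Both options are equivalent at runtime; the difference is only in how the static type error is handled.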
### Customizing the fetch client

By default, this library uses `node-fetch` in Node, and expects a global `fetch` function in other environments.

If you would prefer to use a global, web-standards-compliant `fetch` function even in a Node environment (for example, if you are running Node with `--experimental-fetch` or using NextJS which polyfills with `undici`), add the following import before your first import `from "openai"`:

```ts
// Tell TypeScript and the package to use the global web fetch instead of node-fetch.
// Note, despite the name, this does not add any polyfills, but expects them to be provided if needed.
import 'openai/shims/web';
import OpenAI from 'openai';
```

To do the inverse, add `import "openai/shims/node"` (which does import polyfills). This can also be useful if you are getting the wrong TypeScript types for `Response` ([more details](https://github.com/openai/openai-node/tree/master/src/_shims#readme)).

### Logging and middleware

You may also provide a custom `fetch` function when instantiating the client, which can be used to inspect or alter the `Request` or `Response` before/after each request:

```ts
import { fetch } from 'undici'; // as one example
import OpenAI from 'openai';

const client = new OpenAI({
  fetch: async (url: RequestInfo, init?: RequestInit): Promise<Response> => {
    console.log('About to make a request', url, init);
    const response = await fetch(url, init);
    console.log('Got response', response);
    return response;
  },
});
```

Note that if given a `DEBUG=true` environment variable, this library will log all requests and responses automatically. This is intended for debugging purposes only and may change in the future without notice.

### Configuring an HTTP(S) Agent (e.g., for proxies)

By default, this library uses a stable agent for all http/https requests to reuse TCP connections, eliminating many TCP & TLS handshakes and shaving around 100ms off most requests.

If you would like to disable or customize this behavior, for example to use the API behind a proxy, you can pass an `httpAgent` which is used for all requests (be they http or https), for example:

```ts
import http from 'http';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Configure the default for all requests:
const openai = new OpenAI({
  httpAgent: new HttpsProxyAgent(process.env.PROXY_URL),
});

// Override per-request:
await openai.models.list({
  httpAgent: new http.Agent({ keepAlive: false }),
});
```

## Semantic versioning

This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:

1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals)_.
3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/openai/openai-node/issues) with questions, bugs, or suggestions.

## Requirements

TypeScript >= 4.5 is supported.

The following runtimes are supported:

- Node.js 18 LTS or later ([non-EOL](https://endoflife.date/nodejs)) versions.
- Deno v1.28.0 or higher, using `import OpenAI from "npm:openai"`.
- Bun 1.0 or later.
- Cloudflare Workers.
- Vercel Edge Runtime.
- Jest 28 or greater with the `"node"` environment (`"jsdom"` is not supported at this time); see the configuration sketch below.
- Nitro v2.6 or greater.

Note that React Native is not supported at this time.

If you are interested in other runtime environments, please open or upvote an issue on GitHub.
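For the Jest requirement above, a minimal configuration selecting the `"node"` test environment might look like the sketch below (a hypothetical `jest.config.ts`, assuming a recent Jest that exposes the `Config` type and a setup that can load TypeScript config files, e.g. via `ts-node`):

```ts
// jest.config.ts: a minimal sketch, not an official configuration.
// The relevant requirement for this library is the "node" test
// environment, since "jsdom" is not supported.
import type { Config } from 'jest';

const config: Config = {
  testEnvironment: 'node',
};

export default config;
```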