Now that the OpenAI API is accessible to anyone, it is time to start using it! As usual, I'll be using Python for that. To access the API, you'll first need to create an account with OpenAI. If you have an account to access ChatGPT, then you already have that. You then need to convert your account to a paid account for the OpenAI platform (I clicked a button for that on my platform account billing overview page). Note that a paid ChatGPT subscription is something different and cannot be used to access the API.
Authentication
As with the Twitter/X API, you need a secret API key for authentication (I clicked a button for that on the platform account API keys page). Save the key somewhere secure, because you won't be able to view it again on the OpenAI website (the type of key that OpenAI uses is a so-called bearer token).
The API key can be used to access the API directly through the HTTP interface, or through one of the libraries that wrap the HTTP interface in a programming-language interface; OpenAI itself provides official libraries for Python and Node.js. For example, to access the models endpoint from the command line, assuming that you've put your key in the environment variable OPENAI_API_KEY, use the command

wget https://api.openai.com/v1/models -O - \
    --header="Authorization: Bearer $OPENAI_API_KEY"
to print a list of all the models that are available through the API. There are many different models that you can use, such as GPT-3.5, GPT-4, DALL-E, Whisper, etc. In this article, I will concentrate on GPT-3.5 and show how you can ask a question as you would do with ChatGPT.
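The same listing can be done from Python with the official bindings. Below is a minimal sketch using the pre-1.0 `openai.Model.list()` call; the helper function `model_ids` is my own name for illustration, not part of the library:

```python
from os import environ

def model_ids(listing):
    """Extract the sorted model identifiers from a /v1/models response payload."""
    return sorted(item['id'] for item in listing['data'])

if __name__ == '__main__':
    import openai  # the official OpenAI Python bindings

    # Assumes the key is in the environment variable OPENAI_API_KEY.
    openai.api_key = environ['OPENAI_API_KEY']

    # Equivalent to the wget call above: GET https://api.openai.com/v1/models
    for name in model_ids(openai.Model.list()):
        print(name)
```

The network call is kept under the main guard so that the payload-handling logic can be reused or tested without an API key.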
Python Bindings
To use the API key with Python, you’ll need so-called Python bindings. OpenAI provides official Python bindings, which can be installed with
pip install openai
if you are using a standard Python install, or
conda install openai
if you are using the Anaconda Python distribution (which is what I always use).
Python Code
As an example, let’s use the chat completion endpoint to write a short program that prompts you for a question to ask GPT-3.5:
import openai
from os import environ

openai.api_key = environ['OPENAI_API_KEY']

messages = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
message = input('message> ')
messages.append({'role': 'user', 'content': message})

response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=messages)

print(response.choices[0].message['content'])
Note that the Python program also assumes that you’ve put your key in the environment variable OPENAI_API_KEY. The response returned by the call to the API is a somewhat elaborate data structure, from which I only print the field that contains the answer text. If you run this program, you’ll get something like this:
message> What is the largest prime that is smaller than 100?
The largest prime number that is smaller than 100 is 97.
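For reference, the full response is a JSON-like structure along the lines sketched below; the field values here are illustrative, not actual API output. The script above only picks out the one field with the answer text:

```python
# Illustrative shape of a chat completion response (values made up):
response = {
    'id': 'chatcmpl-abc123',            # unique identifier of this completion
    'object': 'chat.completion',
    'model': 'gpt-3.5-turbo',
    'choices': [
        {'index': 0,
         'message': {'role': 'assistant',
                     'content': 'The largest prime number that is smaller '
                                'than 100 is 97.'},
         'finish_reason': 'stop'},      # why generation ended
    ],
    'usage': {'prompt_tokens': 25,      # token counts (usage is billed per token)
              'completion_tokens': 14,
              'total_tokens': 39},
}

# The script above prints only the answer text:
print(response['choices'][0]['message']['content'])
```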
Refer to the API Reference for all the details on the many possibilities that the API provides. For more info on the endpoint that I’ve used in the example script, go to the API reference on Chat.
There is a follow-up article where I add the possibility of a continuing conversation using the chat completion endpoint, instead of asking a single question.
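The core idea of a continuing conversation is to append the assistant’s answer back onto the message list before sending the next question, so the model sees the whole history each time. A minimal sketch (the loop structure and the helper name extend_history are mine, for illustration only):

```python
from os import environ

def extend_history(messages, role, content):
    """Append one message dict to the conversation history and return it."""
    messages.append({'role': role, 'content': content})
    return messages

if __name__ == '__main__':
    import openai  # the official OpenAI Python bindings

    openai.api_key = environ['OPENAI_API_KEY']
    messages = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
    while True:
        text = input('message> ')
        if not text:          # empty input ends the conversation
            break
        extend_history(messages, 'user', text)
        response = openai.ChatCompletion.create(
            model='gpt-3.5-turbo', messages=messages)
        answer = response.choices[0].message['content']
        extend_history(messages, 'assistant', answer)  # keep context for next turn
        print(answer)
```

Note that the full history is resent on every call, so a long conversation consumes an increasing number of prompt tokens per turn.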