To call OpenAI's API asynchronously in Python, you can use the aiohttp library, which lets you perform HTTP requests without blocking the execution of your program. Asynchronous programming is useful when you need to make multiple API calls efficiently, as it enables your application to handle other tasks while waiting for responses. First, install the aiohttp package if you haven't done so by running pip install aiohttp.
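Keeping the API key out of your source code is a good habit; a common convention (my suggestion, not something aiohttp or OpenAI requires) is to read it from an environment variable. The examples below use a YOUR_API_KEY placeholder for brevity, but in practice you could substitute a value loaded like this:

import os

# OPENAI_API_KEY is an assumed variable name; export it in your shell first.
API_KEY = os.getenv('OPENAI_API_KEY')
if not API_KEY:
    raise RuntimeError('Set the OPENAI_API_KEY environment variable before running.')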
Begin by importing the necessary libraries and setting up your asynchronous function. You'll want to create an asynchronous function that uses aiohttp.ClientSession for making the requests. Inside this function, you can craft your API call to OpenAI's endpoint. For example, you'll need to specify the model you want to use, such as gpt-3.5-turbo, and pass the required parameters in the request body. Here's a basic structure:
import aiohttp
import asyncio

async def call_openai_api(prompt):
    # One session per call keeps the example simple; for many calls,
    # reusing a single session is more efficient.
    async with aiohttp.ClientSession() as session:
        headers = {
            'Authorization': 'Bearer YOUR_API_KEY',  # replace with your actual key
            'Content-Type': 'application/json'
        }
        data = {
            'model': 'gpt-3.5-turbo',
            'messages': [{'role': 'user', 'content': prompt}]
        }
        async with session.post('https://api.openai.com/v1/chat/completions',
                                headers=headers, json=data) as response:
            return await response.json()
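The version above assumes every request succeeds. In practice you may also want a timeout and an explicit check of the HTTP status; here is one possible sketch (the function name and the 30-second limit are my choices, not OpenAI requirements):

async def call_openai_api_safe(prompt):
    # Variant of call_openai_api with basic error handling.
    timeout = aiohttp.ClientTimeout(total=30)  # assumed limit; tune for your workload
    async with aiohttp.ClientSession(timeout=timeout) as session:
        headers = {
            'Authorization': 'Bearer YOUR_API_KEY',  # replace with your actual key
            'Content-Type': 'application/json'
        }
        data = {
            'model': 'gpt-3.5-turbo',
            'messages': [{'role': 'user', 'content': prompt}]
        }
        async with session.post('https://api.openai.com/v1/chat/completions',
                                headers=headers, json=data) as response:
            # Raises aiohttp.ClientResponseError on 4xx/5xx (e.g. bad key, rate limit).
            response.raise_for_status()
            return await response.json()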
Next, run your asynchronous function inside an event loop; asyncio.run takes care of creating and closing the loop for you. You may want to call the function multiple times if you're querying different prompts or models. Here's how to run it:
async def main():
    prompts = ["Hello!", "Tell me about Python.", "What's the weather like?"]
    # Build one coroutine per prompt; nothing runs until they are awaited.
    tasks = [call_openai_api(prompt) for prompt in prompts]
    # Run all requests concurrently and wait for every result.
    responses = await asyncio.gather(*tasks)
    for response in responses:
        print(response)

asyncio.run(main())
In this example, we define a list of prompts and create a list of tasks to call the OpenAI API for each prompt. The asyncio.gather function runs all the tasks concurrently, waits for them to complete, and collects their results in the same order as the prompts. Finally, you can process or display the API responses as needed. Overall, using aiohttp to call OpenAI's API asynchronously can greatly improve the efficiency of your application, especially when it makes many requests.
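Each item returned by response.json() is the full chat-completions payload. If you only want the assistant's text, you can pull it out of the choices list; this small helper (extract_reply is a hypothetical name of my own) assumes the request succeeded and the response has the standard shape, since error payloads have no choices key:

def extract_reply(response):
    # The generated text sits in the first choice of a chat completion;
    # raises KeyError for error responses, which have no 'choices' key.
    return response['choices'][0]['message']['content']

Calling extract_reply(response) inside the print loop in main() would then show just the generated text rather than the raw JSON.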