A simple wrapper for OpenAI's API.
For the Chinese documentation, see README_zh-CN.md.
Set your API key first:

```julia
using ChatAPICall
setapikey("sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")
```
Alternatively, set `OPENAI_API_KEY` in `~/.bashrc` so the API key is loaded automatically whenever you use the package:
```bash
# Add the following line to ~/.bashrc
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
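To confirm that the variable is actually visible from a Julia session, you can check `ENV` directly. This is plain Julia with no package involved; `apikey_available` is a hypothetical helper name, not part of ChatAPICall:

```julia
# Returns true when OPENAI_API_KEY is visible to the current Julia process
apikey_available() = haskey(ENV, "OPENAI_API_KEY")

apikey_available() || @warn "OPENAI_API_KEY is not set; run `source ~/.bashrc` or restart the shell"
```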
```julia
using ChatAPICall

# Set a proxy (example)
proxy_on(http="127.0.0.1:7890", https="socks://127.0.0.1:7891")

# Check the current proxy
proxy_status()

# Turn off the proxy
proxy_off()
```
Or you might want to use a custom base URL (the default is https://api.openai.com):
```julia
using ChatAPICall
setbaseurl("https://api.example.com")
```
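For reference, a base URL is conventionally prepended to the endpoint path of each request. The sketch below is plain Julia string handling for illustration; the path shown is the standard OpenAI chat-completions endpoint, and how ChatAPICall composes URLs internally is an assumption here:

```julia
# Illustration: how a custom base URL composes with the chat-completions path
baseurl = "https://api.example.com"
endpoint = baseurl * "/v1/chat/completions"
println(endpoint)  # https://api.example.com/v1/chat/completions
```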
Example 1: send a prompt and get the response:
```julia
using ChatAPICall

# Check that the API key is set
showapikey()

# Check whether a proxy is enabled
proxy_status()

# Send a prompt and get the response
chat = Chat("Hello, GPT-3.5!")
resp = getresponse(chat)
```
Example 2: customize the message template and retrieve both the response content and the number of consumed tokens:
```julia
using ChatAPICall

# Customize the sending template
function ChatAPICall.defaultprompt(msg)
    [
        Dict("role"=>"system", "content"=>"Please help me translate the following text."),
        Dict("role"=>"user", "content"=>msg)
    ]
end

chat = Chat("Hello!")

# Set maxrequests to -1 for unlimited retries
response = getresponse(chat; temperature=0.5, maxrequests=-1)
println("Number of consumed tokens: ", response.total_tokens)
println("Returned content: ", response.content)
```
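A `maxrequests` of -1 can be read as "retry until a request succeeds". The sketch below is a hypothetical illustration of that semantics in plain Julia, not the package's actual implementation; `request_with_retries` is an invented name:

```julia
# Hypothetical sketch: maxrequests < 0 means retry without limit
function request_with_retries(f; maxrequests::Int=1)
    attempts = 0
    while maxrequests < 0 || attempts < maxrequests
        attempts += 1
        try
            return f()
        catch err
            @warn "Request failed, retrying" attempt=attempts exception=err
        end
    end
    error("all $maxrequests requests failed")
end

request_with_retries(() -> "ok"; maxrequests=3)  # returns "ok" on the first attempt
```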
Continue chatting based on the last response:
```julia
# first call
chat = Chat("Hello, GPT-3.5!")
resp = getresponse!(chat) # update the chat history
println(resp.content)

# continue chatting
adduser!(chat, "How are you?")
next_resp = getresponse!(chat)
println(next_resp.content)

# append a fabricated assistant response without calling the API
adduser!(chat, "What's your name?")
addassistant!(chat, "My name is GPT-3.5.")

# print the chat history
print(chat)
```
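In the OpenAI chat format, a history like the one printed above is simply an ordered list of role/content pairs. The following is a minimal plain-Julia sketch of that structure for illustration; it is an assumption that ChatAPICall represents its history this way internally:

```julia
# Minimal sketch of a chat history as ordered role/content pairs (illustration only)
history = Dict{String,String}[]
push!(history, Dict("role" => "user", "content" => "What's your name?"))
push!(history, Dict("role" => "assistant", "content" => "My name is GPT-3.5."))

for m in history
    println(m["role"], ": ", m["content"])
end
```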
This package is licensed under the MIT license. See the LICENSE file for more details.
- TODO