A couple of days ago, OpenAI released ChatGPT, an AI-based system optimized for dialogue. The service is available as a free preview here –> ChatGPT (openai.com)

The service itself is built on large language models trained by OpenAI and uses deep learning to generate natural-language text from the input. It supports a wide range of languages, including:

English, Spanish, French, German, Italian, Portuguese, Dutch, Norwegian, Swedish, Danish, Finnish, Polish, Czech, Russian, Arabic, Chinese, Japanese, Korean, Hindi, Bengali, Urdu, Tamil, Telugu, Marathi, Gujarati, Kannada, Malayalam, Punjabi, Oriya, Assamese, Mongolian, Armenian, Turkish, Greek, Hebrew, Persian, Latin, Icelandic, Basque, Catalan, Galician, Welsh, Irish, Manx, Scots, Cornish, Breton, Provencal, Corsican, Quechua, Aymara, Tupi, Greenlandic, Albanian, Serbian, Croatian

And the language database is already several hundred GB in size. The service has several use cases: instead of building APIs or integrations yourself, you can use it to interpret content and build what you need from it.

For instance, we can send it a payload from an API call and have it create a table out of it for us.
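As a sketch of that idea (the helper function and prompt wording here are my own illustration, not part of the API), we can wrap a JSON payload in a prompt that asks the model to render it as a table:

```python
import json

def table_prompt(payload):
    """Build a prompt asking the model to render a JSON payload as a table.

    The wording is just an illustration; any phrasing that clearly states
    the desired output format tends to work.
    """
    return (
        "Create a table from the following JSON data:\n\n"
        + json.dumps(payload, indent=2)
    )

# Example payload, e.g. the body of an API response
servers = [
    {"name": "web01", "region": "westeurope", "status": "running"},
    {"name": "web02", "region": "northeurope", "status": "stopped"},
]

prompt = table_prompt(servers)
print(prompt)
```

The resulting string is what you would pass as the prompt parameter of a Completion call.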

In addition, think about how Chatbots would work powered by these engines.

ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human Feedback (RLHF) – a method that uses human demonstrations to guide the model toward desired behavior.
You can read more about GPT-3.5 here –> Model index for researchers – OpenAI API

You however have different models that you can interact with and use from an automation perspective, depending on the use case.

MODELS – DESCRIPTION
GPT-3 – A set of models that can understand and generate natural language
Codex (limited beta) – A set of models that can understand and generate code, including translating natural language to code
Content filter – A fine-tuned model that can detect whether text may be sensitive or unsafe

Within GPT-3 there are several models; the most used and most versatile is text-davinci-003, which contains training data up to June 2021.

Since ChatGPT is currently only available as a web service and not as an API, you cannot directly build applications around it. However, since ChatGPT is built on OpenAI's models, you can use the OpenAI API to interact with the same underlying engines.

To interact with the OpenAI API you need an account, which you can create here –> OpenAI API. Then you need to create an API key, found under your account settings, which is used to authenticate to the API.
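One common pattern (my suggestion, not a requirement of the API) is to keep the key in an environment variable rather than hard-coding it in scripts:

```python
import os

def get_api_key():
    """Read the OpenAI API key from the environment, failing early if it is unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set the OPENAI_API_KEY environment variable first")
    return key

os.environ["OPENAI_API_KEY"] = "sk-example"  # placeholder value for demonstration
print(get_api_key())  # → sk-example
```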

There are different ways to interact with the API: Python, Node.js, or plain REST calls, which also lets you use PowerShell and other tools. Below is an example of a Python request against the Completion API.

import os
import openai

# Read the API key from an environment variable instead of hard-coding it
openai.api_key = os.getenv("OPENAI_API_KEY")

# Ask the Completion endpoint to answer the prompt
response = openai.Completion.create(
  model="text-davinci-003",   # the most versatile GPT-3 model
  prompt="What is the meaning of life?",
  temperature=0.9,            # higher values take more risks
  max_tokens=1200,            # upper bound on the length of the completion
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0
)

# The generated text is in the first choice
print(response.choices[0].text)

You can either use it as a native Python script or use the interactive Python CLI openai-cli, which can be downloaded from here –> peterdemin/openai-cli: Command-line client for OpenAI APIs (github.com). This provides you with an interactive shell to OpenAI.

This can be triggered by running openai.exe repl --token APITOKEN
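Since it is a plain REST API underneath, the same Completion request can also be expressed as an HTTP POST, which is what you would do from PowerShell or curl. A sketch of what such a request looks like (the helper function is my own illustration; the endpoint URL and Bearer-token header are the standard ones):

```python
import json

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(api_key, prompt):
    """Assemble the headers and JSON body for a POST against the Completions endpoint."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 1200,
        "temperature": 0.9,
    }
    return headers, json.dumps(body)

headers, body = build_completion_request("APITOKEN", "What is the meaning of life?")
print(body)
```

From PowerShell you would send the same headers and body with Invoke-RestMethod.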

You can also use the same API to interact with the DALL-E service to generate pictures, with a similar configuration, just a different endpoint.

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Ask the Image endpoint to generate one 1024x1024 picture from the prompt
response = openai.Image.create(
  prompt="cyberpunk 2077 city background",
  n=1,                # number of images to generate
  size="1024x1024"
)

# The response contains a URL to the generated image
image_url = response['data'][0]['url']
print(image_url)

You can use the Image API to make different changes to a picture: apply a mask to edit it, make variants of existing images, or create completely new pictures.

Now for the API calls, there are some different parameters that you can configure:

model= (which model to use)
prompt= (the prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays)
temperature= (higher values mean the model will take more risks; defaults to 1)
max_tokens= (the maximum number of tokens to generate in the completion; tokens are also used as the billing unit)
top_p= (an alternative to temperature; a value of 0.1 means only the tokens comprising the top 10% probability mass are considered; defaults to 1)
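As a rule of thumb, one token corresponds to roughly four characters of English text, so you can do a quick back-of-the-envelope estimate before setting max_tokens (the estimator below is purely illustrative; exact counts require the model's actual tokenizer):

```python
def estimate_tokens(text):
    """Very rough token estimate: about 4 characters per token for English text."""
    return max(1, len(text) // 4)

prompt = "What is the meaning of life?"
print(estimate_tokens(prompt))  # 28 characters → about 7 tokens
```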

This was one example showing how you can use API calls to interact with the OpenAI engine to build more automation.
