Feature #3772

Colab and GCF changes to use OpenAI chat endpoint

Added by Ram Kordale 2 days ago.

Status: New
Priority: Urgent
Assignee: -
Start date: 11/26/2024
Due date: -
% Done: 0%
Estimated time: -
Description

The 3771 colab uses our GCF, which calls the OpenAI Assistants endpoint (https://platform.openai.com/docs/api-reference/assistants/createAssistant). Clone the 3771 colab so that the new colab does everything 3771 does, except that it uses a new GCF (to be built as part of this ticket) that calls the OpenAI Chat endpoint (https://platform.openai.com/docs/api-reference/chat/create).

However, the support required is minimal, as described below.

The POST call needs to support only 2 inputs:
- model
- messages. We will not have "system" role messages; we will only have "user" and "assistant" role messages.

The call returns a 'chat completion' object, not a streamed sequence. We need to process only three fields from the response:
- id: just print this.
- choices: process only the first choice. Print its 'finish_reason' and retrieve the "content" of its message for further processing, as this is the actual response to our prompt.
- usage: print its fields.

The above doc page (https://platform.openai.com/docs/api-reference/chat/create) contains samples that can be used for dummy testing.
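
For reference, a minimal sketch of what the new GCF needs to do, assuming it is written in Python and calls the endpoint via the requests library (the function name and error handling are illustrative, not prescribed by this ticket):

import os
import requests

def call_chat_endpoint(model, messages):
    # Sketch only: POST the two supported inputs to the Chat endpoint
    # and process the three response fields described above.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={"model": model, "messages": messages},
    )
    resp.raise_for_status()
    body = resp.json()  # a 'chat completion' object, not a streamed sequence

    print(body["id"])                           # id: just print it
    first = body["choices"][0]                  # process only the first choice
    print(first["finish_reason"])
    for field, value in body["usage"].items():  # usage: print the fields
        print(field, value)
    return first["message"]["content"]          # the actual response to our prompt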

The Chat endpoint has no equivalent of threads (and therefore no thread_id). Instead, replace thread_id with a string variable, thread.

So, wherever thread_id is "" in the 3771 colab, set thread's value to "new". After making the OpenAI call, set thread's value to "old".
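
For illustration (call_chat_gcf is a hypothetical wrapper around the new GCF, not a name prescribed by this ticket), the flag handling looks roughly like:

thread = "new"                          # where the 3771 colab had thread_id == ""
reply = call_chat_gcf(model, messages)  # first OpenAI call via the new GCF
thread = "old"                          # every later call continues the same chat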

When thread's value is "new", the messages input should have only one part. See the example input below:

{
  "model": "<model>",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "<prompt-1>"
        }
      ]
    }
  ]
}

and the message in the response will contain:

{
  ...
  "message": {
    "role": "assistant",
    "content": "<response-1>"
  }
  ...
}

When thread's value is "old", messages should contain the chat so far. So, continuing the above example:

{
  "model": "<model>",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "<prompt-1>"
        }
      ]
    },
    {
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "<response-1>"
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "<prompt-2>"
        }
      ]
    }
  ]
}

and the message in the response will contain:

{
  ...
  "message": {
    "role": "assistant",
    "content": "<response-2>"
  }
  ...
}
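
Putting the two cases together, here is a rough sketch of the colab-side flow (Python; call_chat_gcf is the same hypothetical wrapper as above):

model = "<model>"  # same placeholder as in the examples
messages = []
thread = "new"

def ask(prompt):
    # Append the next user prompt, call the new GCF, then fold the
    # assistant reply back into messages so the next call carries the
    # whole chat so far.
    global thread
    messages.append({
        "role": "user",
        "content": [{"type": "text", "text": prompt}],
    })
    reply = call_chat_gcf(model, messages)
    thread = "old"  # after the first call, the chat continues as "old"
    messages.append({
        "role": "assistant",
        "content": [{"type": "text", "text": reply}],
    })
    return reply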
