Task #3353
Updated by Ram Kordale 11 months ago
Change needed is only in ourPrompt and max_tokens. The current call is:

POST /v1/completions
{
  "model": "text-davinci-003",
  "prompt": <ourPrompt>,
  "temperature": 0,
  "max_tokens": 90,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}

Replace "text-davinci-003" with "gpt-3.5-turbo-instruct" and change "max_tokens" from 90 to 60.
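A minimal sketch of the updated request body. This assumes the new max_tokens value is 60 (the ticket lists both 90 and 60, and 90 is the current value); <ourPrompt> remains a placeholder for the actual prompt text supplied by the caller.

```python
import json

# Updated body for POST /v1/completions.
# Assumptions: new max_tokens is 60 (ticket lists "90, 60");
# "<ourPrompt>" stands in for the real prompt string.
payload = {
    "model": "gpt-3.5-turbo-instruct",  # was "text-davinci-003"
    "prompt": "<ourPrompt>",
    "temperature": 0,
    "max_tokens": 60,                   # was 90
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}

print(json.dumps(payload, indent=2))
```

The other parameters (temperature, top_p, and the penalty settings) are unchanged from the current call.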