
Bug #3260

Updated by Ram Kordale about 1 year ago

The only changes needed are to ourPrompt and max_tokens.

 Current: 
 POST /v1/completions { 
 "model": "text-davinci-003", 
 "prompt": <ourPrompt>, 
 "temperature": 0, 
 "max_tokens": 60, 
 "top_p": 1, 
 "frequency_penalty": 0, 
 "presence_penalty": 0 
 } 

 ourPrompt = concat("explain", <what user typed>, "with one example in 30 words or less") 

 Change needed: 
 ourPrompt = concat("Only if you know the answer, explain", <what user typed>, "with one example in 50 words or less. Do not make up an answer.") 
 max_tokens: change from 60 to 90
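
 For reference, a minimal sketch of the updated request in Python, assuming the standard OpenAI completions endpoint, the requests library, and an API key supplied via an OPENAI_API_KEY environment variable (the model name and all parameter values are taken from this ticket; spacing inside the concatenated prompt is assumed):

 import os
 import requests

 API_URL = "https://api.openai.com/v1/completions"
 API_KEY = os.environ["OPENAI_API_KEY"]  # assumption: key supplied via env var

 def build_prompt(user_input: str) -> str:
     # ourPrompt per the "Change needed" section above
     return (
         "Only if you know the answer, explain "
         + user_input
         + " with one example in 50 words or less. Do not make up an answer."
     )

 def fetch_completion(user_input: str) -> str:
     payload = {
         "model": "text-davinci-003",   # model name taken from the ticket
         "prompt": build_prompt(user_input),
         "temperature": 0,
         "max_tokens": 90,              # raised from 60 per this change
         "top_p": 1,
         "frequency_penalty": 0,
         "presence_penalty": 0,
     }
     resp = requests.post(
         API_URL,
         headers={"Authorization": f"Bearer {API_KEY}"},
         json=payload,
         timeout=30,
     )
     resp.raise_for_status()
     # Legacy completions responses carry the text in choices[0].text
     return resp.json()["choices"][0]["text"]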
