chore: Improve OpenAI JSON rendering (#8666)
We have been observing JSON parsing errors in responses from GPT. Switching to the gpt-4-1106-preview model and setting response_format has significantly improved the responses from OpenAI, so this commit makes that switch in code. ref: https://openai.com/blog/new-models-and-developer-products-announced-at-devday fixes: #CW-2931
@@ -4,7 +4,7 @@ class ChatGpt
   end

   def initialize(context_sections = '')
-    @model = 'gpt-4'
+    @model = 'gpt-4-1106-preview'
     @messages = [system_message(context_sections)]
   end

@@ -53,7 +53,7 @@ class ChatGpt

   def request_gpt
     headers = { 'Content-Type' => 'application/json', 'Authorization' => "Bearer #{ENV.fetch('OPENAI_API_KEY')}" }
-    body = { model: @model, messages: @messages }.to_json
+    body = { model: @model, messages: @messages, response_format: { type: 'json_object' } }.to_json
     Rails.logger.info "Requesting Chat GPT with body: #{body}"
     response = HTTParty.post("#{self.class.base_uri}/v1/chat/completions", headers: headers, body: body)
     Rails.logger.info "Chat GPT response: #{response.body}"
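For context, `response_format: { type: 'json_object' }` enables OpenAI's JSON mode, which constrains the model to emit syntactically valid JSON, so the caller can parse the message content directly instead of retrying on malformed output. A minimal sketch of the request body and the parsing step, using a hypothetical `sample_response` hash shaped like a Chat Completions API response (not a real API call):

```ruby
require 'json'

# Build the request body as in the diff above: response_format asks the
# model for JSON mode, so the returned content is guaranteed to be valid JSON.
body = {
  model: 'gpt-4-1106-preview',
  messages: [{ role: 'system', content: 'Reply in JSON.' }],
  response_format: { type: 'json_object' }
}.to_json

# Hypothetical response hash, shaped like a Chat Completions API response.
sample_response = {
  'choices' => [
    { 'message' => { 'content' => '{"answer": "42"}' } }
  ]
}

# With JSON mode enabled, the content can be parsed without rescue-based
# retry logic for malformed JSON.
content = sample_response.dig('choices', 0, 'message', 'content')
parsed = JSON.parse(content)
```

Note that JSON mode only guarantees well-formed JSON; the prompt still has to instruct the model to produce JSON, and the schema of the result is not enforced.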