From 67dc21ea5fc6cda0102fe07a0aca39fb2a14466c Mon Sep 17 00:00:00 2001
From: Aakash Bakhle <48802744+aakashb95@users.noreply.github.com>
Date: Thu, 4 Dec 2025 20:32:46 +0530
Subject: [PATCH] fix: Hardcoded 500 in AI API error response (#13005)
## Description
Fixes false New Relic alerts triggered by a hardcoded 500 error code in the AI API error response.
## Type of change
Bug fix (non-breaking change which fixes an issue)
## How Has This Been Tested?
Before, the `error_code` was hardcoded to 500.
RubyLLM doesn't surface any error codes, so the `error_code` argument is removed and only the error message is passed through:
- Langfuse receives just the error message
- Local logs show only the error
A better long-term fix would be to handle each error case individually and surface it to the user where necessary.
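The per-case handling suggested above could look like the minimal sketch below. The exception class names and the code mapping are assumptions for illustration, not RubyLLM's actual error hierarchy:

```ruby
# Hypothetical sketch of per-case error handling. The class names in
# ERROR_CODES are assumptions, not RubyLLM's real exception classes.
ERROR_CODES = {
  'RateLimitError'      => 429,
  'AuthenticationError' => 401,
  'TimeoutError'        => 408
}.freeze

def build_error_response_from_exception(error, messages)
  response = { error: error.message, request_messages: messages }
  # Only attach an error_code when the exception class maps to one,
  # instead of hardcoding 500 for every failure.
  code = ERROR_CODES[error.class.name.split('::').last]
  response[:error_code] = code if code
  response
end
```

This keeps the current behavior (message only) for unknown exceptions while letting known cases carry a meaningful code to Langfuse and New Relic.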
## Checklist:
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my code
- [x] I have commented on my code, particularly in hard-to-understand
areas
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my
feature works
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream
modules
---------
Co-authored-by: aakashb95
---
lib/integrations/llm_base_service.rb | 2 +-
lib/integrations/llm_instrumentation_helpers.rb | 4 +---
spec/lib/integrations/llm_instrumentation_spec.rb | 10 ++++------
3 files changed, 6 insertions(+), 10 deletions(-)
diff --git a/lib/integrations/llm_base_service.rb b/lib/integrations/llm_base_service.rb
index e8811121f..ca9459fc8 100644
--- a/lib/integrations/llm_base_service.rb
+++ b/lib/integrations/llm_base_service.rb
@@ -164,6 +164,6 @@ class Integrations::LlmBaseService
end
def build_error_response_from_exception(error, messages)
- { error: error.message, error_code: 500, request_messages: messages }
+ { error: error.message, request_messages: messages }
end
end
diff --git a/lib/integrations/llm_instrumentation_helpers.rb b/lib/integrations/llm_instrumentation_helpers.rb
index 1fc73f3f0..c03e3e9c7 100644
--- a/lib/integrations/llm_instrumentation_helpers.rb
+++ b/lib/integrations/llm_instrumentation_helpers.rb
@@ -32,10 +32,8 @@ module Integrations::LlmInstrumentationHelpers
error = result[:error] || result['error']
return if error.blank?
- error_code = result[:error_code] || result['error_code']
span.set_attribute(ATTR_GEN_AI_RESPONSE_ERROR, error.to_json)
- span.set_attribute(ATTR_GEN_AI_RESPONSE_ERROR_CODE, error_code) if error_code
- span.status = OpenTelemetry::Trace::Status.error("API Error: #{error_code}")
+ span.status = OpenTelemetry::Trace::Status.error(error.to_s.truncate(1000))
end
def set_metadata_attributes(span, params)
diff --git a/spec/lib/integrations/llm_instrumentation_spec.rb b/spec/lib/integrations/llm_instrumentation_spec.rb
index f567a91b7..97a45a414 100644
--- a/spec/lib/integrations/llm_instrumentation_spec.rb
+++ b/spec/lib/integrations/llm_instrumentation_spec.rb
@@ -200,17 +200,15 @@ RSpec.describe Integrations::LlmInstrumentation do
result = instance.instrument_llm_call(params) do
{
- error: { message: 'API rate limit exceeded' },
- error_code: 'rate_limit_exceeded'
+ error: 'API rate limit exceeded'
}
end
- expect(result[:error_code]).to eq('rate_limit_exceeded')
+ expect(result[:error]).to eq('API rate limit exceeded')
expect(mock_span).to have_received(:set_attribute)
- .with('gen_ai.response.error', '{"message":"API rate limit exceeded"}')
- expect(mock_span).to have_received(:set_attribute).with('gen_ai.response.error_code', 'rate_limit_exceeded')
+ .with('gen_ai.response.error', '"API rate limit exceeded"')
expect(mock_span).to have_received(:status=).with(mock_status)
- expect(OpenTelemetry::Trace::Status).to have_received(:error).with('API Error: rate_limit_exceeded')
+ expect(OpenTelemetry::Trace::Status).to have_received(:error).with('API rate limit exceeded')
end
end