# Langfuse logging improvements

## Description

- **Reply suggestion:** errors were being stored inside the `output` field; the observations themselves should be marked as errors.
- **Assistant:** add `credit_used` metadata so handoffs can be filtered out from AI replies.
- **Tool calls:** set `observation_type=tool` on Langfuse tool-call observations.

Fixes # (issue)

## Type of change

- [x] Bug fix (non-breaking change which fixes an issue)

## How Has This Been Tested?

Before:

<img width="1028" height="57" alt="image" src="https://github.com/user-attachments/assets/70f6a36e-6c33-444c-a083-723c7c9e823a" />

After:

<img width="872" height="69" alt="image" src="https://github.com/user-attachments/assets/1b6b6f5f-5384-4e9c-92ba-f56748fec6dd" />

`credit_used` filters out handoffs from AI replies that cause credit usage:

<img width="1082" height="672" alt="image" src="https://github.com/user-attachments/assets/90914227-553a-4c03-bc43-56b2018ac7c1" />

`observation_type` set to `tool`:

<img width="726" height="1452" alt="image" src="https://github.com/user-attachments/assets/e639cc9b-1c6c-4427-887e-23e5523bf64f" />

## Checklist:

- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream modules
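The `observation_type=tool` and `credit_used` changes both work by setting attributes on the OpenTelemetry span that Langfuse later maps onto its observation model. A minimal sketch of that tagging, assuming Langfuse's OTel attribute naming (`langfuse.observation.type`, `langfuse.observation.metadata.*`); the `InMemorySpan` stub is hypothetical and stands in for the real SDK span:

```ruby
# Hypothetical stub standing in for an OpenTelemetry span; the real code
# obtains spans from the configured tracer.
class InMemorySpan
  attr_reader :attributes

  def initialize
    @attributes = {}
  end

  def set_attribute(key, value)
    @attributes[key] = value
  end
end

span = InMemorySpan.new
# Langfuse renders this span as a "tool" observation instead of a generic span.
span.set_attribute('langfuse.observation.type', 'tool')
# Metadata attribute used to filter handoffs out of AI replies in the Langfuse UI.
span.set_attribute('langfuse.observation.metadata.credit_used', true)

puts span.attributes.inspect
```

The attribute names above are an assumption based on Langfuse's documented OpenTelemetry property mapping; the production code references them through constants such as `ATTR_LANGFUSE_OBSERVATION_TYPE`.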
# frozen_string_literal: true

require 'opentelemetry_config'

module Integrations::LlmInstrumentationSpans
  include Integrations::LlmInstrumentationConstants

  def tracer
    @tracer ||= OpentelemetryConfig.tracer
  end
  def start_llm_turn_span(params)
    return unless ChatwootApp.otel_enabled?

    span = tracer.start_span(params[:span_name])
    set_llm_turn_request_attributes(span, params)
    set_llm_turn_prompt_attributes(span, params[:messages]) if params[:messages]

    @pending_llm_turn_spans ||= []
    @pending_llm_turn_spans.push(span)
  rescue StandardError => e
    Rails.logger.warn "Failed to start LLM turn span: #{e.message}"
  end
  def end_llm_turn_span(message)
    return unless ChatwootApp.otel_enabled?

    span = @pending_llm_turn_spans&.pop
    return unless span

    set_llm_turn_response_attributes(span, message) if message
    span.finish
  rescue StandardError => e
    Rails.logger.warn "Failed to end LLM turn span: #{e.message}"
  end
  def start_tool_span(tool_call)
    return unless ChatwootApp.otel_enabled?

    tool_name = tool_call.name.to_s
    span = tracer.start_span(format(TOOL_SPAN_NAME, tool_name))
    span.set_attribute(ATTR_LANGFUSE_OBSERVATION_TYPE, 'tool')
    span.set_attribute(ATTR_LANGFUSE_OBSERVATION_INPUT, tool_call.arguments.to_json)

    @pending_tool_spans ||= []
    @pending_tool_spans.push(span)
  rescue StandardError => e
    Rails.logger.warn "Failed to start tool span: #{e.message}"
  end
  def end_tool_span(result)
    return unless ChatwootApp.otel_enabled?

    span = @pending_tool_spans&.pop
    return unless span

    output = result.is_a?(String) ? result : result.to_json
    span.set_attribute(ATTR_LANGFUSE_OBSERVATION_OUTPUT, output)
    span.finish
  rescue StandardError => e
    Rails.logger.warn "Failed to end tool span: #{e.message}"
  end

  private
  def set_llm_turn_request_attributes(span, params)
    provider = determine_provider(params[:model])
    span.set_attribute(ATTR_GEN_AI_PROVIDER, provider)
    span.set_attribute(ATTR_GEN_AI_REQUEST_MODEL, params[:model]) if params[:model]
    span.set_attribute(ATTR_GEN_AI_REQUEST_TEMPERATURE, params[:temperature]) if params[:temperature]
  end
  def set_llm_turn_prompt_attributes(span, messages)
    messages.each_with_index do |msg, idx|
      span.set_attribute(format(ATTR_GEN_AI_PROMPT_ROLE, idx), msg[:role])
      span.set_attribute(format(ATTR_GEN_AI_PROMPT_CONTENT, idx), msg[:content])
    end
    span.set_attribute(ATTR_LANGFUSE_OBSERVATION_INPUT, messages.to_json)
  end
  def set_llm_turn_response_attributes(span, message)
    span.set_attribute(ATTR_GEN_AI_COMPLETION_ROLE, message.role.to_s) if message.respond_to?(:role)
    span.set_attribute(ATTR_GEN_AI_COMPLETION_CONTENT, message.content.to_s) if message.respond_to?(:content)
    set_llm_turn_usage_attributes(span, message)
    span.set_attribute(ATTR_LANGFUSE_OBSERVATION_OUTPUT, message.content.to_s) if message.respond_to?(:content)
  end
  def set_llm_turn_usage_attributes(span, message)
    span.set_attribute(ATTR_GEN_AI_USAGE_INPUT_TOKENS, message.input_tokens) if message.respond_to?(:input_tokens) && message.input_tokens
    span.set_attribute(ATTR_GEN_AI_USAGE_OUTPUT_TOKENS, message.output_tokens) if message.respond_to?(:output_tokens) && message.output_tokens
  end
end