fix: stream attachment handling in workers (#12870)

We’ve been watching Sidekiq workers climb from ~600 MB at boot to
1.4–1.5 GB after an hour whenever attachment-heavy jobs run. This PR is
an experiment to curb that growth by streaming attachments instead of
loading the whole blob into Ruby: reply-mailer inline attachments,
Telegram uploads, and audio transcriptions now read/write in chunks. If
this keeps RSS stable in production we’ll keep it; otherwise we’ll roll
it back and keep digging.
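For context, the core pattern here swaps a one-shot `download` (one blob-sized string allocated in Ruby) for chunked reads against a disk-spooled file. A minimal standalone sketch of that pattern, using a `Tempfile` as a stand-in for an ActiveStorage blob and the diff's 64 KB chunk size:

```ruby
require 'tempfile'

CHUNK_SIZE = 64 * 1024 # matches the diff's 64.kilobytes

# Stand-in for an ActiveStorage blob: a 1 MB temp file on disk.
blob = Tempfile.new('blob')
blob.write('a' * (1024 * 1024))
blob.rewind

# Streamed read: at most CHUNK_SIZE bytes enter Ruby per iteration,
# rather than a single 1 MB allocation from one read call.
buffer = +''
while (chunk = blob.read(CHUNK_SIZE))
  buffer << chunk
end

puts buffer.bytesize # => 1048576
blob.close!
```

Note the final buffer is still blob-sized, as in the PR's `download_attachment_content`; the win is that `ActiveStorage::Blob#open` spools the service download to a tempfile and the string grows by small appends, which (together with clearing the buffers after upload) shortens how long large allocations stay live in the heap.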
Sojan Jose
2025-12-05 13:02:53 -08:00
committed by GitHub
parent a971ff00f8
commit cc86b8c7f1
12 changed files with 203 additions and 74 deletions


@@ -121,8 +121,6 @@ class Integrations::Slack::SendOnSlackService < Base::SendOnChannelService
end
def upload_files
return unless message.attachments.any?
files = build_files_array
return if files.empty?
@@ -136,6 +134,8 @@ class Integrations::Slack::SendOnSlackService < Base::SendOnChannelService
Rails.logger.info "slack_upload_result: #{result}"
rescue Slack::Web::Api::Errors::SlackError => e
Rails.logger.error "Failed to upload files: #{e.message}"
ensure
files.each { |file| file[:content]&.clear }
end
end
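The `ensure` branch above relies on `String#clear` truncating each payload in place, so the attachment bytes stop being retained as soon as the upload attempt finishes, even if the `files` array itself is still referenced. A hypothetical illustration (hash shape mirrors `build_files_array`; sizes are made up):

```ruby
# One fake 5 MB attachment payload, shaped like the Slack files array.
files = [{ filename: 'a.png', content: 'x' * (5 * 1024 * 1024) }]

before = files.sum { |f| f[:content].bytesize }

# What the ensure block does: truncate each buffer in place.
# The safe-navigation &. skips entries whose content is nil.
files.each { |f| f[:content]&.clear }

after = files.sum { |f| f[:content].bytesize }

puts before # => 5242880
puts after  # => 0
```

Because `ensure` runs on both the success path and the `SlackError` rescue path, the buffers are released either way.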
@@ -143,14 +143,31 @@ class Integrations::Slack::SendOnSlackService < Base::SendOnChannelService
message.attachments.filter_map do |attachment|
next unless attachment.with_attached_file?
{
filename: attachment.file.filename.to_s,
content: attachment.file.download,
title: attachment.file.filename.to_s
}
build_file_payload(attachment)
end
end
def build_file_payload(attachment)
content = download_attachment_content(attachment)
return if content.blank?
{
filename: attachment.file.filename.to_s,
content: content,
title: attachment.file.filename.to_s
}
end
def download_attachment_content(attachment)
buffer = +''
attachment.file.blob.open do |file|
while (chunk = file.read(64.kilobytes))
buffer << chunk
end
end
buffer
end
def sender_name(sender)
sender.try(:name) ? "#{sender.try(:name)} (#{sender_type(sender)})" : sender_type(sender)
end