docs: Update compression support for AWS plugins #2359
base: master
Conversation
📝 Walkthrough

Documentation updates for the Firehose, Kinesis, and S3 outputs: the per-record `compression` options are expanded to `gzip`, `zstd`, `snappy`, and `arrow`.
Sequence Diagram(s): omitted — changes are documentation-only and do not introduce new runtime control flow.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
In `@pipeline/outputs/firehose.md`:
- Line 20: The `compression` description is misleading. Replace "Each log record is individually compressed and sent to Firehose" with a clear explanation: while Fluent Bit can apply record-level compression before sending, AWS Firehose buffers multiple records and compresses the entire buffered output file when delivering to destinations (e.g., S3), so consumers receive complete compressed files that they must decompress, not individually compressed records. Keep the supported values (`gzip`, `zstd`, `snappy`, `arrow`), the recommendation about Arrow vs. Firehose Parquet conversion, and the default of no compression.
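For context, a minimal output stanza exercising the `compression` parameter might look like the following sketch (the region and delivery stream name are placeholder values):

```
[OUTPUT]
    Name             kinesis_firehose
    Match            *
    region           us-east-1
    delivery_stream  example-stream
    compression      gzip
```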
Signed-off-by: Shelby Hagman <shelbyzh@amazon.com>
Force-pushed 9b7a5d4 to c086b86 (Compare)
Actionable comments posted: 1
In `@pipeline/outputs/firehose.md`:
- Around lines 94-101: Update the "Compression" section to clarify that file-level compression is applied only when the destination's compression setting is enabled; it is configured on the destination, not applied automatically by Firehose. Mention that Fluent Bit compresses each record using the `compression` parameter before upload, that Firehose buffers records into files, and that when the destination has file-level compression enabled the delivered files are themselves compressed: consumers must first decompress the file (if applicable) and then decompress each compressed record inside it. Reference the "Compression" heading, the `compression` parameter, and the mentions of "Fluent Bit" and "Firehose" when making the wording change.
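To make the two-layer scheme described above concrete, here is a minimal Python sketch (illustrative only, not Fluent Bit or Firehose code; the function names are hypothetical) of what a consumer would have to undo, assuming gzip at both layers:

```python
import gzip

def producer_side(records):
    """Model the pipeline: per-record compression, buffering, file compression."""
    # Record-level compression, as Fluent Bit's `compression` parameter does.
    compressed_records = [gzip.compress(r.encode()) for r in records]
    # Firehose buffers multiple records into one delivery file (modeled
    # here as simple concatenation of the compressed records).
    buffered_file = b"".join(compressed_records)
    # If the destination's file-level compression is enabled, the whole
    # buffered file is compressed again on delivery.
    return gzip.compress(buffered_file)

def consumer_side(delivered_file):
    """Undo both layers: file-level first, then record-level."""
    # Step 1: decompress the delivered file (destination-level compression).
    buffered_file = gzip.decompress(delivered_file)
    # Step 2: decompress the records inside it. Concatenated gzip members
    # decompress back-to-back, so one call recovers the concatenated
    # payloads; a real consumer would split on its own record framing.
    return gzip.decompress(buffered_file)
```

The point of the sketch is ordering: the file-level layer must be removed before the record-level layer is visible at all.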
Signed-off-by: Shelby Hagman <shelbyzh@amazon.com>
Force-pushed c086b86 to 04d0eef (Compare)
Summary
Related PR - fluent/fluent-bit#11400