fix: preserve charset in content-type for non-zero byte files #975
SushanthMusham wants to merge 2 commits
Conversation
Hey @ferhatelmas, could you take a quick look at this? Thanks!
@SushanthMusham can we add some tests? Also, we shouldn't have any diff in the package lock.
I made a change in #981 so that CI can run for contributors. Please rebase your change to pick this up.
Thanks for the review, @ferhatelmas! I will rebase my branch now and revert the unintended changes to the package-lock.json file. I'll also add test coverage for the charset preservation and push those updates shortly.
Force-pushed 1bd34fe to 9f8d2cc
Hey @ferhatelmas, so sorry for the delay; I had my 6th-semester exams.
Hi @ferhatelmas, any issues with the code?
What kind of change does this PR introduce?
Bug fix
What is the current behavior?
When uploading a non-zero byte file with a Content-Type that includes a charset parameter (e.g. `text/markdown; charset=UTF-8`), the charset is silently dropped. This happens because `bufferedMultipartUpload` calls `headObject` after the upload, and S3 normalizes the Content-Type by stripping the charset before returning it.
Fixes #816
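For illustration, the normalization step can be mimicked like this (a hedged sketch; `normalizeContentType` is a hypothetical helper, not the actual S3 code, but it mirrors the observed behavior of dropping parameters after the media type):

```typescript
// Sketch of what S3's headObject effectively does to Content-Type:
// the media type is kept, parameters such as charset are dropped.
function normalizeContentType(contentType: string): string {
  return contentType.split(';')[0].trim()
}

console.log(normalizeContentType('text/markdown; charset=UTF-8'))
// -> text/markdown
```

This is why reading the Content-Type back from `headObject` after upload can never return the charset the client originally sent.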
What is the new behavior?
The original `contentType` parameter (which contains the full Content-Type, including charset) is now used directly in the return value, instead of relying on the normalized response from S3's `headObject`. It falls back to `metadata.mimetype` if `contentType` is empty.
```ts
// Before
mimetype: metadata.mimetype
// After
mimetype: contentType || metadata.mimetype
```
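A minimal sketch of the fallback in isolation (names like `resolveMimetype` and the `ObjectMetadata` shape are illustrative, not the actual implementation):

```typescript
interface ObjectMetadata {
  mimetype: string
}

// Prefer the caller-supplied Content-Type, which still carries the charset;
// fall back to the S3-normalized mimetype only when contentType is empty.
function resolveMimetype(contentType: string, metadata: ObjectMetadata): string {
  return contentType || metadata.mimetype
}

// S3 has already stripped the charset from the stored metadata.
const metadata = { mimetype: 'text/markdown' }

console.log(resolveMimetype('text/markdown; charset=UTF-8', metadata))
// -> text/markdown; charset=UTF-8
console.log(resolveMimetype('', metadata))
// -> text/markdown
```

The `||` fallback keeps the previous behavior intact for callers that never supplied a Content-Type in the first place.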
Additional context
Zero-byte files were already handled correctly because they bypass `headObject` entirely. This fix brings non-zero byte files in line with that behavior.