feat(Storage): Enable full object checksum validation for resumable uploads #15395
mahendra-google wants to merge 15 commits into
Conversation
Summary of Changes

Hello @mahendra-google, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces robust data integrity checks for resumable uploads by integrating CRC32C checksum validation. This enhancement ensures that multi-chunk transfers to Google Cloud Storage are verified end-to-end, preventing data corruption. The changes also update the client's error handling to align with the server's response to checksum mismatches.
Code Review
This pull request enables full object checksum validation for resumable uploads by introducing a Crc32cHashInterceptor. This interceptor calculates the CRC32C hash of the upload stream and attaches it to the final chunk of a resumable upload, ensuring end-to-end data integrity. The implementation is solid, with proper handling of the interceptor's lifecycle. The integration tests have been correctly updated to reflect the new server-side validation behavior, where the server now rejects corrupted uploads with a BadRequest status. Additionally, LengthOnlyStream has been updated to be a more compliant Stream implementation. My review includes a couple of suggestions to optimize performance by avoiding string allocations in the new parsing logic.
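For readers unfamiliar with the mechanism being reviewed, the final-chunk hash attachment can be sketched roughly as follows. The `x-goog-hash` header name and `crc32c=<base64>` value format are the real GCS conventions; the helper class and digest value here are purely illustrative, not the actual client code:

```csharp
using System;
using System.Net.Http;

// Illustrative sketch only: GCS accepts a full-object CRC32C digest as a
// base64 value in the x-goog-hash header. A real interceptor would compute
// the digest from the upload stream rather than take it as a parameter.
public static class Crc32cHeaderSketch
{
    public static HttpRequestMessage AttachCrc32c(HttpRequestMessage request, string base64Crc32c)
    {
        // Results in e.g. "x-goog-hash: crc32c=yZRlqg=="
        request.Headers.TryAddWithoutValidation("x-goog-hash", "crc32c=" + base64Crc32c);
        return request;
    }
}
```

On a checksum mismatch the server rejects the finalizing request, which the client now surfaces as a 400 BadRequest, matching the updated integration tests described above.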
Can you use UploadStreamInterceptor, which already calculates the hash, here instead of recomputing the hash on the final chunk?
```csharp
{
    _hashingStream = ContentStream as HashingStream;
    _interceptor = new Crc32cHashInterceptor(this, _hashingStream, _service);
    _service?.HttpClient?.MessageHandler?.AddExecuteInterceptor(_interceptor);
```
The same service may be used in parallel for many different requests, including for many different uploads. You'll have interceptors that will be applied to all requests???
Even when you check the URI, assuming that check is enough in all cases, you are adding latency to all requests.
What am I missing?
You make a very fair point regarding the service-wide scope of the interceptor.
While I have included logic to remove the interceptor upon upload completion or failure (to prevent long-term memory leaks on line 94 of the CustomMediaUpload class), I recognize that during high-concurrency periods, the interceptor chain could grow. This would indeed force every request—even those unrelated to this upload—to perform the URI check.
I initially assumed the latency of a reference comparison would be negligible, but I see how this architectural choice could be suboptimal for shared service instances. Since I want to ensure this implementation aligns with our performance standards, I am very open to your suggestions.
Updated the URI comparison to include a reference equality check (ReferenceEquals) before performing a value-based .Equals(). This adds a fail-fast path that compares object references rather than performing a character-by-character string or component comparison, bypassing the full URI equality check and reducing CPU cycles and latency for the majority of requests where the object references do not match.
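The fast path being discussed can be sketched as below. The helper name is hypothetical; the actual interceptor code in the PR differs:

```csharp
using System;

public static class UploadUriFilter
{
    // Sketch of the fast-path check: a reference comparison short-circuits
    // before any value-based URI equality is evaluated.
    public static bool MatchesUploadUri(Uri uploadUri, Uri requestUri)
    {
        if (ReferenceEquals(uploadUri, requestUri))
        {
            return true; // same object: no string/component comparison needed
        }
        // Fall back to value equality only when the references differ.
        return uploadUri != null && uploadUri.Equals(requestUri);
    }
}
```

Note that this only speeds up the match case; unrelated requests through a shared service still pay for the value-based `Uri.Equals`, which is the concern raised above.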
My recommendation here is that you add an event on the base upload type that triggers on the last request only. This would be in Google.Apis. (Note that this is different to the initial proposal in Google.Apis which meant changing upload logic for all Discovery based libraries).
fix(Storage): Check for disabled checksumming before wrapping stream
Mark UploadValidationException as [Obsolete] to signal the transition from client-side to server-side upload validation. Server-side validation now returns a GoogleApiException with a 400 (Bad Request) status code instead of throwing this custom exception. This change maintains binary compatibility while providing a migration path for existing catch blocks.
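A minimal sketch of the deprecation pattern this commit describes. The attribute message text here is illustrative, not the actual wording in the PR:

```csharp
using System;

// Sketch: the exception type stays public for binary compatibility, but is
// flagged so callers get a compile-time warning and can migrate their catch
// blocks to handle the server-side 400 response instead.
[Obsolete("Checksum failures are now reported server-side as a GoogleApiException with HTTP 400 (Bad Request).")]
public class UploadValidationException : Exception
{
    public UploadValidationException(string message) : base(message) { }
}
```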
… scenarios: Add tests for retry scenarios
…e client implementation: Modified the internal logic of the upload helper class to remove duplicated hashing code that branched on the upload validation mode. The Object Execute and Object ExecuteAsync methods are modified accordingly.
Removes the IsFinalChunk(string) helper method and replaces it with a direct check on HashingStream.EndOfStreamReached. By leveraging the fact that the HashingStream wrapper already observes when 0 bytes are read from the source, we can accurately identify the terminal request without redundant string parsing or range header calculations.
… for uploadUri: This change introduces a ReferenceEquals check for the _uploadUri. Since URI objects are typically stable during the request lifecycle, this provides an O(1) fast path to bypass the interceptor logic for unrelated traffic, minimizing service-wide latency.
- Added ReferenceEquals check for immediate filtering
- Reduced CPU cycles for non-upload service calls
- Addressed PR feedback regarding parallel request overhead
Reverts the simplified EndOfStreamReached check introduced earlier, as that flag does not reset during stream rewinds or session resumes. This caused incorrect x-goog-hash headers to be attached to retried chunks or resumed sessions. Restores the Content-Range parsing logic to dynamically determine whether a request represents the final chunk of the upload.
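The restored Content-Range approach can be sketched as follows. The helper is hypothetical (the client's actual parser avoids the string allocations flagged in review), but the header semantics are the real resumable-upload ones: `bytes <start>-<end>/<total>` for a data chunk, `bytes */<total>` for a status probe:

```csharp
public static class ContentRangeSketch
{
    // Final chunk: "bytes <start>-<end>/<total>" where end + 1 == total.
    // "bytes */<total>" is a status-probe request, never the final chunk.
    public static bool IsFinalChunk(string contentRange)
    {
        const string prefix = "bytes ";
        if (contentRange == null || !contentRange.StartsWith(prefix)) return false;
        string span = contentRange.Substring(prefix.Length);   // e.g. "0-261119/262144"
        int slash = span.IndexOf('/');
        if (slash < 0) return false;
        string total = span.Substring(slash + 1);
        string range = span.Substring(0, slash);
        if (total == "*" || range == "*") return false;        // size unknown or probe
        int dash = range.IndexOf('-');
        long end, totalBytes;
        if (dash < 0 || !long.TryParse(range.Substring(dash + 1), out end)
                     || !long.TryParse(total, out totalBytes)) return false;
        return end + 1 == totalBytes;                          // last byte index is total - 1
    }
}
```

Unlike a stream-side flag, this is computed per request from the header actually being sent, so retried or resumed chunks classify correctly.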
…erceptor cleanup The Crc32cHashInterceptor is registered globally on the shared HttpClient.MessageHandler. Previously, cleanup only occurred on successful or failed upload events. If an upload was cancelled or aborted, the interceptor remained in the handler's interceptor list indefinitely, causing a memory leak. Added IDisposable to CustomMediaUpload to ensure the interceptor is explicitly removed from the MessageHandler even if the upload does not reach a terminal state.
…prevent memory leaks:
- Implement `IDisposable` on the nested `Crc32cHashInterceptor` class to explicitly unsubscribe from `UploadSessionData` and `ProgressChanged` events on disposal.
- Invoke the interceptor's `Dispose` method during `CustomMediaUpload.Dispose()` to ensure robust cleanup and release resources during upload cancellation or early exit.
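The disposal guarantee these commits describe can be illustrated with a small registration wrapper. The types here are illustrative stand-ins, not the actual client classes:

```csharp
using System;
using System.Collections.Generic;

// Sketch: tying interceptor removal to Dispose() guarantees cleanup on any
// exit path (success, failure, or cancellation), not just terminal upload
// events, so the shared handler's interceptor list cannot grow unbounded.
public sealed class InterceptorRegistration : IDisposable
{
    private readonly IList<object> _interceptors;
    private readonly object _interceptor;
    private bool _disposed;

    public InterceptorRegistration(IList<object> interceptors, object interceptor)
    {
        _interceptors = interceptors;
        _interceptor = interceptor;
        _interceptors.Add(interceptor); // registered for the upload's lifetime
    }

    public void Dispose()
    {
        if (_disposed) return;          // idempotent: safe to call more than once
        _disposed = true;
        _interceptors.Remove(_interceptor);
    }
}
```

Wrapping registration this way lets a `using` block (or the owning type's `Dispose`) remove the interceptor even when the upload is abandoned mid-flight.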
This PR enables full object checksum validation (specifically CRC32C) for JSON-based resumable uploads, ensuring end-to-end data integrity for multi-chunk transfers. Please see b/461996128