
feat(Storage): Enable full object checksum validation for resumable uploads #15395

Open

mahendra-google wants to merge 15 commits into googleapis:main from mahendra-google:enable-json-full-object-checksums

feat(Storage): Enable full object checksum validation for resumable uploads#15395
mahendra-google wants to merge 15 commits into
googleapis:mainfrom
mahendra-google:enable-json-full-object-checksums

Conversation

@mahendra-google (Contributor)

This PR enables full object checksum validation (specifically CRC32C) for JSON-based resumable uploads, ensuring end-to-end data integrity for multi-chunk transfers. Please see b/461996128.
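
For readers unfamiliar with the mechanism, here is a minimal, self-contained sketch of how a full-object CRC32C can be computed and formatted for Cloud Storage's x-goog-hash request header. The PR itself builds on the library's existing hashing infrastructure; this standalone version is purely illustrative.

    using System;
    using System.Text;

    // CRC-32C (Castagnoli): table-driven, reflected, reversed polynomial 0x82F63B78.
    static class Crc32c
    {
        private static readonly uint[] Table = BuildTable();

        private static uint[] BuildTable()
        {
            var table = new uint[256];
            for (uint i = 0; i < 256; i++)
            {
                uint c = i;
                for (int k = 0; k < 8; k++)
                {
                    c = (c & 1) != 0 ? (c >> 1) ^ 0x82F63B78u : c >> 1;
                }
                table[i] = c;
            }
            return table;
        }

        public static uint Compute(byte[] data)
        {
            uint crc = 0xFFFFFFFFu;
            foreach (byte b in data)
            {
                crc = (crc >> 8) ^ Table[(crc ^ b) & 0xFF];
            }
            return crc ^ 0xFFFFFFFFu;
        }

        // Cloud Storage expects the big-endian CRC32C bytes, base64-encoded,
        // sent as "x-goog-hash: crc32c=<value>" on the request.
        public static string ToGoogHashValue(uint crc) =>
            Convert.ToBase64String(new[]
            {
                (byte)(crc >> 24), (byte)(crc >> 16), (byte)(crc >> 8), (byte)crc
            });
    }

    class Program
    {
        static void Main()
        {
            uint crc = Crc32c.Compute(Encoding.ASCII.GetBytes("123456789"));
            Console.WriteLine($"{crc:X8}");                  // E3069283, the standard CRC-32C check value
            Console.WriteLine(Crc32c.ToGoogHashValue(crc));  // 4waSgw==
        }
    }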

@mahendra-google mahendra-google requested review from a team January 21, 2026 10:57
@product-auto-label bot added the api: storage label (Issues related to the Cloud Storage API) on Jan 21, 2026
@gemini-code-assist (Contributor)

Summary of Changes

Hello @mahendra-google, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces robust data integrity checks for resumable uploads by integrating CRC32C checksum validation. This enhancement ensures that multi-chunk transfers to Google Cloud Storage are verified end-to-end, preventing data corruption. The changes also update the client's error handling to align with the server's response to checksum mismatches.

Highlights

  • Enabled CRC32C Checksum Validation: Implemented end-to-end CRC32C checksum validation for JSON-based resumable uploads, ensuring data integrity for multi-chunk transfers.
  • Updated Upload Validation Behavior: Modified integration tests to reflect the new behavior where invalid checksums during resumable uploads now result in a GoogleApiException with a BadRequest status, indicating server-side rejection.
  • Enhanced Stream Handling for Checksum Calculation: Improved the LengthOnlyStream implementation to support seeking and reading, which is crucial for calculating the full CRC32C hash of the stream before the final chunk is sent (a rough sketch of such a wrapper follows this list).
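
To illustrate the last highlight, here is a rough, hypothetical sketch of a length-reporting stream wrapper that also delegates Read and Seek to its inner stream; the repository's actual LengthOnlyStream may differ in detail.

    using System;
    using System.IO;

    // Hypothetical length-reporting wrapper: delegates Read/Seek so callers can
    // make a full hashing pass over the content before the final chunk is sent.
    internal sealed class SeekableLengthStream : Stream
    {
        private readonly Stream _inner;
        private readonly long _length;

        public SeekableLengthStream(Stream inner, long length)
        {
            _inner = inner ?? throw new ArgumentNullException(nameof(inner));
            _length = length;
        }

        public override bool CanRead => _inner.CanRead;
        public override bool CanSeek => _inner.CanSeek;
        public override bool CanWrite => false;
        public override long Length => _length;

        public override long Position
        {
            get => _inner.Position;
            set => _inner.Position = value;
        }

        public override int Read(byte[] buffer, int offset, int count) =>
            _inner.Read(buffer, offset, count);

        public override long Seek(long offset, SeekOrigin origin) =>
            _inner.Seek(offset, origin);

        public override void Flush() => _inner.Flush();

        public override void SetLength(long value) =>
            throw new NotSupportedException();

        public override void Write(byte[] buffer, int offset, int count) =>
            throw new NotSupportedException();
    }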


@gemini-code-assist bot left a comment


Code Review

This pull request enables full object checksum validation for resumable uploads by introducing a Crc32cHashInterceptor. This interceptor calculates the CRC32C hash of the upload stream and attaches it to the final chunk of a resumable upload, ensuring end-to-end data integrity. The implementation is solid, with proper handling of the interceptor's lifecycle. The integration tests have been correctly updated to reflect the new server-side validation behavior, where the server now rejects corrupted uploads with a BadRequest status. Additionally, LengthOnlyStream has been updated to be a more compliant Stream implementation. My review includes a couple of suggestions to optimize performance by avoiding string allocations in the new parsing logic.

@krishnamd-jkp

Can you use UploadStreamInterceptor, which already calculates the hash, instead of recomputing the hash on the final chunk?

@mahendra-google (Contributor, Author)

Can you use UploadStreamInterceptor, which already calculates the hash, instead of recomputing the hash on the final chunk?

@krishnamd-jkp UploadStreamInterceptor does not calculate the hash. Please see https://github.com/googleapis/google-api-dotnet-client/blob/main/Src/Support/Google.Apis/Upload/ResumableUpload.cs#L207

@mahendra-google mahendra-google requested review from a team as code owners March 18, 2026 10:30
@mahendra-google mahendra-google force-pushed the enable-json-full-object-checksums branch from 8086e46 to a371f67 on March 31, 2026 07:53
@mahendra-google mahendra-google force-pushed the enable-json-full-object-checksums branch from 99c6617 to 54d1d1c on April 24, 2026 05:10
    {
        // Capture the hashing wrapper around the content stream so the running
        // CRC32C is available, then register an interceptor on the service's
        // message handler to attach the hash to the final chunk request.
        _hashingStream = ContentStream as HashingStream;
        _interceptor = new Crc32cHashInterceptor(this, _hashingStream, _service);
        _service?.HttpClient?.MessageHandler?.AddExecuteInterceptor(_interceptor);
Contributor
The same service may be used in parallel for many different requests, including for many different uploads. You'll have interceptors that will be applied to all requests???

Even when you check the URI, assuming that check is enough in all cases, you are adding latency to all requests.

What am I missing?

Contributor Author

You make a very fair point regarding the service-wide scope of the interceptor.

While I have included logic to remove the interceptor upon upload completion or failure (to prevent long-term memory leaks on line 94 of the CustomMediaUpload class), I recognize that during high-concurrency periods, the interceptor chain could grow. This would indeed force every request—even those unrelated to this upload—to perform the URI check.

I initially assumed the latency of a reference comparison would be negligible, but I see how this architectural choice could be suboptimal for shared service instances. Since I want to ensure this implementation aligns with our performance standards, I am very open to your suggestions.

Contributor Author

Updated the URI comparison to perform a reference-equality check (ReferenceEquals) before the value-based .Equals(). This adds an $O(1)$ short-circuit: when the same Uri instance flows through the request pipeline, a single pointer comparison settles the match, bypassing the $O(n)$ component-by-component URI check and reducing the interceptor's per-request CPU cost and latency.
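
A sketch of the resulting check; the class and parameter names here are assumptions, not the PR's actual code.

    using System;

    internal static class UploadUriFilter
    {
        // Hypothetical helper: decide whether an intercepted request targets
        // this upload's resumable session URI.
        public static bool IsForThisUpload(Uri requestUri, Uri uploadUri)
        {
            // O(1) pointer comparison: settles the match immediately when the
            // same Uri instance flows through the request pipeline.
            if (ReferenceEquals(requestUri, uploadUri))
            {
                return true;
            }
            // Otherwise fall back to the O(n) component-by-component comparison.
            return uploadUri != null && uploadUri.Equals(requestUri);
        }
    }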

Contributor

My recommendation here is that you add an event on the base upload type that triggers on the last request only. This would be in Google.Apis. (Note that this is different to the initial proposal in Google.Apis which meant changing upload logic for all Discovery based libraries).
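
For concreteness, the suggested hook might look roughly like the following. This is a purely hypothetical shape, not an existing Google.Apis API.

    using System;
    using System.Net.Http;

    // Purely hypothetical: an event on the upload type raised only for the
    // final chunk request, so a checksum header can be attached without a
    // service-wide interceptor.
    public class ResumableUploadWithFinalChunkEvent
    {
        public event Action<HttpRequestMessage> FinalChunkSending;

        // The upload loop would call this just before sending the last chunk.
        protected void OnFinalChunkSending(HttpRequestMessage request) =>
            FinalChunkSending?.Invoke(request);

        // A subscriber could then attach the hash, e.g.:
        //   upload.FinalChunkSending += req =>
        //       req.Headers.Add("x-goog-hash", "crc32c=" + base64Crc);
    }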

@mahendra-google mahendra-google force-pushed the enable-json-full-object-checksums branch from 54d1d1c to 8b894ee on May 5, 2026 09:10
fix(Storage): Check for disabled checksumming before wrapping stream
Mark UploadValidationException as [Obsolete] to signal the transition
from client-side to server-side upload validation.

Server-side validation now returns a GoogleApiException with a
400 (Bad Request) status code instead of throwing this custom exception.
This change maintains binary compatibility while providing a migration
path for existing catch blocks.
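
A hedged sketch of that migration path; the upload call is elided, and only the documented GoogleApiException.HttpStatusCode property is assumed.

    using System;
    using System.Net;
    using Google; // GoogleApiException lives in the Google.Apis support library

    static class MigrationExample
    {
        static void UploadWithServerSideValidation()
        {
            try
            {
                // client.UploadObject(bucket, objectName, contentType, stream);
                // (elided: requires a live StorageClient)
            }
            catch (GoogleApiException e) when (e.HttpStatusCode == HttpStatusCode.BadRequest)
            {
                // A server-side checksum rejection now surfaces here instead of
                // as the client-side UploadValidationException.
                Console.WriteLine($"Upload rejected: {e.Message}");
            }
        }
    }
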
…e client implementation

Modified the internal logic of the upload helper class to remove duplicate hashing code based on the upload validation mode. The Object Execute and Object ExecuteAsync methods were modified accordingly.
Removes the IsFinalChunk(string) helper method and replaces it with a
direct check on HashingStream.EndOfStreamReached.

By leveraging the fact that the HashingStream wrapper already observes
when 0 bytes are read from the source, we can accurately identify the
terminal request without redundant string parsing or range header
calculations.
… for uploadUri

This change introduces a ReferenceEquals check for the _uploadUri.
Since URI objects are typically stable during the request lifecycle,
this provides an O(1) fast-path to bypass the interceptor logic
for unrelated traffic, minimizing service-wide latency.

- Added ReferenceEquals check for immediate filtering
- Reduced CPU cycles for non-upload service calls
- Addressed PR feedback regarding parallel request overhead
@mahendra-google mahendra-google force-pushed the enable-json-full-object-checksums branch from 361aff2 to e4ee73a on May 7, 2026 08:59
@amanda-tarafa amanda-tarafa self-requested a review May 11, 2026 19:26
@amanda-tarafa amanda-tarafa dismissed their stale review May 11, 2026 19:26

My review shouldn't be binding.

Reverts the simplified EndOfStreamReached check introduced earlier, as that flag does not reset during stream rewinds or session resumes.
This caused incorrect x-goog-hash headers to be attached to retried
chunks or resumed sessions.

Restores Content-Range parsing logic to dynamically determine if a
request represents the final chunk of the upload.
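
A rough sketch of such Content-Range-based detection; the PR's actual parsing may differ. For a resumable upload, the final data chunk carries a known total size, e.g. "bytes 262144-524287/524288", and is final when its end offset equals total - 1.

    using System;
    using System.Globalization;

    internal static class ContentRangeParser
    {
        // Returns true when the Content-Range describes the last data chunk,
        // i.e. the total size is known and the chunk ends at the last byte.
        public static bool IsFinalChunk(string contentRange)
        {
            // Expected forms: "bytes 0-262143/524288" (data chunk) or
            // "bytes */524288" (status query; not a data chunk).
            if (contentRange == null ||
                !contentRange.StartsWith("bytes ", StringComparison.Ordinal))
            {
                return false;
            }
            string value = contentRange.Substring("bytes ".Length);
            int slash = value.IndexOf('/');
            if (slash < 0)
            {
                return false;
            }
            string totalPart = value.Substring(slash + 1);
            if (totalPart == "*")
            {
                return false; // total size not yet known, so not final
            }
            string range = value.Substring(0, slash);
            int dash = range.IndexOf('-');
            if (range == "*" || dash < 0)
            {
                return false; // status query or malformed range
            }
            long total = long.Parse(totalPart, CultureInfo.InvariantCulture);
            long end = long.Parse(range.Substring(dash + 1), CultureInfo.InvariantCulture);
            return end == total - 1; // final chunk ends at the last byte
        }
    }
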
…erceptor cleanup

The Crc32cHashInterceptor is registered globally on the shared
HttpClient.MessageHandler. Previously, cleanup only occurred on
successful or failed upload events. If an upload was cancelled or
aborted, the interceptor remained in the handler's interceptor list
indefinitely, causing a memory leak.

Added IDisposable to CustomMediaUpload to ensure the interceptor is
explicitly removed from the MessageHandler even if the upload does
not reach a terminal state.
…prevent memory leaks

- Implement `IDisposable` on the nested `Crc32cHashInterceptor` class to explicitly unsubscribe from `UploadSessionData` and `ProgressChanged` events on disposal.
- Invoke the interceptor's `Dispose` method during `CustomMediaUpload.Dispose()` to ensure robust cleanup and release resources during upload cancellation or early exit.
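
A minimal sketch of the idempotent cleanup shape described in these two commits; the type and delegate names are assumptions, not the PR's actual members.

    using System;

    // Sketch of an idempotent cleanup holder: Dispose runs on success, failure,
    // cancellation, or early exit alike, so the interceptor never lingers in
    // the shared handler's interceptor list.
    internal sealed class InterceptorRegistration : IDisposable
    {
        private readonly Action _removeFromHandler; // e.g. remove the interceptor from the MessageHandler
        private readonly Action _unsubscribeEvents; // e.g. detach UploadSessionData/ProgressChanged handlers
        private bool _disposed;

        public InterceptorRegistration(Action removeFromHandler, Action unsubscribeEvents)
        {
            _removeFromHandler = removeFromHandler;
            _unsubscribeEvents = unsubscribeEvents;
        }

        public void Dispose()
        {
            if (_disposed)
            {
                return;
            }
            _disposed = true;
            _unsubscribeEvents?.Invoke();
            _removeFromHandler?.Invoke();
        }
    }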