fix(metrics): prevent thread leak by ensuring singleton initialization #1492
Conversation
Code Review
This pull request addresses a critical thread and memory leak by ensuring the metrics subsystem is initialized only once. The approach uses a global flag to track initialization, which is a good start. However, the current implementation is not thread-safe and could still lead to multiple initializations under concurrent Client instantiations. I've provided a suggestion to add a threading.Lock to make the initialization truly a singleton. Additionally, I found a minor code duplication in one of the tests.
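Roughly, the suggested guard boils down to a module-level lock next to the existing flag, with the flag checked twice. A minimal sketch, assuming the same names as the diff below:

import threading

# Module-level guard state for one-time metrics initialization (sketch only).
_metrics_monitor_lock = threading.Lock()
_metrics_monitor_initialized = False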
Force-pushed from 1ef3da1 to 67c682e.
google/cloud/spanner_v1/client.py
Outdated
if not _metrics_monitor_initialized:
    with _metrics_monitor_lock:
        if not _metrics_monitor_initialized:
            meter_provider = metrics.NoOpMeterProvider()
            try:
                if not _get_spanner_emulator_host():
                    meter_provider = MeterProvider(
                        metric_readers=[
                            PeriodicExportingMetricReader(
                                CloudMonitoringMetricsExporter(
                                    project_id=project,
                                    credentials=credentials,
                                ),
                                export_interval_millis=METRIC_EXPORT_INTERVAL_MS,
                            ),
                        ]
                    )
                metrics.set_meter_provider(meter_provider)
                SpannerMetricsTracerFactory()
                _metrics_monitor_initialized = True
            except Exception as e:
                log.warning(
                    "Failed to initialize Spanner built-in metrics. Error: %s",
                    e,
                )
nit: could this potentially be moved to a separate function to keep the init function a bit shorter/cleaner?
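A rough sketch of that extraction, reusing the same double-checked locking as the diff above and assuming client.py's existing imports and module-level lock/flag (the summary below calls the helper _initialize_metrics; the exact signature here is assumed):

def _initialize_metrics(project, credentials):
    """Set up the shared MeterProvider and tracer factory exactly once."""
    global _metrics_monitor_initialized
    if _metrics_monitor_initialized:
        return
    with _metrics_monitor_lock:
        if _metrics_monitor_initialized:
            return
        meter_provider = metrics.NoOpMeterProvider()
        try:
            if not _get_spanner_emulator_host():
                meter_provider = MeterProvider(
                    metric_readers=[
                        PeriodicExportingMetricReader(
                            CloudMonitoringMetricsExporter(
                                project_id=project,
                                credentials=credentials,
                            ),
                            export_interval_millis=METRIC_EXPORT_INTERVAL_MS,
                        ),
                    ]
                )
            metrics.set_meter_provider(meter_provider)
            SpannerMetricsTracerFactory()
            _metrics_monitor_initialized = True
        except Exception as e:
            log.warning(
                "Failed to initialize Spanner built-in metrics. Error: %s", e
            )

Client.__init__ would then only call _initialize_metrics(project, credentials), keeping the constructor short.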
tests/unit/test_metrics.py
Outdated
client = Client(
    project="test",
    credentials=TestCredentials(),
    # client_options={"api_endpoint": "none"}
nit: remove
Force-pushed from 519755b to 1341f21.
/gemini review
Code Review
This pull request provides a crucial fix for a thread and memory leak issue caused by repeated initialization of OpenTelemetry providers. The implementation correctly uses a singleton pattern with double-checked locking for the metrics provider initialization, ensuring it only runs once. The switch from a global mutable tracer to a thread-safe contextvars.ContextVar is well-executed and effectively isolates metric tracing across concurrent operations, as demonstrated by the new concurrency tests. The accompanying fix for batch commit idempotency is also a valuable improvement. The test suite has been significantly enhanced with new concurrency tests and refactoring of existing ones, providing strong confidence in the correctness of these changes. I have one suggestion for improving the API clarity in the SpannerMetricsTracerFactory. Overall, this is an excellent and well-tested contribution that addresses a critical issue.
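For reference, the contextvars pattern the review describes can be sketched as follows: a context-local slot replaces the old module-global tracer, so concurrent requests each see their own value. The names are illustrative, not the actual SpannerMetricsTracerFactory API:

from contextvars import ContextVar

# Context-local holder for the currently active metrics tracer. Unlike a plain
# module-level global, each thread / asyncio task gets its own value.
_current_metrics_tracer = ContextVar("_current_metrics_tracer", default=None)

def set_current_metrics_tracer(tracer):
    """Bind a tracer to the current context; returns a Token for later reset."""
    return _current_metrics_tracer.set(tracer)

def get_current_metrics_tracer():
    """Return the tracer bound to the current context, or None."""
    return _current_metrics_tracer.get()

def reset_current_metrics_tracer(token):
    """Restore the previous value, e.g. when a metrics capture scope exits."""
    _current_metrics_tracer.reset(token)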
google/cloud/spanner_v1/metrics/spanner_metrics_tracer_factory.py
Outdated
olavloite left a comment
Looks overall good to me, with a few small nits/questions.
from .spanner_metrics_tracer_factory import SpannerMetricsTracerFactory

from contextvars import Token
nit: group this with the other import
done
Would it make sense to check whether the resources have already been set for this tracer, and if so, skip extracting and setting them again? (Is there any possibility that they would ever change during the lifetime of a tracer, and if that is even possible, is it something that we would want?)
Makes sense. Added a check to skip the extraction if tracer.client_attributes is already populated.
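The check amounts to an early return before re-extracting resources; a tiny sketch (only client_attributes comes from this thread, the surrounding function and extraction helper are assumed):

def _populate_client_attributes(tracer, extract_resource_attributes):
    # Resource attributes do not change over the lifetime of a tracer, so if a
    # previous call already populated them, skip the extraction entirely.
    if tracer.client_attributes:
        return
    tracer.client_attributes.update(extract_resource_attributes())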
google/cloud/spanner_v1/batch.py
Outdated
-    getattr(database, "_next_nth_request", 0),
-    1,
+    nth_request,
+    attempt.increment(),
I think that the existing code was correct. There are two different things that can be retried in Spanner:
- Aborted transactions: When a read/write transaction is aborted, the entire transaction is retried. This should not cause attempt to be increased, even in this case, where the entire transaction is just a single Commit call.
- Unavailable: A single RPC can fail due to network errors, the server temporarily being down, etc. This is normally retried by Gax. In this case, only a single RPC (not the entire transaction) is retried. It is only in these cases that attempt should be increased.
Thanks for the clarification. I've moved attempt back inside wrapped_method so it resets on Aborted retries, but kept nth_request outside to ensure the logical request ID remains stable.
Hmm.... I think that there is still a misunderstanding here. The original code was (AFAICT) correct, meaning that there should be no changes here. Could you otherwise explain why this is being changed here / what issue is being fixed?
Put another way: In my opinion, the logical request ID should not remain the same during these retries. These should be considered new requests, as the reason they are being retried is that the original transaction was aborted by Spanner.
In that case, the previous logic makes sense. I was under the impression that the request ID should be preserved in abort cases. Reverted to the old logic.
google/cloud/spanner_v1/client.py
Outdated
):
    warnings.warn(_EMULATOR_HOST_HTTP_SCHEME)

# Check flag to enable Spanner builtin metrics
global _metrics_monitor_initialized
Is this still needed here?
removed
@@ -0,0 +1,13 @@
import pytest
nit: add copyright header
done
@@ -0,0 +1,80 @@
import threading
nit: add copyright header
added
Force-pushed from cdf3d0e to 6c01381.
Force-pushed from 6c01381 to 390ef1d.
Summary:
This PR fixes a critical memory and thread leak in the google-cloud-spanner client when built-in metrics are enabled (default behavior).
Previously, the Client constructor unconditionally initialized a new OpenTelemetry MeterProvider and PeriodicExportingMetricReader on every instantiation. Each reader spawned a new background thread for metric exporting that was never cleaned up or reused. In environments where Client objects are frequently created (e.g., Cloud Functions, web servers, or data pipelines), this caused a linear accumulation of threads, leading to RuntimeError: can't start new thread and OOM crashes.
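The growth is easy to reproduce with just the OpenTelemetry SDK pieces involved. In this sketch a ConsoleMetricExporter stands in for CloudMonitoringMetricsExporter, but the per-reader background thread behaves the same way:

import threading

from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

before = threading.active_count()

providers = []
for _ in range(10):
    # This mirrors what the old Client.__init__ did on every instantiation:
    # each PeriodicExportingMetricReader starts its own background export thread.
    reader = PeriodicExportingMetricReader(
        ConsoleMetricExporter(), export_interval_millis=60_000
    )
    providers.append(MeterProvider(metric_readers=[reader]))

after = threading.active_count()
print(f"threads before={before}, after={after}")  # grows roughly by one per provider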
Fix Implementation:
Refactored Metrics Initialization (Thread Safety & Memory Leak Fix):
- Implemented a singleton pattern for the OpenTelemetry MeterProvider using threading.Lock, preventing unbounded background thread creation (the memory leak).
- Moved the metrics initialization logic to a cleaner helper function _initialize_metrics in client.py.
- Replaced global mutable state in SpannerMetricsTracerFactory with contextvars.ContextVar to ensure thread-safe, isolated metric tracing across concurrent requests.
- Updated MetricsInterceptor and MetricsCapture to correctly use the context-local tracer.
Fixed Batch.commit Idempotency (AlreadyExists Regression):
- Modified Batch.commit to initialize nth_request and the attempt counter outside the retry loop.
- This ensures that retries (e.g., on ABORTED) reuse the same Request ID, allowing Cloud Spanner to correctly deduplicate requests and preventing spurious AlreadyExists (409) errors.
Verification:
- Added tests/unit/test_metrics_concurrency.py to verify tracer isolation and thread safety (see the sketch below).
- Cleaned up tests/unit/test_metrics.py and consolidated mocks in conftest.py.
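As an illustration of what the tracer-isolation check can look like, here is a standalone sketch, not the contents of test_metrics_concurrency.py; the ContextVar here stands in for the one inside SpannerMetricsTracerFactory:

import threading
from contextvars import ContextVar, copy_context

current_tracer = ContextVar("current_tracer", default=None)

def test_tracer_isolation_across_threads():
    seen = {}

    def worker(name):
        def run():
            # Setting the ContextVar inside this copied context must not leak
            # into other workers or into the main thread.
            current_tracer.set(name)
            seen[name] = current_tracer.get()
        copy_context().run(run)

    threads = [
        threading.Thread(target=worker, args=(f"tracer-{i}",)) for i in range(8)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert all(seen[f"tracer-{i}"] == f"tracer-{i}" for i in range(8))
    assert current_tracer.get() is None  # main thread was never touched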