Use presigned URLs for all S3 operations (#1022) #2190
lancepioch wants to merge 3 commits into main
Conversation
Some S3-compatible providers (e.g., Hetzner) require presigned requests. Previously only GetObject and UploadPart were presigned; the remaining operations (CreateMultipartUpload, ListParts, CompleteMultipartUpload, AbortMultipartUpload, DeleteObject) used header-based auth. This adds an executeS3Command() method to S3Filesystem that sends all commands via presigned URLs with manual XML response parsing, giving universal compatibility with all S3-compatible providers.
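The "manual XML response parsing" half of that approach can be sketched roughly as follows. The `parseS3Response()` helper and the sample response body are illustrative assumptions, not code from this PR; in the real method the body would come from sending the presigned request (e.g. `$this->client->createPresignedRequest($command, '+60 minutes')` followed by a Guzzle `send()`).

```php
<?php
// Sketch: flatten an S3 XML response body into a key => value array,
// roughly mirroring the shape of an Aws\Result for simple responses.
// Note: real S3 responses also carry a default xmlns
// (http://s3.amazonaws.com/doc/2006-03-01/) that would need to be
// passed to children(); the sample below omits it for brevity.
function parseS3Response(string $body): array
{
    $xml = simplexml_load_string($body);
    if ($xml === false) {
        return [];
    }
    $result = [];
    foreach ($xml->children() as $child) {
        $result[$child->getName()] = (string) $child;
    }
    return $result;
}

// Hypothetical CreateMultipartUpload response body.
$body = <<<'XML'
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult>
  <Bucket>backups</Bucket>
  <Key>server.tar.gz</Key>
  <UploadId>example-upload-id</UploadId>
</InitiateMultipartUploadResult>
XML;

$parsed = parseS3Response($body);
echo $parsed['UploadId'], PHP_EOL; // example-upload-id
```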
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@app/Extensions/Filesystem/S3Filesystem.php`:
- Around line 54-55: The code instantiates a new GuzzleClient for every call and
calls send($presignedRequest) without timeouts; change S3Filesystem to reuse an
injected or cached Guzzle client configured with sensible timeouts (e.g.,
timeout and connect_timeout) instead of creating $guzzle = new GuzzleClient()
inline, add a private http client property (e.g., $this->httpClient) set via the
constructor or lazy-initialized once, and replace
$guzzle->send($presignedRequest) with $this->httpClient->send($presignedRequest)
so requests will be reused and will not hang indefinitely.
🧹 Nitpick comments (1)
app/Extensions/Filesystem/S3Filesystem.php (1)
55-63: Consider wrapping Guzzle exceptions for consistent error handling.

When Guzzle throws RequestException for HTTP 4xx/5xx errors, these will bubble up without context about which S3 command failed. While functional, wrapping these exceptions would provide better diagnostics.

Proposed enhancement:

```diff
+use GuzzleHttp\Exception\GuzzleException;
+
 public function executeS3Command(CommandInterface $command): Result
 {
     $presignedRequest = $this->client->createPresignedRequest($command, '+60 minutes');
-    $guzzle = new GuzzleClient();
-    $response = $guzzle->send($presignedRequest);
+    try {
+        $response = $this->getGuzzleClient()->send($presignedRequest);
+    } catch (GuzzleException $e) {
+        throw new RuntimeException(
+            "S3 request failed for {$command->getName()}: {$e->getMessage()}",
+            $e->getCode(),
+            $e
+        );
+    }
     $body = (string) $response->getBody();
```
```php
$guzzle = new GuzzleClient();
$response = $guzzle->send($presignedRequest);
```
Instantiating a new GuzzleClient on every call is inefficient and lacks timeout configuration.
Creating a new HTTP client for each S3 command adds overhead. More critically, the request has no timeout, which could cause indefinite hangs if the S3 endpoint is unresponsive.
Proposed fix: inject or cache the client with timeout
```diff
+ private ?GuzzleClient $guzzle = null;
+
+ private function getGuzzleClient(): GuzzleClient
+ {
+     if ($this->guzzle === null) {
+         $this->guzzle = new GuzzleClient([
+             'timeout' => 30,
+             'connect_timeout' => 10,
+         ]);
+     }
+
+     return $this->guzzle;
+ }
+
  public function executeS3Command(CommandInterface $command): Result
  {
      $presignedRequest = $this->client->createPresignedRequest($command, '+60 minutes');
-     $guzzle = new GuzzleClient();
-     $response = $guzzle->send($presignedRequest);
+     $response = $this->getGuzzleClient()->send($presignedRequest);
```
Fixes #1022