
Conversation

@AmanVarshney01 (Member) commented Feb 8, 2026

Adds a new Postgres IaC section and guides for Terraform, Pulumi, and Alchemy.

Key points:

  • Terraform: project/database/connection + outputs + Prisma ORM wiring
  • Pulumi: Terraform-bridge SDK workflow (pulumi package add terraform-provider ...) + Prisma ORM wiring
  • Alchemy: Project/Database/Connection + Hyperdrive example + note that ALCHEMY_PASSWORD is required when secrets are stored

Verification:

  • bun run build
  • Real end-to-end create+destroy verification for all 3 flows using a service token (checked output URL schemes and cleanup).

Summary by CodeRabbit

  • Documentation
    • Added comprehensive Infrastructure as Code (IaC) guides for Prisma Postgres covering Terraform, Pulumi, and Alchemy.
    • Guides include conceptual overviews, step‑by‑step setup, provisioning examples for projects/databases/connections, retrieving and exporting connection details, deployment and cleanup instructions, production notes (state/credentials), and troubleshooting.
    • Added an IaC landing page and navigation/category metadata for discoverability.

github-actions bot (Contributor) commented Feb 8, 2026

Dangerous URL check

No absolute URLs to prisma.io/docs found.
No local URLs found.

coderabbitai bot (Contributor) commented Feb 8, 2026

Walkthrough

Adds a new Infrastructure as Code (IaC) section for Prisma Postgres with three provider guides (Terraform, Pulumi, Alchemy), an index page, and a category descriptor file; all changes are documentation-only.

Changes

  • IaC Provider Guides — content/250-postgres/360-iac/100-terraform.mdx, content/250-postgres/360-iac/200-pulumi.mdx, content/250-postgres/360-iac/300-alchemy.mdx: Three new MDX guides covering Terraform, Pulumi, and Alchemy integrations for Prisma Postgres — conceptual models, prerequisites, setup examples, Prisma integration, outputs/usage, cleanup, production notes, and troubleshooting.
  • IaC Section Setup — content/250-postgres/360-iac/_category_.json, content/250-postgres/360-iac/index.mdx: Adds a category metadata file and an index page to surface the new IaC subsection in the docs.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)

  • Description Check — ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed — The title accurately and specifically describes the main change: adding Infrastructure as Code guides for three tools (Terraform, Pulumi, Alchemy) to the Postgres documentation.
  • Docstring Coverage — ✅ Passed — No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

github-actions bot (Contributor) commented Feb 8, 2026

Redirect check

This PR probably requires the following redirects to be added to static/_redirects:

  • This PR does not change any pages in a way that would require a redirect.

cloudflare-workers-and-pages bot commented Feb 8, 2026

Deploying docs with Cloudflare Pages

Latest commit: c566655
Status: ✅ Deploy successful!
Preview URL: https://1cfba8ab.docs-51g.pages.dev
Branch Preview URL: https://add-pulumi-terraform-alchemy.docs-51g.pages.dev

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@content/250-postgres/360-iac/100-terraform.mdx`:
- Around line 139-146: Remove the url and directUrl entries from the datasource
block in prisma/schema.prisma (leave only provider = "postgresql"); instead
configure datasource URLs in prisma.config.ts by using defineConfig and env() to
set datasource.url (and, if needed, an unpooled connection env var) so Prisma v7
no longer fails with P1012 and matches the v7 configuration pattern.

In `@content/250-postgres/360-iac/200-pulumi.mdx`:
- Around line 1-11: The doc is failing cspell because "Pulumi" isn't
whitelisted; add a local cspell word directive for "Pulumi" near the top of the
file (e.g., directly above the frontmatter or immediately after it) so the
spellchecker recognizes the term; update the file that contains the title/meta
lines (references: the 'title' value "Pulumi" and the metaTitle "Manage Prisma
Postgres with Pulumi") by inserting a cspell directive that includes the word
Pulumi to unblock CI.
- Around line 157-164: Remove the datasource URLs from the Prisma schema to
avoid P1012: edit the datasource block in schema.prisma to contain only provider
= "postgresql" (remove url and directUrl entries) and ensure the connection
strings remain defined and referenced from prisma.config.ts (keep
env("DATABASE_URL")/env("DIRECT_URL") usage only in prisma.config.ts); update
any docs text to match that schema.prisma no longer contains url or directUrl so
Prisma v7 reads configuration exclusively from prisma.config.ts.

In `@content/250-postgres/360-iac/300-alchemy.mdx`:
- Around line 148-155: The Prisma schema example incorrectly includes url and
directUrl in the datasource block; remove those fields so the datasource only
contains provider to match Prisma v7 and the shown prisma.config.ts; update the
example in the prisma schema snippet (the datasource block in the Prisma schema
file) by deleting url and directUrl entries and leaving only provider =
"postgresql" to avoid the P1012 error.

In `@content/250-postgres/360-iac/index.mdx`:
- Around line 1-9: Add a cspell ignore for the term "Pulumi" so the MDX
spellcheck stops failing on the occurrences in metaDescription and the body;
insert a local directive at the top of the file (e.g., an HTML comment cspell
ignore) that references "Pulumi" to avoid CI failures instead of changing
content or removing the word, or alternatively add "Pulumi" to the shared cspell
dictionary if you prefer a global fix; ensure the directive covers the token as
used in metaDescription and body.
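
The three datasource fixes above all describe the same Prisma v7 pattern: the datasource block in prisma/schema.prisma is reduced to provider = "postgresql", and connection URLs move into prisma.config.ts. A minimal sketch, following the review's own description (defineConfig and env()) and assuming the guides export DATABASE_URL and DIRECT_URL environment variables — not verified against the actual guide files:

```ts
// prisma.config.ts — hypothetical sketch; prisma/schema.prisma keeps
// only `provider = "postgresql"` in its datasource block.
import { defineConfig, env } from "prisma/config";

export default defineConfig({
  schema: "prisma/schema.prisma",
  datasource: {
    // Pooled connection string provisioned by Terraform/Pulumi/Alchemy
    url: env("DATABASE_URL"),
    // An unpooled/direct URL, if needed, can come from a separate
    // env var such as DIRECT_URL.
  },
});
```

For the Pulumi spellcheck failures, the local directive the review asks for would be an MDX comment near the top of the file, for example {/* cspell:ignore Pulumi */}, or a global entry in the shared cspell dictionary.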

github-actions bot (Contributor) commented Feb 8, 2026

🍈 Lychee Link Check Report

Note: Links are cached for 5 minutes. Failed links (timeouts, rate limits) are retried in a second run with longer timeout.

📊 Results Overview

  • 🔍 Total: 2336
  • ✅ Successful: 2299
  • ⏳ Timeouts: 0
  • 🔀 Redirected: 8
  • 👻 Excluded: 26
  • ❓ Unknown: 0
  • 🚫 Errors: 2
  • ⛔ Unsupported: 1

Errors per input

Errors in 200-orm/050-overview/100-introduction/250-should-you-use-prisma.mdx

Errors in 200-orm/200-prisma-client/300-client-extensions/140-shared-extensions/100-permit-rbac.mdx

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@content/250-postgres/360-iac/300-alchemy.mdx`:
- Around line 159-176: The fetch handler creates a pg Client but never closes
it; update the async fetch function to ensure client.end() is always called by
wrapping the connect/query/response logic in try/catch and adding a finally
block that calls client.end() (and optionally awaits it and catches/logs any
error) so the Client created by new Client(...) is properly torn down after each
request; reference the fetch function, the Client instance named client, the
client.connect() call, and add a finally that invokes client.end().

Comment on lines +159 to +176
export default {
  async fetch(_request: Request, env: typeof worker.Env): Promise<Response> {
    const client = new Client({
      connectionString: env.HYPERDRIVE.connectionString,
    });

    try {
      await client.connect();
      const result = await client.query("SELECT * FROM pg_tables");

      return Response.json({
        success: true,
        result: result.rows,
      });
    } catch (error: any) {
      return new Response(`Database error: ${error.message}`, { status: 500 });
    }
  },

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Locate the file and verify content
fd "300-alchemy.mdx" --type f

Repository: prisma/docs

Length of output: 99


🏁 Script executed:

# If found, read the specific lines to confirm the snippet
if [ -f "content/250-postgres/360-iac/300-alchemy.mdx" ]; then
  wc -l "content/250-postgres/360-iac/300-alchemy.mdx"
fi

Repository: prisma/docs

Length of output: 103


🏁 Script executed:

# Search for other Worker examples in the codebase that use pg client
rg -l "new Client" --type mdx

Repository: prisma/docs

Length of output: 82


🏁 Script executed:

# Search for connection closing patterns in Worker examples
rg "client\.end\(\)" --type mdx -A 2 -B 2

Repository: prisma/docs

Length of output: 82


🏁 Script executed:

# Read the specific section of the file
sed -n '150,180p' "content/250-postgres/360-iac/300-alchemy.mdx"

Repository: prisma/docs

Length of output: 747


🏁 Script executed:

# Search for all client.end() patterns in the file
grep -n "client\.end" "content/250-postgres/360-iac/300-alchemy.mdx"

Repository: prisma/docs

Length of output: 37


🏁 Script executed:

# Search for pg or Client usage patterns in the file
grep -n "new Client\|client\." "content/250-postgres/360-iac/300-alchemy.mdx" | head -20

Repository: prisma/docs

Length of output: 196


🏁 Script executed:

# Search for other fetch handler examples in the codebase to see patterns
grep -r "async fetch" content/250-postgres --include="*.mdx" -A 10 | head -60

Repository: prisma/docs

Length of output: 886


🏁 Script executed:

# Check if there are any Hyperdrive-specific examples showing connection cleanup
grep -r "Hyperdrive" content/ --include="*.mdx" -B 2 -A 10 | head -80

Repository: prisma/docs

Length of output: 6310


🏁 Script executed:

# Look for any finally blocks in Worker examples
grep -n "finally" "content/250-postgres/360-iac/300-alchemy.mdx"

Repository: prisma/docs

Length of output: 37


🏁 Script executed:

# Get more context around the snippet to see if cleanup happens elsewhere
sed -n '155,185p' "content/250-postgres/360-iac/300-alchemy.mdx"

Repository: prisma/docs

Length of output: 852


🏁 Script executed:

# Search for any documentation or context about connection cleanup in Cloudflare Workers
grep -r "client\.end\|finally" content/250-postgres --include="*.mdx" -B 2 -A 2

Repository: prisma/docs

Length of output: 498


🌐 Web query:

pg client connection leak Cloudflare Workers best practices

💡 Result:

To avoid pg client connection leaks in Cloudflare Workers, use these practices:

  1. Prefer Hyperdrive (recommended) instead of managing raw TCP connections yourself
    Hyperdrive provides built-in connection pooling (and optional caching), reducing repeated connection setup and avoiding runaway connection counts. [1][2][3]

  2. If you connect directly from Workers using pg, always create/close the client per request

    • Use a Client inside your fetch() handler.
    • Ensure client.end() runs in a finally block (even on errors).
    • Cloudflare’s Postgres tutorial also notes you should be on pg v8.16.3+. [4][5]
    import { Client } from "pg";
    
    export default {
      async fetch(req: Request, env: any) {
        const client = new Client({ connectionString: env.DB_URL, ssl: true });
        try {
          await client.connect();
          const res = await client.query("SELECT 1");
          return Response.json(res.rows);
        } finally {
          await client.end(); // prevents leaked connections
        }
      }
    };
  3. Don’t try to share TCP sockets across requests (and don’t create them in global scope)
    Workers’ TCP sockets must be created within a handler (not global scope), and you should close sockets when done. This is a common source of “leaks” if you’re doing anything lower-level than Hyperdrive. [5]

  4. Avoid long-lived transactions meant to “hold session state” when using Hyperdrive
    Hyperdrive explicitly warns that wrapping many operations in a single transaction to preserve SET state hurts pooling/scaling because that connection can’t be reused. Keep transactions minimal. [1]

Sources: Hyperdrive pooling + best practices [1][2], Workers database connectivity guidance [3], Workers Postgres + pg version requirement [4], Workers TCP sockets lifecycle constraints [5].


🌐 Web query:

Hyperdrive Cloudflare connection pooling pg client

💡 Result:

Cloudflare Hyperdrive already does the connection pooling for your PostgreSQL database, so your Worker can use a normal Postgres driver (like pg / node-postgres) and simply point it at Hyperdrive’s connection string. Hyperdrive’s pooler runs in transaction pooling mode (connections are held for the duration of a transaction, then returned to the pool; session state like SET is reset when returned). [2]

Using pg (node-postgres) with Hyperdrive (Workers)

Requirements

  • pg >= 8.16.3 [1]
  • Workers nodejs_compat + a Hyperdrive binding in wrangler.jsonc [1]

wrangler.jsonc

{
  "compatibility_flags": ["nodejs_compat"],
  "compatibility_date": "2026-02-08",
  "hyperdrive": [{ "binding": "HYPERDRIVE", "id": "<your-hyperdrive-id>" }]
}

[1]

Worker code

import { Client } from "pg";

export default {
  async fetch(req, env) {
    // Creating a new Client per request is fine because Hyperdrive maintains the pool.
    const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
    await client.connect();
    const result = await client.query("SELECT * FROM pg_tables");
    return Response.json(result.rows);
  }
}

[1]

Pooling notes / gotchas

  • Hyperdrive pools origin DB connections (there’s a configurable min/soft max; and plan-based limits). [2][3]
  • Avoid long-running transactions: they keep a pooled connection busy and can exhaust the pool. [2][3]

Sources: Cloudflare Hyperdrive docs on node-postgres usage and required versions/config, Hyperdrive connection pooling behavior (transaction mode), and connection limits/tuning. [1][2][3]


Add explicit client cleanup to prevent resource buildup in Worker requests.

Even though Hyperdrive manages backend pooling, the pg Client object itself should be closed per request to follow the library's lifecycle contract and ensure clean resource teardown within each request handler. Without a finally block, the client can hold file descriptors or memory longer than necessary, especially under sustained load.

Suggested fix
   async fetch(_request: Request, env: typeof worker.Env): Promise<Response> {
     const client = new Client({
       connectionString: env.HYPERDRIVE.connectionString,
     });

     try {
       await client.connect();
       const result = await client.query("SELECT * FROM pg_tables");

       return Response.json({
         success: true,
         result: result.rows,
       });
     } catch (error: any) {
       return new Response(`Database error: ${error.message}`, { status: 500 });
+    } finally {
+      await client.end();
     }
   },

@AmanVarshney01 (Member, Author) commented

Superseded by #7474 (clean single-commit branch + spellcheck fix). Closing this PR to avoid duplicate review.

@AmanVarshney01 (Member, Author) commented

Superseded by #7474.
