feat: new docs #7477
base: main
Conversation
Walkthrough

This PR restructures repository configuration and CI/CD workflows while adding extensive blog content. It removes legacy config files (…)

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Rationale: This diff spans multiple concerns requiring separate review reasoning: (1) non-trivial workflow restructuring (Lychee refactor with new reporting logic, permission changes, and output path updates); (2) addition of new linting/formatting tools (Oxfmt, Oxlint configs) that establish new code standards; (3) removal of legacy configs and GitHub automation suggesting architectural shifts; (4) extensive blog content additions that individually require accuracy verification for technical claims and code examples. While much of the content is additive and lower-risk, the heterogeneity of configuration changes, workflow logic updates, and the volume of technical documentation necessitate careful, multi-faceted review rather than simple pattern matching.

Possibly related issues
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
🍈 Lychee Link Check Report

3657 links: ✅ All links are working!

Full Statistics Table
Actionable comments posted: 18
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In @.github/workflows/lychee.yml:
- Around line 10-13: The workflow job named "check" currently uses the
overly-permissive `permissions: write-all`; change this to least-privilege
permissions needed for the job (checkout + comment on PRs) by replacing
`permissions: write-all` with a minimal permissions map granting `contents:
read` and `pull-requests: write` (and remove write-all); update the `check`
job's permissions stanza accordingly so the workflow can read repository
contents and write PR comments but nothing more.
In `@apps/blog/content/blog/accelerate-preview-release-ab229e69ed2/index.mdx`:
- Line 80: The anchor tags using target="_blank" (e.g., the link with text
"Database Caching: A Double-Edged Sword? Examining the Pros and Cons" and the
other two anchor occurrences mentioned) are missing rel attributes; update each
<a ... target="_blank"> element in the MDX to include rel="noopener noreferrer"
to prevent reverse-tabnabbing, ensuring every anchor that opens a new tab
contains rel="noopener noreferrer".
In `@apps/blog/content/blog/announcing-accelerate-usrvpi6sfkv4/index.mdx`:
- Around line 18-35: Update the three broken documentation links in the blog
post: replace the Data Proxy link (anchor text "Data Proxy") from
https://www.prisma.io/docs/data-platform/data-proxy to /docs/accelerate/,
replace the Query Console link (anchor text "Query Console") from
https://www.prisma.io/docs/data-platform/query-console to /docs/console/, and
change the Data Browser link (anchor text "Data Browser") from
https://www.prisma.io/docs/data-platform/data-browser to the most appropriate
current page—likely /docs/console/ (or /docs/studio/ if that better matches the
functionality); verify the chosen target pages before committing to ensure
readers land on the correct modern docs.
In `@apps/blog/content/blog/announcing-prisma-6-19-0/index.mdx`:
- Around line 59-65: The snippet uses an undefined env variable causing a
runtime ReferenceError; update the PrismaPg instantiation in the block that
creates adapter and prisma (symbols: PrismaPg, adapter, PrismaClient) to use a
defined source for the DB URL — either replace env.DATABASE_URL with
process.env.DATABASE_URL or import the project's env helper (e.g., import env
from 'prisma/config') and call env('DATABASE_URL'); ensure the chosen approach
matches the rest of the examples (prisma.config.ts) so the adapter receives a
valid connection string.
In
`@apps/blog/content/blog/announcing-prisma-postgres-for-ai-coding-agents/index.mdx`:
- Around line 83-99: The MCP server config uses the wrong CLI invocation: in the
JSON under "mcpServers" -> "Prisma" the "args" array includes "platform" and
"--early-access" which makes the command invalid; update the "args" for the
"Prisma" entry (keys "command" and "args") to invoke the top-level Prisma MCP
command (i.e., use arguments that produce "npx prisma mcp") and remove
"platform" and the "--early-access" flag so the MCP server runs as intended.
In
`@apps/blog/content/blog/announcing-typedsql-make-your-raw-sql-queries-type-safe-with-prisma-orm/index.mdx`:
- Line 19: In the paragraph that currently contains the corrupted fragment
starting with "or rCopy the port number..." replace the accidental insertion
(the port/pg_restore text) with the intended wording so the sentence reads
"...or require maximum performance — we have provided a lower level API to write
raw SQL." Locate this sentence in the MDX content of the blog post (the
paragraph containing "or rCopy...pg_restore") and remove the stray paste,
correcting "rCopy" to "require" and restoring the original phrase flow.
In
`@apps/blog/content/blog/backend-prisma-typescript-orm-with-postgresql-auth-mngp1ps7kip4/index.mdx`:
- Line 386: The default export references a non-existent identifier `plugin`;
update the export to use the actual variable name `authPlugin` (or rename the
declared `authPlugin` to `plugin`) so the module exports the real plugin
instance; locate the export line `export default plugin` and replace it with
`export default authPlugin` (or rename `authPlugin` declaration) to avoid the
ReferenceError.
In
`@apps/blog/content/blog/backend-prisma-typescript-orm-with-postgresql-data-modeling-tsjs1ps7kip1/index.mdx`:
- Around line 73-74: Update the note that currently links to "hosted PostgreSQL
database on Heroku" to remove the Heroku free-tier recommendation and replace it
with a current free/low-friction alternative and a stronger local-first
suggestion: remove the "[hosted PostgreSQL database on Heroku]" text/link and
instead suggest options such as Supabase (free tier), Neon (free tier),
ElephantSQL (free plan) or Postgres.app for local macOS, and update the sentence
to say Heroku retired free Postgres on Nov 28, 2022 so readers should choose a
listed provider or local setup; ensure the visible anchor text and link(s) in
that Note are updated accordingly.
- Around line 442-444: Update the Prisma migrate command shown (currently using
the npx prisma migrate dev invocation that includes the deprecated
--preview-feature and the unnecessary --skip-generate flags) by removing both
flags so the migration runs with the standard migrate dev invocation and the
existing --name "init" argument; change the line that contains "npx prisma
migrate dev --preview-feature --skip-generate --name \"init\"" to the
simplified, current form without those flags.
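For reference, the simplified invocation the prompt describes would look like this (a sketch of the corrected line only; the surrounding text of the post is unchanged):

```shell
npx prisma migrate dev --name "init"
```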
In
`@apps/blog/content/blog/backend-prisma-typescript-orm-with-postgresql-rest-api-validation-dcba1ps7kip3/index.mdx`:
- Around line 1076-1089: The PUT /users route is using the wrong validator: in
the server.route configuration where updateUserHandler is assigned, replace the
createUserValidator with updateUserValidator in the options.validate.payload so
the update endpoint accepts partial/optional fields; locate the route definition
that calls server.route([...]) and update the payload validator reference from
createUserValidator to updateUserValidator to match the intended update
behavior.
- Around line 447-452: In the onPostStop hook registered via server.ext, replace
the non-existent synchronous call server.app.prisma.disconnect() with the Prisma
client's async disconnect method: await server.app.prisma.$disconnect(); i.e.,
update the method in the onPostStop handler (the async method passed to
server.ext) to call and await server.app.prisma.$disconnect() to ensure proper
cleanup of connections before shutdown.
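A minimal sketch of the corrected hook. The types and the immediate-invocation `ext` below are illustrative stand-ins, not the real `@hapi/hapi` or `@prisma/client` APIs; the point is that the hook must `await` the async `$disconnect()` method:

```typescript
// Stand-in shapes for illustration; real code imports these from the libraries.
type PrismaLike = { $disconnect: () => Promise<void> };
type ServerLike = {
  app: { prisma: PrismaLike };
  ext: (event: string, method: (server: ServerLike) => Promise<void>) => void;
};

let disconnected = false;

const server: ServerLike = {
  app: {
    prisma: {
      // Mock of PrismaClient.$disconnect(); the real method closes DB connections.
      $disconnect: async () => {
        disconnected = true;
      },
    },
  },
  // In real Hapi, the handler runs when the server stops; here we invoke it directly.
  ext(event, method) {
    void method(server);
  },
};

// The fix: await the async $disconnect() instead of calling a
// non-existent synchronous prisma.disconnect().
server.ext("onPostStop", async (s) => {
  await s.app.prisma.$disconnect();
});
```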
In `@apps/blog/content/blog/bfg/index.mdx`:
- Line 33: The performance claims in the sentence containing "Up to 1000 times
faster" and "as little as 5 milliseconds" must be qualified or sourced: edit the
paragraph in apps/blog/content/blog/bfg/index.mdx to either add a short
qualifier (e.g., "in specific workloads/benchmarks", "depending on workload and
configuration") after those phrases or insert a parenthetical citation/link to
the benchmark or Prisma Accelerate docs that support the numbers; ensure you
reference the exact quoted phrases so readers can see the scope or provenance of
the measurement.
In
`@apps/blog/content/blog/build-an-app-with-svelte-and-typescript-pzdy3t93qatd/index.mdx`:
- Around line 427-431: Update the outdated/deprecated Prisma package name in the
npm install command: replace any instance of "@prisma/cli" and the typo
"@prismac/cli" with the current unscoped "prisma" package in the install line
shown in the diff (the npm install -D ... line) so readers run "prisma" instead
of the deprecated scoped package; ensure both occurrences are changed
consistently.
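The corrected install line would then read (assuming the post installs the CLI as a dev dependency, as the prompt implies):

```shell
npm install -D prisma
```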
In
`@apps/blog/content/blog/cloudflare-unikernels-and-bare-metal-life-of-a-prisma-postgres-query/index.mdx`:
- Around line 72-80: The example declares const users but calls
prisma.post.findMany, causing a naming mismatch; rename the variable to posts
(e.g., const posts = await prisma.post.findMany(...)) so the identifier matches
the queried model, or alternatively change prisma.post.findMany to
prisma.user.findMany if you intended to fetch users; update any subsequent
references accordingly (look for the variable users and the call
prisma.post.findMany).
- Line 86: The text describing TTL and SWR is swapped relative to the example
code: update the prose to match the fields used in the sample (ttl and swr) so
the durations align; specifically, change "fresh for 30 seconds (TTL)" to "fresh
for 60 seconds (TTL)" and "for the next 60 seconds (SWR)" to "for the next 30
seconds (SWR)" to match the example's ttl: 60 and swr: 30 values.
In
`@apps/blog/content/blog/connections-edges-nodes-in-relay-758d358aa4c7/index.mdx`:
- Around line 35-47: The GraphQL example is missing the final closing brace for
the outer query; update the code block that queries movie(title: "Inception") {
... } (around the movie and actors fields) by adding the trailing `}` before the
closing code fence so the query is a valid, balanced GraphQL document.
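For illustration, a balanced version of the query might look like this (the selection set is inferred from the review's description; the post's exact fields may differ):

```graphql
{
  movie(title: "Inception") {
    title
    actors {
      name
    }
  }
}
```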
In `@apps/blog/content/blog/convergence/index.mdx`:
- Around line 145-148: Remove the duplicate c.comment_data entry from the SELECT
list in the SQL example (it currently appears twice and lacks a comma), leaving
only the COALESCE(c.comment_data, '[]'::json) AS comment_data projection; update
the SELECT line that lists p.id, p.title, p.content, u.name as author_name,
u.email as author_email, COALESCE(c.comment_data, '[]'::json) AS comment_data so
it contains no second c.comment_data reference.
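A sketch of the corrected projection (the aliases `p`, `u`, `c` follow the review's description; the FROM/JOIN clauses are elided because they are not shown in the comment):

```sql
SELECT
  p.id,
  p.title,
  p.content,
  u.name  AS author_name,
  u.email AS author_email,
  COALESCE(c.comment_data, '[]'::json) AS comment_data  -- appears exactly once
FROM ... -- joins as in the original example
```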
In `@apps/blog/README.md`:
- Around line 15-27: The README's "Explore" section lists incorrect file paths
and routes; update all references to match the actual project layout under src:
replace `lib/source.ts` with `src/lib/source.tsx` (and ensure the loader()
reference still matches the exported function), remove or replace
`lib/layout.shared.tsx` with the actual `src/app/layout.tsx`, and update the
route table to reflect the real route groups and handlers (remove `app/(home)`,
`app/docs`, and `app/api/search/route.ts` entries or replace them with the
correct `src/app/...` routes present in the codebase); ensure the README uses
the `src/` prefix consistently and that each listed file or route corresponds to
an existing symbol in the repository.
```diff
 check:
   name: Check Links
   runs-on: ubuntu-latest
-  permissions:
-    pull-requests: write
+  permissions: write-all
```
Security: permissions: write-all is overly permissive.
Using write-all grants the workflow full write access to all scopes. This violates the principle of least privilege. Based on this workflow's needs (checkout code, comment on PRs), you only need:
```diff
 check:
   name: Check Links
   runs-on: ubuntu-latest
-  permissions: write-all
+  permissions:
+    contents: read
+    pull-requests: write
```

This reduces the blast radius if the workflow is compromised or if a dependency has a vulnerability.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
check:
  name: Check Links
  runs-on: ubuntu-latest
  permissions:
    contents: read
    pull-requests: write
```
To comply with regulations regarding the storage of personally identifiable information (PII) like phone numbers, social security numbers, and credit card numbers, you may need to avoid caching query results. Excluding the `cacheStrategy` from your queries provides a straightforward way to opt out of caching your query results.

> To understand the advantages and drawbacks associated with caching database query results, read the blog post <a href="https://www.prisma.io/blog/benefits-and-challenges-of-caching-database-query-results-x2s9ei21e8kq" target="_blank">Database Caching: A Double-Edged Sword? Examining the Pros and Cons</a>.
Add rel for links opened in new tabs.
When using target="_blank", include rel="noopener noreferrer" to prevent reverse‑tabnabbing.
🔒 Suggested edit
```diff
-> To understand the advantages and drawbacks associated with caching database query results, read the blog post <a href="https://www.prisma.io/blog/benefits-and-challenges-of-caching-database-query-results-x2s9ei21e8kq" target="_blank">Database Caching: A Double-Edged Sword? Examining the Pros and Cons</a>.
+> To understand the advantages and drawbacks associated with caching database query results, read the blog post <a href="https://www.prisma.io/blog/benefits-and-challenges-of-caching-database-query-results-x2s9ei21e8kq" target="_blank" rel="noopener noreferrer">Database Caching: A Double-Edged Sword? Examining the Pros and Cons</a>.
-> Learn more about the serverless connection management challenge <a href="https://www.prisma.io/docs/guides/performance-and-optimization/connection-management#serverless-environments-faas" target="_blank">here</a>.
+> Learn more about the serverless connection management challenge <a href="https://www.prisma.io/docs/guides/performance-and-optimization/connection-management#serverless-environments-faas" target="_blank" rel="noopener noreferrer">here</a>.
-Get started to supercharge your application with Prisma Accelerate! Try it out and share your experience with us on <a href="https://pris.ly/x" target="_blank">Twitter</a> or join the conversation on <a href="https://pris.ly/discord" target="_blank">Discord.</a>
+Get started to supercharge your application with Prisma Accelerate! Try it out and share your experience with us on <a href="https://pris.ly/x" target="_blank" rel="noopener noreferrer">Twitter</a> or join the conversation on <a href="https://pris.ly/discord" target="_blank" rel="noopener noreferrer">Discord.</a>
```

Also applies to: 88-88, 120-120
In late 2021 we embarked on a journey to deliver a platform for building the next generation of data-driven apps. June 2022 saw the GA release of the [Prisma Data Platform](https://cloud.prisma.io/) with the first round of great features. The [Data Proxy](https://www.prisma.io/docs/data-platform/data-proxy), with managed connection pooling; the [Query Console](https://www.prisma.io/docs/data-platform/query-console), which empowers you to run Prisma queries against a database directly from your browser; and the [Data Browser](https://www.prisma.io/docs/data-platform/data-browser), which grants easy, visual access to your databases from anywhere.

With more than 1200 projects launched on the Data Platform and the Data Proxy serving more than 380,000,000 CPU ms/mo, we're excited to announce the evolution of the Data Proxy into "Accelerate," a fully-fledged Data CDN.

## Make your app more responsive with a single line of code

Accelerate includes everything you knew and loved from the Data Proxy, such as managed connection pooling for your Serverless apps, and adds a globally distributed cache that powers up to 1000x faster database queries and drives query latency down to as little as 5ms.

Deployed globally in close to 280 locations, caching always happens as close to your application as possible. Best of all, it works with your existing database, and you can control the cache behavior straight from your Prisma queries.

Accelerate is in [Early Access](https://prisma-data.typeform.com/to/WwPDKEQ5), and we're working hard to release it to a GA audience by mid-2023. This release is our next step toward realizing a Data Platform that empowers engineers everywhere to unlock productivity and make it more delightful to work with their data.

We're so excited to have you join us on this journey, and we can't wait to hear what you think!

<br />
<br />

[Get on the waitlist](https://www.prisma.io/data-platform/accelerate)
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify external links in the blog post are still accessible
urls=(
  "https://cloud.prisma.io/"
  "https://www.prisma.io/docs/data-platform/data-proxy"
  "https://www.prisma.io/docs/data-platform/query-console"
  "https://www.prisma.io/docs/data-platform/data-browser"
  "https://www.prisma.io/data-platform/accelerate"
  "https://prisma-data.typeform.com/to/WwPDKEQ5"
)
for url in "${urls[@]}"; do
  echo "Checking: $url"
  status=$(curl -s -o /dev/null -w "%{http_code}" -L "$url" --max-time 10)
  redirect=$(curl -s -L -w "%{redirect_url}" -o /dev/null "$url" --max-time 10)
  echo "  Status: $status"
  if [ -n "$redirect" ] && [ "$redirect" != "$url" ]; then
    echo "  Redirected to: $redirect"
  fi
done
```

Repository: prisma/docs
Length of output: 476
🏁 Script executed:

```bash
#!/bin/bash
# Search for references to these features in the docs repository
echo "Searching for data-proxy documentation:"
find . -name "*.mdx" -o -name "*.md" | xargs grep -l "data-proxy\|Data Proxy" | head -20
echo -e "\nSearching for query-console documentation:"
find . -name "*.mdx" -o -name "*.md" | xargs grep -l "query-console\|Query Console" | head -20
echo -e "\nSearching for data-browser documentation:"
find . -name "*.mdx" -o -name "*.md" | xargs grep -l "data-browser\|Data Browser" | head -20
echo -e "\nSearching for accelerate documentation:"
find . -name "*.mdx" -o -name "*.md" | xargs grep -l "accelerate\|Accelerate" | head -20
```

Repository: prisma/docs
Length of output: 4303
🏁 Script executed:

```bash
#!/bin/bash
# Check the docs directory structure
echo "Contents of docs directory:"
find ./apps/docs/content/docs -type d -name "*data*" | head -20
echo -e "\nAll top-level docs directories:"
ls -la ./apps/docs/content/docs/ | grep "^d"
echo -e "\nChecking for data-platform directory:"
ls -la ./apps/docs/content/docs/ | grep -i platform
echo -e "\nChecking current Accelerate docs location:"
ls -la ./apps/docs/content/docs/accelerate/ 2>/dev/null | head -20 || echo "Directory not found"
```

Repository: prisma/docs
Length of output: 2359
🏁 Script executed:

```bash
#!/bin/bash
# Check console directory
echo "Contents of console directory:"
ls -la ./apps/docs/content/docs/console/
echo -e "\nContents of studio directory:"
ls -la ./apps/docs/content/docs/studio/
echo -e "\nSearching for 'Query Console' or 'query-console':"
grep -r "query-console\|Query Console" ./apps/docs/content/docs/ --include="*.mdx" | head -5
echo -e "\nSearching for 'Data Browser':"
grep -r "data-browser\|Data Browser" ./apps/docs/content/docs/ --include="*.mdx" | head -5
```

Repository: prisma/docs
Length of output: 1046

🏁 Script executed:

```bash
#!/bin/bash
# Check console features subdirectory
echo "Contents of console/features:"
ls -la ./apps/docs/content/docs/console/features/
# Check if there are any references to "console" or similar in the blog post context
echo -e "\nSearching for current references to Query or Browser features:"
grep -i "browser\|console" ./apps/docs/content/docs/console/index.mdx | head -20
# Check the meta.json files to understand the doc structure
echo -e "\nConsole meta.json:"
cat ./apps/docs/content/docs/console/meta.json
echo -e "\nStudio meta.json:"
cat ./apps/docs/content/docs/studio/meta.json
```

Repository: prisma/docs
Length of output: 2315
Fix broken documentation links in the blog post.
The review comment's concern is valid. Three documentation links are returning 404 errors and need to be updated:
- https://www.prisma.io/docs/data-platform/data-proxy → should link to /docs/accelerate/ (Data Proxy evolved into Accelerate)
- https://www.prisma.io/docs/data-platform/query-console → should link to /docs/console/ (Query Console is now part of the unified Console)
- https://www.prisma.io/docs/data-platform/data-browser → functionality likely moved to /docs/console/ or /docs/studio/
The /docs/data-platform/ section no longer exists in the current documentation structure. The documentation has been reorganized to consolidate these features under the Console and Accelerate sections. Since this is a historical post being migrated, verify which current documentation pages best serve readers looking for information about these features, then update the links accordingly.
🧰 Tools
🪛 LanguageTool
[grammar] ~24-~24: Ensure spelling is correct
Context: ...ives query latency down to as little as 5ms. Deployed globally in close to 280 loc...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
```tsx
import { PrismaClient } from './generated/prisma'
import { PrismaPg } from '@prisma/adapter-pg'

const adapter = new PrismaPg({ connectionString: env.DATABASE_URL })
const prisma = new PrismaClient({ adapter })
```
Undefined env variable in code snippet will cause runtime errors.
In this PrismaClient setup example, env.DATABASE_URL is used but env is never imported or defined. Users copy-pasting this code would encounter a ReferenceError.
This differs from the prisma.config.ts example below (line 84) which correctly imports env from 'prisma/config' and uses it as a function: env('DATABASE_URL').
For the adapter setup, you likely want process.env.DATABASE_URL:
```diff
 import { PrismaClient } from './generated/prisma'
 import { PrismaPg } from '@prisma/adapter-pg'

-const adapter = new PrismaPg({ connectionString: env.DATABASE_URL })
+const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL })
 const prisma = new PrismaClient({ adapter })
```
```
npx prisma init --db
```

Prisma Postgres also integrates directly with your favorite LLM and AI coding environment via the Model Context Protocol (MCP).

Simply add it to the MCP configuration of your desired AI tool using this JSON snippet:

```json
{
  "mcpServers": {
    "Prisma": {
      "command": "npx",
      "args": ["-y", "prisma", "platform", "mcp", "--early-access"]
    }
  }
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Search for Prisma CLI commands and MCP configuration references
echo "=== Searching for Prisma init usage ==="
rg "prisma init" --type=md --type=mdx -C 2
echo -e "\n=== Searching for MCP-related references ==="
rg -i "mcp" --type=md --type=mdx --type=json -C 2
echo -e "\n=== Searching for Prisma platform commands ==="
rg "prisma platform" -C 2
echo -e "\n=== Checking for early-access flag usage ==="
rg "early-access" -C 2
echo -e "\n=== Checking for --db flag usage ==="
rg "\-\-db" -C 2
```

Repository: prisma/docs
Length of output: 50443
🌐 Web query:
Prisma init --db flag official documentation 2025
💡 Result:
In the official Prisma docs, the --db flag for prisma init is documented in the Prisma CLI reference:
`prisma init --db` is shorthand for `prisma init --datasource-provider prisma+postgres`, and it creates a new Prisma Postgres instance (requires authenticating via Prisma Console/Platform). [1]
Prisma also highlights this command on the main documentation landing page as the “one command” to try Prisma: npx prisma init --db. [2]
🌐 Web query:
Prisma Model Context Protocol MCP integration documentation
💡 Result:
- Main docs / setup guide (Prisma MCP Server): Prisma’s official “Prisma MCP Server” page (includes “Read Docs” / “Add MCP Server” flows and tool-specific setup entry points). [1]
- Reference implementation + full integration examples: the `prisma/mcp` GitHub repo (local vs remote server, tool list, and ready-to-paste MCP config snippets for VS Code/Cursor/Windsurf/Warp/Claude Code, etc.). [2]
- Background + version note: Announcement post (notes the Prisma CLI includes a built-in MCP server as of Prisma CLI v6.6.0 and points to the docs for tool-specific integration steps). [3]
Most-used config snippets (from Prisma’s MCP docs):
Remote Prisma MCP server (hosted):

```json
{
  "mcpServers": {
    "Prisma-Remote": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.prisma.io/mcp"]
    }
  }
}
```

[2]

Local Prisma MCP server (via Prisma CLI):

```json
{
  "mcpServers": {
    "Prisma-Local": {
      "command": "npx",
      "args": ["-y", "prisma", "mcp"]
    }
  }
}
```

[2]
Sources: [1] [2] [3]
🌐 Web query:
Prisma CLI platform mcp early-access command
💡 Result:
There isn’t a prisma platform mcp subcommand. mcp is a top-level Prisma CLI command.
- Start the local Prisma MCP server (recommended/current docs): `npx -y prisma mcp` [1]
- If you're following material that mentions Early Access, it may show `npx -y prisma mcp --early-access` (referenced in Prisma's blog post on building the MCP server). [2]
- `--early-access` is required for Prisma Data Platform ("platform") commands used by the MCP tools, e.g. `npx prisma platform auth login --early-access` and `npx prisma platform auth show --early-access` [3][1]
Sources: [1] Prisma MCP repo README, [2] Prisma blog on building the MCP server, [3] Prisma Platform CLI commands docs.
Fix the MCP server configuration command syntax.
The npx prisma init --db command is correct. However, the MCP configuration command is incorrect—mcp is a top-level Prisma CLI command, not a subcommand under platform. Update the args array to remove "platform":
Corrected MCP configuration:
```json
{
  "mcpServers": {
    "Prisma": {
      "command": "npx",
      "args": ["-y", "prisma", "mcp"]
    }
  }
}
```

The command shown will fail because `prisma platform mcp` does not exist. Users will need `npx prisma mcp` (without "platform"). The `--early-access` flag is for Prisma Data Platform commands like `prisma platform auth login`, not for the `mcp` command itself.
```ts
const users = await prisma.post.findMany({
  where: { published: true },
  cacheStrategy: {
    ttl: 60,
    swr: 30,
  }
})
```
Fix variable naming inconsistency in code example.
The variable is named users but the query fetches posts (prisma.post.findMany). This mismatch will confuse readers.
📝 Proposed fix
-const users = await prisma.post.findMany({
+const posts = await prisma.post.findMany({
where: { published: true },
cacheStrategy: {
ttl: 60,
swr: 30,
}
})📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-const users = await prisma.post.findMany({
+const posts = await prisma.post.findMany({
   where: { published: true },
   cacheStrategy: {
     ttl: 60,
     swr: 30,
   }
 })
```
🤖 Prompt for AI Agents
In
`@apps/blog/content/blog/cloudflare-unikernels-and-bare-metal-life-of-a-prisma-postgres-query/index.mdx`
around lines 72 - 80, The example declares const users but calls
prisma.post.findMany, causing a naming mismatch; rename the variable to posts
(e.g., const posts = await prisma.post.findMany(...)) so the identifier matches
the queried model, or alternatively change prisma.post.findMany to
prisma.user.findMany if you intended to fetch users; update any subsequent
references accordingly (look for the variable users and the call
prisma.post.findMany).
- [Time-To-Live](https://www.prisma.io/docs/accelerate/caching#time-to-live-ttl) (`ttl`): Determines how long cached data is considered _fresh_. When you set a TTL value, Prisma Postgres will serve the cached data for that duration without querying the database.
- [Stale-While-Revalidate](https://www.prisma.io/docs/accelerate/caching#stale-while-revalidate-swr) (`swr`): Allows Prisma Postgres to serve _stale_ cached data while fetching fresh data in the background. When you set an SWR value, Prisma Postgres will continue to serve the cached data for that duration, even if it's past the TTL, while simultaneously updating the cache with new data from the database.

In this example, the data will be considered fresh for 30 seconds (TTL). After that, for the next 60 seconds (SWR), Prisma Postgres's cache will serve the stale data while fetching fresh data in the background.
Correct the TTL and SWR duration values.

The explanation contradicts the code example from lines 72-80:

- The text states "fresh for 30 seconds (TTL)" but the code shows `ttl: 60`
- The text states "for the next 60 seconds (SWR)" but the code shows `swr: 30`

The values are swapped.
📝 Proposed fix

```diff
-In this example, the data will be considered fresh for 30 seconds (TTL). After that, for the next 60 seconds (SWR), Prisma Postgres's cache will serve the stale data while fetching fresh data in the background.
+In this example, the data will be considered fresh for 60 seconds (TTL). After that, for the next 30 seconds (SWR), Prisma Postgres's cache will serve the stale data while fetching fresh data in the background.
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
-In this example, the data will be considered fresh for 30 seconds (TTL). After that, for the next 60 seconds (SWR), Prisma Postgres's cache will serve the stale data while fetching fresh data in the background.
+In this example, the data will be considered fresh for 60 seconds (TTL). After that, for the next 30 seconds (SWR), Prisma Postgres's cache will serve the stale data while fetching fresh data in the background.
```
🤖 Prompt for AI Agents
In
`@apps/blog/content/blog/cloudflare-unikernels-and-bare-metal-life-of-a-prisma-postgres-query/index.mdx`
at line 86, The text describing TTL and SWR is swapped relative to the example
code: update the prose to match the fields used in the sample (ttl and swr) so
the durations align; specifically, change "fresh for 30 seconds (TTL)" to "fresh
for 60 seconds (TTL)" and "for the next 60 seconds (SWR)" to "for the next 30
seconds (SWR)" to match the example's ttl: 60 and swr: 30 values.
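The TTL-then-SWR ordering that this thread is correcting can be sketched as a tiny state function. This is an illustrative model of the documented caching behavior, not Prisma's actual cache implementation; the `classify` helper and its state names are invented for the sketch.

```typescript
// Illustrative model of TTL + SWR cache windows (not Prisma's real code).
type CacheState = "fresh" | "stale-serve-and-revalidate" | "miss";

// ttlSeconds and swrSeconds mirror the cacheStrategy fields in the example.
function classify(ageSeconds: number, ttlSeconds: number, swrSeconds: number): CacheState {
  if (ageSeconds <= ttlSeconds) {
    // Within TTL: serve from cache, no database query.
    return "fresh";
  }
  if (ageSeconds <= ttlSeconds + swrSeconds) {
    // Past TTL but within SWR: serve stale data, refresh in the background.
    return "stale-serve-and-revalidate";
  }
  // Past both windows: the query goes to the database.
  return "miss";
}

// With ttl: 60 and swr: 30, as in the snippet under review:
console.log(classify(45, 60, 30));  // fresh
console.log(classify(75, 60, 30));  // stale-serve-and-revalidate
console.log(classify(120, 60, 30)); // miss
```

Reading it this way makes the swap obvious: the first window (60s here) is always the TTL, and the SWR window (30s here) only begins once the TTL has expired.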
```graphql
{
  movie(title: "Inception") {
    releaseDate
    actors(first: 10) {
      edges {
        node {
          name
        }
      }
    }
  }
```
Fix the GraphQL example — missing closing brace for the query.
The snippet doesn’t close the outer { ... } query, so it’s not valid GraphQL. Please add the final } before the code fence.
🛠️ Proposed fix

```diff
 movie(title: "Inception") {
   releaseDate
   actors(first: 10) {
     edges {
       node {
         name
       }
     }
   }
 }
+}
```

🤖 Prompt for AI Agents
In
`@apps/blog/content/blog/connections-edges-nodes-in-relay-758d358aa4c7/index.mdx`
around lines 35 - 47, The GraphQL example is missing the final closing brace for
the outer query; update the code block that queries movie(title: "Inception") {
... } (around the movie and actors fields) by adding the trailing `}` before the
closing code fence so the query is a valid, balanced GraphQL document.
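The imbalance flagged above is easy to confirm mechanically. A naive curly-brace counter is enough; it is not a GraphQL parser and ignores strings and comments, which is fine for this snippet. The `bracesBalanced` helper is invented for this sketch.

```typescript
// Naive balance check: counts '{' and '}' only, which suffices to catch the
// missing outer brace in the snippet under review.
function bracesBalanced(source: string): boolean {
  let depth = 0;
  for (const ch of source) {
    if (ch === "{") depth++;
    else if (ch === "}") {
      depth--;
      if (depth < 0) return false; // a close with no matching open
    }
  }
  return depth === 0;
}

// The snippet from the blog post, verbatim: five opens, four closes.
const snippet = `{
  movie(title: "Inception") {
    releaseDate
    actors(first: 10) {
      edges {
        node {
          name
        }
      }
    }
  }`;

console.log(bracesBalanced(snippet));       // false
console.log(bracesBalanced(snippet + "}")); // true once the final brace is added
```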
```sql
p.id, p.title, p.content,
u.name as author_name, u.email as author_email,
COALESCE(c.comment_data, '[]'::json) AS comment_data
c.comment_data
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check if the file exists and read the relevant lines
FILE_PATH="apps/blog/content/blog/convergence/index.mdx"

# Verify file exists
if [ -f "$FILE_PATH" ]; then
  echo "File found. Reading lines 145-150 for context:"
  sed -n '140,150p' "$FILE_PATH" | cat -n
else
  echo "File not found at $FILE_PATH"
  # Try to locate it
  echo "Searching for files matching the pattern..."
  fd "convergence" --type f
fi
```

Repository: prisma/docs

Length of output: 503
Remove the duplicate c.comment_data line in the SQL example.
The SELECT list has an invalid duplicate on line 148—c.comment_data appears twice without a comma separator, which will fail when readers run the query. The second occurrence should be removed.
🔧 Suggested fix

```diff
 COALESCE(c.comment_data, '[]'::json) AS comment_data
-c.comment_data
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 p.id, p.title, p.content,
 u.name as author_name, u.email as author_email,
 COALESCE(c.comment_data, '[]'::json) AS comment_data
-c.comment_data
```
🤖 Prompt for AI Agents
In `@apps/blog/content/blog/convergence/index.mdx` around lines 145 - 148, Remove
the duplicate c.comment_data entry from the SELECT list in the SQL example (it
currently appears twice and lacks a comma), leaving only the
COALESCE(c.comment_data, '[]'::json) AS comment_data projection; update the
SELECT line that lists p.id, p.title, p.content, u.name as author_name, u.email
as author_email, COALESCE(c.comment_data, '[]'::json) AS comment_data so it
contains no second c.comment_data reference.
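As an aside, the `COALESCE(c.comment_data, '[]'::json)` guard in the suggested query has a direct analogue in application code: without it, posts that have no comments surface `comment_data` as `NULL` instead of an empty array. A minimal TypeScript sketch, assuming an illustrative row shape (not generated Prisma types):

```typescript
// Illustrative row shape; in the real query, comment_data is the JSON
// aggregate from the LEFT JOIN and is NULL when a post has no comments.
interface PostRow {
  id: number;
  title: string;
  comment_data: unknown[] | null;
}

// `??` plays the role COALESCE plays in SQL: substitute [] only when the
// value is null or undefined.
function withComments(row: PostRow): unknown[] {
  return row.comment_data ?? [];
}

console.log(withComments({ id: 1, title: "No comments yet", comment_data: null }));
// logs an empty array
console.log(withComments({ id: 2, title: "Popular post", comment_data: ["first!"] }));
// logs the one-element comment array unchanged
```

Doing the fallback in SQL, as the suggestion does, keeps every consumer of the query from repeating this null check.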
## Explore

In the project, you can see:

- `lib/source.ts`: Code for content source adapter, [`loader()`](https://fumadocs.dev/docs/headless/source-api) provides the interface to access your content.
- `lib/layout.shared.tsx`: Shared options for layouts, optional but preferred to keep.

| Route                     | Description                                            |
| ------------------------- | ------------------------------------------------------ |
| `app/(home)`              | The route group for your landing page and other pages. |
| `app/docs`                | The documentation layout and pages.                    |
| `app/api/search/route.ts` | The Route Handler for search.                          |
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify files and routes referenced in apps/blog/README.md

echo "=== Checking Referenced Files ==="
for file in \
  "apps/blog/lib/source.ts" \
  "apps/blog/lib/layout.shared.tsx"; do
  if [[ -f "$file" ]]; then
    echo "✓ $file"
  else
    echo "✗ Missing: $file"
  fi
done

echo ""
echo "=== Checking Referenced Routes ==="
for dir in \
  "apps/blog/app/(home)" \
  "apps/blog/app/docs" \
  "apps/blog/app/api/search"; do
  if [[ -d "$dir" ]]; then
    echo "✓ $dir (directory exists)"
  elif [[ -f "${dir}.ts" ]] || [[ -f "${dir}.tsx" ]]; then
    echo "✓ $dir (file exists)"
  else
    echo "✗ Missing: $dir"
  fi
done

# Also check the full route file mentioned
echo ""
echo "=== Checking Specific Route Handler ==="
if [[ -f "apps/blog/app/api/search/route.ts" ]]; then
  echo "✓ apps/blog/app/api/search/route.ts"
else
  echo "✗ Missing: apps/blog/app/api/search/route.ts"
fi

# Show actual directory structure for context
echo ""
echo "=== Actual Directory Structure (apps/blog) ==="
if [[ -d "apps/blog" ]]; then
  find "apps/blog" -maxdepth 3 -type f -name "*.ts" -o -name "*.tsx" 2>/dev/null | head -20
fi
```

Repository: prisma/docs

Length of output: 993
Update all file and route paths in the documentation to match the actual project structure.

All referenced files and routes in lines 15-27 are incorrect. The actual structure uses a `src/` directory prefix that the README omits, and file extensions differ:

- `lib/source.ts` → should be `src/lib/source.tsx`
- `lib/layout.shared.tsx` → this file doesn't exist; actual layout is at `src/app/layout.tsx`
- `app/(home)`, `app/docs`, `app/api/search/route.ts` → these routes don't exist in the current structure

Update the README to reflect the actual directory layout under `src/` and verify that the documented routes and file organization match what developers will find in the codebase. This will prevent confusion when developers try to locate these files.
🤖 Prompt for AI Agents
In `@apps/blog/README.md` around lines 15 - 27, The README's "Explore" section
lists incorrect file paths and routes; update all references to match the actual
project layout under src: replace `lib/source.ts` with `src/lib/source.tsx` (and
ensure the loader() reference still matches the exported function), remove or
replace `lib/layout.shared.tsx` with the actual `src/app/layout.tsx`, and update
the route table to reflect the real route groups and handlers (remove
`app/(home)`, `app/docs`, and `app/api/search/route.ts` entries or replace them
with the correct `src/app/...` routes present in the codebase); ensure the
README uses the `src/` prefix consistently and that each listed file or route
corresponds to an existing symbol in the repository.
Summary by CodeRabbit
New Features
Documentation
Chores