# Prisma Documentation - Full Content Feed This file contains the complete Prisma documentation in machine-readable format. Includes both v7 (current) and v6 documentation. --- # Introduction to Prisma (/docs) [**Prisma ORM**](/orm) is an open-source ORM that provides fast, type-safe access to Postgres, MySQL, SQLite, and other databases, and runs smoothly across Node.js, Bun, and Deno. npm pnpm yarn bun ```bash npx prisma init --db ``` ```bash pnpm dlx prisma init --db ``` ```bash yarn dlx prisma init --db ``` ```bash bunx --bun prisma init --db ``` [**Prisma Postgres**](/postgres) is a fully managed PostgreSQL database that scales to zero, integrates with [Prisma ORM](/orm) and [Prisma Studio](/studio), and includes a [generous free tier](https://www.prisma.io/pricing). npm pnpm yarn bun ```bash npx create-db ``` ```bash pnpm dlx create-db ``` ```bash yarn dlx create-db ``` ```bash bunx --bun create-db ``` > **Need a database?** Get started with your favorite framework and Prisma Postgres. > **Already have a database?** Use Prisma ORM for a type-safe developer experience and automated migrations. # Caching queries (/docs/accelerate/caching) Prisma Accelerate provides global caching for read queries using TTL, Stale-While-Revalidate (SWR), or a combination of both. It's included as part of Prisma Postgres, but can also be used with your own database by enabling Accelerate in the [Prisma Data Platform](https://console.prisma.io?utm_source=docs) and [configuring it with your database](/accelerate/getting-started). # Compare Accelerate (/docs/accelerate/compare) Prisma Accelerate supports products that serve a global audience, with a global caching system and connection pool that spans multiple regions, providing consistent access to data with low latency no matter where your user (or your database) is located in the world.
The managed connection pool is designed to support serverless infrastructure, capable of handling high volumes of connections and adapting to traffic spikes with ease. Explore how Prisma Accelerate compares to other global cache and connection pool solutions on the market, and discover what sets it apart. What makes Accelerate unique? [#what-makes-accelerate-unique] Prisma Accelerate is chosen and loved by many for a number of key reasons: * [**Query-level policies**](/accelerate/compare#accelerate-global-cache): Accelerate is the only solution that offers query-level cache policies, allowing you to control the cache strategy for each query individually. It is common to have some values that need to be cached for a long time, others that need caching for a short time, and some that should not be cached at all. With Accelerate you can set a different cache strategy per query. * [**Global by default**](/accelerate/compare#accelerate-global-cache): Accelerate is globally distributed by default. You never need to worry about where a user is located with respect to your database location. * [**Fully managed**](/accelerate/compare#management): You don't need to manage a server or worry about uptime. Accelerate is fully managed for you. * [**Auto-scaling**](/accelerate/compare#performance): Accelerate automatically adjusts resources to match workload demands, providing fast and consistent performance during traffic spikes. Accelerate global cache [#accelerate-global-cache] Prisma Accelerate offers a powerful global cache, so you can serve data to your users at the edge — the closest point to where the users are located — no matter where your database is hosted. This not only speeds up the experience for users, but also reduces read load on your database by avoiding roundtrips.
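The TTL and stale-while-revalidate (SWR) behaviors referenced throughout these docs can be illustrated with a minimal in-memory sketch. This is not Accelerate's implementation; the `SwrCache` class and its parameters are invented for illustration, assuming the common SWR semantics (fresh within the `ttl` window, then served stale while revalidating in the background for the following `swr` window):

```typescript
// Illustrative only: a tiny in-memory TTL + stale-while-revalidate cache.
// Accelerate implements these semantics at its edge infrastructure; this
// sketch just shows what the "ttl" and "swr" windows mean for a single key.
type Entry<T> = { value: T; storedAt: number };

class SwrCache<T> {
  private entries = new Map<string, Entry<T>>();

  // ttlMs: how long a result is served as fresh.
  // swrMs: how long after expiry a stale result may still be served
  //        while a background refresh runs.
  constructor(private ttlMs: number, private swrMs: number) {}

  async get(key: string, fetchFn: () => Promise<T>): Promise<T> {
    const entry = this.entries.get(key);
    const age = entry ? Date.now() - entry.storedAt : Infinity;

    if (entry && age <= this.ttlMs) {
      return entry.value; // fresh hit: no database roundtrip
    }
    if (entry && age <= this.ttlMs + this.swrMs) {
      // stale hit: serve immediately, revalidate in the background
      void fetchFn().then((value) =>
        this.entries.set(key, { value, storedAt: Date.now() })
      );
      return entry.value;
    }
    // miss (or too stale): fetch synchronously and repopulate
    const value = await fetchFn();
    this.entries.set(key, { value, storedAt: Date.now() });
    return value;
  }
}
```

Under these semantics, a policy with `ttl` and `swr` windows of 60 seconds each serves a result as fresh for 60 seconds, then as stale (while refreshing in the background) for up to another 60 seconds.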
| | Accelerate | Hyperdrive | PlanetScale Boost | | ----------------------------------- | ---------- | ---------- | ----------------- | | **Fully Managed** | ✅ | ✅ | ✅ | | **Globally distributed edge infra** | ✅ | ✅ | ✅ | | **Control cache policy from code** | ✅ | ❌ | ❌ | | **Query-level cache policies** | ✅ | ❌ | ❌ | | **Postgres compatible** | ✅ | ✅ | ❌ | | **MySQL compatible** | ✅ | ❌ | ✅ | | **MongoDB compatible** | ✅ | ❌ | ❌ | | **Automatic cache updates** | ❌ | ❌ | ✅ | **Why are these important?** * Since Accelerate extends the Prisma client, you can control caching policies directly from your codebase with just an extra line of code. Integration is seamless. Here is an example using the stale-while-revalidate caching strategy: ```ts await prisma.user.findMany({ cacheStrategy: { swr: 60, }, }); ``` * Query-level cache policies are critical for serious applications, so that you can control which queries are cached and the characteristics of each policy. You may want certain data in your app to be cached for several days, other data to be cached for just a few minutes, and other data not to be cached at all. This is only possible with Prisma Accelerate. * Automatic cache updates mean that the cache is automatically updated when a change in the database occurs. With Accelerate, you are in control of how the cache is invalidated, using [various caching strategies](/accelerate/caching). Accelerate connection pool [#accelerate-connection-pool] Prisma Accelerate includes a globally hosted connection pooler, which allows you to handle peak loads without any problems. Using a connection pool is especially important for serverless infrastructure, which by nature is not able to control connection volume to the database on its own. Prisma Accelerate offers a fully managed, globally colocated option, which auto-scales to support any workload.
Management [#management] | | Accelerate | pgbouncer | pgcat | Digital Ocean (pgbouncer) | Neon (pgbouncer) | Supavisor | Hyperdrive | | ------------------------------ | ---------- | --------- | ----- | ------------------------- | ---------------- | --------- | ---------- | | **Fully managed** | ✅ | ❌ | ❌ | 🟠 | ✅ | ❌ | ✅ | | **Globally distributed** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | | **Integrated with ORM client** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | **Authenticate with API key** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | **Redundancy** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | **Why are these important?** * If you decide to manage a connection pooler yourself (e.g. using pgbouncer or pgcat), you will also be responsible for managing its uptime. If the server crashes, your application may be down until you recover it. Accelerate, as a fully managed solution, recovers transparently for you in the unlikely case of an infrastructure issue. * The hosted pgbouncer option on Digital Ocean is semi-managed: you will need to set it up in your Digital Ocean account and ensure it is running smoothly at all times. * Authenticating with an API key can be a helpful security measure, allowing you to decouple database credentials from application secrets. Easily rotate API keys as often as you like, without needing any credential changes in your database. * Redundancy is helpful in the unlikely scenario that your connection pool service goes down. With Accelerate, the workload is automatically and seamlessly handed over to another server and recovered without any interruption.
Performance [#performance] | | Accelerate | pgbouncer | pgcat | Digital Ocean (pgbouncer) | Neon (pgbouncer) | Supavisor | Hyperdrive | | ------------------------------- | ---------- | --------- | ----- | ------------------------- | ---------------- | --------- | ---------- | | **Auto scaling** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | **Globally distributed** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | | **Optimized queries over HTTP** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | | **Isolated compute** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | **Why are these important?** * Accelerate will automatically scale up and down to suit your application workload, meaning you'll never run out of compute resources. Additionally, this provides important redundancy to protect against any single compute instance failing — in the unlikely event of an instance going down, Accelerate will automatically spawn a new instance. * Cross-region TCP handshakes between the application server and PgBouncer or the database are costly and time-consuming. If connections are reused only at the PgBouncer layer, the TCP handshake and connection setup still consume unnecessary time on every single request, which undermines the efficiency of connection reuse. Prisma Accelerate improves this by leveraging HTTP, which is more efficient for connection management. It reduces the overhead associated with TCP handshakes, resulting in faster, more responsive interactions between your application and the database. * Never worry about 'noisy neighbors' with isolated compute resources. Other customers' workloads never impact your performance.
Database Support [#database-support] | | Accelerate | pgbouncer | pgcat | Digital Ocean (pgbouncer) | Neon (pgbouncer) | Supavisor | Hyperdrive | | --------------- | ---------- | --------- | ----- | ------------------------- | ---------------- | --------- | ---------- | | **PostgreSQL** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | **MySQL** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | **PlanetScale** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | **CockroachDB** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | | **MongoDB** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | # Connection Pooling (/docs/accelerate/connection-pooling) Accelerate provides built-in connection pooling to efficiently manage database connections. It's included as part of [Prisma Postgres](/postgres), but you can also use it with your own database by enabling Accelerate in the [Prisma Data Platform](https://console.prisma.io?utm_source=docs) and [connecting it to your database](/accelerate/getting-started). This page has moved: connection pooling in Prisma Accelerate is now documented in the [Prisma Postgres section](/postgres/database/connection-pooling). # Evaluating (/docs/accelerate/evaluating) Prisma Accelerate optimizes database interactions through advanced connection pooling and global edge caching. Its connection pooler is available in 16 regions and helps applications load-balance and scale database requests based on demand. Given this, we recommend evaluating Accelerate with a high volume of requests to see how it performs under load. How Accelerate's connection pool optimizes performance under load [#how-accelerates-connection-pool-optimizes-performance-under-load] Prisma Accelerate employs a dynamic, serverless connection pooling infrastructure. When a request is made, a connection pool is quickly provisioned for the project in the region you selected when configuring Prisma Accelerate. This connection pool remains active, serving many additional requests while reusing established database connections.
The connection pool will disconnect after a period of inactivity, so it’s important to evaluate Prisma Accelerate with a consistent stream of traffic. **Key Benefits:** * **Optimized Query Performance:** The serverless connection pooler adapts to the query load, ensuring the database connections are managed efficiently during peak demand. > Prisma Accelerate’s connection pooler cannot improve the performance of queries in the database. In scenarios where query performance is an issue, we recommend optimizing the Prisma query, applying indexes, or utilizing Accelerate’s edge caching. * **Maximize Connection Reuse:** Executing a consistent volume of queries helps maintain active instances of Accelerate connection poolers. This increases connection reuse, ensuring faster response times for subsequent queries. By understanding and harnessing this mechanism, you can ensure that your database queries perform consistently and efficiently at scale. Evaluating Prisma Accelerate connection pooling performance [#evaluating-prisma-accelerate-connection-pooling-performance] Below you will find an example of how to evaluate Prisma Accelerate using a sample model: ```prisma model Notes { id Int @id @default(autoincrement()) title String createdAt DateTime @default(now()) updatedAt DateTime? 
@updatedAt } ``` ```typescript import { PrismaClient } from "@prisma/client"; import { withAccelerate } from "@prisma/extension-accelerate"; const prisma = new PrismaClient().$extends(withAccelerate()); function calculateStatistics(numbers: number[]): { average: number; p50: number; p75: number; p99: number; } { if (numbers.length === 0) { throw new Error("The input array is empty."); } // Sort the array in ascending order numbers.sort((a, b) => a - b); const sum = numbers.reduce((acc, num) => acc + num, 0); const count = numbers.length; const average = sum / count; const p50 = getPercentile(numbers, 50); const p75 = getPercentile(numbers, 75); const p99 = getPercentile(numbers, 99); return { average, p50, p75, p99 }; } function getPercentile(numbers: number[], percentile: number): number { if (percentile <= 0 || percentile >= 100) { throw new Error("Percentile must be between 0 and 100."); } const index = (percentile / 100) * (numbers.length - 1); if (Number.isInteger(index)) { // If the index is an integer, return the corresponding value return numbers[index]; } else { // If the index is not an integer, interpolate between two adjacent values const lowerIndex = Math.floor(index); const upperIndex = Math.ceil(index); const lowerValue = numbers[lowerIndex]; const upperValue = numbers[upperIndex]; const interpolationFactor = index - lowerIndex; return lowerValue + (upperValue - lowerValue) * interpolationFactor; } } async function main() { const timings = []; // fire a query before going to the loop await prisma.notes.findMany({ take: 20, }); // we recommend evaluating Prisma Accelerate with a large loop const LOOP_LENGTH = 10000; for (let i = 0; i < LOOP_LENGTH; i++) { const start = Date.now(); await prisma.notes.findMany({ take: 20, }); timings.push(Date.now() - start); } const statistics = calculateStatistics(timings); console.log("Average:", statistics.average); console.log("P50:", statistics.p50); console.log("P75:", statistics.p75); console.log("P99:", 
statistics.p99); } main() .then(async () => { await prisma.$disconnect(); }) .catch(async (e) => { await prisma.$disconnect(); process.exit(1); }); ``` Evaluating Prisma Accelerate caching performance [#evaluating-prisma-accelerate-caching-performance] Prisma Accelerate’s edge cache is also optimized for a high volume of queries. The cache automatically optimizes for repeated queries. As a result, the cache hit rate will increase as the query frequency does. Adding a query result to the cache is also non-blocking, so a short burst of queries might not utilize the cache as well as a sustained load would. To evaluate Accelerate’s edge caching, you can modify the above script as shown below: ```typescript import { PrismaClient } from "@prisma/client"; import { withAccelerate } from "@prisma/extension-accelerate"; const prisma = new PrismaClient().$extends(withAccelerate()); function calculateStatistics(numbers: number[]): { average: number; p50: number; p75: number; p99: number; } { if (numbers.length === 0) { throw new Error("The input array is empty."); } // Sort the array in ascending order numbers.sort((a, b) => a - b); const sum = numbers.reduce((acc, num) => acc + num, 0); const count = numbers.length; const average = sum / count; const p50 = getPercentile(numbers, 50); const p75 = getPercentile(numbers, 75); const p99 = getPercentile(numbers, 99); return { average, p50, p75, p99 }; } function getPercentile(numbers: number[], percentile: number): number { if (percentile <= 0 || percentile >= 100) { throw new Error("Percentile must be between 0 and 100."); } const index = (percentile / 100) * (numbers.length - 1); if (Number.isInteger(index)) { // If the index is an integer, return the corresponding value return numbers[index]; } else { // If the index is not an integer, interpolate between two adjacent values const lowerIndex = Math.floor(index); const upperIndex = Math.ceil(index); const lowerValue = numbers[lowerIndex]; const upperValue = numbers[upperIndex]; const
interpolationFactor = index - lowerIndex; return lowerValue + (upperValue - lowerValue) * interpolationFactor; } } async function main() { const timings = []; // fire a query before going to the loop await prisma.notes.findMany({ take: 20, cacheStrategy: { ttl: 30, }, }); // we recommend evaluating Prisma Accelerate with a large loop const LOOP_LENGTH = 10000; for (let i = 0; i < LOOP_LENGTH; i++) { const start = Date.now(); await prisma.notes.findMany({ take: 20, cacheStrategy: { ttl: 30, }, }); timings.push(Date.now() - start); } const statistics = calculateStatistics(timings); console.log("Average:", statistics.average); console.log("P50:", statistics.p50); console.log("P75:", statistics.p75); console.log("P99:", statistics.p99); } main() .then(async () => { await prisma.$disconnect(); }) .catch(async (e) => { await prisma.$disconnect(); process.exit(1); }); ``` # Examples (/docs/accelerate/examples) Here is a list of ready-to-run example projects that demonstrate how to use Prisma Accelerate: | Demo | Description | | ------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- | | [`nextjs-starter`](https://github.com/prisma/prisma-examples/tree/latest/accelerate/nextjs-starter) | A Next.js project using Prisma Accelerate's caching and connection pooling | | [`svelte-starter`](https://github.com/prisma/prisma-examples/tree/latest/accelerate/svelte-starter) | A SvelteKit project using Prisma Accelerate's caching and connection pooling | | [`solidstart-starter`](https://github.com/prisma/prisma-examples/tree/latest/accelerate/solidstart-starter) | A SolidStart project using Prisma Accelerate's caching and connection pooling | | [`remix-starter`](https://github.com/prisma/prisma-examples/tree/latest/accelerate/remix-starter) | A Remix project using Prisma Accelerate's caching and connection pooling | |
[`nuxt-starter`](https://github.com/prisma/prisma-examples/tree/latest/accelerate/nuxtjs-starter) | A Nuxt.js project using Prisma Accelerate's caching and connection pooling | | [`astro-starter`](https://github.com/prisma/prisma-examples/tree/latest/accelerate/astro-starter) | An Astro project using Prisma Accelerate's caching and connection pooling | | [`accelerate-hacker-news`](https://github.com/prisma/prisma-examples/tree/latest/accelerate/accelerate-hacker-news) | A simple Hacker News clone built with Prisma Accelerate, demonstrating the use of on-demand cache invalidation | | [`prisma-accelerate-invalidation`](https://github.com/prisma/prisma-accelerate-invalidation) | An app demonstrating how long it takes to invalidate a cached query result using on-demand cache invalidation. | # Getting started (/docs/accelerate/getting-started) Prerequisites [#prerequisites] To get started with Accelerate, you will need the following: * A [Prisma Data Platform account](https://console.prisma.io) * A project that uses [Prisma Client](/orm/prisma-client/setup-and-configuration/introduction) `4.16.1` or higher. If your project is using interactive transactions, you need to use `5.1.1` or higher. (We always recommend using the latest version of Prisma.) * A hosted PostgreSQL, MySQL/MariaDB, PlanetScale, CockroachDB, or MongoDB database 1. Enable Accelerate [#1-enable-accelerate] Navigate to your Prisma Data Platform project, choose an environment, and enable Accelerate by providing your database connection string and selecting the region nearest your database. If you require IP allowlisting or firewall configurations with trusted IP addresses, enable Static IP for enhanced security. Learn more on [how to enable static IP for Accelerate in the Platform Console](/accelerate/static-ip). 2. Add Accelerate to your application [#2-add-accelerate-to-your-application] 2.1. 
Update your database connection string [#21-update-your-database-connection-string] Once enabled, you'll be prompted to generate a connection string that you'll use to authenticate requests. Replace your direct database URL with your new Accelerate connection string. ```bash title=".env" # New Accelerate connection string with generated API_KEY DATABASE_URL="prisma://accelerate.prisma-data.net/?api_key=__API_KEY__" # Previous (direct) database connection string # DATABASE_URL="postgresql://user:password@host:port/db_name?schema=public" ``` Prisma Client reads the `prisma://` URL from `DATABASE_URL` at runtime, while Prisma CLI commands use the connection string defined in `prisma.config.ts`. Prisma Migrate and Introspection do not work with a `prisma://` connection string. To continue using these features, add a new variable to the `.env` file named `DIRECT_DATABASE_URL` whose value is the direct database connection string: ```bash title=".env" DATABASE_URL="prisma://accelerate.prisma-data.net/?api_key=__API_KEY__" DIRECT_DATABASE_URL="postgresql://user:password@host:port/db_name?schema=public" # [!code ++] ``` Then point `prisma.config.ts` to the direct connection string: ```ts title="prisma.config.ts" showLineNumbers import "dotenv/config"; import { defineConfig, env } from "prisma/config"; export default defineConfig({ schema: "prisma/schema.prisma", datasource: { url: env("DIRECT_DATABASE_URL"), }, }); ``` With this configuration, migrations and introspection use the direct connection string rather than the Accelerate URL defined in `DATABASE_URL`. > The direct connection string is only needed to carry out migrations and introspection. You don't need it to use Accelerate in your application. If you are using Prisma Postgres, there is no need for a direct connection string, as Prisma Migrate and Introspection work with the `prisma+postgres://` connection string. 2.2.
Install the Accelerate Prisma Client extension [#22-install-the-accelerate-prisma-client-extension] 💡 Accelerate requires [Prisma Client](/orm/prisma-client/setup-and-configuration/introduction) version `4.16.1` or higher and [`@prisma/extension-accelerate`](https://www.npmjs.com/package/@prisma/extension-accelerate) version `1.0.0` or higher. 💡 The Accelerate extension [`@prisma/extension-accelerate`](https://www.npmjs.com/package/@prisma/extension-accelerate) version `2.0.0` and above requires Node.js version `18` or higher. Install the latest versions of Prisma Client and the Accelerate Prisma Client extension: npm pnpm yarn bun ```bash npm install @prisma/client@latest @prisma/extension-accelerate ``` ```bash pnpm add @prisma/client@latest @prisma/extension-accelerate ``` ```bash yarn add @prisma/client@latest @prisma/extension-accelerate ``` ```bash bun add @prisma/client@latest @prisma/extension-accelerate ``` 2.3. Generate Prisma Client for Accelerate [#23-generate-prisma-client-for-accelerate] If you're using Prisma version `5.2.0` or greater, Prisma Client will automatically determine how it should connect to the database depending on the protocol in the database connection string. If the connection string in `DATABASE_URL` starts with `prisma://`, Prisma Client will try to connect to your database using Prisma Accelerate.
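The routing decision described above can be pictured as a simple predicate on the connection string. This is an illustrative sketch, not Prisma's internal code; the function name is invented, and the `prisma+postgres://` case assumes the Prisma Postgres connection string mentioned earlier (Prisma Postgres includes Accelerate):

```typescript
// Illustrative only: how routing can be decided from the URL scheme.
// A `prisma://` connection string (or `prisma+postgres://`, as used by
// Prisma Postgres) routes queries through Accelerate; a direct scheme
// such as `postgresql://` connects straight to the database.
function routesThroughAccelerate(databaseUrl: string): boolean {
  return (
    databaseUrl.startsWith("prisma://") ||
    databaseUrl.startsWith("prisma+postgres://")
  );
}
```

For example, `routesThroughAccelerate("prisma://accelerate.prisma-data.net/?api_key=...")` is true, while a `postgresql://user:password@host:5432/db` URL is not.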
When using Prisma Accelerate in long-running application servers, such as a server deployed on AWS EC2, you can generate Prisma Client by executing the following command: npm pnpm yarn bun ```bash npx prisma generate ``` ```bash pnpm dlx prisma generate ``` ```bash yarn dlx prisma generate ``` ```bash bunx --bun prisma generate ``` When using Prisma Accelerate in a serverless or edge application, we recommend running the following command to generate Prisma Client: npm pnpm yarn bun ```bash npx prisma generate --no-engine ``` ```bash pnpm dlx prisma generate --no-engine ``` ```bash yarn dlx prisma generate --no-engine ``` ```bash bunx --bun prisma generate --no-engine ``` The `--no-engine` flag prevents a Query Engine file from being included in the generated Prisma Client, ensuring the bundle size of your application remains small. If your Prisma version is below `5.2.0`, generate Prisma Client with the `--accelerate` option: npm pnpm yarn bun ```bash npx prisma generate --accelerate ``` ```bash pnpm dlx prisma generate --accelerate ``` ```bash yarn dlx prisma generate --accelerate ``` ```bash bunx --bun prisma generate --accelerate ``` If your Prisma version is below `5.0.0`, generate Prisma Client with the `--data-proxy` option: npm pnpm yarn bun ```bash npx prisma generate --data-proxy ``` ```bash pnpm dlx prisma generate --data-proxy ``` ```bash yarn dlx prisma generate --data-proxy ``` ```bash bunx --bun prisma generate --data-proxy ``` 2.4.
Extend your Prisma Client instance with the Accelerate extension [#24-extend-your-prisma-client-instance-with-the-accelerate-extension] Add the following to extend your existing Prisma Client instance with the Accelerate extension: ```ts import { PrismaClient } from "@prisma/client"; import { withAccelerate } from "@prisma/extension-accelerate"; const prisma = new PrismaClient({ accelerateUrl: process.env.DATABASE_URL, }).$extends(withAccelerate()); ``` If you are going to deploy to an edge runtime (like Cloudflare Workers, Vercel Edge Functions, Deno Deploy, or Supabase Edge Functions), use our edge client instead: ```ts import { PrismaClient } from "@prisma/client/edge"; import { withAccelerate } from "@prisma/extension-accelerate"; const prisma = new PrismaClient({ accelerateUrl: process.env.DATABASE_URL, }).$extends(withAccelerate()); ``` If VS Code does not recognize the `$extends` method, refer to [this section](/accelerate/more/faq#vs-code-does-not-recognize-the-extends-method) on how to resolve the issue. Using the Accelerate extension with other extensions [#using-the-accelerate-extension-with-other-extensions] Since [extensions are applied one after another](/orm/prisma-client/client-extensions#conflicts-in-combined-extensions), make sure you apply them in the correct order. Extensions cannot share behavior and the last extension applied takes precedence. If you are using [Query Insights](/query-insights) in your application, make sure you apply it *before* the Accelerate extension. For example: ```ts const prisma = new PrismaClient({ accelerateUrl: process.env.DATABASE_URL, }) .$extends(withOptimize()) .$extends(withAccelerate()); ``` 2.5. Use Accelerate in your database queries [#25-use-accelerate-in-your-database-queries] The `withAccelerate` extension primarily does two things: * Gives you access to the `cacheStrategy` field within each applicable model method that allows you to define a cache strategy per-query. 
* Routes all of your queries through a connection pooler. No cache strategy to only use connection pool [#no-cache-strategy-to-only-use-connection-pool] If you simply want to take advantage of Accelerate's connection pooling feature without applying a cache strategy, you may run your query the same way you would have without Accelerate. By enabling Accelerate and supplying the Accelerate connection string, your queries now use the connection pooler by default. As of Prisma version `5.2.0` you can use Prisma Studio with the Accelerate connection string. Invalidate the cache and keep your cached query results up-to-date [#invalidate-the-cache-and-keep-your-cached-query-results-up-to-date] If your application requires real-time or near-real-time data, cache invalidation ensures that users see the most current data, even when using a large `ttl` (Time-To-Live) or `swr` (Stale-While-Revalidate) [cache strategy](/accelerate/caching). By invalidating your cache, you can bypass extended caching periods to show live data whenever it's needed. For example, if a dashboard displays customer information and a customer’s contact details change, cache invalidation allows you to refresh only that data instantly, ensuring support staff always see the latest information without waiting for the cache to expire. To invalidate a cached query result, you can add tags and then use the `$accelerate.invalidate` API. On-demand cache invalidation is available with our paid plans. For more details, please see our [pricing](https://www.prisma.io/pricing#accelerate). 
To invalidate the query below: ```ts await prisma.user.findMany({ where: { email: { contains: "alice@prisma.io", }, }, cacheStrategy: { swr: 60, ttl: 60, tags: ["emails_with_alice"], // [!code highlight] }, }); ``` You need to provide the cache tag in the `$accelerate.invalidate` API: ```ts try { await prisma.$accelerate.invalidate({ // [!code highlight] tags: ["emails_with_alice"], // [!code highlight] }); // [!code highlight] } catch (e) { if (e instanceof Prisma.PrismaClientKnownRequestError) { // The .code property can be accessed in a type-safe manner if (e.code === "P6003") { console.log("You've reached the cache invalidation rate limit. Please try again shortly."); } } throw e; } ``` # Prisma Accelerate (/docs/accelerate) [Prisma Accelerate](https://www.prisma.io/accelerate) is a fully managed global connection pool and caching layer for your existing database, enabling query-level cache policies directly from the Prisma ORM. With 15+ global regions, the connection pool scales your app for a global audience, particularly for serverless deployments that risk connection timeouts during peak times. Accelerate's global cache, hosted in 300+ locations, ensures a fast experience for users, regardless of your database's location. You can configure query-level caching strategies directly in your code with Prisma ORM, making setup and tuning easy. Together, the connection pool and cache allow you to scale effortlessly and handle traffic spikes without infrastructure concerns. Supported databases [#supported-databases] Accelerate works with the database you already have, whether it is publicly accessible, or via an IP allowlist. 
* PostgreSQL * MySQL * MariaDB * PlanetScale * CockroachDB * MongoDB Getting started [#getting-started] * [Getting started](/accelerate/getting-started) - Learn how to get up and running with Prisma Accelerate * [Local development](/accelerate/local-development) - Learn how to use Prisma Accelerate in a development environment * [Examples](/accelerate/examples) - Check out ready-to-run examples for Prisma Accelerate # Local development (/docs/accelerate/local-development) Prisma Accelerate efficiently scales production traffic with integrated connection pooling and a global database cache. In development environments, you may want to use a local database to minimize expenses. You can extend Prisma Client with the Accelerate client extension once and use a local database in development while relying on a hosted database with Accelerate's connection pooling and caching in production. This eliminates the need for conditional logic to switch clients between development and production. This guide explains how to use the Prisma Accelerate client extension in a development environment with a local database. Using Prisma Accelerate client extension in development and production [#using-prisma-accelerate-client-extension-in-development-and-production]
Using Prisma Accelerate client extension in development Accelerate does not work with a local database. However, in a development environment, you can still use Prisma Client with the Accelerate client extension. This setup will not provide Accelerate's connection pooling and caching features. The following steps outline how to use Prisma ORM and Prisma Accelerate with a local PostgreSQL database. 1. Update the `DATABASE_URL` environment variable with your local database's connection string: ```bash DATABASE_URL="postgres://username:password@127.0.0.1:5432/localdb" ``` 2. Generate a Prisma Client: npm pnpm yarn bun ```bash npx prisma generate ``` ```bash pnpm dlx prisma generate ``` ```bash yarn dlx prisma generate ``` ```bash bunx --bun prisma generate ``` 3. Set up Prisma Client with the Accelerate client extension: ```typescript import { PrismaClient } from "@prisma/client"; import { withAccelerate } from "@prisma/extension-accelerate"; const prisma = new PrismaClient().$extends(withAccelerate()); ``` > The extended instance of Prisma Client will use the local database. Hence, Prisma Accelerate will not be used in your development environment to respond to your Prisma Client queries. Using Prisma Accelerate client extension in production If an Accelerate connection string is used as the `DATABASE_URL` environment variable, Prisma Client will route your queries through Accelerate. Using Prisma Accelerate locally in an edge function [#using-prisma-accelerate-locally-in-an-edge-function] When using an edge function, e.g., [Vercel's edge runtime](https://vercel.com/docs/functions/runtimes/edge-runtime), for your development environment, update your Prisma Client import as follows: ```typescript import { PrismaClient } from "@prisma/client/edge"; ``` Generally, edge function environments lack native support for existing APIs enabling TCP-based database connections. 
Prisma Accelerate provides a connection string that allows querying your database over HTTP, a protocol supported in all edge runtimes. # Static IP (/docs/accelerate/static-ip) You can enable static IP for Accelerate when your security setup requires IP allowlisting or if you're implementing firewalls that only permit access from trusted IPs, ensuring controlled and secure database connections. Result of enabling static IP Accelerate with a database using IP allowlisting To enable static IP support for Accelerate within an existing or a new project environment, your workspace will need to be on our Pro or Business plans. Take a look at the [pricing page](https://www.prisma.io/pricing#accelerate) for more information. Enable static IP in Accelerate [#enable-static-ip-in-accelerate] You can opt-in to use static IP for Accelerate in the [Platform Console](https://pris.ly/pdp) in two ways: 1. When enabling Accelerate for your project environment: [#1-when-enabling-accelerate-for-your-project-environment] 1. Specify your database connection string and connection pool region. 2. Enable static IP by toggling the **Static IP** switch in the **Network restrictions** section. 3. Click on the **Enable Accelerate** button. 2. For projects already using Accelerate: [#2-for-projects-already-using-accelerate] 1. Navigate to the Accelerate **Settings** tab in the project environment. 2. Enable static IP by toggling the **Static IP** switch in the **Network restrictions** section. Enabling static IP for Accelerate will provide you with a list of static IPv4 and IPv6 addresses. Once you have these addresses, configure your database firewall to allow incoming connections only from these IPs and any other trusted IPs that need access to your database. Since you cannot enable static IP for an existing Accelerate-enabled environment, we recommend opting for static IP when enabling Accelerate in a new environment. 
Use the same database URL as your existing Accelerate environment to instantly access static IP support for Accelerate. # Build faster with Prisma + AI (/docs/ai) In the era of AI, where code is increasingly written by agents, ensuring clarity, type safety, and reliable infrastructure is essential. With 5+ years of leadership in the TypeScript ecosystem, Prisma ORM and Prisma Postgres provide the proven foundation for AI-assisted development. Get started [#get-started] Run the following command to bootstrap your database with a prompt: npm pnpm yarn bun ```bash npx prisma init --prompt "Create a habit tracker application" ``` ```bash pnpm dlx prisma init --prompt "Create a habit tracker application" ``` ```bash yarn dlx prisma init --prompt "Create a habit tracker application" ``` ```bash bunx --bun prisma init --prompt "Create a habit tracker application" ``` AI Coding Tools [#ai-coding-tools] Prisma ORM and Prisma Postgres integrate seamlessly with your AI coding tools. Check out our documentation with tips and tricks for working with Prisma in various AI editors. * [Cursor](/ai/tools/cursor) - Define project-specific rules and use your schema as context to generate accurate queries and code. * [Windsurf](/ai/tools/windsurf) - Automate your database workflows by generating schemas, queries, and seed data in this AI-powered editor. * [GitHub Copilot](/ai/tools/github-copilot) - Get Prisma-aware code suggestions, run CLI commands from chat, and query the Prisma docs. * [ChatGPT](/ai/tools/chatgpt) - Learn how to connect the Prisma MCP server to ChatGPT to manage your databases with natural language. Agent Skills [#agent-skills] AI agents often generate outdated Prisma v6 code. Install Prisma Skills to give your agent accurate, up-to-date v7 knowledge: CLI commands, Client API, upgrade guides, database setup, and Prisma Postgres workflows.
npm pnpm yarn bun ```bash npx skills add prisma/skills ``` ```bash pnpm dlx skills add prisma/skills ``` ```bash yarn dlx skills add prisma/skills ``` ```bash bunx --bun skills add prisma/skills ``` * [Available skills and setup](/ai/tools/skills) - See all available skills and learn how to install them. MCP server [#mcp-server] With Prisma's MCP server, your AI tool can take database actions on your behalf: provisioning a new Prisma Postgres instance, creating database backups, and executing SQL queries are just a few of its capabilities. ```json title="Integrate in AI tool" { "mcpServers": { "Prisma-Remote": { "url": "https://mcp.prisma.io/mcp" } } } ``` * [Capabilities and tools](/ai/tools/mcp-server#tools) - Discover all the tools that make up the capabilities of the Prisma MCP server. * [Integrating in AI tools](/ai/tools/mcp-server#integrating-in-ai-tools) - Learn how to integrate Prisma's MCP server in your favorite AI tool, such as Cursor, Claude, Warp, and more. * [How we built it](https://www.prisma.io/blog/about-mcp-servers-and-how-we-built-one-for-prisma) - Read this technical deep dive about the MCP protocol and how we built the Prisma MCP server. Vibe Coding Tutorials [#vibe-coding-tutorials] Build complete, production-ready applications from scratch with AI assistance. * [Build a Linktree Clone SaaS](/ai/tutorials/linktree-clone) - A complete vibe coding tutorial: build a full Linktree clone SaaS with Next.js, Prisma Postgres, and Clerk auth using AI assistance.
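If your AI tool's settings file already defines other MCP servers, the Prisma entry shown above is simply one more key in the `mcpServers` map. A minimal sketch of that merge in TypeScript (the pre-existing `Other-Server` entry and variable names are illustrative, not from any particular tool):

```typescript
// Hypothetical existing MCP settings for an AI tool; only the
// `mcpServers` map shape comes from the snippet above.
const settings: { mcpServers: Record<string, { url?: string; command?: string }> } = {
  mcpServers: {
    "Other-Server": { command: "other-mcp" }, // illustrative pre-existing entry
  },
};

// Add the remote Prisma MCP server without touching other entries.
settings.mcpServers["Prisma-Remote"] = { url: "https://mcp.prisma.io/mcp" };

console.log(JSON.stringify(settings, null, 2));
```

Most tools read this JSON from a settings file (the exact path varies per tool), so the same merge applies whether you edit it by hand or script it.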
Resources [#resources] * [Vibe Coding with Limits](https://www.prisma.io/blog/vibe-coding-with-limits-how-to-build-apps-in-the-age-of-ai) - How to Build Apps in the Age of AI * [Vibe Coding an E-commerce App](https://www.prisma.io/blog/vibe-coding-with-prisma-mcp-and-nextjs) - with Prisma MCP and Next.js * [Integrating the Vercel AI SDK](/guides/integrations/ai-sdk) - in a Next.js application Integrations [#integrations] * [Automate with Pipedream](https://pipedream.com/apps/prisma-management-api) - Connect Prisma Postgres to 2,800+ apps for powerful automation * [Firebase Studio](/guides/postgres/idx) - Prompt your application with Firebase Studio & Prisma Postgres # debug (/docs/cli/debug) The `prisma debug` command prints information helpful for debugging and bug reports. Available in version 5.6.0 and later. Usage [#usage] ```bash prisma debug [options] ``` Options [#options] | Option | Description | | -------------- | -------------------------------------- | | `-h`, `--help` | Display help message | | `--config` | Custom path to your Prisma config file | | `--schema` | Custom path to your Prisma schema | Examples [#examples] Display debug information [#display-debug-information] npm pnpm yarn bun ```bash npx prisma debug ``` ```bash pnpm dlx prisma debug ``` ```bash yarn dlx prisma debug ``` ```bash bunx --bun prisma debug ``` Output: ```text -- Prisma schema -- Path: /prisma/schema.prisma -- Local cache directory for engines files -- Path: /.cache/prisma -- Environment variables -- When not set, the line is dimmed and no value is displayed. When set, the line is bold and the value is inside the `` backticks.
For general debugging - CI: - DEBUG: - NODE_ENV: - RUST_LOG: - RUST_BACKTRACE: - NO_COLOR: - TERM: `xterm-256color` - NODE_TLS_REJECT_UNAUTHORIZED: - NO_PROXY: - http_proxy: - HTTP_PROXY: - https_proxy: - HTTPS_PROXY: For hiding messages - PRISMA_DISABLE_WARNINGS: - PRISMA_HIDE_PREVIEW_FLAG_WARNINGS: - PRISMA_HIDE_UPDATE_MESSAGE: For downloading engines - PRISMA_ENGINES_MIRROR: - PRISMA_BINARIES_MIRROR (deprecated): - PRISMA_ENGINES_CHECKSUM_IGNORE_MISSING: - BINARY_DOWNLOAD_VERSION: For custom engines - PRISMA_SCHEMA_ENGINE_BINARY: - PRISMA_MIGRATION_ENGINE_BINARY: For Prisma Client - PRISMA_SHOW_ALL_TRACES: For Prisma Migrate - PRISMA_SCHEMA_DISABLE_ADVISORY_LOCK: For Prisma Studio - BROWSER: -- Terminal is interactive? -- true -- CI detected? -- false ``` Use with older versions [#use-with-older-versions] If using an older Prisma version: npm pnpm yarn bun ```bash npx prisma@latest debug ``` ```bash pnpm dlx prisma@latest debug ``` ```bash yarn dlx prisma@latest debug ``` ```bash bunx --bun prisma@latest debug ``` # format (/docs/cli/format) The `prisma format` command formats your Prisma schema file. It validates, formats, and persists the schema. 
Usage [#usage] ```bash prisma format [options] ``` Options [#options] | Option | Description | | -------------- | -------------------------------------- | | `-h`, `--help` | Display help message | | `--config` | Custom path to your Prisma config file | | `--schema` | Custom path to your Prisma schema | Examples [#examples] Format the default schema [#format-the-default-schema] npm pnpm yarn bun ```bash npx prisma format ``` ```bash pnpm dlx prisma format ``` ```bash yarn dlx prisma format ``` ```bash bunx --bun prisma format ``` Output on success: ```text Environment variables loaded from .env Prisma schema loaded from prisma/schema.prisma Formatted prisma/schema.prisma in 116ms ``` Format a specific schema [#format-a-specific-schema] npm pnpm yarn bun ```bash npx prisma format --schema=./alternative/schema.prisma ``` ```bash pnpm dlx prisma format --schema=./alternative/schema.prisma ``` ```bash yarn dlx prisma format --schema=./alternative/schema.prisma ``` ```bash bunx --bun prisma format --schema=./alternative/schema.prisma ``` Error output [#error-output] If the schema has validation errors, formatting will fail: ```text Environment variables loaded from .env Prisma schema loaded from prisma/schema.prisma Error: Schema validation error - Error (query-engine-node-api library) Error code: P1012 error: The preview feature "unknownFeatureFlag" is not known. Expected one of: [...] schema.prisma:3 | 2 | provider = "prisma-client" 3 | previewFeatures = ["unknownFeatureFlag"] | Validation Error Count: 1 ``` # generate (/docs/cli/generate) The `prisma generate` command generates assets like Prisma Client based on the [`generator`](/orm/prisma-schema/overview/generators) and [`data model`](/orm/prisma-schema/data-model/models) blocks defined in your `schema.prisma` file. Usage [#usage] ```bash prisma generate [options] ``` How it works [#how-it-works] 1. Inspects the current directory to find a Prisma schema 2. 
Generates a customized Prisma Client based on your schema into the output directory specified in the generator block Prerequisites [#prerequisites] Add a generator definition in your `schema.prisma` file: ```prisma generator client { provider = "prisma-client" output = "./generated" } ``` Options [#options] | Option | Description | | ------------------ | ------------------------------------------------------ | | `-h`, `--help` | Display help message | | `--config` | Custom path to your Prisma config file | | `--schema` | Custom path to your Prisma schema | | `--sql` | Generate typed SQL module | | `--watch` | Watch the Prisma schema and regenerate after changes | | `--generator` | Generator to use (can be provided multiple times) | | `--no-hints` | Hide hint messages (still outputs errors and warnings) | | `--require-models` | Do not allow generating a client without models | Examples [#examples] Generate Prisma Client [#generate-prisma-client] npm pnpm yarn bun ```bash npx prisma generate ``` ```bash pnpm dlx prisma generate ``` ```bash yarn dlx prisma generate ``` ```bash bunx --bun prisma generate ``` Output: ```text ✔ Generated Prisma Client to ./node_modules/.prisma/client in 61ms You can now start using Prisma Client in your code: import { PrismaClient } from '../prisma/generated/client' const prisma = new PrismaClient() ``` Generate with a custom schema path [#generate-with-a-custom-schema-path] npm pnpm yarn bun ```bash npx prisma generate --schema=./alternative/schema.prisma ``` ```bash pnpm dlx prisma generate --schema=./alternative/schema.prisma ``` ```bash yarn dlx prisma generate --schema=./alternative/schema.prisma ``` ```bash bunx --bun prisma generate --schema=./alternative/schema.prisma ``` Watch mode [#watch-mode] Automatically regenerate when the schema changes: npm pnpm yarn bun ```bash npx prisma generate --watch ``` ```bash pnpm dlx prisma generate --watch ``` ```bash yarn dlx prisma generate --watch ``` ```bash bunx --bun prisma generate 
--watch ``` Output: ```text Watching... /home/prismauser/prisma/schema.prisma ✔ Generated Prisma Client to ./node_modules/.prisma/client in 45ms ``` Generate specific generators [#generate-specific-generators] Run only specific generators: npm pnpm yarn bun ```bash npx prisma generate --generator client ``` ```bash pnpm dlx prisma generate --generator client ``` ```bash yarn dlx prisma generate --generator client ``` ```bash bunx --bun prisma generate --generator client ``` Multiple generators: npm pnpm yarn bun ```bash npx prisma generate --generator client --generator zod_schemas ``` ```bash pnpm dlx prisma generate --generator client --generator zod_schemas ``` ```bash yarn dlx prisma generate --generator client --generator zod_schemas ``` ```bash bunx --bun prisma generate --generator client --generator zod_schemas ``` Generated assets [#generated-assets] The `prisma-client` generator creates a customized client for working with your database. You can [customize the output folder](/orm/reference/prisma-schema-reference#fields-for-prisma-client-provider) using the `output` field in the generator block. # CLI Overview (/docs/cli) The Prisma CLI provides commands for: * **Project setup**: Initialize new Prisma projects * **Code generation**: Generate Prisma Client and other artifacts * **Database management**: Pull schemas, push changes, seed data * **Migrations**: Create, apply, and manage database migrations * **Development tools**: Local database servers, schema validation, formatting Installation [#installation] The Prisma CLI is available as an npm package. 
Install it as a development dependency: npm pnpm yarn bun ```bash npm install prisma --save-dev ``` ```bash pnpm add prisma --save-dev ``` ```bash yarn add prisma --dev ``` ```bash bun add prisma --dev ``` Usage [#usage] ```bash prisma [command] ``` Commands [#commands] | Command | Description | | --------------------------- | ---------------------------------------------------- | | [`init`](/cli/init) | Set up Prisma for your app | | [`dev`](/cli/dev) | Start a local Prisma Postgres server for development | | [`generate`](/cli/generate) | Generate artifacts (e.g. Prisma Client) | | [`db`](/cli/db) | Manage your database schema and lifecycle | | [`migrate`](/cli/migrate) | Migrate your database | | [`studio`](/cli/studio) | Browse your data with Prisma Studio | | [`validate`](/cli/validate) | Validate your Prisma schema | | [`format`](/cli/format) | Format your Prisma schema | | [`version`](/cli/version) | Display Prisma version info | | [`debug`](/cli/debug) | Display Prisma debug info | | [`mcp`](/cli/mcp) | Start an MCP server to use with AI development tools | Global flags [#global-flags] These flags are available for all commands: | Flag | Description | | ------------------- | ----------------------------------- | | `--help`, `-h` | Show help information for a command | | `--preview-feature` | Run Preview Prisma commands | Using an HTTP proxy [#using-a-http-proxy] Prisma CLI supports custom HTTP proxies. This is useful when behind a corporate firewall. Set one of these environment variables: * `HTTP_PROXY` or `http_proxy`: Proxy URL for HTTP traffic (e.g., `http://localhost:8080`) * `HTTPS_PROXY` or `https_proxy`: Proxy URL for HTTPS traffic (e.g., `https://localhost:8080`) # init (/docs/cli/init) The `prisma init` command bootstraps a fresh Prisma project within the current directory. Usage [#usage] ```bash prisma init [options] ``` The command creates a `prisma` directory containing a `schema.prisma` file.
By default, the project is configured for [local Prisma Postgres](/postgres/database/local-development), but you can choose a different database using the `--datasource-provider` option. Options [#options] | Option | Description | | ----------------------- | --------------------------------------------------------------------------------------------------------- | | `-h`, `--help` | Display help message | | `--db` | Provision a fully managed Prisma Postgres database on the Prisma Data Platform | | `--datasource-provider` | Define the datasource provider: `postgresql`, `mysql`, `sqlite`, `sqlserver`, `mongodb`, or `cockroachdb` | | `--generator-provider` | Define the generator provider to use (default: `prisma-client-js`) | | `--preview-feature` | Define a preview feature to use (can be specified multiple times) | | `--output` | Define Prisma Client generator output path | | `--url` | Define a custom datasource URL | Flags [#flags] | Flag | Description | | -------------- | ----------------------------------------------- | | `--with-model` | Add an example model to the created schema file | Examples [#examples] Set up a new Prisma project (default) [#set-up-a-new-prisma-project-default] Sets up a new project configured for local Prisma Postgres: npm pnpm yarn bun ```bash npx prisma init ``` ```bash pnpm dlx prisma init ``` ```bash yarn dlx prisma init ``` ```bash bunx --bun prisma init ``` Specify a datasource provider [#specify-a-datasource-provider] Set up a new project with MySQL as the datasource provider: npm pnpm yarn bun ```bash npx prisma init --datasource-provider mysql ``` ```bash pnpm dlx prisma init --datasource-provider mysql ``` ```bash yarn dlx prisma init --datasource-provider mysql ``` ```bash bunx --bun prisma init --datasource-provider mysql ``` Specify a generator provider [#specify-a-generator-provider] Set up a project with a specific generator provider: npm pnpm yarn bun ```bash npx prisma init --generator-provider prisma-client-js ``` ```bash 
pnpm dlx prisma init --generator-provider prisma-client-js ``` ```bash yarn dlx prisma init --generator-provider prisma-client-js ``` ```bash bunx --bun prisma init --generator-provider prisma-client-js ``` Specify preview features [#specify-preview-features] Set up a project with specific preview features enabled: npm pnpm yarn bun ```bash npx prisma init --preview-feature metrics ``` ```bash pnpm dlx prisma init --preview-feature metrics ``` ```bash yarn dlx prisma init --preview-feature metrics ``` ```bash bunx --bun prisma init --preview-feature metrics ``` Multiple preview features: npm pnpm yarn bun ```bash npx prisma init --preview-feature views --preview-feature metrics ``` ```bash pnpm dlx prisma init --preview-feature views --preview-feature metrics ``` ```bash yarn dlx prisma init --preview-feature views --preview-feature metrics ``` ```bash bunx --bun prisma init --preview-feature views --preview-feature metrics ``` Specify a custom output path [#specify-a-custom-output-path] Set up a project with a custom output path for Prisma Client: npm pnpm yarn bun ```bash npx prisma init --output ./generated-client ``` ```bash pnpm dlx prisma init --output ./generated-client ``` ```bash yarn dlx prisma init --output ./generated-client ``` ```bash bunx --bun prisma init --output ./generated-client ``` Specify a custom datasource URL [#specify-a-custom-datasource-url] Set up a project with a specific database URL: npm pnpm yarn bun ```bash npx prisma init --url mysql://user:password@localhost:3306/mydb ``` ```bash pnpm dlx prisma init --url mysql://user:password@localhost:3306/mydb ``` ```bash yarn dlx prisma init --url mysql://user:password@localhost:3306/mydb ``` ```bash bunx --bun prisma init --url mysql://user:password@localhost:3306/mydb ``` Add an example model [#add-an-example-model] Set up a project with an example `User` model: npm pnpm yarn bun ```bash npx prisma init --with-model ``` ```bash pnpm dlx prisma init --with-model ``` ```bash yarn dlx prisma 
init --with-model ``` ```bash bunx --bun prisma init --with-model ``` Provision a Prisma Postgres database [#provision-a-prisma-postgres-database] Create a new project with a managed Prisma Postgres database: npm pnpm yarn bun ```bash npx prisma init --db ``` ```bash pnpm dlx prisma init --db ``` ```bash yarn dlx prisma init --db ``` ```bash bunx --bun prisma init --db ``` This requires authentication with the [Prisma Data Platform Console](https://console.prisma.io). Generated files [#generated-files] After running `prisma init`, you'll have the following files: prisma/schema.prisma [#prismaschemaprisma] The Prisma schema file where you define your data model: ```prisma generator client { provider = "prisma-client" output = "../generated/prisma" } datasource db { provider = "postgresql" } ``` prisma.config.ts [#prismaconfigts] A TypeScript configuration file for Prisma: ```typescript import { defineConfig, env } from "prisma/config"; export default defineConfig({ schema: "prisma/schema.prisma", migrations: { path: "prisma/migrations", }, datasource: { url: env("DATABASE_URL"), }, }); ``` .env [#env] Environment variables file for your project: ```bash DATABASE_URL="postgresql://user:password@localhost:5432/mydb" ``` .gitignore [#gitignore] Git ignore file configured for Prisma projects: ```bash node_modules .env /generated/prisma ``` # mcp (/docs/cli/mcp) The `prisma mcp` command starts a Model Context Protocol (MCP) server that enables AI development tools to interact with your Prisma project. Usage [#usage] ```bash prisma mcp ``` Overview [#overview] MCP (Model Context Protocol) is a standard for AI tools to interact with development environments. 
The Prisma MCP server exposes your Prisma schema and database context to AI assistants, enabling them to: * Understand your data model * Generate queries and migrations * Provide context-aware suggestions See also [#see-also] * [Prisma MCP Server](/ai/tools/mcp-server) # studio (/docs/cli/studio) The `prisma studio` command starts a local web server with a web app to interactively browse and manage your data. Usage [#usage] ```bash prisma studio [options] ``` Supported databases Prisma Studio currently supports PostgreSQL, MySQL, and SQLite. Support for CockroachDB and MongoDB is not available yet but may be added in future releases. Prerequisites [#prerequisites] Configure your database connection in `prisma.config.ts`: ```prisma file=schema.prisma generator client { provider = "prisma-client" output = "../generated/prisma" } datasource db { provider = "sqlite" } ``` ```typescript file=prisma.config.ts import { defineConfig, env } from "prisma/config"; export default defineConfig({ schema: "prisma/schema.prisma", migrations: { path: "prisma/migrations", }, datasource: { url: env("DATABASE_URL"), }, }); ``` Options [#options] | Option | Description | Default | | ----------------- | ---------------------------------------------------- | -------------- | | `-h`, `--help` | Display help message | | | `-p`, `--port` | Port number to start Studio on | `5555` | | `-b`, `--browser` | Browser to auto-open Studio in | System default | | `--config` | Custom path to your Prisma config file | | | `--url` | Database connection string (overrides Prisma config) | | Examples [#examples] Start Studio on the default port [#start-studio-on-the-default-port] npm pnpm yarn bun ```bash npx prisma studio ``` ```bash pnpm dlx prisma studio ``` ```bash yarn dlx prisma studio ``` ```bash bunx --bun prisma studio ``` Start Studio on a custom port [#start-studio-on-a-custom-port] npm pnpm yarn bun ```bash npx prisma studio --port 7777 ``` ```bash pnpm dlx prisma studio --port 7777 ``` ```bash 
yarn dlx prisma studio --port 7777 ``` ```bash bunx --bun prisma studio --port 7777 ``` Start Studio in a specific browser [#start-studio-in-a-specific-browser] npm pnpm yarn bun ```bash npx prisma studio --browser firefox ``` ```bash pnpm dlx prisma studio --browser firefox ``` ```bash yarn dlx prisma studio --browser firefox ``` ```bash bunx --bun prisma studio --browser firefox ``` Or using the `BROWSER` environment variable: ```bash BROWSER=firefox prisma studio ``` Start Studio without opening a browser [#start-studio-without-opening-a-browser] npm pnpm yarn bun ```bash npx prisma studio --browser none ``` ```bash pnpm dlx prisma studio --browser none ``` ```bash yarn dlx prisma studio --browser none ``` ```bash bunx --bun prisma studio --browser none ``` Start Studio with a custom config file [#start-studio-with-a-custom-config-file] npm pnpm yarn bun ```bash npx prisma studio --config=./prisma.config.ts ``` ```bash pnpm dlx prisma studio --config=./prisma.config.ts ``` ```bash yarn dlx prisma studio --config=./prisma.config.ts ``` ```bash bunx --bun prisma studio --config=./prisma.config.ts ``` Start Studio with a direct database connection string [#start-studio-with-a-direct-database-connection-string] npm pnpm yarn bun ```bash npx prisma studio --url="postgresql://user:password@localhost:5432/dbname" ``` ```bash pnpm dlx prisma studio --url="postgresql://user:password@localhost:5432/dbname" ``` ```bash yarn dlx prisma studio --url="postgresql://user:password@localhost:5432/dbname" ``` ```bash bunx --bun prisma studio --url="postgresql://user:password@localhost:5432/dbname" ``` # validate (/docs/cli/validate) The `prisma validate` command validates the [Prisma Schema Language](/orm/prisma-schema/overview) of your Prisma schema file. 
Usage [#usage] ```bash prisma validate [options] ``` Options [#options] | Option | Description | | -------------- | -------------------------------------- | | `-h`, `--help` | Display help message | | `--config` | Custom path to your Prisma config file | | `--schema` | Custom path to your Prisma schema | Examples [#examples] Validate the default schema [#validate-the-default-schema] npm pnpm yarn bun ```bash npx prisma validate ``` ```bash pnpm dlx prisma validate ``` ```bash yarn dlx prisma validate ``` ```bash bunx --bun prisma validate ``` Output on success: ```text Environment variables loaded from .env Prisma schema loaded from prisma/schema.prisma The schema at /absolute/path/prisma/schema.prisma is valid ``` Validate a specific schema [#validate-a-specific-schema] npm pnpm yarn bun ```bash npx prisma validate --schema=./alternative/schema.prisma ``` ```bash pnpm dlx prisma validate --schema=./alternative/schema.prisma ``` ```bash yarn dlx prisma validate --schema=./alternative/schema.prisma ``` ```bash bunx --bun prisma validate --schema=./alternative/schema.prisma ``` Validate with a config file [#validate-with-a-config-file] npm pnpm yarn bun ```bash npx prisma validate --config=./prisma.config.ts ``` ```bash pnpm dlx prisma validate --config=./prisma.config.ts ``` ```bash yarn dlx prisma validate --config=./prisma.config.ts ``` ```bash bunx --bun prisma validate --config=./prisma.config.ts ``` Error output [#error-output] If the schema has validation errors: ```text Environment variables loaded from .env Prisma schema loaded from prisma/schema.prisma Error: Schema validation error - Error (query-engine-node-api library) Error code: P1012 error: The preview feature "unknownFeatureFlag" is not known. Expected one of: [...] 
schema.prisma:3 | 2 | provider = "prisma-client" 3 | previewFeatures = ["unknownFeatureFlag"] | Validation Error Count: 1 ``` # version (/docs/cli/version) The `prisma version` command outputs information about your current Prisma version, platform, and engine binaries. Usage [#usage] ```bash prisma version [options] ``` Or use the shorthand: ```bash prisma -v [options] ``` Options [#options] | Option | Description | | -------------- | ----------------------------------------- | | `-h`, `--help` | Display help message | | `--json` | Output version information in JSON format | Examples [#examples] Display version information [#display-version-information] npm pnpm yarn bun ```bash npx prisma version ``` ```bash pnpm dlx prisma version ``` ```bash yarn dlx prisma version ``` ```bash bunx --bun prisma version ``` Output: ```text Environment variables loaded from .env prisma : 2.21.0-dev.4 @prisma/client : 2.21.0-dev.4 Current platform : windows Query Engine : query-engine 2fb8f444d9cdf7c0beee7b041194b42d7a9ce1e6 Migration Engine : migration-engine-cli 2fb8f444d9cdf7c0beee7b041194b42d7a9ce1e6 Format Binary : prisma-fmt 60ba6551f29b17d7d6ce479e5733c70d9c00860e Default Engines Hash : 60ba6551f29b17d7d6ce479e5733c70d9c00860e Studio : 0.365.0 ``` Display version using shorthand [#display-version-using-shorthand] npm pnpm yarn bun ```bash npx prisma -v ``` ```bash pnpm dlx prisma -v ``` ```bash yarn dlx prisma -v ``` ```bash bunx --bun prisma -v ``` Display version as JSON [#display-version-as-json] npm pnpm yarn bun ```bash npx prisma version --json ``` ```bash pnpm dlx prisma version --json ``` ```bash yarn dlx prisma version --json ``` ```bash bunx --bun prisma version --json ``` Output: ```json { "prisma": "2.21.0-dev.4", "@prisma/client": "2.21.0-dev.4", "current-platform": "windows", "query-engine": "query-engine 60ba6551f29b17d7d6ce479e5733c70d9c00860e", "migration-engine": "migration-engine-cli 60ba6551f29b17d7d6ce479e5733c70d9c00860e", "format-binary": "prisma-fmt 
60ba6551f29b17d7d6ce479e5733c70d9c00860e", "default-engines-hash": "60ba6551f29b17d7d6ce479e5733c70d9c00860e", "studio": "0.365.0" } ``` # Concepts (/docs/console/concepts) The Console workflows are based on four main concepts: * [**User account**](#user-account): In order to use Prisma products, you need to have a Console user account. A *user* will typically create one user account to manage all their workspaces, projects and resources. The *user* can also be invited to join other workspaces to collaborate on the projects in that workspace. * [**Workspaces**](#workspace): A user account can belong to multiple workspaces. A workspace typically represents a *team* of individuals working together on one or more projects. **Billing is on a workspace level**, i.e. the invoice for a workspace at the end of the month captures all costs associated with the projects in that workspace. * [**Projects**](#project): A project belongs to a workspace. It typically represents the *application* or *service* a team is working on. * [**Resources**](#resources): Resources represent the actual services or databases within a project. For example, in Prisma Postgres, each project can contain multiple databases. For Accelerate, resources might correspond to different environments (like `Development`, `Staging`, or `Production`). **Connection strings are provisioned at the resource level**, and products are configured per resource as well (e.g., the database connection string used for Accelerate). Here is a visual illustration of how these concepts relate to each other: How the concepts of the Console (user account, workspaces, projects, and resources) relate to each other User account [#user-account] A user account is the prerequisite for any interactions with Prisma products. You can use it to manage your workspaces (and their projects). A user account can be invited to collaborate on workspaces created by other users as well. 
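The four concepts described above and their containment relationships can be sketched as a small TypeScript model (the types and sample data are illustrative only, not an actual Platform API):

```typescript
// Illustrative model of the Console hierarchy: a user account belongs to
// workspaces, a workspace contains projects, a project contains resources.
interface Resource { name: string; connectionString: string } // connection strings are provisioned per resource
interface Project { name: string; resources: Resource[] }
interface Workspace { name: string; members: string[]; projects: Project[] } // billing happens at this level
interface UserAccount { email: string; workspaces: Workspace[] }

const user: UserAccount = {
  email: "dev@example.com", // hypothetical user
  workspaces: [
    {
      name: "Acme Team",
      members: ["dev@example.com", "teammate@example.com"],
      projects: [
        {
          name: "Storefront",
          resources: [
            // e.g. a Prisma Postgres database or an Accelerate environment
            { name: "Production", connectionString: "prisma+postgres://..." },
          ],
        },
      ],
    },
  ],
};

// Connection strings live at the resource level, not the project or workspace level.
console.log(user.workspaces[0].projects[0].resources[0].name); // → "Production"
```

A second user invited into "Acme Team" would simply appear in that workspace's `members` list; the projects and resources stay shared at the workspace level.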
If you need to delete your user account, go [here](/console/more/support#deleting-your-pdp-account). Workspace [#workspace] You can create several workspaces. A workspace is an isolated space to host projects. A workspace can have multiple user accounts associated with it so that multiple users can collaborate on the projects in the workspace. In each workspace, you can: * view and manage all projects (and their resources) in that workspace. * manage billing, i.e. select a [subscription plan](https://www.prisma.io/pricing?utm_source=docs\&utm_medium=platform-docs), configure payment methods, or view the invoice history. * view the usage of your enabled Prisma products across all projects in that workspace. * invite other users to collaborate in the workspace. * access the [Optimize dashboard](https://console.prisma.io/optimize?utm_source=docs\&utm_medium=optimize-docs) to measure query performance and receive AI-powered recommendations. CLI commands [#cli-commands] List all workspaces: npm pnpm yarn bun ```bash npx prisma platform workspace show --early-access ``` ```bash pnpm dlx prisma platform workspace show --early-access ``` ```bash yarn dlx prisma platform workspace show --early-access ``` ```bash bunx --bun prisma platform workspace show --early-access ``` Project [#project] In each workspace, you can create several projects. A project typically represents an application (a product or service). You typically have one [Prisma schema](/orm/prisma-schema/overview) per project. In each project, you can: * view and manage all resources (like databases) in that project. The number of projects you can create in a workspace depends on the [subscription plan](https://www.prisma.io/pricing?utm_source=docs\&utm_medium=platform-docs) configured in that workspace. 
CLI commands [#cli-commands-1] List all projects in a workspace: npm pnpm yarn bun ```bash npx prisma platform project show --workspace $WORKSPACE_ID --early-access ``` ```bash pnpm dlx prisma platform project show --workspace $WORKSPACE_ID --early-access ``` ```bash yarn dlx prisma platform project show --workspace $WORKSPACE_ID --early-access ``` ```bash bunx --bun prisma platform project show --workspace $WORKSPACE_ID --early-access ``` Create a new project: npm pnpm yarn bun ```bash npx prisma platform project create --workspace $WORKSPACE_ID --name "My Project" --early-access ``` ```bash pnpm dlx prisma platform project create --workspace $WORKSPACE_ID --name "My Project" --early-access ``` ```bash yarn dlx prisma platform project create --workspace $WORKSPACE_ID --name "My Project" --early-access ``` ```bash bunx --bun prisma platform project create --workspace $WORKSPACE_ID --name "My Project" --early-access ``` Delete a project: npm pnpm yarn bun ```bash npx prisma platform project delete --project $PROJECT_ID --early-access ``` ```bash pnpm dlx prisma platform project delete --project $PROJECT_ID --early-access ``` ```bash yarn dlx prisma platform project delete --project $PROJECT_ID --early-access ``` ```bash bunx --bun prisma platform project delete --project $PROJECT_ID --early-access ``` Resources [#resources] Resources represent the actual services or databases within a project. The type of resources available depends on the Prisma products you're using: * **For Prisma Postgres**: Each project can contain multiple databases. These databases are the primary resources you'll manage. * **For Accelerate**: Resources typically correspond to different deployment stages (like `Development`, `Staging`, or `Production`). 
In each project, you can: * Create and manage multiple resources (databases or environments) * Generate connection strings specific to each resource * Configure product-specific settings: * **For Prisma Postgres databases**: * View database metrics and performance * Configure connection settings * Manage database users and permissions * **For Accelerate resources**: * Set your database connection string * Configure the region for connection pooling * Adjust connection pool size and performance settings * Set query duration and response size limits * Enable static IP for secure connections The number of resources you can create in a project depends on your [subscription plan](https://www.prisma.io/pricing?utm_source=docs\&utm_medium=platform-docs). CLI commands [#cli-commands-2] List all environments (resources) in a project: npm pnpm yarn bun ```bash npx prisma platform environment show --project $PROJECT_ID --early-access ``` ```bash pnpm dlx prisma platform environment show --project $PROJECT_ID --early-access ``` ```bash yarn dlx prisma platform environment show --project $PROJECT_ID --early-access ``` ```bash bunx --bun prisma platform environment show --project $PROJECT_ID --early-access ``` Create a new environment: npm pnpm yarn bun ```bash npx prisma platform environment create --project $PROJECT_ID --name "production" --early-access ``` ```bash pnpm dlx prisma platform environment create --project $PROJECT_ID --name "production" --early-access ``` ```bash yarn dlx prisma platform environment create --project $PROJECT_ID --name "production" --early-access ``` ```bash bunx --bun prisma platform environment create --project $PROJECT_ID --name "production" --early-access ``` Delete an environment: npm pnpm yarn bun ```bash npx prisma platform environment delete --environment $ENVIRONMENT_ID --early-access ``` ```bash pnpm dlx prisma platform environment delete --environment $ENVIRONMENT_ID --early-access ``` ```bash yarn dlx prisma platform environment delete 
--environment $ENVIRONMENT_ID --early-access ``` ```bash bunx --bun prisma platform environment delete --environment $ENVIRONMENT_ID --early-access ``` # Getting Started (/docs/console/getting-started) This guide walks you through setting up your Console account and creating your first project. Prerequisites [#prerequisites] * A GitHub account (for authentication) * A Prisma project (optional, but recommended) Step 1: Create your account [#step-1-create-your-account] 1. Go to [console.prisma.io/login](https://console.prisma.io/login) 2. Click **Sign in with GitHub** 3. Authorize Prisma Console to access your GitHub account You now have a Console account with a default workspace. Step 2: Set up a workspace [#step-2-set-up-a-workspace] When you create an account, a default workspace is automatically created for you. You can create additional workspaces for different teams or organizations. Create a workspace (optional) [#create-a-workspace-optional] To create an additional workspace: 1. Click the workspace dropdown in the top navigation 2. Click **Create Workspace** 3. Enter a name for your workspace 4. Click **Create** Using the CLI [#using-the-cli] List all workspaces: npm pnpm yarn bun ```bash npx prisma platform workspace show --early-access ``` ```bash pnpm dlx prisma platform workspace show --early-access ``` ```bash yarn dlx prisma platform workspace show --early-access ``` ```bash bunx --bun prisma platform workspace show --early-access ``` Step 3: Create a project [#step-3-create-a-project] Projects organize your databases and environments within a workspace. Using the Console web interface [#using-the-console-web-interface] 1. Navigate to your workspace 2. Click **Create Project** 3. Enter a project name 4. 
Click **Create** Using the CLI [#using-the-cli-1] npm pnpm yarn bun ```bash npx prisma platform project create --workspace $WORKSPACE_ID --name "My Project" --early-access ``` ```bash pnpm dlx prisma platform project create --workspace $WORKSPACE_ID --name "My Project" --early-access ``` ```bash yarn dlx prisma platform project create --workspace $WORKSPACE_ID --name "My Project" --early-access ``` ```bash bunx --bun prisma platform project create --workspace $WORKSPACE_ID --name "My Project" --early-access ``` Step 4: Create a resource [#step-4-create-a-resource] Resources are the actual databases or environments within your project. For Prisma Postgres [#for-prisma-postgres] 1. Navigate to your project 2. Click **Create Database** 3. Enter a database name 4. Select a region 5. Click **Create** For Accelerate [#for-accelerate] 1. Navigate to your project 2. Click **Create Environment** 3. Enter an environment name (e.g., "production") 4. Click **Create** Using the CLI [#using-the-cli-2] npm pnpm yarn bun ```bash npx prisma platform environment create --project $PROJECT_ID --name "production" --early-access ``` ```bash pnpm dlx prisma platform environment create --project $PROJECT_ID --name "production" --early-access ``` ```bash yarn dlx prisma platform environment create --project $PROJECT_ID --name "production" --early-access ``` ```bash bunx --bun prisma platform environment create --project $PROJECT_ID --name "production" --early-access ``` Step 5: Generate a connection string [#step-5-generate-a-connection-string] Connection strings authenticate your application's requests to Prisma products. Using the Console web interface [#using-the-console-web-interface-1] 1. Navigate to your resource (database or environment) 2. Click **Connection Strings** tab 3. Click **Create Connection String** 4. Enter a name for the connection string 5. Copy the connection string and store it securely 6. 
Click **Done** Using the CLI [#using-the-cli-3] npm pnpm yarn bun ```bash npx prisma platform apikey create --environment $ENVIRONMENT_ID --name "production-key" --early-access ``` ```bash pnpm dlx prisma platform apikey create --environment $ENVIRONMENT_ID --name "production-key" --early-access ``` ```bash yarn dlx prisma platform apikey create --environment $ENVIRONMENT_ID --name "production-key" --early-access ``` ```bash bunx --bun prisma platform apikey create --environment $ENVIRONMENT_ID --name "production-key" --early-access ``` Step 6: Use the connection string in your application [#step-6-use-the-connection-string-in-your-application] Add the connection string to your `.env` file: ```bash # For Accelerate DATABASE_URL="prisma://accelerate.prisma-data.net/?api_key=YOUR_API_KEY" # For Optimize OPTIMIZE_API_KEY="YOUR_API_KEY" ``` Next steps [#next-steps] * Learn more about [Console concepts](/console/concepts) * Explore [database metrics](/console/features/metrics) * Check out the [CLI reference](/cli/console) # Console (/docs/console) Overview [#overview] The [Console](https://console.prisma.io/login) enables you to manage and configure your projects that use Prisma products, and helps you integrate them into your application: * [Query Insights](/query-insights): Inspect slow queries, connect Prisma calls to SQL, and apply focused fixes. * [Prisma Postgres](/postgres): A managed PostgreSQL database that is optimized for Prisma ORM. Getting started [#getting-started] To start using Prisma products, you'll need to: 1. Create a Console account 2. Set up a workspace for your team 3. Create a project for your application 4. Generate connection strings for your resources Learn more in the [Getting Started](/console/getting-started) guide. 
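The connection string generated in step 4 is an ordinary URL whose `api_key` query parameter authenticates your application's requests. A minimal sketch using the standard `URL` API (the key value below is a made-up placeholder):

```typescript
// Sketch: an Accelerate-style connection string carries the API key as a
// query parameter. The key here is a made-up placeholder, not a real value.
const connectionString = "prisma://accelerate.prisma-data.net/?api_key=MY_PLACEHOLDER_KEY";

const url = new URL(connectionString);
const apiKey = url.searchParams.get("api_key");
// apiKey === "MY_PLACEHOLDER_KEY"
```

Because the key lives in the URL, treat the whole connection string as a secret: keep it in `.env`, not in source control.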
Core concepts [#core-concepts] The Console is organized around four main concepts: * **[User account](/console/concepts#user-account)**: Your personal account to manage workspaces and projects * **[Workspaces](/console/concepts#workspace)**: Team-level container where billing is managed * **[Projects](/console/concepts#project)**: Application-level container within a workspace * **[Resources](/console/concepts#resources)**: Actual services or databases within a project (databases for Prisma Postgres) Read more about [Console concepts](/console/concepts). Console CLI [#console-cli] In addition to the web interface, the Prisma CLI provides another way to interact with your Console account and manage Prisma products. This can be useful for programmatic access, such as integrating into CI workflows. Learn more about the [Console CLI commands](/cli/console). # Guides (/docs/guides) Welcome to the Guides section! Here you'll find practical, step-by-step guides to help you accomplish specific tasks with Prisma products, including Prisma ORM, Prisma Accelerate, Prisma Postgres, and more. Browse through our guides using the sidebar navigation or use the search to find specific topics. Getting started [#getting-started] * [Next.js](/guides/frameworks/nextjs) - Learn how to use Prisma ORM in a Next.js app and deploy it to Vercel * [Hono](/guides/frameworks/hono) - Learn how to use Prisma ORM in a Hono app * [SvelteKit](/guides/frameworks/sveltekit) - Learn how to use Prisma ORM in a SvelteKit app # Writing guides (/docs/guides/making-guides) Introduction [#introduction] This guide shows you how to write guides for Prisma ORM documentation. It covers the required structure, formatting, and style conventions to ensure consistency across all guides. You'll learn about frontmatter requirements, section organization, and writing style. 
Prerequisites [#prerequisites] Before writing a guide, make sure you have: * A clear understanding of the topic you're writing about * Access to the Prisma documentation repository * Familiarity with Markdown and MDX * Knowledge of the target audience for your guide Guide structure [#guide-structure] Required frontmatter [#required-frontmatter] Every guide must include the following frontmatter at the top of the file: ```mdx --- title: '[Descriptive title]' description: '[One-sentence summary of what the guide covers]' image: '/img/guides/[guide-name]-cover.png' --- ``` * `title`: A clear, descriptive title (e.g., "Next.js", "Multiple databases", "GitHub Actions") * `description`: A one-sentence summary that describes what you'll learn or accomplish * `image`: A unique header image for social media sharing (coordinate with the design team) All frontmatter fields should use sentence case. Required sections [#required-sections] 1. **Introduction** (H2: `##`) * Brief overview of what the guide covers * What the reader will learn/accomplish * Link to any example repositories or related resources on GitHub 2. **Prerequisites** (H2: `##`) * Required software/tools with version numbers (e.g., "Node.js 20+") * Required accounts (e.g., "A Prisma Data Platform account") * Keep it concise - only list what's truly necessary 3. **Main content sections** (H2: `##`) * Use numbered steps (e.g., "## 1. Set up your project", "## 2. Install and Configure Prisma") * Use numbered subsections (e.g., "### 2.1. Install dependencies", "### 2.2. Define your Prisma Schema") * Each step should build on previous steps * Include all commands and code snippets needed 4.
**Next steps** (H2: `##`) * What to do after completing the guide * Related guides or documentation (with links) * Additional resources Writing style and voice [#writing-style-and-voice] General principles [#general-principles] * Write in a clear, conversational tone * Use active voice and present tense * Address the reader directly using "you" (e.g., "You'll learn how to...") * Avoid jargon and explain technical terms when necessary * Be concise but thorough * Guide readers step-by-step through the process Code examples [#code-examples] * Include complete, runnable code examples * Use syntax highlighting with language specification * Include file paths in code block metadata using `title=` * Use ` ```bash title=".env" ` for `.env` files so inline `# [!code ++]`, `# [!code --]`, and `# [!code highlight]` annotations render correctly * Reserve ` ```text ` for other plain-text files that do not need Fumadocs code annotations * Use comments sparingly - only when needed to explain complex logic * Use ` ```npm ` for package manager commands (auto-converts to pnpm/yarn/bun) * Use ` ```bash ` for shell commands * Use ` ```typescript `, ` ```prisma `, ` ```json ` for respective languages Example with file path: ```typescript title="src/lib/prisma.ts" import { PrismaClient } from "../generated/prisma"; import { PrismaPg } from "@prisma/adapter-pg"; const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL!, }); const prisma = new PrismaClient({ adapter, }); export default prisma; ``` Example showing changes: ```typescript title="prisma.config.ts" import "dotenv/config"; // [!code ++] import { defineConfig, env } from "prisma/config"; export default defineConfig({ schema: "prisma/schema.prisma", migrations: { path: "prisma/migrations", }, datasource: { url: env("DATABASE_URL"), }, }); ``` Formatting conventions [#formatting-conventions] * Use backticks for inline code: * File names: ``
`schema.prisma` `` * Directory names: `` `prisma/` `` * Code elements: `` `PrismaClient` `` * Package manager commands: Use ` ```npm ` blocks (see [Package manager commands](#package-manager-commands)) * Use admonitions for important information: ```markdown :::info Context or background information ::: :::note Important details to remember ::: :::warning Critical information or gotchas ::: :::tip Helpful suggestions or best practices ::: ``` * Use proper heading hierarchy (never skip levels) * Use numbered sections (e.g., "## 1. Setup", "### 1.1. Install") * Link to other documentation pages using relative paths (e.g., `[Database drivers](/orm/core-concepts/supported-databases/database-drivers)`) Guide categories [#guide-categories] | Category | Directory | Description | Examples | | ------------------- | ------------------------------ | ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Framework** | `guides/frameworks/` | Integrate Prisma with frameworks | [Next.js](/guides/frameworks/nextjs), [NestJS](/guides/frameworks/nestjs), [SvelteKit](/guides/frameworks/sveltekit) | | **Deployment** | `guides/deployment/` | Deploy apps and set up monorepos | [Turborepo](/guides/deployment/turborepo), [Cloudflare Workers](/guides/deployment/cloudflare-workers) | | **Integration** | `guides/integrations/` | Use Prisma with platforms and tools | [GitHub Actions](/guides/integrations/github-actions), [Supabase](/guides/integrations/supabase-accelerate) | | **Database** | `guides/database/` | Database patterns and migrations | [Multiple databases](/guides/database/multiple-databases), [Data migration](/guides/database/data-migration) | | **Authentication** | `guides/authentication/` | Authentication patterns with Prisma | [Auth.js + Next.js](/guides/authentication/authjs/nextjs), [Better Auth + 
Next.js](/guides/authentication/better-auth/nextjs), [Clerk + Next.js](/guides/authentication/clerk/nextjs) | | **Prisma Postgres** | `guides/postgres/` | Prisma Postgres features | [Vercel](/guides/postgres/vercel), [Netlify](/guides/postgres/netlify), [Viewing data](/guides/postgres/viewing-data) | | **Migration** | `guides/switch-to-prisma-orm/` | Switch from other ORMs | [From Mongoose](/guides/switch-to-prisma-orm/from-mongoose), [From Drizzle](/guides/switch-to-prisma-orm/from-drizzle) | Common patterns [#common-patterns] Package manager commands [#package-manager-commands] Use ` ```npm ` code blocks for package manager commands. These automatically convert to other package managers (pnpm, yarn, bun) in the UI: npm pnpm yarn bun ```bash npm install prisma --save-dev ``` ```bash pnpm add prisma --save-dev ``` ```bash yarn add prisma --dev ``` ```bash bun add prisma --dev ``` Environment variables [#environment-variables] Show `.env` file examples using ` ```bash title=".env" ` blocks: ```bash title=".env" DATABASE_URL="postgresql://user:password@localhost:5432/mydb" ``` If you need to show changes in an `.env` file, use bash comments for the Fumadocs annotations: ```bash title=".env" DATABASE_URL="postgresql://user:password@localhost:5432/mydb" # [!code --] DATABASE_URL="postgresql://user:password@db.example.com:5432/mydb" # [!code ++] ``` Database provider compatibility [#database-provider-compatibility] Include an info admonition when commands or code are PostgreSQL-specific: ```markdown :::info If you are using a different database provider (MySQL, SQL Server, SQLite), install the corresponding driver adapter package instead of `@prisma/adapter-pg`. For more information, see [Database drivers](/orm/core-concepts/supported-databases/database-drivers). 
::: ``` Prisma Client instantiation [#prisma-client-instantiation] Show the standard pattern for creating a Prisma Client with database adapters: ```typescript title="lib/prisma.ts" import { PrismaClient } from "../generated/prisma"; import { PrismaPg } from "@prisma/adapter-pg"; const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL!, }); const prisma = new PrismaClient({ adapter, }); export default prisma; ``` Include a warning about connection pooling: ```markdown :::warning We recommend using a connection pooler (like [Prisma Accelerate](https://www.prisma.io/accelerate)) to manage database connections efficiently. ::: ``` Best practices [#best-practices] 1. **Keep it focused** * Each guide should cover one main topic * Break complex topics into multiple guides * Link to related guides instead of duplicating content 2. **Show, don't tell** * Include practical, real-world examples * Provide complete, working code samples * Explain why certain approaches are recommended 3. **Consider the context** * Explain prerequisites clearly * Don't assume prior knowledge * Link to foundational concepts within or outside of our docs when needed 4. **Maintain consistency** * Follow the established guide structure * Use consistent terminology * Match the style of existing guides 5. **Think about maintenance** * Use version numbers where appropriate * Avoid time-sensitive references * Consider future updates when structuring content Guide template [#guide-template] Use this template as a starting point for new guides. The template includes common sections and patterns used across Prisma guides. Basic template structure [#basic-template-structure] Copy this template for a new guide: ````markdown --- title: '[Your guide title]' description: "[One-sentence summary of what you'll learn]" image: '/img/guides/[guide-name]-cover.png' --- ## Introduction [Brief overview of what this guide covers and what you'll accomplish.
Include a link to an example repository if available.] ## Prerequisites - [Node.js 20+](https://nodejs.org) - [Any other prerequisites] ## 1. Set up your project [Instructions for creating or setting up the project] ```npm # Example command npx create-next-app@latest my-app cd my-app ``` ## 2. Install and Configure Prisma ### 2.1. Install dependencies To get started with Prisma, you'll need to install a few dependencies: ```npm npm install prisma tsx @types/pg --save-dev ``` ```npm npm install @prisma/client @prisma/adapter-pg dotenv pg ``` :::info If you are using a different database provider (MySQL, SQL Server, SQLite), install the corresponding driver adapter package instead of `@prisma/adapter-pg`. For more information, see [Database drivers](/orm/core-concepts/supported-databases/database-drivers). ::: Once installed, initialize Prisma in your project: ```npm npx prisma init --db --output ../generated/prisma ``` :::info You'll need to answer a few questions while setting up your Prisma Postgres database. Select the region closest to your location and a memorable name for your database. ::: This will create: - A `prisma` directory with a `schema.prisma` file - A Prisma Postgres database - A `.env` file containing the `DATABASE_URL` - A `prisma.config.ts` file for configuration ### 2.2. Define your Prisma Schema In the `prisma/schema.prisma` file, add your models: ```prisma title="prisma/schema.prisma" generator client { provider = "prisma-client" output = "../generated/prisma" } datasource db { provider = "postgresql" } model User { // [!code ++] id Int @id @default(autoincrement()) // [!code ++] email String @unique // [!code ++] name String? // [!code ++] posts Post[] // [!code ++] } // [!code ++] model Post { // [!code ++] id Int @id @default(autoincrement()) // [!code ++] title String // [!code ++] content String? 
// [!code ++] published Boolean @default(false) // [!code ++] authorId Int // [!code ++] author User @relation(fields: [authorId], references: [id]) // [!code ++] } // [!code ++] ``` ### 2.3. Run migrations and generate Prisma Client Create the database tables: ```npm npx prisma migrate dev --name init ``` Then generate Prisma Client: ```npm npx prisma generate ``` ## 3. [Integration-specific steps] [Add framework or platform-specific integration steps here] ## Next steps Now that you've completed this guide, you can: - [Suggestion 1] - [Suggestion 2] - [Related guide 1](/path/to/guide) - [Related guide 2](/path/to/guide) For more information: - [Prisma documentation](/orm) - [Related documentation] ```` Adding guides to navigation [#adding-guides-to-navigation] Guides are organized by category in subdirectories. To add a guide to the navigation, you need to update the appropriate `meta.json` file. Main categories [#main-categories] The main guide categories are listed in `meta.json`: ```json title="apps/docs/content/docs/guides/meta.json" { "title": "Guides", "root": true, "icon": "NotebookTabs", "pages": [ "index", "frameworks", "deployment", "authentication", "integrations", "postgres", "database", "switch-to-prisma-orm", "upgrade-prisma-orm" ] } ``` Adding a guide to a category [#adding-a-guide-to-a-category] To add a guide to a category (e.g., `frameworks`), edit the category's `meta.json` file: ```json title="apps/docs/content/docs/guides/frameworks/meta.json" { "title": "Frameworks", "defaultOpen": true, "pages": [ "nextjs", "astro", "nuxt", "your-new-guide" // [!code ++] ] } ``` The page name should match your `.mdx` filename without the extension. For example, if your file is `your-new-guide.mdx`, add `"your-new-guide"` to the `pages` array. 
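Because an entry in `pages` must match an `.mdx` filename exactly, a small consistency check can catch typos before they break navigation. This is a hypothetical helper, not part of the actual docs tooling:

```typescript
// Hypothetical helper: report entries in a meta.json "pages" array that have
// no matching .mdx file. Not part of the real Prisma docs build.
function missingPages(pages: string[], mdxFiles: string[]): string[] {
  // Strip the .mdx extension so filenames compare against page names.
  const names = new Set(mdxFiles.map((f) => f.replace(/\.mdx$/, "")));
  return pages.filter((p) => !names.has(p));
}

// Example: "your-new-guide" was added to meta.json but the file is missing.
const result = missingPages(
  ["nextjs", "astro", "your-new-guide"],
  ["nextjs.mdx", "astro.mdx"],
);
// result: ["your-new-guide"]
```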
Next steps [#next-steps] After reading this guide, you can: * Start writing your own guide using the provided template * Review existing guides in the category you're contributing to * Coordinate with the design team for a unique header image * Submit your guide for review # Using API Clients (/docs/management-api/api-clients) This guide shows you how to configure popular API clients to work with the Management API using OAuth 2.0 authentication. Postman [#postman] Postman is a popular API client with testing, collaboration, and automation features for working with REST APIs. Prerequisites [#prerequisites] Before you begin, make sure you have: * A [Prisma Console account](https://console.prisma.io) * [Postman installed](https://www.postman.com/downloads/) 1. Create an OAuth2 Application [#1-create-an-oauth2-application] First, you'll need to register an OAuth2 application in Prisma Console: 1. Navigate to [Prisma Console](https://console.prisma.io) and log in 2. Click the **🧩 Integrations** tab in the left sidebar 3. Under the "Published Applications" section, click **New Application** 4. Fill in your application details: * **Name**: Postman API Client * **Description**: Brief description of your application *(Optional)* * **Redirect URI**: `https://oauth.pstmn.io/v1/callback` 5. Click **Continue** 6. **Important**: Copy your Client ID and Client Secret immediately and store them securely The redirect URI `https://oauth.pstmn.io/v1/callback` is Postman's default callback URL when using the "Authorize using browser" option. 2. Configure OAuth 2.0 in Postman [#2-configure-oauth-20-in-postman] Now you'll set up authentication in Postman: 1. Open Postman and create a new HTTP request 2. Set the request method to **POST** 3. Set the URL to `https://api.prisma.io/v1/projects` 4. Navigate to the **Authorization** tab 5. Set **Auth Type** to **OAuth 2.0** 6. 
Under **Configure New Token**, enter the following values: | Parameter | Value | | -------------------- | ------------------------------------ | | Token Name | Management API Token | | Grant Type | Authorization Code | | Callback URL | `https://oauth.pstmn.io/v1/callback` | | Authorize in Browser | `true` *(checked)* | | Auth URL | `https://auth.prisma.io/authorize` | | Access Token URL | `https://auth.prisma.io/token` | | Client ID | `your-client-id` | | Client Secret | `your-client-secret` | | Scope | `workspace:admin` | 7. Click **Get New Access Token** 8. A browser window will open, prompting you to complete the authorization flow 9. Return to Postman and click **Use Token** to attach it to your request 10. Verify that your new token appears under **Current Token** at the top of the Authorization tab 3. Make your first request [#3-make-your-first-request] With authentication configured, you can now create a project: 1. In the request body, select **raw** and **JSON** format 2. Add the following JSON payload: ```json { "name": "My Postman Database", "region": "us-east-1" } ``` 3. Click **Send** You should receive a successful response confirming your project creation. Insomnia [#insomnia] Insomnia is an open-source API client with a clean interface for testing and debugging HTTP requests. Prerequisites [#prerequisites-1] Before you begin, make sure you have: * A [Prisma Console account](https://console.prisma.io) * [Insomnia installed](https://insomnia.rest/download/) 1. Create an OAuth2 Application [#1-create-an-oauth2-application-1] First, you'll need to register an OAuth2 application in Prisma Console: 1. Navigate to [Prisma Console](https://console.prisma.io) and log in 2. Click the **🧩 Integrations** tab in the left sidebar 3. Under the "Published Applications" section, click **New Application** 4.
Fill in your application details: * **Name**: Insomnia API Client * **Description**: Brief description of your application *(Optional)* * **Redirect URI**: `https://app.insomnia.rest/oauth/redirect` 5. Click **Continue** 6. **Important**: Copy your Client ID and Client Secret immediately and store them securely Insomnia uses `https://app.insomnia.rest/oauth/redirect` as the default OAuth callback URL for local authentication flows. 2. Configure OAuth 2.0 in Insomnia [#2-configure-oauth-20-in-insomnia] Now you'll set up authentication in Insomnia: 1. Open Insomnia and create a new HTTP request 2. Set the request method to **POST** 3. Set the URL to `https://api.prisma.io/v1/projects` 4. Navigate to the **Auth** tab 5. Set the authentication type to **OAuth 2.0** 6. Under **Configuration**, enter the following values: | Parameter | Value | | -------------------------------- | ------------------------------------------ | | Grant Type | Authorization Code | | Authorization URL | `https://auth.prisma.io/authorize` | | Access Token URL | `https://auth.prisma.io/token` | | Client ID | `your-client-id` | | Client Secret | `your-client-secret` | | Redirect URL | `https://app.insomnia.rest/oauth/redirect` | | Scope *(Under Advanced Options)* | `workspace:admin` | 7. Click **Fetch Tokens** 8. A browser window will open, prompting you to complete the authorization flow 9. Return to Insomnia and verify that the access token has been retrieved 10. The token will be automatically attached to your requests 3. Make your first request [#3-make-your-first-request-1] With authentication configured, you can now create a project: 1. Navigate to the **Body** tab and select **JSON** format 2. Add the following JSON payload: ```json { "name": "My Insomnia Database", "region": "us-east-1" } ``` 3. Click **Send** You should receive a successful response confirming your project creation. Yaak [#yaak] Yaak is a lightweight, open-source, and offline API client that works with Git.
Prerequisites [#prerequisites-2] Before you begin, make sure you have: * A [Prisma Console account](https://console.prisma.io) * [Yaak installed](https://yaak.app) 1. Create an OAuth2 Application [#1-create-an-oauth2-application-2] First, you'll need to register an OAuth2 application in Prisma Console: 1. Navigate to [Prisma Console](https://console.prisma.io) and log in 2. Click the **🧩 Integrations** tab in the left sidebar 3. Under the "Published Applications" section, click **New Application** 4. Fill in your application details: * **Name**: Yaak API Client * **Description**: Brief description of your application *(Optional)* * **Redirect URI**: `https://devnull.yaak.app/callback` 5. Click **Continue** 6. **Important**: Copy your Client ID and Client Secret immediately and store them securely The redirect URI can be any valid URL. Yaak intercepts the OAuth callback regardless of the redirect URI, as long as it matches what's registered with the provider. 2. Configure OAuth 2.0 in Yaak [#2-configure-oauth-20-in-yaak] Now you'll set up authentication in Yaak: 1. Open Yaak and create a new HTTP request 2. Set the request method to **POST** 3. Set the URL to `https://api.prisma.io/v1/projects` 4. Navigate to the **Auth** tab 5. Set the authentication type to **OAuth 2.0** 6. Enter the following values: | Parameter | Value | | ----------------- | ----------------------------------- | | Grant Type | Authorization Code | | Authorization URL | `https://auth.prisma.io/authorize` | | Token URL | `https://auth.prisma.io/token` | | Client ID | `your-client-id` | | Client Secret | `your-client-secret` | | Redirect URL | `https://devnull.yaak.app/callback` | | Scope | `workspace:admin` | 7. Click **Get Token** 8. A browser window will open, prompting you to complete the authorization flow 9. Return to Yaak and verify that the access token has been retrieved 10. The token will be automatically attached to your requests 3.
Make your first request [#3-make-your-first-request-2] With authentication configured, you can now create a project: 1. Navigate to the **Body** tab and select **JSON** format 2. Add the following JSON payload: ```json { "name": "My Yaak Database", "region": "us-east-1" } ``` 3. Click **Send** You should receive a successful response confirming your project creation. # Authentication (/docs/management-api/authentication) The Management API supports two authentication methods: * **Service Tokens** - Simple bearer tokens for server-to-server integrations * **OAuth 2.0** - For user-facing applications requiring user consent Service tokens [#service-tokens] Service tokens are the simplest way to authenticate. They're ideal for scripts, CI/CD pipelines, and backend services. Creating a Service token [#creating-a-service-token] 1. Navigate to [Prisma Console](https://console.prisma.io) and log in 2. Select your workspace 3. Go to **Settings → Service Tokens** 4. Click **New Service Token** 5. Copy the generated token immediately and store it securely Using a Service token [#using-a-service-token] Include the token in the `Authorization` header: ```bash curl -X GET "https://api.prisma.io/v1/workspaces" \ -H "Authorization: Bearer your-service-token" ``` Or with the SDK: ```typescript import { createManagementApiClient } from "@prisma/management-api-sdk"; const client = createManagementApiClient({ token: "your-service-token", }); ``` Service tokens never expire Service tokens do not have an expiration date. While this provides convenience for long-running integrations, it also means these tokens require careful security management. OAuth 2.0 [#oauth-20] OAuth 2.0 is required for applications that act on behalf of users. The API uses OAuth 2.0 with PKCE for secure authentication. 
PKCE Support [#pkce-support] The OAuth implementation supports Proof Key for Code Exchange (PKCE) using the S256 code challenge method: * **Public clients** (no client secret): PKCE is **mandatory** * **Confidential clients** (with client secret): PKCE is **optional**, but if you start the flow with PKCE, it must be completed with PKCE This provides enhanced security, especially for mobile and single-page applications that cannot securely store client secrets. Creating an OAuth Application [#creating-an-oauth-application] 1. Navigate to [Prisma Console](https://console.prisma.io) and log in 2. Click the **Integrations** tab in the left sidebar 3. Under "Published Applications", click **New Application** 4. Fill in your application details: * **Name**: Your application name * **Description**: Brief description *(optional)* * **Redirect URI**: Your callback URL (e.g., `https://your-app.com/auth/callback`) 5. Click **Continue** 6. Copy your **Client ID** and **Client Secret** immediately Development redirect URIs For local development, the following redirect URIs are accepted with any port via wildcard matching: * `localhost` (e.g., `http://localhost:3000/callback`) * `127.0.0.1` (e.g., `http://127.0.0.1:3000/callback`) * `[::1]` - IPv6 loopback (e.g., `http://[::1]:3000/callback`) OAuth Endpoints [#oauth-endpoints] | Endpoint | URL | | ------------- | --------------------------------------------------------------- | | Authorization | `https://auth.prisma.io/authorize` | | Token | `https://auth.prisma.io/token` | | Discovery | `https://auth.prisma.io/.well-known/oauth-authorization-server` | The discovery endpoint provides OAuth server metadata that can be used for automatic client configuration. Many OAuth libraries support automatic discovery using this endpoint. 
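To make the S256 method from the PKCE section above concrete: per RFC 7636, the code challenge is the base64url-encoded SHA-256 hash of a random code verifier. A minimal Node.js sketch; the helper names here are our own illustrations, not part of any Prisma API:

```typescript
import { createHash, randomBytes } from "node:crypto";

// A random, high-entropy code verifier (base64url keeps it URL-safe).
function generateVerifier(): string {
  return randomBytes(32).toString("base64url"); // 43 characters
}

// S256: code_challenge = base64url(SHA-256(code_verifier))
function deriveChallenge(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}

const verifier = generateVerifier();
const challenge = deriveChallenge(verifier);
// Send `code_challenge` and `code_challenge_method=S256` on the /authorize
// request, then present the original `code_verifier` on the /token exchange.
```

The authorization server recomputes the hash from the verifier presented at the token endpoint and compares it to the challenge it saw earlier, so intercepting the authorization code alone is not enough to obtain a token.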
Available Scopes [#available-scopes] | Scope | Description | | ----------------- | ---------------------------------------------- | | `workspace:admin` | Full access to workspace resources | | `offline_access` | Enables refresh tokens for long-lived sessions | Token Lifetimes [#token-lifetimes] | Token Type | Expiration | | -------------- | ---------- | | Access tokens | 1 hour | | Refresh tokens | 90 days | OAuth Authorization Flow [#oauth-authorization-flow] 1. Redirect users to authorize [#1-redirect-users-to-authorize] Redirect users to the authorization endpoint with the following query parameters: | Parameter | Description | | --------------- | ------------------------------------------------------------------- | | `client_id` | Your OAuth application's Client ID | | `redirect_uri` | The callback URL where users will be redirected after authorization | | `response_type` | Must be `code` for the authorization code flow | | `scope` | Permissions to request (e.g., `workspace:admin`) | ``` https://auth.prisma.io/authorize?client_id=$CLIENT_ID&redirect_uri=$REDIRECT_URI&response_type=code&scope=workspace:admin ``` This will redirect the user to the Prisma authorization page where they can grant your application access to their workspace. 2. Receive the authorization code [#2-receive-the-authorization-code] After authorization, users are redirected to your callback URL with a `code` parameter: ``` https://your-app.com/callback?code=abc123... ``` 3. 
Exchange the code for an access token [#3-exchange-the-code-for-an-access-token] ```bash curl -X POST https://auth.prisma.io/token \ -H "Content-Type: application/x-www-form-urlencoded" \ -d "client_id=$CLIENT_ID" \ -d "client_secret=$CLIENT_SECRET" \ -d "code=$CODE" \ -d "grant_type=authorization_code" \ -d "redirect_uri=$REDIRECT_URI" ``` The response will include an access token that can be used to make authenticated requests to the Management API: ```json { "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...", "token_type": "Bearer", "expires_in": 3600 } ``` 4. Use the access token [#4-use-the-access-token] ```bash curl -X GET "https://api.prisma.io/v1/workspaces" \ -H "Authorization: Bearer $ACCESS_TOKEN" ``` Token Refresh [#token-refresh] If you requested the `offline_access` scope, you'll receive a refresh token. Use it to obtain new access tokens: ```bash curl -X POST https://auth.prisma.io/token \ -H "Content-Type: application/x-www-form-urlencoded" \ -d "client_id=$CLIENT_ID" \ -d "client_secret=$CLIENT_SECRET" \ -d "refresh_token=$REFRESH_TOKEN" \ -d "grant_type=refresh_token" ``` Refresh token rotation Refresh tokens use single-use rotation with replay attack detection. When you exchange a refresh token for a new access token, you'll receive a new refresh token in the response. The old refresh token is immediately invalidated. If an invalidated refresh token is used again, it indicates a potential security breach, and the system will revoke all tokens associated with that authorization. Using OAuth with the SDK [#using-oauth-with-the-sdk] The SDK handles the OAuth flow automatically. See the [SDK documentation](/management-api/sdk#oauth-authentication-flow) for implementation details. Using API Clients [#using-api-clients] You can also authenticate using popular API clients like Postman, Insomnia, or Yaak. See the [Using API Clients](/management-api/api-clients) guide for step-by-step instructions. 
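As a companion to the curl-based refresh example in the Token Refresh section above, here is a hedged sketch of building the same request from Node.js with `URLSearchParams`; the helper name and credential values are placeholders, not part of a Prisma SDK:

```typescript
// Builds the same refresh request as the curl example above.
// All argument values are placeholders you must supply yourself.
function buildRefreshRequest(clientId: string, clientSecret: string, refreshToken: string) {
  return {
    url: "https://auth.prisma.io/token",
    init: {
      method: "POST" as const,
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        client_id: clientId,
        client_secret: clientSecret,
        refresh_token: refreshToken,
        grant_type: "refresh_token",
      }).toString(),
    },
  };
}

// const { url, init } = buildRefreshRequest(id, secret, token);
// const res = await fetch(url, init);
// Because of refresh token rotation, persist BOTH the new access token and
// the new refresh token from the response; the old refresh token is now invalid.
```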
# Getting Started (/docs/management-api/getting-started) This guide walks you through setting up a basic TypeScript project that uses the Management API to create a new Prisma Console project with a Prisma Postgres database, and print out all connection details. You'll authenticate via a service token, set up your environment, and run a script to interact with the API. Prerequisites [#prerequisites] * Node.js and `npm` installed * A [Prisma Data Platform](https://console.prisma.io/) account 1. Create a service token in Prisma Console [#1-create-a-service-token-in-prisma-console] First, you need to create a service token to be able to access the Management API: 1. Open the [Prisma Console](https://console.prisma.io/) 2. Navigate to the **Settings** page of your workspace and select **Service Tokens** 3. Click **New Service Token** 4. Copy and save the generated service token securely; you'll use it in step 2.2. 2. Set up your project directory [#2-set-up-your-project-directory] 2.1. Create a basic TypeScript project [#21-create-a-basic-typescript-project] Open your terminal and run the following commands: ```bash mkdir management-api-demo cd management-api-demo ``` Next, initialize npm and install dependencies required for using TypeScript: npm pnpm yarn bun ```bash npm init -y npm install tsx typescript @types/node --save-dev touch index.ts ``` ```bash pnpm init -y pnpm add tsx typescript @types/node --save-dev touch index.ts ``` ```bash yarn init -y yarn add tsx typescript @types/node --dev touch index.ts ``` ```bash bun init -y bun add tsx typescript @types/node --dev touch index.ts ``` You now have an `index.ts` file that you can execute with `npx tsx index.ts`. It's still empty; you'll start writing code in step 3. 2.2. 
Configure service token environment variable [#22-configure-service-token-environment-variable] Create your `.env` file: ```bash touch .env ``` Next, install the [`dotenv`](https://github.com/motdotla/dotenv) library for loading environment variables from the `.env` file: npm pnpm yarn bun ```bash npm install dotenv ``` ```bash pnpm add dotenv ``` ```bash yarn add dotenv ``` ```bash bun add dotenv ``` Finally, add your service token (from step 1) to `.env`: ```bash PRISMA_SERVICE_TOKEN="ey..." ``` 2.3. Install the axios library for HTTP requests [#23-install-the-axios-library-for-http-request] You're going to use [`axios`](https://github.com/axios/axios/tree/main) as your HTTP client to interact with the Management API. Install it as follows: npm pnpm yarn bun ```bash npm install axios ``` ```bash pnpm add axios ``` ```bash yarn add axios ``` ```bash bun add axios ``` You're all set; let's write some code to create a project and provision a Prisma Postgres database! 3. Programmatically create a new project with a database [#3-programmatically-create-a-new-project-with-a-database] Paste the following code into `index.ts`: ```ts import axios from "axios"; import dotenv from "dotenv"; // Load environment variables dotenv.config(); const API_URL = "https://api.prisma.io/v1"; const SERVICE_TOKEN = process.env.PRISMA_SERVICE_TOKEN; if (!SERVICE_TOKEN) { throw new Error("PRISMA_SERVICE_TOKEN is not set in the environment"); } // Set HTTP headers to be used in this script const headers = { Authorization: `Bearer ${SERVICE_TOKEN}`, "Content-Type": "application/json", }; async function main() { // Create a new project in your Prisma Console workspace const projectName = `demo-project-${Date.now()}`; const region = "us-east-1"; const createProjectRes = await axios.post( `${API_URL}/projects`, { name: projectName, region }, { headers }, ); const project = createProjectRes.data; console.log("Created project: \n", project); // Log the database details const apiKeys = 
project.databases[0].apiKeys || []; for (const key of apiKeys) { console.log(`\nDatabase details`); console.log(`- ID: ${key.id}`); console.log(`- Created at: ${key.createdAt}`); console.log(`- API key: ${key.apiKey}`); console.log(`- Prisma Postgres connection string: ${key.connectionString}`); if (key.ppgDirectConnection) { console.log(`- Direct TCP connection: ${key.ppgDirectConnection.host}`); console.log(` - Host: ${key.ppgDirectConnection.host}`); console.log(` - Username: ${key.ppgDirectConnection.user}`); console.log(` - Password: ${key.ppgDirectConnection.pass}`); } } } main().catch((e) => { console.error(e.response?.data || e); process.exit(1); }); ``` You can run your script with the following command: npm pnpm yarn bun ```bash npx tsx index.ts ``` ```bash pnpm dlx tsx index.ts ``` ```bash yarn dlx tsx index.ts ``` ```bash bunx --bun tsx index.ts ``` ```text no-copy Created project: { createdAt: '2025-07-09T11:52:15.341Z', id: 'cmcvwftgs00v5zq0vh3kp7pms', name: 'demo-project-1752061932800', databases: [ { createdAt: '2025-07-09T11:52:15.341Z', id: 'cmcvwftgs00v1zq0v0qrtrg8t', name: 'demo-project-1752061932800', connectionString: 'prisma+postgres://accelerate.prisma-data.net/?api_key=', region: 'us-east-1', status: 'ready', apiKeys: [Array], isDefault: true } ] } Database details - ID: cmcvwftgs00v2zq0vj3v0104j - Created at: 2025-07-09T11:52:15.341Z - API key: ey... - Prisma Postgres connection string: prisma+postgres://accelerate.prisma-data.net/?api_key=ey... - Direct TCP connection: db.prisma.io:5432 - Host: db.prisma.io:5432 - Username: - Password: ``` Your output of the command should look similar to the output above. Conclusion [#conclusion] You have now set up a TypeScript project that interacts with the Management API, creates a new project and database, and prints out all connection strings. You can extend this script to manage more resources or automate other tasks using the Management API. 
# Management API (/docs/management-api) Base URL [#base-url] ``` https://api.prisma.io/v1 ``` Append an endpoint path to construct the full URL. For example: `https://api.prisma.io/v1/projects/{projectId}` An interactive [OpenAPI 3.1 specification](https://api.prisma.io/v1/swagger-editor) is available for exploring endpoints and request/response formats. Getting Started [#getting-started] * **[Getting Started](/management-api/getting-started)** - Create your first project and database * **[Authentication](/management-api/authentication)** - OAuth 2.0 and service tokens setup * **[SDK](/management-api/sdk)** - TypeScript SDK with built-in OAuth and automatic token refresh * **[Using API Clients](/management-api/api-clients)** - Use with Postman, Insomnia, and Yaak * **[Partner Integration](/management-api/partner-integration)** - Build integrations that provision and transfer databases # Partner Integration (/docs/management-api/partner-integration) This guide walks you through building a partner integration with the Management API to power experiences like the [`npx create-db`](https://create-db.prisma.io/) command. You'll learn how to provision a Prisma Postgres database on your workspace as a partner, and how to transfer it to another user's workspace so they can "claim" the database. We'll cover how the process is secured using OAuth2, and by the end, you'll understand the full flow and how to integrate it into your own product experience. This guide references the actual implementation in the `npx create-db` CLI and Cloudflare Workers as real world examples. The repo for the `npx create-db` is [here](https://github.com/prisma/create-db), which can be used as a reference for how to use the Management API in your own projects. How does this fit into your app? The two Cloudflare Workers in this guide are just reference examples. You would typically build this logic into your own backend or serverless functions. Similarly, the `npx create-db` CLI is a simple demo. 
In your product, you can trigger the same API calls from your own UI or onboarding flows to create a seamless experience for your users. Core concepts [#core-concepts] Before diving into implementation, let's clarify the main concepts involved in the Management API integration: * **Management API**: A set of endpoints that allow you to programmatically provision and manage Prisma Postgres databases. * **Projects vs Databases**: A project is a container that can hold multiple databases. You can use this to organize databases you create e.g. by user. Projects can then be transferred to users, including all databases they contain. * **Authentication**: All API requests require authentication. As a partner, you authenticate provisioning calls with a service token for your workspace, and use OAuth 2 to obtain an access token for the user during the claim flow. * **Tokens**: There are two main types of tokens: * **Service token**: Issued to your partner integration, scoped to provision and manage databases on your own workspace. * **OAuth 2 access token**: Obtained via OAuth 2 when a user authenticates with your app; it is scoped to the user's workspace and used to transfer project/database ownership to that workspace. How to become a partner [#how-to-become-a-partner] To use the Prisma Postgres Management API, you first need to set up as a partner: 1. **Request access to the Management API**: Contact the Prisma team from the [Prisma Partners page](https://www.prisma.io/partners) to request access to the Management API. You will be guided through the onboarding process. 2. **Obtain OAuth credentials**: You can obtain your OAuth credentials in the [Prisma Console](https://console.prisma.io). See the [next section](#get-oauth-credentials) for details. For a complete list of available endpoints and details on request/response formats, see the [Prisma Management API documentation](/management-api). 
Get OAuth credentials [#get-oauth-credentials] To obtain a client ID and client secret, you need to go through this flow: 1. Open the [Prisma Console](https://console.prisma.io). 2. Click the 🧩 **Integrations** tab in the sidenav. 3. In the **Published Applications** section, click the **New Application** button to start the flow for creating a new OAuth app. 4. Enter a **Name**, **Description** and **Callback URL** for your OAuth app. 5. Click **Continue**. On the next screen, you can access and save the client ID and client secret for your OAuth app. Provisioning a database as a Partner [#provisioning-a-database-as-a-partner] To provision a new Prisma Postgres database for your users as a partner, follow these steps: 1. **Gather required information**: Prepare the necessary details for provisioning, such as region, database name, and any other options your application requires. This information may come from user input or be determined by your application logic. 2. **Authenticate your integration**: Use your service token to authenticate API requests from your backend. This token authenticates your app as an approved partner. 3. **Send a database provisioning request**: Make a `POST` request to the Management API endpoint to create a new project with a default database. For example: ```ts const prismaResponse = await fetch("https://api.prisma.io/v1/projects", { method: "POST", headers: { "Content-Type": "application/json", Authorization: `Bearer ${PRISMA_SERVICE_TOKEN}`, }, body: JSON.stringify({ region, name }), }); ``` 4. **Handle the response**: If successful, the API will return the new project's details, including database connection strings and a `project_id`. Store these securely and display them to your user as needed. 5. **(Optional) Store project metadata**: You may want to associate the `project_id` with your user in your own database for future reference. 
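To make step 4 concrete, here is a hedged sketch of extracting the fields you will typically need from the provisioning response. The shape mirrors the example output in the Getting Started guide; the type, helper name, and sample values below are illustrative assumptions, not real identifiers:

```typescript
// Minimal slice of the response shape shown in the Getting Started guide.
type ProvisionResponse = {
  id: string;
  databases: { id: string; connectionString: string; isDefault: boolean }[];
};

// Prefer the default database created alongside the project.
function extractDetails(project: ProvisionResponse) {
  const db = project.databases.find((d) => d.isDefault) ?? project.databases[0];
  return { projectId: project.id, connectionString: db?.connectionString };
}

// Illustrative sample only:
const sample: ProvisionResponse = {
  id: "proj_123",
  databases: [
    {
      id: "db_456",
      connectionString: "prisma+postgres://accelerate.prisma-data.net/?api_key=...",
      isDefault: true,
    },
  ],
};
console.log(extractDetails(sample).projectId); // → "proj_123"
```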
Database claim flow [#database-claim-flow] Once a database is provisioned, you may want to transfer ownership to your user at a later point so they can manage it in their own Prisma workspace and go beyond the free database usage limits. This is done via the claim flow, which consists of three main steps: Overview: How the claim flow works [#overview-how-the-claim-flow-works] When a user wants to claim a database, your app will: 1. Trigger the OAuth2 flow, redirecting the user to Prisma Auth. This is necessary so that your app has permission to transfer the database into the user's workspace. 2. The user authenticates and selects a workspace. 3. Your backend receives an authorization code, exchanges it for a user access token, and calls the Management API transfer endpoint with both your integration token and the user's token. This ensures the transfer is secure and only the intended user can claim the database. 1. Triggering the claim flow [#1-triggering-the-claim-flow] When your user wants to take ownership of a database you provisioned for them, they need to transfer it to their own Prisma Postgres workspace. This gives them full control over it. To initiate this process, provide a button or link in your app (e.g., "Claim Database" or "Transfer to My Workspace"). When clicked, your backend should: * Generate a secure `state` value to track the session and prevent CSRF attacks. * Construct an OAuth2 authorization URL with your client ID, redirect URI, and required scopes. * Redirect the user to this URL to begin the authentication flow. Example: ```ts const authParams = new URLSearchParams({ client_id: YOUR_CLIENT_ID, redirect_uri: "https://your-app.com/auth/callback", // Your callback endpoint response_type: "code", scope: "workspace:admin", // The scope of the OAuth2 authorization state: generateState(), // Securely track the session }); const authUrl = `https://auth.prisma.io/authorize?${authParams.toString()}`; // Redirect the user to authUrl ``` 2. 
Authenticating the user [#2-authenticating-the-user] The user will be prompted to log in (if not already authenticated) and select the workspace where they want to claim the database. After successful authentication and workspace selection, Prisma Auth will redirect back to your callback endpoint with a `code` and `state` (and, in some cases, a `project_id`). 3. Finishing the claim flow [#3-finishing-the-claim-flow] Your backend should now: 1. **Exchange the authorization code for a user access token**: ```ts const tokenResponse = await fetch("https://auth.prisma.io/token", { method: "POST", headers: { "Content-Type": "application/x-www-form-urlencoded" }, body: new URLSearchParams({ grant_type: "authorization_code", code: code, // The code received from the callback redirect_uri: "https://your-app.com/auth/callback", // Must match the redirect_uri used in step 1 client_id: YOUR_CLIENT_ID, client_secret: YOUR_CLIENT_SECRET, }).toString(), }); const tokenData = await tokenResponse.json(); ``` 2. **Call the Management API transfer endpoint** to move the project to the selected workspace. You will need the `project_id` and the user's access token: ```ts const transferResponse = await fetch(`https://api.prisma.io/v1/projects/${project_id}/transfer`, { method: "POST", headers: { "Content-Type": "application/json", Authorization: `Bearer ${PRISMA_SERVICE_TOKEN}`, }, body: JSON.stringify({ recipientAccessToken: tokenData.access_token }), }); ``` If the transfer is successful, the database is now owned by the user's workspace. 
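Before exchanging the code as shown above, your callback handler should verify the `state` parameter against the value generated in step 1. A hedged sketch; `generateState` and `validateCallback` are our own helper names for illustration, not part of a Prisma SDK:

```typescript
import { randomBytes } from "node:crypto";

// One way to implement the generateState() helper from step 1: a random,
// URL-safe value you persist (e.g., in an encrypted session cookie).
function generateState(): string {
  return randomBytes(16).toString("base64url");
}

// Validate the callback URL and return the authorization code.
function validateCallback(callbackUrl: string, storedState: string): string {
  const params = new URL(callbackUrl).searchParams;
  const error = params.get("error");
  if (error) throw new Error(`Authorization failed: ${error}`);
  if (params.get("state") !== storedState) {
    throw new Error("State mismatch: possible CSRF attempt");
  }
  const code = params.get("code");
  if (!code) throw new Error("Missing authorization code");
  return code;
}
```

Rejecting a mismatched `state` ensures the code being exchanged actually originated from the flow your backend started for this user.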
Conclusion [#conclusion] By following this guide, you have learned how to: * Set up as a Prisma Postgres Partner and obtain the necessary credentials * Provision a new database for your users using the Management API * Implement a secure claim flow that allows users to claim ownership of a database in their own workspace using OAuth2 This flow enables you to integrate Prisma Postgres provisioning and transfer seamlessly into your own product, providing a smooth onboarding experience for your users. For further details, see the [create-db](https://github.com/prisma/create-db) repo for a reference implementation, or consult the [Prisma Management API documentation](/management-api). # SDK (/docs/management-api/sdk) Overview [#overview] The [`@prisma/management-api-sdk`](https://www.npmjs.com/package/@prisma/management-api-sdk) is a TypeScript SDK for the [Prisma Data Platform Management API](/management-api). Use the simple client for direct API access, or the full SDK with built-in OAuth authentication and automatic token refresh. Based on the [public OpenAPI 3.1 specification](https://api.prisma.io/v1/swagger-editor). Installation [#installation] npm pnpm yarn bun ```bash npm install @prisma/management-api-sdk ``` ```bash pnpm add @prisma/management-api-sdk ``` ```bash yarn add @prisma/management-api-sdk ``` ```bash bun add @prisma/management-api-sdk ``` Basic usage [#basic-usage] For usage with an existing access or [service token](/management-api/authentication#service-tokens). 
Making API calls [#making-api-calls] The client provides fully typed methods for all API endpoints: ```typescript import { createManagementApiClient } from "@prisma/management-api-sdk"; const client = createManagementApiClient({ token: "your-access-token", }); // List workspaces const { data: workspaces, error } = await client.GET("/v1/workspaces"); // Get a specific project const { data: project } = await client.GET("/v1/projects/{id}", { params: { path: { id: "project-id" } }, }); // Create a new project const { data: newProject } = await client.POST("/v1/workspaces/{workspaceId}/projects", { params: { path: { workspaceId: "workspace-id" } }, body: { name: "My New Project" }, }); // Create a new database const { data: newDatabase } = await client.POST("/v1/projects/{projectId}/databases", { params: { path: { projectId: "project-id" } }, body: { name: "my-new-db-instance", region: "us-east-1", isDefault: true, }, }); // Delete a database const { error: deleteError } = await client.DELETE("/v1/databases/{databaseId}", { params: { path: { databaseId: "database-id" } }, }); ``` Customizing the client [#customizing-the-client] You can override any `ClientOptions` from `openapi-fetch`, including `baseUrl`, `headers`, and other fetch options: ```typescript import { createManagementApiClient } from "@prisma/management-api-sdk"; // Override baseUrl and add custom headers const client = createManagementApiClient({ token: "your-access-token", baseUrl: "https://api.example.com", headers: { "X-Custom-Header": "value", }, }); ``` If you provide both `token` and `headers.Authorization`, the `headers.Authorization` takes precedence. The `baseUrl` defaults to `https://api.prisma.io` if not provided. Advanced usage [#advanced-usage] For applications that need [OAuth authentication](#oauth-authentication-flow), automatic token refresh, and token storage management, use the full SDK. 
OAuth authentication flow [#oauth-authentication-flow] The SDK uses OAuth 2.0 with PKCE for secure authentication. The flow is stateless - you're responsible for storing the `state` and `verifier` between the login URL generation and callback handling. 1. Create the SDK instance [#1-create-the-sdk-instance] ```typescript import { createManagementApiSdk, type TokenStorage } from "@prisma/management-api-sdk"; // Implement token storage for your environment const tokenStorage: TokenStorage = { async getTokens() { const stored = localStorage.getItem("prisma-tokens"); return stored ? JSON.parse(stored) : null; }, async setTokens(tokens) { localStorage.setItem("prisma-tokens", JSON.stringify(tokens)); }, async clearTokens() { localStorage.removeItem("prisma-tokens"); }, }; // Create the SDK instance const api = createManagementApiSdk({ clientId: "your-oauth-client-id", redirectUri: "https://your-app.com/auth/callback", tokenStorage, }); ``` 2. Initiate login [#2-initiate-login] Generate the OAuth login URL. The returned `state` and `verifier` must be stored (e.g., in a session or cookie) for use when handling the callback: ```typescript const { url, state, verifier } = await api.getLoginUrl({ scope: "workspace:admin offline_access", additionalParams: { utm_source: "my-app", utm_medium: "login", }, }); // Store state and verifier for the callback (e.g., in session storage) sessionStorage.setItem("oauth-state", state); sessionStorage.setItem("oauth-verifier", verifier); // Redirect user to the login URL window.location.href = url; ``` 3. Handle the callback [#3-handle-the-callback] When the user is redirected back to your app, retrieve the stored `state` and `verifier` and pass them to `handleCallback`. 
On success, tokens are automatically stored via your `tokenStorage` implementation: ```typescript // In your callback route handler const callbackUrl = window.location.href; // Retrieve the stored values const expectedState = sessionStorage.getItem("oauth-state"); const verifier = sessionStorage.getItem("oauth-verifier"); // Clean up stored values sessionStorage.removeItem("oauth-state"); sessionStorage.removeItem("oauth-verifier"); try { await api.handleCallback({ callbackUrl, verifier, expectedState, }); // Tokens are now stored in tokenStorage and the client is ready to use console.log("Login successful!"); } catch (error) { if (error instanceof AuthError) { console.error("Authentication failed:", error.message); } } ``` 4. Make API calls [#4-make-api-calls] The client automatically includes authentication headers and refreshes tokens when they expire. Use `api.client` with the same methods shown in [Basic usage](#making-api-calls). 5. Logout [#5-logout] ```typescript await api.logout(); // Clears stored tokens ``` Token storage interface [#token-storage-interface] Implement this interface to handle token persistence in your environment: ```typescript interface TokenStorage { /** Provide the stored tokens to the SDK */ getTokens(): Promise<Tokens | null>; /** Store new or updated tokens when the SDK has successfully authenticated or refreshed tokens */ setTokens(tokens: Tokens): Promise<void>; /** Clear the tokens when the user logs out or the refresh token is invalid */ clearTokens(): Promise<void>; } type Tokens = { /** The workspace ID that these tokens are valid for (extracted from the access token) */ workspaceId: string; /** The access token for API requests */ accessToken: string; /** The refresh token for obtaining new access tokens (only present if scope includes 'offline_access') */ refreshToken?: string; }; ``` Example: VS Code Extension [#example-vs-code-extension] ```typescript const tokenStorage: TokenStorage = { async getTokens() { const workspaceId = await 
context.secrets.get("workspaceId"); const accessToken = await context.secrets.get("accessToken"); const refreshToken = await context.secrets.get("refreshToken"); if (!workspaceId || !accessToken) return null; return { workspaceId, accessToken, refreshToken: refreshToken || undefined }; }, async setTokens(tokens) { await context.secrets.store("workspaceId", tokens.workspaceId); await context.secrets.store("accessToken", tokens.accessToken); if (tokens.refreshToken) { await context.secrets.store("refreshToken", tokens.refreshToken); } }, async clearTokens() { await context.secrets.delete("workspaceId"); await context.secrets.delete("accessToken"); await context.secrets.delete("refreshToken"); }, }; ``` Example: Node.js CLI [#example-nodejs-cli] ```typescript import { readFile, writeFile, unlink } from "node:fs/promises"; import { homedir } from "node:os"; import { join } from "node:path"; const tokenPath = join(homedir(), ".prisma", "credentials.json"); const tokenStorage: TokenStorage = { async getTokens() { try { const data = await readFile(tokenPath, "utf-8"); return JSON.parse(data); } catch { return null; } }, async setTokens(tokens) { await writeFile(tokenPath, JSON.stringify(tokens, null, 2)); }, async clearTokens() { await unlink(tokenPath).catch(() => {}); }, }; ``` For other environments: * **VS Code extensions** - Use `context.secrets` for secure storage * **Stateless web servers** - Store PKCE state/verifier in encrypted cookies or a database Automatic token refresh [#automatic-token-refresh] The SDK automatically handles token refresh when a refresh token is available (requires `offline_access` scope): * When a request returns 401, the SDK refreshes the access token using the refresh token * Concurrent requests during refresh are queued and resolved once refresh completes * If refresh fails due to an invalid refresh token, tokens are cleared and `AuthError` is thrown with `refreshTokenInvalid: true` * If no refresh token is available, an `AuthError` is 
thrown with the message "No refresh token available. Please log in again." API reference [#api-reference] createManagementApiClient(options) [#createmanagementapiclientoptions] Creates a raw API client without authentication handling. Useful if you want to manage authentication yourself or use a service token. **Parameters:** * `options.token?: string` - Access token (automatically converted to `Authorization: Bearer ${token}` header) * `options.baseUrl?: string` - Base URL for API requests (defaults to `https://api.prisma.io`) * `options.headers?: Record<string, string>` - Additional headers * Other `ClientOptions` from `openapi-fetch` are also supported **Returns:** A typed API client for making requests. createManagementApiSdk(config) [#createmanagementapisdkconfig] Creates a Management API SDK instance with OAuth authentication and automatic token refresh. **Parameters:** ```typescript type ManagementApiClientConfig = { // Required clientId: string; // OAuth client ID redirectUri: string; // OAuth redirect URI tokenStorage: TokenStorage; // Optional (with defaults) apiBaseUrl?: string; // Default: 'https://api.prisma.io' authBaseUrl?: string; // Default: 'https://auth.prisma.io' }; ``` **Returns:** An object with: * `client` - The typed API client for making requests * `getLoginUrl(options)` - Generate OAuth login URL with specified scope * `handleCallback(options)` - Handle OAuth callback and store tokens via `tokenStorage` * `logout()` - Clear stored tokens Error handling [#error-handling] The SDK exports two error classes: AuthError [#autherror] Thrown for authentication-related errors: * OAuth callback errors (includes `error_description` when available) * Invalid or missing tokens * Token refresh failures ```typescript import { AuthError } from "@prisma/management-api-sdk"; try { await api.handleCallback({ callbackUrl, verifier, expectedState }); } catch (error) { if (error instanceof AuthError) { if (error.refreshTokenInvalid) { // Token is invalid/expired, user needs to 
log in again const { url } = await api.getLoginUrl({ scope: "workspace:admin offline_access" }); // redirect to url... } else { // Other auth errors (e.g., "access_denied: User cancelled") console.error("Auth error:", error.message); } } } ``` FetchError [#fetcherror] Thrown for network-related errors. Includes the original error as `cause` for debugging: ```typescript import { FetchError } from "@prisma/management-api-sdk"; try { const { data } = await client.GET("/v1/workspaces"); } catch (error) { if (error instanceof FetchError) { console.error("Network error:", error.message); console.error("Cause:", error.cause); // Original error for debugging } } ``` TypeScript types [#typescript-types] The SDK exports all API types generated from the OpenAPI spec: ```typescript import type { paths, components } from "@prisma/management-api-sdk"; // Access response types type Workspace = components["schemas"]["Workspace"]; type Project = components["schemas"]["Project"]; ``` # Prisma ORM (/docs/orm) Prisma ORM is [open-source](https://github.com/prisma/prisma) and consists of: * [**Prisma Client**](/orm/prisma-client/setup-and-configuration/introduction): Auto-generated, type-safe **ORM interface** * [**Prisma Migrate**](/orm/prisma-migrate): Database migration system * [**Prisma Studio**](https://www.prisma.io/studio): GUI to view and edit your data Prisma Client works with any Node.js or TypeScript backend, whether you're deploying to traditional servers, serverless functions, or microservices. Why Prisma ORM [#why-prisma-orm] Traditional database tools force a tradeoff between **productivity** and **control**. Raw SQL gives full control but is error-prone and lacks type safety. Traditional ORMs improve productivity but abstract too much, leading to the [object-relational impedance mismatch](https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch) and performance pitfalls like the n+1 problem. 
Prisma takes a different approach: * **Type-safe queries** validated at compile time with full autocompletion * **Thinking in objects** without the complexity of mapping relational data * **Plain JavaScript objects** returned from queries, not complex model instances * **Single source of truth** in the Prisma schema for database and application models * **Healthy constraints** that prevent common pitfalls and anti-patterns When to use Prisma [#when-to-use-prisma] **Prisma is a good fit if you:** * Build server-side applications (REST, GraphQL, gRPC, serverless) * Value type safety and developer experience * Work in a team and want a clear, declarative schema * Need migrations, querying, and data modeling in one toolkit **Consider alternatives if you:** * Need full control over every SQL query (use raw SQL drivers) * Want a no-code backend (use a BaaS like Supabase or Firebase) * Need an auto-generated CRUD GraphQL API (use Hasura or PostGraphile) How it works [#how-it-works] 1. Define your schema [#1-define-your-schema] The [Prisma schema](/orm/prisma-schema/overview) defines your data models and database connection: ```prisma datasource db { provider = "postgresql" } generator client { provider = "prisma-client" output = "./generated" } model User { id Int @id @default(autoincrement()) email String @unique name String? posts Post[] } model Post { id Int @id @default(autoincrement()) title String published Boolean @default(false) author User? @relation(fields: [authorId], references: [id]) authorId Int? } ``` 2. Configure your connection [#2-configure-your-connection] Create a `prisma.config.ts` file in your project root: ```ts title="prisma.config.ts" import "dotenv/config"; import { defineConfig, env } from "prisma/config"; export default defineConfig({ schema: "prisma/schema.prisma", migrations: { path: "prisma/migrations", }, datasource: { url: env("DATABASE_URL"), }, }); ``` 3. 
Run migrations [#3-run-migrations] Use [Prisma Migrate](/orm/prisma-migrate) to create and apply migrations: npm pnpm yarn bun ```bash npx prisma migrate dev ``` ```bash pnpm dlx prisma migrate dev ``` ```bash yarn dlx prisma migrate dev ``` ```bash bunx --bun prisma migrate dev ``` Or [introspect](/orm/prisma-schema/introspection) an existing database: npm pnpm yarn bun ```bash npx prisma db pull ``` ```bash pnpm dlx prisma db pull ``` ```bash yarn dlx prisma db pull ``` ```bash bunx --bun prisma db pull ``` 4. Query with Prisma Client [#4-query-with-prisma-client] Generate and use the type-safe client: npm pnpm yarn bun ```bash npm install @prisma/client npx prisma generate ``` ```bash pnpm add @prisma/client pnpm dlx prisma generate ``` ```bash yarn add @prisma/client yarn dlx prisma generate ``` ```bash bun add @prisma/client bun x prisma generate ``` ```ts import { PrismaClient } from "./generated/client"; const prisma = new PrismaClient(); // Find all users with their posts const users = await prisma.user.findMany({ include: { posts: true }, }); // Create a user with a post const user = await prisma.user.create({ data: { email: "alice@prisma.io", posts: { create: { title: "Hello World" }, }, }, }); ``` Next steps [#next-steps] * [**Prisma schema**](/orm/prisma-schema/overview) - Learn the schema language * [**Prisma Client**](/orm/prisma-client/setup-and-configuration/introduction) - Explore the query API # Error reference (/docs/postgres/error-reference) When working with Prisma Postgres, you may encounter errors often highlighted by specific error codes during development and operations. It is important to understand the meaning of these errors, why they occur, and how to resolve them in order to ensure the smooth operation of your applications. This guide aims to provide insights and steps to troubleshoot specific error codes encountered with Prisma Postgres. 
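Prisma exposes these codes on the thrown error object's `code` property (for example via `Prisma.PrismaClientKnownRequestError`), so applications can branch on them. A minimal dispatch sketch; the `actionFor` helper and its action names are hypothetical, and the mapping simply mirrors the guidance in the sections that follow:

```typescript
// Hypothetical mapping from Prisma Postgres error codes (documented below)
// to a coarse recovery action. Not part of any Prisma API.
type RecoveryAction =
  | "reduce-response-size" // P6009: fetch fewer rows/fields or raise the limit
  | "optimize-query"       // P6004: add indexes, select fewer fields, batch
  | "check-connection"     // P6008: verify host, port, and credentials
  | "retry-with-backoff"   // P5011: back off before retrying
  | "rethrow";

function actionFor(code: string): RecoveryAction {
  switch (code) {
    case "P6009": return "reduce-response-size";
    case "P6004": return "optimize-query";
    case "P6008": return "check-connection";
    case "P5011": return "retry-with-backoff";
    default:      return "rethrow";
  }
}

console.log(actionFor("P5011")); // "retry-with-backoff"
```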
P6009 (ResponseSizeLimitExceeded) [#p6009-responsesizelimitexceeded] This error is triggered when the response size from a database query exceeds the configured query response size limit. We've implemented this restriction to safeguard your application performance, as retrieving data over `5MB` can significantly slow down your application due to multiple network layers. Transmitting more than `5MB` of data is typically only necessary when conducting ETL (Extract, Transform, Load) operations. For other scenarios, such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts, transmitting this much data should generally be avoided. These use cases, while essential, can often be optimized to work within the configured query response size limit, ensuring smoother performance and a better user experience. Possible causes for P6009 [#possible-causes-for-p6009] Transmitting images/files in response [#transmitting-imagesfiles-in-response] This error may arise if images or files stored within your table are being fetched, resulting in a large response size. Storing assets directly in the database is generally discouraged because it significantly impacts database performance and scalability. It also makes database backups slow and increases the cost of storing routine backups. **Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like [Cloudflare R2](https://developers.cloudflare.com/r2/), [AWS S3](https://aws.amazon.com/pm/serv-s3/), or [Cloudinary](https://cloudinary.com/). These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size.
Over-fetching of data [#over-fetching-of-data] In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding the configured query response size limit. This could happen when [the `where` clause](/orm/reference/prisma-client-reference#where) in the query is incorrect or entirely missing. **Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using [pagination](/v6/orm/prisma-client/queries/pagination). Additionally, use the [`select`](/orm/reference/prisma-client-reference#select) clause to return only the necessary fields, reducing the response size. Fetching a large volume of data [#fetching-a-large-volume-of-data] In many data processing workflows, especially those involving ETL (Extract-Transform-Load) processes or scheduled CRON jobs, there's a need to extract large amounts of data from data sources (like databases, APIs, or file systems) for analysis, reporting, or further processing. If you are running an ETL/CRON workload that fetches a huge chunk of data for analytical processing then you might run into this limit. **Suggested solution:** Configure the query response size limit to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation. P6004 (QueryTimeout) [#p6004-querytimeout] This error occurs when a database query fails to return a response within the configured query timeout limit. The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. We enforce this limit to prevent unintentional long-running queries that can overload system resources. 
The time for Prisma Postgres's cross-region networking is excluded from the configured query timeout limit. Possible causes for P6004 [#possible-causes-for-p6004] This error could be caused by numerous reasons. Some of the prominent ones are: High traffic and insufficient connections [#high-traffic-and-insufficient-connections] If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then the queries would need to wait for a connection to become available. This situation can lead to queries waiting longer than the configured query timeout limit for a connection, ultimately triggering a timeout error if they do not get serviced within this duration. **Suggested solution**: Review and possibly increase the `connection_limit` specified in the connection string parameter when setting up Accelerate in a platform environment. This limit should align with your database's maximum number of connections. By default, the connection limit is set to 10 unless a different `connection_limit` is specified in your database connection string. Long-running queries [#long-running-queries] Queries may be slow to respond, hitting the configured query timeout limit even when connections are available. This could happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table. **Suggested solution**: Configure the query timeout limit to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries. 
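The batching advice here and under P6009 above boils down to never pulling an unbounded dataset in one query. A minimal chunking sketch (the helper, the `ids` workload, and the batch size of 1000 are all assumptions; the commented-out line marks where a Prisma query would consume each batch):

```typescript
// Split a large list of keys into fixed-size batches so each query
// stays within the response size and timeout limits.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const ids = Array.from({ length: 2500 }, (_, i) => i + 1); // assumed workload
for (const batch of chunk(ids, 1000)) {
  // e.g. await prisma.user.findMany({ where: { id: { in: batch } } });
  console.log(batch.length); // 1000, 1000, 500
}
```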
Database resource contention [#database-resource-contention] A common yet challenging issue is when other services operating on the same database perform heavy analytics or data processing tasks, significantly consuming database resources. These operations can monopolize database connections and processing power, leading to a scenario where even simple queries cannot be executed in a timely manner. This "busy" or "noisy" database environment can cause queries that are typically fast to run slowly or even timeout, particularly during periods of high activity from other services. Users often rely on CPU and memory usage metrics to gauge database load, which can be misleading. While these are important indicators, they might not fully represent the database's operational state. Direct metrics like the number of reads, writes, and wait times offer a clearer view of the database's performance and should be monitored closely. A noticeable degradation in these metrics, especially in the absence of changes to the queries or data model, suggests that external pressures are affecting database performance. **Suggested solution**: If normally quick queries are intermittently slow or timing out without any modifications to them, it's probable that competing queries are exerting pressure on the same database tables. To diagnose this, adopt monitoring tools or leverage your database's inherent capabilities to observe reads, writes, and wait times. Such monitoring will unveil activity patterns or spikes that align with the observed performance dips. Moreover, it's crucial to periodically scrutinize and refine essential queries and verify that tables are properly indexed. This proactive approach minimizes the vulnerability of these queries to slowdowns caused by competing workloads. 
P6008 (ConnectionError|EngineStartError) [#p6008-connectionerrorenginestarterror] This error indicates that Prisma ORM cannot establish a connection to your Prisma Postgres database, potentially due to several reasons. Possible causes for P6008 [#possible-causes-for-p6008] Unreachable Database Host/Port [#unreachable-database-hostport] If the database's server address (hostname) or port is incorrect or unreachable, you may encounter this error. **Suggested solution:** Verify the hostname/port of the database connection string that was provided while creating the project. Additionally, attempt to connect to the database using a database GUI tool (e.g., [Prisma Studio](https://www.prisma.io/studio), [TablePlus](https://tableplus.com/), or [DataGrip](https://www.jetbrains.com/datagrip/)) for further investigation. Incorrect username/password/database name [#incorrect-usernamepassworddatabase-name] This error can happen when the wrong credentials are provided, preventing Prisma ORM from establishing a connection to your database. **Suggested solution:** Verify the correctness of your database's username, password, and name in the connection string provided to Prisma Postgres. Ensure that these credentials match those required by your database. Testing the connection using a direct database GUI tool can also help in confirming if the provided credentials are correct. P5011 (TooManyRequests) [#p5011-toomanyrequests] This error occurs when Prisma Postgres detects a high volume of requests that surpasses allowable thresholds. It acts as a protective measure to safeguard both Prisma Postgres and your underlying database from excessive load. Possible causes for P5011 [#possible-causes-for-p5011] Aggressive retry loops [#aggressive-retry-loops] If your application retries queries immediately or with minimal delay, especially after receiving certain errors, the rapid accumulation of requests can surpass the threshold.
**Suggested solution:** * Implement an exponential backoff strategy. Rather than retrying immediately or with a fixed delay, gradually increase the delay period after each failed attempt. Sudden traffic spikes [#sudden-traffic-spikes] Unpredicted traffic surges (for example, during product launches, flash sales, or viral growth events) can cause the threshold to be met and result in `P5011`. **Suggested solution:** * Monitor traffic and resource usage. If you anticipate a surge, please contact [support](/console/more/support) for capacity planning and potential configuration adjustments. Prolonged or planned high workloads [#prolonged-or-planned-high-workloads] Certain processes, such as bulk data imports, ETL operations, or extended CRON jobs, can generate continuous high query volume over time. **Suggested solution:** * Use batching or chunking techniques to break large operations into smaller parts. * Establish throttling or scheduling to distribute the load more evenly. # Prisma Postgres FAQ (/docs/postgres/faq) Common questions about how Prisma Postgres works, how queries are billed, and how it integrates with the Prisma ORM. General [#general] Can I use Prisma Postgres without Prisma ORM? [#can-i-use-prisma-postgres-without-prisma-orm] Yes, you can use Prisma Postgres with any database library or tool via a [direct connection](/postgres/database/connection-pooling). You can find examples of using Prisma Postgres with various ORMs below: * [Prisma ORM](https://github.com/prisma/prisma-examples/tree/latest/databases/prisma-postgres) * [Drizzle](https://github.com/prisma/prisma-examples/tree/latest/databases/drizzle-prisma-postgres) * [Kysely](https://github.com/prisma/prisma-examples/tree/latest/databases/kysely-prisma-postgres) * [TypeORM](https://github.com/prisma/prisma-examples/tree/latest/databases/typeorm-prisma-postgres) How do I switch from GitHub login to email and password login?
[#how-do-i-switch-from-github-login-to-email-and-password-login] If you previously signed up using GitHub and want to switch to email and password login, follow these steps: 1. Verify Your GitHub Email Address * Check the primary email address associated with your GitHub account (e.g., from your GitHub profile or notification settings). 2. Create a New Email/Password Account * Go to the email/password sign-up page. * Use the *same email address* linked to your GitHub account to create the new account. * Our system will automatically connect your new email/password account to your existing data. 3. Test Your Login * Log out and try logging in with your email and the password you just created. If you encounter any issues, please contact our support team for help linking your accounts. VS Code does not recognize the $extends method [#vs-code-does-not-recognize-the-extends-method] If you add the Prisma Client extension for Accelerate to an existing project that is currently open in VS Code, the editor might not immediately recognize the `$extends` method. This might be an issue with the TypeScript server not yet recognizing the regenerated Prisma Client. To resolve this, you need to restart TypeScript. 1. In VS Code, open the Command Palette. You can do so when you press F1 or select **View** > **Command Palette**. 2. Enter `typescript` and select and run the **TypeScript: Restart TS server** command. VS Code should now recognize the `$extends` method. What regions is Prisma Postgres available in? [#what-regions-is-prisma-postgres-available-in] Prisma Postgres is currently available in the following regions: | Region Code | Location | | ---------------- | -------------- | | `us-west-1` | San Francisco | | `us-east-1` | North Virginia | | `eu-west-3` | Paris | | `eu-central-1` | Frankfurt | | `ap-northeast-1` | Tokyo | | `ap-southeast-1` | Singapore | We're continuously working to expand regional support. 
If you'd like to request a specific region, reach out to us via [Discord](https://pris.ly/discord). Pricing [#pricing] Prisma Postgres bills based on *operations* and *storage* consumed. Visit the [pricing page](https://www.prisma.io/pricing) for details and our [blog post explaining operations-based billing](https://www.prisma.io/blog/operations-based-billing) for a detailed explanation on what an operation is and how this pricing model works. What is an operation? [#what-is-an-operation] An operation is counted each time you interact with your database. Read, write, simple or complex. It all simply counts as one. An operation can be: * a Prisma ORM query (when using Prisma ORM) * a SQL query (when using a [direct connection](/postgres/database/connecting-to-your-database)) Does query execution time affect pricing in Prisma Postgres? [#does-query-execution-time-affect-pricing-in-prisma-postgres] No, cost for Prisma Postgres is based solely on the *number of operations*, not the amount of compute required to execute them. Whether a query takes 10ms or 10sec to execute, its pricing impact remains the same. How does pricing differ between using Prisma ORM and direct TCP connections? [#how-does-pricing-differ-between-using-prisma-orm-and-direct-tcp-connections] The fundamental principle of operations-based pricing remains the same for Prisma ORM and [direct connections](/postgres/database/connecting-to-your-database). However, depending on whether you use Prisma ORM or direct SQL to interact with your database, an operation is something different: * when using Prisma ORM: a query sent with Prisma Client (e.g. `prisma.user.findMany()`) * when using another tool: a SQL query sent via the direct connection (e.g. `SELECT * from "User"`) Note that a single Prisma ORM query may translate into multiple SQL queries which may make using Prisma ORM more economical than direct SQL. Do read and write queries cost the same? 
[#do-read-and-write-queries-cost-the-same] Yes, read and write queries are counted equally as *operations* and are billed the same way. Does a SELECT 1 query count as a billable operation? [#does-a-select-1-query-count-as-a-billable-operation] Yes, a query like `SELECT 1` counts as an operation and will be billed accordingly (even if no actual data is accessed in the query). How can I estimate the number of operations in Prisma ORM? [#how-can-i-estimate-the-number-of-operations-in-prisma-orm] You can estimate your operation usage in Prisma ORM by integrating an application performance monitoring tool like Prometheus. What strategies can I use to optimize cost per operation? [#what-strategies-can-i-use-to-optimize-cost-per-operation] Prisma Postgres bills by operation. The more you can perform using a single operation, the lower your bill. Some tips to reduce the number of operations: * [**Batch your writes**](/orm/prisma-client/queries/transactions#bulk-operations) with `createMany`, `updateMany`, or `deleteMany` instead of looping over single-row calls. ```ts // One operation, three users await prisma.user.createMany({ data: [{ name: "Alice" }, { name: "Bob" }, { name: "Carol" }], }); ``` * **Use nested-relation helpers** such as [`connectOrCreate`](/orm/reference/prisma-client-reference#connectorcreate) or [`set`](/orm/reference/prisma-client-reference#set) to create or link related records in a single operation. ```ts // Post and (if needed) its author, all in one request await prisma.post.create({ data: { title: "Hello World", author: { connectOrCreate: { where: { email: "alice@example.com" }, create: { name: "Alice", email: "alice@example.com" }, }, }, }, }); ``` * **Prefer regular (array) [transactions](/orm/prisma-client/queries/transactions#transaction-api) over [interactive transactions](/orm/prisma-client/queries/transactions#interactive-transactions)** when the individual queries don't depend on each other. 
```ts // Interactive transaction: counted as 2 operations await prisma.$transaction(async (tx) => { await tx.user.create({ data: { name: "Alice" } }); await tx.post.create({ data: { title: "Hello", authorId: 1 } }); }); // Array transaction: counted as 1 operation await prisma.$transaction([ prisma.user.create({ data: { name: "Alice" } }), prisma.post.create({ data: { title: "Hello", authorId: 1 } }), ]); ``` If a later query needs the result of an earlier one (for example, you need the user ID you just created), stick with an interactive transaction for correctness. Otherwise, batching and array transactions let you collapse multiple queries into a single billed operation, keeping both your operation count and your cost down. Is there a sample workload to estimate my expected charges? [#is-there-a-sample-workload-to-estimate-my-expected-charges] We will walk through three example workloads (small, medium, and large) and estimate a monthly bill for each. Each combines a realistic number of monthly active users (MAUs), a typical level of daily activity per user, and a rounded storage footprint. We will use the following equations to estimate the monthly bill: ``` total_ops = MAUs x actions_per_user_per_day x 30 billable_ops = total_ops - included_ops_for_plan ops_cost = (billable_ops ÷ 1_000_000) x plan_rate billable_storage_GB = storage_used_GB - free_storage_for_plan storage_cost = billable_storage_GB x storage_rate_for_plan total_monthly_cost = ops_cost + storage_cost + base_plan_fee ``` You can use your own MAU count, activity level, and storage used to project costs on any plan using the equations above. We will associate each workload with a paid plan and its corresponding pricing details, for example, the **Starter plan** for the small workload, the **Pro plan** for the medium workload, and the **Business plan** for the large workload. Then we will apply the equations to the example workloads to generate a rough estimate of a monthly bill.
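The equations above translate directly into code. A small sketch, with illustrative names throughout; the `business` plan constants are the Business plan numbers used in this section:

```typescript
// Sketch of the billing equations above; all names are illustrative.
type Plan = {
  baseFee: number;           // base_plan_fee in USD
  includedOps: number;       // included_ops_for_plan
  ratePerMillionOps: number; // plan_rate
  freeStorageGB: number;     // free_storage_for_plan
  ratePerExtraGB: number;    // storage_rate_for_plan
};

function estimateMonthlyCost(
  plan: Plan,
  maus: number,
  actionsPerUserPerDay: number,
  storageUsedGB: number,
): number {
  const totalOps = maus * actionsPerUserPerDay * 30;
  const billableOps = Math.max(0, totalOps - plan.includedOps);
  const opsCost = (billableOps / 1_000_000) * plan.ratePerMillionOps;
  const billableStorageGB = Math.max(0, storageUsedGB - plan.freeStorageGB);
  const storageCost = billableStorageGB * plan.ratePerExtraGB;
  return opsCost + storageCost + plan.baseFee;
}

// The large workload on the Business plan from this section:
const business: Plan = {
  baseFee: 129, includedOps: 50_000_000, ratePerMillionOps: 1,
  freeStorageGB: 100, ratePerExtraGB: 1,
};
console.log(estimateMonthlyCost(business, 50_000, 60, 40)); // 169
```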
For example: Pricing details Here are the details for each pricing plan: * **Starter plan** - $8 per million operations * Base plan fee - $10 per month * Included operations - 1,000,000 * Storage - 10 GB free, then $2 per additional GB * **Pro plan** - $2 per million operations * Base plan fee - $49.00 per month * Included operations - 10,000,000 * Storage - 50 GB free, then $1.5 per additional GB * **Business plan** - $1 per million operations * Base plan fee - $129.00 per month * Included operations - 50,000,000 * Storage - 100 GB free, then $1 per additional GB We also have a Free plan, but we are leaving it out of the following calculations because it's intended for evaluation only and is not meant for production workloads. You can also learn more about the pricing details for each plan on the [pricing page](https://www.prisma.io/pricing?utm_source=docs). **Example of a small workload on the Starter plan**: A hobby or early-stage side-project with \~`500` MAUs. Each user performs \~`10` actions per day, and the entire database uses \~`0.5` GB of storage. Based on the assumptions, you would calculate the monthly bill using the following equations: * `total_ops` = `500` x `10` x `30` = `150000` * `billable_ops` = `0` (150,000 operations is below the 1 million free operations) * `ops_cost` = $`0` * `storage_cost` = $`0` (0.5 GB is below the 10 GB storage already included) * `base_plan_fee` = $`10` `total_monthly_cost` = $`10.00` per month **Example of a medium workload on the Pro plan**: A growing SaaS product serving \~`5000` MAUs. Power users average \~`40` actions per day, and the app stores \~`6` GB of data.
Based on the assumptions, you would calculate the monthly bill using the following equations: * `total_ops` = `5000` x `40` x `30` = `6000000` * `billable_ops` = `0` (6 million operations is below the 10 million free operations) * `ops_cost` = $`0` * `storage_cost` = $`0` (6 GB is below the 50 GB storage already included) * `base_plan_fee` = $`49.00` `total_monthly_cost` = $`49.00` per month **Example of a large workload on the Business plan**: A production-grade, consumer-facing application handling \~`50000` MAUs. Heavy usage with \~`60` actions per user per day drives significant traffic, and the dataset reaches \~`40` GB. Based on the assumptions, you would calculate the monthly bill using the following equations: * `total_ops` = `50000` x `60` x `30` = `90000000` * `billable_ops` = `90000000` - `50000000` = `40000000` * `ops_cost` = (`40000000` ÷ `1000000`) x $`1` = $`40.00` * `storage_cost` = $`0.00` (40 GB is below the 100 GB storage already included) * `base_plan_fee` = $`129.00` `total_monthly_cost` = $`40.00` + $`129.00` = $`169.00` per month Are cached operations billed the same? [#are-cached-operations-billed-the-same] Every request, whether it hits the database or is served from cache, counts as an operation. Prisma Postgres uses a flat per-operation price and never charges for egress traffic, so a cached response doesn't incur any extra or reduced fee. This unified rate keeps the billing model predictable and avoids per-request complexity. How do I upgrade my plan if I am using Prisma Postgres via Vercel? [#how-do-i-upgrade-my-plan-if-i-am-using-prisma-postgres-via-vercel] To upgrade your plan via Vercel, follow these steps: * Open your [Vercel](https://vercel.com/) Dashboard. * Go to the **Integrations** tab in your Vercel Team. * Click **Manage** on the Prisma Integration. * Navigate to the **Settings** tab. * Under Current Installation Plan Level, click **Change Plan**. * Select the plan you want to upgrade to.
Caching [#caching] Prisma Postgres includes built-in connection pooling and global caching. These features improve performance by optimizing how your queries are routed and cached. How does Prisma Postgres's cache layer know what region to fetch the cache from? [#how-does-prisma-postgress-cache-layer-know-what-region-to-fetch-the-cache-from] Under the hood, Prisma Postgres's cache layer uses Cloudflare, which uses [Anycast](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/) for network addressing and routing. An incoming request will be routed to the nearest data center or "node" in their network that has the capacity to process the request efficiently. To learn more about how this works, we recommend looking into [Anycast](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/). How can I invalidate a cache for Prisma Postgres? [#how-can-i-invalidate-a-cache-for-prisma-postgres] You can invalidate the cache on-demand via the [`$accelerate.invalidate` API](/accelerate/reference/api-reference#accelerateinvalidate) if you're on a [paid plan](https://www.prisma.io/pricing#accelerate), or you can invalidate your entire cache, on a project level, a maximum of five times a day. This limit is set based on [your plan](https://www.prisma.io/pricing). You can manage this via the Accelerate configuration page. How is Prisma Postgres's caching layer different from other caching tools, such as Redis? [#how-is-prisma-postgress-caching-layer-different-from-other-caching-tools-such-as-redis] The caching layer of Prisma Postgres: * Is a *specialized* cache that allows you to optimize data access in code at the query level with a cache strategy. On the other hand, tools such as Redis and Memcached are *general-purpose* caches designed to be adaptable and flexible. * Is a managed service that reduces the time, risk, and engineering effort of building and maintaining a cache service. 
* Is globally distributed, by default, reducing the latency of your queries. Other cache tools would require additional configuration to make them available globally. When should I not use Prisma Postgres's caching features? [#when-should-i-not-use-prisma-postgress-caching-features] The caching layer of Prisma Postgres is a global data cache and connection pool that allows you to optimize data access in code at the query level. While caching with Prisma Postgres can greatly boost the performance of your app, it may not always be the best choice for your use case. This global cache feature may not be a good fit for your app if: * Your app is exclusively used within a specific region and both your application server and database are situated in that same region on the same network. In this case, database queries will likely already be fast because your application server and database are in the same region and network. However, if your application server is in a different region or network from your database, the cache nodes will speed up your queries because the data will be cached in the closest data center to your application. * Your application data *always* needs to be up-to-date on retrieval, making it difficult to establish a reasonable cache strategy. What is the maximum allowed value for the ttl parameter when configuring cacheStrategy? [#what-is-the-maximum-allowed-value-for-the-ttl-parameter-when-configuring-cachestrategy] The [Time-to-live](/accelerate/caching) (`ttl`) parameter can be set for up to a *year*. However, it's important to note that items within the cache may be evicted if they are not frequently accessed. Based on our experimentation, we’ve seen cache items persist for around 18 hours. While items may remain in the cache for an extended period if they are actively accessed, there is no guarantee. Even frequently accessed items may occasionally be evicted from the cache.
It's unlikely for an item to survive for up to or longer than a month, regardless of its activity level. Why do I sometimes see unexpected cache behavior? [#why-do-i-sometimes-see-unexpected-cache-behavior] Prisma Postgres's cache layer performs best when it observes a higher load from a project. Many cache operations, such as committing data to cache and refreshing stale data, happen asynchronously. When benchmarking the cache layer, we recommend doing so with loops or a load testing approach. This will mimic higher load scenarios better and reduce outliers from low frequency operations. Prisma operations are sent to Prisma Postgres over HTTP. As a result, the first request to Prisma Postgres must establish an HTTP handshake and may have additional latency as a result. We're exploring ways to reduce this initial request latency in the future. What regions are Prisma Postgres's cache nodes available in? [#what-regions-are-prisma-postgress-cache-nodes-available-in] Prisma Postgres's cache layer runs on Cloudflare's network and cache hits are served from Cloudflare's 300+ locations. You can find the regions where Prisma Postgres's cache nodes are available here: [https://www.cloudflare.com/network/](https://www.cloudflare.com/network/). How long does it take to invalidate a cache query result? [#how-long-does-it-take-to-invalidate-a-cache-query-result] As the cache needs to be cleared globally, it is difficult to provide a specific time frame. However, the cached data is eventually consistent and typically propagates to all PoPs within a few seconds. In very rare cases, it may take longer. Here is a [demo app](https://pris.ly/test-cache-invalidation) to test the time it takes to invalidate a cache query result. What is the difference between Invalidate and Revalidate? [#what-is-the-difference-between-invalidate-and-revalidate] **Invalidate**: The cache entry is deleted, and new data will be fetched on the next request, causing a cache miss. 
This removes stale data but may lead to slower responses until the cache is repopulated. **Revalidate**: The cache entry is updated proactively, ensuring the next request uses fresh data from the cache. This keeps the cache valid and maintains faster response times by avoiding cache misses. What is on-demand cache invalidation? [#what-is-on-demand-cache-invalidation] [On-demand cache invalidation](/accelerate/caching) lets applications instantly update specific cached data when it changes, instead of waiting for regular cache refresh cycles. This keeps information accurate and up-to-date for users. When should I use the cache invalidate API? [#when-should-i-use-the-cache-invalidate-api] The [cache invalidate API](/accelerate/caching) is essential when data consistency cannot wait for the cache’s standard expiration or revalidation. Key use cases include: * **Content updates**: When critical changes occur, such as edits to a published article, product updates, or profile modifications, that need to be visible immediately. * **Inventory management**: In real-time applications, like inventory or booking systems, where stock levels, availability, or reservation statuses must reflect the latest information. * **High-priority data**: For time-sensitive data, like breaking news or urgent notifications, where it’s essential for users to see the most current information right away. Using on-demand cache invalidation in these scenarios helps keep only the necessary data refreshed, preserving system performance while ensuring accurate, up-to-date information for users. Connection pooling [#connection-pooling] Can I increase the query duration and response size limits for my Prisma Postgres instance? [#can-i-increase-the-query-duration-and-response-size-limits-for-my-prisma-postgres-instance] Yes, you can increase your Prisma Postgres limits based on your subscription plan. 
Here are the configurable limits: | Limit | Free | Starter | Pro Plan | Business Plan | | ------------------------------------ | ---------------- | ---------------- | ---------------- | ---------------- | | **Query timeout** | Up to 10 seconds | Up to 10 seconds | Up to 20 seconds | Up to 60 seconds | | **Interactive transactions timeout** | Up to 15 seconds | Up to 15 seconds | Up to 30 seconds | Up to 90 seconds | | **Response size** | Up to 5 MB | Up to 5 MB | Up to 10 MB | Up to 20 MB | Check the [pricing page](https://www.prisma.io/pricing) for more details on the available plans and their corresponding limits. While you can increase these limits based on your subscription plan, it's *still* recommended to optimize your database operations. [Learn more in our troubleshooting guide.](/postgres/error-reference) Query Insights [#query-insights] [Query Insights](/query-insights) is built into Prisma Postgres and helps you identify slow queries, understand their cost, and decide what to fix. I only see raw SQL — how do I see my Prisma ORM queries? [#i-only-see-raw-sql--how-do-i-see-my-prisma-orm-queries] By default, Query Insights shows raw SQL. To also see the Prisma ORM operation that generated each query (model name, action, and query shape), install the `@prisma/sqlcommenter-query-insights` package: ```bash npm install @prisma/sqlcommenter-query-insights ``` Then pass it to the `comments` option in your `PrismaClient` constructor: ```ts import { prismaQueryInsights } from "@prisma/sqlcommenter-query-insights"; import { PrismaClient } from "@prisma/client"; const prisma = new PrismaClient({ adapter: myAdapter, comments: [prismaQueryInsights()], }); ``` This annotates every query with a SQL comment containing the model, action, and parameterized query shape. Query Insights uses these annotations to map SQL back to the Prisma call that generated it. 
Let your AI agent handle setup [#let-your-ai-agent-handle-setup] Copy this prompt into your AI coding assistant: ``` Install and configure @prisma/sqlcommenter-query-insights in my project so I can see Prisma ORM queries in Query Insights. Docs: https://www.prisma.io/docs/query-insights ``` Does Query Insights alter my queries or schema? [#does-query-insights-alter-my-queries-or-schema] No. Query Insights is read-only — it observes query behavior but does not rewrite queries or modify your Prisma schema. Can I use Query Insights in production? [#can-i-use-query-insights-in-production] Query Insights is designed primarily for development and debugging. Running it in production is possible but not recommended, as the SQL comment annotations add a small overhead to every query. # Prisma Postgres (/docs/postgres) [Prisma Postgres](https://www.prisma.io/postgres?utm_source=docs) is a managed PostgreSQL service built for modern app development. Use this page to choose a connection path and get started quickly. Getting started [#getting-started] Create a database [#create-a-database] New to Prisma Postgres? Start here. Create a temporary Prisma Postgres database in one command. Set up Prisma ORM and connect it to Prisma Postgres. Get your connection string [#get-your-connection-string] In [Prisma Console](https://console.prisma.io), open your database and click **Connect to your database** to copy connection URLs. Choose a connection type [#choose-a-connection-type] Prisma ORM (recommended default) [#prisma-orm-recommended-default] Use Prisma ORM for migrations and type-safe queries. Get started with the recommended Prisma ORM workflow. Any PostgreSQL client or ORM [#any-postgresql-client-or-orm] Use Prisma Postgres with `psql`, GUI tools, `node-postgres`, or other ORMs. Connect Prisma Postgres from Kysely. Connect Prisma Postgres from Drizzle ORM. Connect Prisma Postgres from TypeORM. 
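If you're connecting with a plain PostgreSQL client rather than an ORM, Prisma Postgres behaves like any other Postgres database. As a minimal sketch using `node-postgres` (assuming `DATABASE_URL` in your environment holds the standard PostgreSQL connection string copied from the Console):

```typescript
import "dotenv/config";
import pg from "pg";

// DATABASE_URL is assumed to be the PostgreSQL connection string shown
// under "Connect to your database" in the Prisma Console.
const client = new pg.Client({ connectionString: process.env.DATABASE_URL });

await client.connect();
// Any SQL works here; this just confirms the connection is live.
const { rows } = await client.query("SELECT now() AS current_time");
console.log(rows[0].current_time);
await client.end();
```

The same connection string works with `psql`, GUI tools, and the ORMs linked above.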
Choose the right connection string for Prisma ORM, PostgreSQL tools, and serverless runtimes. Serverless and edge runtimes [#serverless-and-edge-runtimes] Use the serverless driver for HTTP/WebSocket connectivity in edge or constrained runtimes. * [Serverless driver (`@prisma/ppg`)](/postgres/database/serverless-driver) Local development [#local-development] Run Prisma Postgres locally with `prisma dev`, then switch to cloud when ready. * [Local development](/postgres/database/local-development) Optimize and manage [#optimize-and-manage] * [Connecting to your database](/postgres/database/connecting-to-your-database) * [Connection pooling](/postgres/database/connection-pooling) * [Caching](/accelerate/caching) * [Backups](/postgres/database/backups) * [PostgreSQL extensions](/postgres/database/postgres-extensions) * [Troubleshooting](/postgres/troubleshooting) * [FAQ](/postgres/faq) Billing and limits [#billing-and-limits] Prisma Postgres uses usage-based pricing and includes spend controls. * [Pricing](https://www.prisma.io/pricing) * [Operations-based billing explained](https://www.prisma.io/blog/operations-based-billing?utm_source=docs) * [FAQ: estimating costs](/postgres/faq#is-there-a-sample-workload-to-estimate-my-expected-charges) In Prisma Console, you can track usage, set spend limits, and view billing details. Billing and Usage dashboard metrics. Technical details [#technical-details] Prisma Postgres is based on **PostgreSQL v17** and uses a unikernel-based architecture. Learn more: [Prisma Postgres: Building a modern PostgreSQL service](https://pris.ly/ppg-early-access?utm_source=docs). Note Postgres, PostgreSQL, and the Slonik Logo are trademarks or registered trademarks of the PostgreSQL Community Association of Canada and are used with permission. # create-db (/docs/postgres/npx-create-db) [`create-db`](https://create-db.prisma.io/) is an open-source CLI tool that provisions temporary [Prisma Postgres](/postgres) databases with a single command. 
* **Fast setup:** No sign-up required to create a temporary production-ready Prisma Postgres database. * **Lifetime:** Each database is available for *24 hours* by default. * **Keep for free:** You can *claim* a database (via the URL provided in the CLI output) to make it permanent. Prerequisites [#prerequisites] To use `npx create-db`, you need: * **Node.js** version `16` or higher (we recommend the latest LTS version). * **npm** (comes with Node.js) to run `npx` commands. **A Prisma Data Platform account is not required** to create a temporary database. However, if you want to keep a database permanently, you can claim it ([details below](#claiming-your-database)). Option 1: Using the web interface (recommended) [#option-1-using-the-web-interface-recommended] The [create-db web application](https://create-db.prisma.io) provides a browser-based interface for creating and managing your databases. Key features: [#key-features] * No installation required - works directly in your web browser * Visual interface for database management * Easy connection string display and copying * Built-in schema viewer and editor * Direct integration with Prisma Studio * Simple database claiming workflow Getting started: [#getting-started] 1. Visit [create-db.prisma.io](https://create-db.prisma.io) in your web browser 2. Click "Create with the web interface" 3. Modify your schema and interact with the Studio 4. Copy the provided connection strings for your project 5. Claim your database to make it permanent Option 2: Using the CLI [#option-2-using-the-cli] You can create a database using one of the following options: Option 1: Quick start with default settings [#option-1-quick-start-with-default-settings] Run the following command in your terminal: npm pnpm yarn bun ```bash npx create-db@latest ``` ```bash pnpm dlx create-db@latest ``` ```bash yarn dlx create-db@latest ``` ```bash bunx --bun create-db@latest ```
* The `@latest` tag automatically downloads and runs the latest version of the tool, so no global installation is required. * After a few seconds, you'll receive **connection strings** for both Prisma ORM projects and standard PostgreSQL. * The default region is `us-east-1`. You can specify the region to provision the database in using the `--region` flag. See [the section below](#available-cli-options) to view all the CLI options. Option 2: Choose a region interactively [#option-2-choose-a-region-interactively] If you want to select a region manually: npm pnpm yarn bun ```bash npx create-db@latest --interactive ``` ```bash pnpm dlx create-db@latest --interactive ``` ```bash yarn dlx create-db@latest --interactive ``` ```bash bunx --bun create-db@latest --interactive ```
* This opens a region selection menu (for example, `us-east-1`, `eu-west-3`). * Alternatively, you can use the shorthand `-i`: npm pnpm yarn bun ```bash npx create-db@latest -i ``` ```bash pnpm dlx create-db@latest -i ``` ```bash yarn dlx create-db@latest -i ``` ```bash bunx --bun create-db@latest -i ``` To view all options and regions: npm pnpm yarn bun ```bash npx create-db@latest --help ``` ```bash pnpm dlx create-db@latest --help ``` ```bash yarn dlx create-db@latest --help ``` ```bash bunx --bun create-db@latest --help ``` CLI output walkthrough [#cli-output-walkthrough] Here is an example output: ``` ┌ 🚀 Creating a Prisma Postgres database │ │ Provisioning a temporary database in us-east-1... │ It will be automatically deleted in 24 hours, but you can claim it. ◇ Database created successfully! │ ● Database Connection │ Connection String: │ postgresql://:@db.prisma.io:5432/postgres │ ◆ Claim your database → │ Keep your database for free: │ https://create-db.prisma.io?projectID=proj_... └ ``` Once you have the output, take the connection string and add it to your `.env` file as `DATABASE_URL`: ```text DATABASE_URL="postgresql://:@db.prisma.io:5432/postgres" ``` You can now follow the [Prisma Postgres quickstart guide](/prisma-orm/quickstart/prisma-postgres) to connect your Prisma project to this database. If you're using other tools or libraries, use the standard PostgreSQL connection string with any PostgreSQL-compatible client, such as `psql`, `pgAdmin`, `node-postgres`, or an ORM of your choice. Detailed instructions are available in [Connecting to your database](/postgres/database/connecting-to-your-database). Claiming your database [#claiming-your-database] By default, databases created with `npx create-db` are **temporary** and will be automatically deleted after **24 hours**. You can prevent this by **claiming the database** using the claim URL shown in the CLI output: ``` ◆ Claim your database → │ │ Want to keep your database? 
Claim for free: │ │ https://create-db.prisma.io?projectID=proj_... │ │ Your database will be deleted on 7/24/2025, 2:25:41 AM if not claimed. ``` To claim your database and make it permanent: 1. Copy the **claim URL** from the CLI output. 2. Open it in your browser and click **Claim database**. 3. Sign in to your [Prisma Data Platform account](https://console.prisma.io/) (or create one if you don’t have it yet). 4. Choose a **Workspace** that has capacity for creating new projects. 5. Click **Authorize Prisma Create DB** to confirm. 6. You’ll be redirected to a success page. Then, click **Go use your database** to view and manage the claimed database in your workspace. When you claim a database: * It's moved into your Prisma Data Platform account workspace. * It's no longer auto-deleted after 24 hours. * You can continue using it as a permanent database instance. Available CLI options [#available-cli-options] Here are the CLI flags for the `npx create-db` command: | Flag | Shorthand | Description | | --------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------- | | `--region` | `-r` | Specify a region.
**Available regions:** `ap-southeast-1`, `ap-northeast-1`, `eu-central-1`, `eu-west-3`, `us-east-1`, `us-west-1` | | `--interactive` | `-i` | Run in interactive mode (select region from a list). | | `--json` | `-j` | Output machine-readable JSON and exit. | | `--help` | `-h` | Show this help message. | To view all CLI options use the `--help` or `-h` flag: npm pnpm yarn bun ```bash npx create-db@latest --help ``` ```bash pnpm dlx create-db@latest --help ``` ```bash yarn dlx create-db@latest --help ``` ```bash bunx --bun create-db@latest --help ``` ``` npx create-db@latest [options] Options: --region , -r Specify a region Available regions: ap-southeast-1, ap-northeast-1, eu-central-1, eu-west-3, us-east-1, us-west-1 --interactive, -i Run in interactive mode --help, -h Show this help message ``` # Troubleshooting (/docs/postgres/troubleshooting) This guide helps resolve common issues when working with Prisma Postgres. The --db option is not recognized when running prisma init [#the---db-option-is-not-recognized-when-running-prisma-init] Problem [#problem] Running the following command fails because the `--db` option is not recognized: npm pnpm yarn bun ```bash npx prisma init --db ``` ```bash pnpm dlx prisma init --db ``` ```bash yarn dlx prisma init --db ``` ```bash bunx --bun prisma init --db ``` Cause [#cause] This can occur due to npx caching. If you've previously run `npx prisma init`, your machine may be using an outdated cached version that doesn't recognize the `--db` flag because it was only introduced in a later version of Prisma ORM. Solution [#solution] Explicitly run the `latest` Prisma CLI version: npm pnpm yarn bun ```bash npx prisma@latest init --db ``` ```bash pnpm dlx prisma@latest init --db ``` ```bash yarn dlx prisma@latest init --db ``` ```bash bunx --bun prisma@latest init --db ``` This ensures that you're using the most up-to-date CLI, preventing issues with outdated command syntax. 
Workspace plan limit reached when running prisma init --db [#workspace-plan-limit-reached-when-running-prisma-init---db] Problem [#problem-1] When running the command: npm pnpm yarn bun ```bash npx prisma@latest init --db ``` ```bash pnpm dlx prisma@latest init --db ``` ```bash yarn dlx prisma@latest init --db ``` ```bash bunx --bun prisma@latest init --db ``` You may encounter the following error message in your logs: ``` Workspace plan limit reached for feature "Project". ``` Cause [#cause-1] Your default [workspace](/console/concepts#workspace) project limit has been reached. Solution [#solution-1] To resolve this issue, consider the following options: * Configure a different Workspace as your default—one that has available capacity for additional projects. * Delete unused projects or databases from your current default Workspace to free up space. * Ensure that you are logged into the correct account in the Prisma CLI. For more details on authentication and account management, please refer to the [Prisma CLI documentation](/cli/console). * [Upgrade to a plan](/postgres#billing-and-limits) that supports more projects in your default Workspace. Implementing one or more of these solutions should help you overcome the plan limit issue. # Query Insights (/docs/query-insights) Query Insights is built into Prisma Postgres and helps you understand which queries are slow, why they are expensive, and what to change next. It does not automatically rewrite your queries or schema. Query Insights replaces Prisma Optimize and is now included with Prisma Postgres at no extra cost. You can try it today in the [Prisma Console](https://console.prisma.io). Dashboard [#dashboard] The main Query Insights view gives you a live summary of query activity for your database. 
At the top of the page, you can inspect: * Average latency over the selected period * Queries per second * A time-based chart for each metric * Hover values for exact timestamps and measurements * Playback controls for stepping through captured activity This makes it easier to see whether a problem is steady, bursty, or tied to a short window of activity. Query list [#query-list] Below the charts, Query Insights shows a list of grouped queries. Each row includes: * Latency * Executions * Reads * Last seen * The SQL statement shape You can use the controls above the table to: * Filter results by table * Sort the list to surface the most important queries first * Focus on repeated, high-read, or recently executed statements This view is the fastest way to identify which query patterns deserve investigation first. Query detail [#query-detail] Selecting a query opens a detail view for that statement. The detail view shows: * A stat summary describing the query's table, execution count, average latency, and reads per call * The full SQL statement * An AI-generated analysis explaining whether the query needs optimization and why * A copyable prompt you can paste directly into your editor or an AI coding assistant to apply the suggested fix The AI analysis describes the likely cause of the performance issue, the specific change it recommends, and the expected impact. The copyable prompt includes your actual query along with context, so you can paste it into your editor or a tool like Cursor, Copilot, or Claude and get a concrete code change without switching context. Treat the AI analysis as a starting point, not a final answer. Review any suggested change before shipping it. Prisma ORM attribution [#prisma-orm-attribution] When using Prisma ORM, Query Insights can trace the full chain from your application code to the SQL it generates. This means you can see which `prisma.*` call produced a slow query, even when a single Prisma call expands into multiple SQL statements. 
For raw SQL or queries issued outside Prisma ORM, Query Insights still shows full SQL behavior, but ORM-level attribution requires the steps below. Setup [#setup] To enable ORM attribution, install the `@prisma/sqlcommenter-query-insights` package: ```bash npm install @prisma/sqlcommenter-query-insights ``` Then pass it to the `comments` option in your `PrismaClient` constructor: ```ts import "dotenv/config"; import { PrismaClient } from "../generated/prisma/client"; import { PrismaPg } from "@prisma/adapter-pg"; import { prismaQueryInsights } from "@prisma/sqlcommenter-query-insights"; const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL, }); export const prisma = new PrismaClient({ adapter: adapter, comments: [prismaQueryInsights()], }) ``` This adds SQL comment annotations to queries so Query Insights can map SQL statements back to the Prisma calls that generated them. It is built on top of the [SQL comments](/orm/prisma-client/observability-and-logging/sql-comments) feature in Prisma Client. Availability [#availability] Query Insights is included with Prisma Postgres at no extra cost. You can try it today in the [Prisma Console](https://console.prisma.io). Typical issues [#typical-issues] Query Insights is most useful when it connects a database symptom to a concrete code change. 
| Issue | What you might see | Typical fix | | ------------------ | ------------------------------------ | ------------------------------------------- | | N+1 queries | High query count for one request | Use nested reads, batching, or joins | | Missing indexes | High reads relative to rows returned | Add the right index for the filter pattern | | Over-fetching | Wide rows or large payloads | Use `select` to fetch fewer fields | | Offset pagination | Reads grow on deeper pages | Switch to cursor pagination | | Large nested reads | High reads and large payloads | Limit fields, limit depth, or split queries | | Repeated queries | The same statement shape runs often | Cache or reuse results when appropriate | How to use it [#how-to-use-it] When an endpoint gets slow, Query Insights gives you a practical workflow: 1. Open Query Insights and scan the latency and queries-per-second charts. 2. Sort or filter the query list to isolate the expensive statement. 3. Open the query detail view. 4. Read the AI analysis and inspect the SQL. 5. Copy the suggested prompt and paste it into your editor. 6. Review the suggested change, then apply it in code or schema. 7. Re-run the workload and compare the same signals again. 
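As a concrete illustration of one row in the table above, here is what switching from offset to cursor pagination might look like with Prisma Client (a sketch assuming a `Post` model with an auto-incrementing `id`, as in the getting-started schema):

```typescript
// Offset pagination: the database still walks past the skipped rows,
// so reads grow as users page deeper.
const page = await prisma.post.findMany({
  skip: 40,
  take: 20,
  orderBy: { id: "asc" },
});

// Cursor pagination: resume from the last row of the previous page,
// keeping reads roughly constant regardless of page depth.
const lastSeenId = page[page.length - 1].id;
const nextPage = await prisma.post.findMany({
  cursor: { id: lastSeenId },
  skip: 1, // skip the cursor row itself
  take: 20,
  orderBy: { id: "asc" },
});
```

After a change like this, re-run the workload and confirm in Query Insights that reads no longer grow on deeper pages.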
In most cases, the next change falls into one of these buckets: * Change the Prisma query shape * Add or adjust an index * Return fewer fields or fewer rows * Cache repeated work Example [#example] A common example is an N+1 pattern: ```ts const users = await prisma.user.findMany({ select: { id: true, name: true, email: true }, }); for (const user of users) { await prisma.post.findMany({ where: { authorId: user.id }, select: { id: true, title: true }, }); } ``` Query Insights would typically show: * One query to load users * Many repeated queries to load posts * A high execution count for the same statement shape * More reads and latency than the route should need In this case, the likely fix is to load the related posts in one nested read: ```ts const usersWithPosts = await prisma.user.findMany({ select: { id: true, name: true, email: true, posts: { select: { id: true, title: true, }, }, }, }); ``` The same pattern applies to other issues. Query Insights helps you identify the expensive query shape, understand why it is expensive, and choose the next change to verify. Next steps [#next-steps] * Review [Connection pooling](/postgres/database/connection-pooling) for high-concurrency workloads * Use [Connecting to your database](/postgres/database/connecting-to-your-database) when choosing connection strings for other tools * See [Prisma Client query optimization](/orm/prisma-client/queries/advanced/query-optimization-performance) for related Prisma ORM patterns # Getting Started (/docs/studio/getting-started) Installation [#installation] Prisma Studio comes bundled with the Prisma CLI. 
To get started, make sure you have Node.js installed, then install the Prisma CLI: npm pnpm yarn bun ```bash npm install -g prisma ``` ```bash pnpm add -g prisma ``` ```bash yarn global add prisma ``` ```bash bun add --global prisma ``` Launching Studio [#launching-studio] With a Prisma Project [#with-a-prisma-project] If you have an existing Prisma project, navigate to your project directory and run: npm pnpm yarn bun ```bash npx prisma studio ``` ```bash pnpm dlx prisma studio ``` ```bash yarn dlx prisma studio ``` ```bash bunx --bun prisma studio ``` This will start the Studio server and open it in your default browser at `http://localhost:5555`. Without a Prisma Project [#without-a-prisma-project] You can also use Studio with any database by providing a connection string: npm pnpm yarn bun ```bash npx prisma studio --url="postgresql://user:password@localhost:5432/yourdb" ``` ```bash pnpm dlx prisma studio --url="postgresql://user:password@localhost:5432/yourdb" ``` ```bash yarn dlx prisma studio --url="postgresql://user:password@localhost:5432/yourdb" ``` ```bash bunx --bun prisma studio --url="postgresql://user:password@localhost:5432/yourdb" ``` Connecting to Your Database [#connecting-to-your-database] 1. **Using environment variables**: Create a `.env` file in your project root with your database URL: ``` DATABASE_URL="postgresql://user:password@localhost:5432/yourdb" ``` Then run: `npx prisma studio` 2. 
**Using command line**: ```bash npx prisma studio --url="your-database-connection-string" ``` Basic Usage [#basic-usage] Browsing Data [#browsing-data] * The left sidebar lists all your database tables * Click on a table to view its data * Use the search bar to quickly find tables or columns Editing Data [#editing-data] * **Edit cells**: Double-click any cell to edit its value * **Add records**: Click the "+" button to add a new record * **Delete records**: Select records using checkboxes and click the trash icon Filtering and Sorting [#filtering-and-sorting] * Click the filter icon to add filters * Click on column headers to sort the table * Use the search box to filter records by any field Common Tasks [#common-tasks] Viewing Table Relationships [#viewing-table-relationships] * Related tables are shown as expandable rows * Click the "+" icon to view related records Exporting Data [#exporting-data] * Use the export button to download data as CSV or JSON * Select specific columns to include in the export Next Steps [#next-steps] * Learn how to [embed Studio in your application](/studio/integrations/embedding) * Discover [VS Code integration](/studio/integrations/vscode-integration) features # Prisma Studio (/docs/studio) [Prisma Studio](https://www.prisma.io/studio) works with or without Prisma ORM and supports the following workflows: * Viewing and editing data in a spreadsheet-like interface * Real-time schema introspection * Embedding directly into your Next.js applications * VS Code integration for in-editor database management Supported databases [#supported-databases] * PostgreSQL * MySQL * SQLite Quick start [#quick-start] npm pnpm yarn bun ```bash # With Prisma project npx prisma studio # With direct database connection npx prisma studio --url="postgresql://user:password@localhost:5432/dbname" ``` ```bash # With Prisma project pnpm dlx prisma studio # With direct database connection pnpm dlx prisma studio 
--url="postgresql://user:password@localhost:5432/dbname" ``` ```bash # With Prisma project yarn dlx prisma studio # With direct database connection yarn dlx prisma studio --url="postgresql://user:password@localhost:5432/dbname" ``` ```bash # With Prisma project bun x prisma studio # With direct database connection bun x prisma studio --url="postgresql://user:password@localhost:5432/dbname" ``` Getting started [#getting-started] * [Getting Started](/studio/getting-started) - Learn how to set up and use Prisma Studio to manage your database * [Embed Studio](/studio/integrations/embedding) - Learn how to embed Prisma Studio in your own applications * [Studio in VS Code](/studio/integrations/vscode-integration) - Learn how to use Prisma Studio directly in VS Code # From the CLI (/docs/prisma-postgres/from-the-cli) This page provides a step-by-step guide for Prisma Postgres after setting it up with `prisma init --db`: 1. Set up a TypeScript app with Prisma ORM 2. Migrate the schema of your database 3. Query your database from TypeScript Prerequisites [#prerequisites] This guide assumes you set up a [Prisma Postgres](/postgres) instance with `prisma init --db`: npm pnpm yarn bun ```bash npx prisma@latest init --db ``` ```bash pnpm dlx prisma@latest init --db ``` ```bash yarn dlx prisma@latest init --db ``` ```bash bunx --bun prisma@latest init --db ``` Once this command has completed: * You're logged into Prisma Data Platform. * A new Prisma Postgres instance was created. * The `prisma/` folder was created with an empty `schema.prisma` file. * The `DATABASE_URL` env var was set in a `.env` file. * The `prisma.config.ts` file was created with the default configuration. 1. Organize your project directory [#1-organize-your-project-directory] If you ran the `prisma init --db` command inside a folder where you want your project to live, you can skip this step and [proceed to the next section](/prisma-postgres/from-the-cli#2-set-up-your-project). 
If you ran the command outside your intended project directory (e.g., in your home folder or another location), you need to move the generated `prisma` folder and the `.env` file into a dedicated project directory. Create a new folder (e.g. `hello-prisma`) where you want your project to live and move the necessary files into it: ```bash mkdir hello-prisma mv .env ./hello-prisma/ mv prisma ./hello-prisma/ ``` Navigate into your project folder: ```bash cd ./hello-prisma ``` Now that your project is in the correct location, continue with the setup. 2. Set up your project [#2-set-up-your-project] 2.1. Set up TypeScript [#21-set-up-typescript] Initialize a TypeScript project and add the Prisma CLI as a development dependency: npm pnpm yarn bun ```bash npm init -y ``` ```bash pnpm init -y ``` ```bash yarn init -y ``` ```bash bun init -y ``` npm pnpm yarn bun ```bash npm install typescript tsx @types/node @types/pg -D ``` ```bash pnpm add typescript tsx @types/node @types/pg -D ``` ```bash yarn add typescript tsx @types/node @types/pg --dev ``` ```bash bun add typescript tsx @types/node @types/pg --dev ``` This creates a `package.json` file with an initial setup for your TypeScript app. Next, initialize TypeScript with a `tsconfig.json` file in the project: npm pnpm yarn bun ```bash npx tsc --init ``` ```bash pnpm dlx tsc --init ``` ```bash yarn dlx tsc --init ``` ```bash bunx --bun tsc --init ``` 2.2. Configure ESM support [#22-configure-esm-support] Update `tsconfig.json` for ESM compatibility: ```json title="tsconfig.json" { "compilerOptions": { "module": "ESNext", "moduleResolution": "bundler", "target": "ES2023", "strict": true, "esModuleInterop": true, "ignoreDeprecations": "6.0" } } ``` Update `package.json` to enable ESM: ```json title="package.json" { "type": "module" // [!code ++] } ``` 2.3. 
Set up Prisma ORM [#23-set-up-prisma-orm] Install the required dependencies to use Prisma Postgres: npm pnpm yarn bun ```bash npm install prisma --save-dev npm install @prisma/client @prisma/adapter-pg pg dotenv ``` ```bash pnpm add prisma --save-dev pnpm add @prisma/client @prisma/adapter-pg pg dotenv ``` ```bash yarn add prisma --dev yarn add @prisma/client @prisma/adapter-pg pg dotenv ``` ```bash bun add prisma --dev bun add @prisma/client @prisma/adapter-pg pg dotenv ``` Here's what each package does: * **`prisma`** - The Prisma CLI for running commands like `prisma migrate` and `prisma generate` * **`@prisma/client`** - The Prisma Client library for querying your database * **`@prisma/adapter-pg`** - The [`node-postgres` driver adapter](/orm/core-concepts/supported-databases/postgresql#using-driver-adapters) that connects Prisma Client to your database * **`pg`** - The node-postgres database driver * **`@types/pg`** - TypeScript type definitions for node-postgres * **`dotenv`** - Loads environment variables from your `.env` file 2.4. Review the generated prisma.config.ts [#24-review-the-generated-prismaconfigts] The `prisma init --db` command automatically created a `prisma.config.ts` file that looks like this: ```typescript title="prisma.config.ts" import "dotenv/config"; import { defineConfig, env } from "prisma/config"; export default defineConfig({ schema: "prisma/schema.prisma", migrations: { path: "prisma/migrations", }, datasource: { url: env("DATABASE_URL"), }, }); ``` 2.5. Create a script to query the database [#25-create-a-script-to-query-the-database] Create an `index.ts` file in the root directory; you'll use it to query your database with Prisma ORM: ```bash touch index.ts ``` 3. 
Migrate the database schema [#3-migrate-the-database-schema] Update your `prisma/schema.prisma` file to include the `User` and `Post` models: ```prisma title="prisma/schema.prisma" generator client { provider = "prisma-client" output = "../generated/prisma" } datasource db { provider = "postgresql" } model User { id Int @id @default(autoincrement()) email String @unique name String? posts Post[] } model Post { id Int @id @default(autoincrement()) title String content String? published Boolean @default(false) author User @relation(fields: [authorId], references: [id]) authorId Int } ``` After adding the models, migrate your database using [Prisma Migrate](/orm/prisma-migrate): npm pnpm yarn bun ```bash npx prisma migrate dev --name init ``` ```bash pnpm dlx prisma migrate dev --name init ``` ```bash yarn dlx prisma migrate dev --name init ``` ```bash bunx --bun prisma migrate dev --name init ``` This command creates the database tables based on your schema. Now run the following command to generate the Prisma Client: npm pnpm yarn bun ```bash npx prisma generate ``` ```bash pnpm dlx prisma generate ``` ```bash yarn dlx prisma generate ``` ```bash bunx --bun prisma generate ``` 4. Send queries with Prisma ORM [#4-send-queries-with-prisma-orm] 4.1. Instantiate Prisma Client [#41-instantiate-prisma-client] Create a `lib/prisma.ts` file to instantiate Prisma Client with the driver adapter: ```typescript title="lib/prisma.ts" import "dotenv/config"; import { PrismaPg } from "@prisma/adapter-pg"; import { PrismaClient } from "../generated/prisma/client"; const connectionString = `${process.env.DATABASE_URL}`; const adapter = new PrismaPg({ connectionString }); const prisma = new PrismaClient({ adapter }); export { prisma }; ``` If you need to query your database via HTTP from an edge runtime (Cloudflare Workers, Vercel Edge Functions, etc.), use the [Prisma Postgres serverless driver](/postgres/database/serverless-driver#use-with-prisma-orm). 4.2. 
Write your first query [#42-write-your-first-query] Paste the following boilerplate into `index.ts`: ```ts title="index.ts" import { prisma } from "./lib/prisma"; async function main() { // ... you will write your Prisma ORM queries here } main() .then(async () => { await prisma.$disconnect(); }) .catch(async (e) => { console.error(e); await prisma.$disconnect(); process.exit(1); }); ``` This code contains a `main` function that's invoked at the end of the script. It also imports the `prisma` client instance from `lib/prisma.ts`, which you'll use to send queries to your database. 4.3. Create a new User record [#43-create-a-new-user-record] Let's start with a small query to create a new `User` record in the database and log the resulting object to the console. Add the following code to your `index.ts` file: ```ts title="index.ts" import { prisma } from "./lib/prisma"; async function main() { const user = await prisma.user.create({ // [!code ++] data: { // [!code ++] name: "Alice", // [!code ++] email: "alice@prisma.io", // [!code ++] }, // [!code ++] }); // [!code ++] console.log(user); // [!code ++] } main() .then(async () => { await prisma.$disconnect(); }) .catch(async (e) => { console.error(e); await prisma.$disconnect(); process.exit(1); }); ``` Next, execute the script with the following command: npm pnpm yarn bun ```bash npx tsx index.ts ``` ```bash pnpm dlx tsx index.ts ``` ```bash yarn dlx tsx index.ts ``` ```bash bunx --bun tsx index.ts ``` ```text no-copy { id: 1, email: 'alice@prisma.io', name: 'Alice' } ``` Great job, you just created your first database record with Prisma Postgres! 🎉 4.4. Retrieve all User records [#44-retrieve-all-user-records] Prisma ORM offers various queries to read data from your database. In this section, you'll use the `findMany` query that returns *all* the records in the database for a given model.
Delete the previous Prisma ORM query and add the new `findMany` query instead: ```ts title="index.ts" import { prisma } from "./lib/prisma"; async function main() { const users = await prisma.user.findMany(); // [!code ++] console.log(users); // [!code ++] } main() .then(async () => { await prisma.$disconnect(); }) .catch(async (e) => { console.error(e); await prisma.$disconnect(); process.exit(1); }); ``` Execute the script again: npm pnpm yarn bun ```bash npx tsx index.ts ``` ```bash pnpm dlx tsx index.ts ``` ```bash yarn dlx tsx index.ts ``` ```bash bunx --bun tsx index.ts ``` ```text no-copy [{ id: 1, email: 'alice@prisma.io', name: 'Alice' }] ``` Notice how the single `User` object is now enclosed in square brackets in the console. That's because `findMany` returned an array with a single object inside. 4.5. Explore relation queries [#45-explore-relation-queries] One of the main features of Prisma ORM is the ease of working with [relations](/orm/prisma-schema/data-model/relations). In this section, you'll learn how to create a `User` and a `Post` record in a nested write query. Afterwards, you'll see how you can retrieve the relation from the database using the `include` option.
First, adjust your script to include the nested query: ```ts title="index.ts" import { prisma } from "./lib/prisma"; async function main() { const user = await prisma.user.create({ // [!code ++] data: { // [!code ++] name: "Bob", // [!code ++] email: "bob@prisma.io", // [!code ++] posts: { // [!code ++] create: [ // [!code ++] { // [!code ++] title: "Hello World", // [!code ++] published: true, // [!code ++] }, // [!code ++] { // [!code ++] title: "My second post", // [!code ++] content: "This is still a draft", // [!code ++] }, // [!code ++] ], // [!code ++] }, // [!code ++] }, // [!code ++] }); // [!code ++] console.log(user); // [!code ++] } main() .then(async () => { await prisma.$disconnect(); }) .catch(async (e) => { console.error(e); await prisma.$disconnect(); process.exit(1); }); ``` Run the query by executing the script again: npm pnpm yarn bun ```bash npx tsx index.ts ``` ```bash pnpm dlx tsx index.ts ``` ```bash yarn dlx tsx index.ts ``` ```bash bunx --bun tsx index.ts ``` ```text no-copy { id: 2, email: 'bob@prisma.io', name: 'Bob' } ``` In order to also retrieve the `Post` records that belong to a `User`, you can use the `include` option via the `posts` relation field: ```ts title="index.ts" import { prisma } from "./lib/prisma"; async function main() { const usersWithPosts = await prisma.user.findMany({ // [!code ++] include: { // [!code ++] posts: true, // [!code ++] }, // [!code ++] }); // [!code ++] console.dir(usersWithPosts, { depth: null }); // [!code ++] } main() .then(async () => { await prisma.$disconnect(); }) .catch(async (e) => { console.error(e); await prisma.$disconnect(); process.exit(1); }); ``` Run the script again to see the results of the nested read query: npm pnpm yarn bun ```bash npx tsx index.ts ``` ```bash pnpm dlx tsx index.ts ``` ```bash yarn dlx tsx index.ts ``` ```bash bunx --bun tsx index.ts ``` ```text no-copy [ { id: 1, email: 'alice@prisma.io', name: 'Alice', posts: [] }, { id: 2, email: 'bob@prisma.io', name: 'Bob', 
posts: [ { id: 1, title: 'Hello World', content: null, published: true, authorId: 2 }, { id: 2, title: 'My second post', content: 'This is still a draft', published: false, authorId: 2 } ] } ] ``` This time, you're seeing two `User` objects being printed. Both of them have a `posts` field (which is empty for `"Alice"` and populated with two `Post` objects for `"Bob"`) that represents the `Post` records associated with them. Next steps [#next-steps] You just got your feet wet with a basic Prisma Postgres setup. Check out the official [Quickstart](/prisma-orm/quickstart/prisma-postgres). View and edit data in Prisma Studio [#view-and-edit-data-in-prisma-studio] Prisma ORM comes with a built-in GUI to view and edit the data in your database. You can open it using the following command: npm pnpm yarn bun ```bash npx prisma studio --config ./prisma.config.ts ``` ```bash pnpm dlx prisma studio --config ./prisma.config.ts ``` ```bash yarn dlx prisma studio --config ./prisma.config.ts ``` ```bash bunx --bun prisma studio --config ./prisma.config.ts ``` With Prisma Postgres, you can also directly use Prisma Studio inside the [Console](https://console.prisma.io) by selecting the **Studio** tab in your project. Build a fullstack app with Next.js [#build-a-fullstack-app-with-nextjs] Learn how to use Prisma Postgres in a fullstack app: * [Build a fullstack app with Next.js 15](/guides/frameworks/nextjs) * [Next.js 15 example app](https://github.com/prisma/nextjs-prisma-postgres-demo) (including authentication) Explore ready-to-run examples [#explore-ready-to-run-examples] Check out the [`prisma-examples`](https://github.com/prisma/prisma-examples/) repository on GitHub to see how Prisma ORM can be used with your favorite library. The repo contains examples with Express, NestJS, GraphQL as well as fullstack examples with Next.js and Vue.js, and a lot more. 
These examples use SQLite by default but you can follow the instructions in the project README to switch to Prisma Postgres in a few simple steps. # Import from MySQL (/docs/prisma-postgres/import-from-existing-database-mysql) This guide provides step-by-step instructions for importing data from an existing MySQL database into Prisma Postgres. You can accomplish this migration in four steps: 1. Create a new Prisma Postgres database. 2. Connect directly to a Prisma Postgres instance using a [direct connection](/postgres/database/connecting-to-your-database). 3. Migrate your MySQL data to Prisma Postgres using [pgloader](https://pgloader.io/). 4. Configure your Prisma project for Prisma Postgres. Prerequisites [#prerequisites] * The connection URL to your existing MySQL database. * A [Prisma Data Platform](https://console.prisma.io) account. * Node.js 18+ installed. * [pgloader](https://pgloader.io/) installed. Make sure your PostgreSQL tools match the Prisma Postgres version Prisma Postgres runs PostgreSQL 17. Your `pgloader` and any other PostgreSQL tools you use need to be compatible with PostgreSQL 17. We recommend attempting this migration in a separate git development branch. 1. Create a new Prisma Postgres database [#1-create-a-new-prisma-postgres-database] Follow these steps to create a new Prisma Postgres database: 1. Log in to [Prisma Data Platform](https://console.prisma.io/) and open the Console. 2. In a [workspace](/console/concepts#workspace) of your choice, click the **New project** button. 3. Type a name for your project in the **Name** field, e.g. **hello-ppg**. 4. In the **Prisma Postgres** section, click the **Get started** button. 5. In the **Region** dropdown, select the region that's closest to your current location, e.g. **US East (N. Virginia)**. 6. Click the **Create project** button. Once your database is provisioned, find your direct Prisma Postgres connection string: 1. Navigate to your active Prisma Postgres instance. 2.
Click the **Connection Strings** tab in the project's sidenav. 3. Click the **Create connection string** button. 4. In the popup, provide a **Name** for the connection string and click **Create**. 5. Copy the connection string starting with `postgres://`, this is your direct connection string. Save the connection string, as you'll need it in step 3. 2. Prepare your direct connection string [#2-prepare-your-direct-connection-string] In this step, you'll use the [direct connection string](/postgres/database/connecting-to-your-database) you obtained in step 1 to connect to your Prisma Postgres instance. Your direct connection string should look like this: ```text postgres://USER:PASSWORD@db.prisma.io:5432/?sslmode=require ``` You'll use this connection string in the next step when configuring pgloader. 3. Migrate your MySQL data to Prisma Postgres using pgloader [#3-migrate-your-mysql-data-to-prisma-postgres-using-pgloader] Now that you have an active connection to your Prisma Postgres instance, you'll use [pgloader](https://pgloader.io/) to export data from your MySQL database to Prisma Postgres. Open a separate terminal window and create a `config.load` file: ```bash touch config.load ``` Open the `config.load` file in your preferred text editor and copy-paste the following configuration: ```text title="config.load" LOAD DATABASE FROM mysql://username:password@host:PORT/database_name INTO postgres://__USER__:__PASSWORD__@db.prisma.io:5432/?sslmode=require WITH quote identifiers, -- preserve table/column name case by quoting them include drop, create tables, create indexes, reset sequences ALTER SCHEMA 'database_name' RENAME TO 'public'; ``` Make sure to update the following details in the `config.load` file: * `FROM` url (MySQL database URL): * Replace `username`, `password`, `host`, `PORT`, and `database_name` with the actual connection details for your MySQL database. 
* Ensure that your connection string includes `useSSL=true` if SSL is required, for example: `mysql://username:password@host:PORT/database_name?useSSL=true`. Note that when using PlanetScale, appending `sslaccept=strict` will not work. * `INTO` url (Postgres database URL): * Update this with your direct connection string from above, replacing the `__USER__` and `__PASSWORD__` placeholders. * Update the `database_name` in `ALTER SCHEMA 'database_name' RENAME TO 'public';` to exactly match the `database_name` in your MySQL connection string. After saving the configuration file with your updated credentials, in the same terminal window, execute the following command: ```bash pgloader config.load ``` You should see a log similar to this, which confirms the successful migration of your data: ```bash LOG report summary reset table name errors rows bytes total time ------------------------- --------- --------- --------- -------------- fetch meta data 0 9 2.546s Create Schemas 0 0 0.325s Create SQL Types 0 0 0.635s Create tables 0 6 5.695s Set Table OIDs 0 3 0.328s ------------------------- --------- --------- --------- -------------- public.post 0 8 0.5 kB 4.255s public."user" 0 4 0.1 kB 2.775s public._prisma_migrations 0 1 0.2 kB 4.278s ------------------------- --------- --------- --------- -------------- COPY Threads Completion 0 4 5.095s Index Build Completion 0 5 9.601s Create Indexes 0 5 4.116s Reset Sequences 0 2 4.540s Primary Keys 0 3 2.917s Create Foreign Keys 0 1 1.121s Create Triggers 0 0 0.651s Install Comments 0 0 0.000s ------------------------- --------- --------- --------- -------------- Total import time ✓ 13 0.8 kB 28.042s ``` If you see output like this, it means your data has been successfully exported to your Prisma Postgres instance. 
You can also use [Prisma Studio](/guides/postgres/viewing-data#viewing-and-editing-data-in-prisma-studio) to verify whether the migration was successful: npm pnpm yarn bun ```bash npx prisma studio ``` ```bash pnpm dlx prisma studio ``` ```bash yarn dlx prisma studio ``` ```bash bunx --bun prisma studio ``` 4. Configure your Prisma project for Prisma Postgres [#4-configure-your-prisma-project-for-prisma-postgres] After migrating your data, you need to set up your Prisma project to work with Prisma Postgres. The steps differ depending on whether you were already using Prisma ORM. If you were not previously using Prisma ORM [#if-you-were-not-previously-using-prisma-orm] Initialize Prisma in your project by running `npx prisma init` in your project directory. This creates a `prisma` folder with a `schema.prisma` file and `.env` file (if not already present). In the generated `.env` file, update `DATABASE_URL` to match your Prisma Postgres direct connection string that you received in [step 1](/prisma-postgres/import-from-existing-database-mysql#1-create-a-new-prisma-postgres-database): ```text title=".env" no-copy DATABASE_URL="postgres://USER:PASSWORD@db.prisma.io:5432/?sslmode=require" ``` [Introspect](/orm/prisma-schema/introspection) your newly migrated database by running: npm pnpm yarn bun ```bash npx prisma db pull ``` ```bash pnpm dlx prisma db pull ``` ```bash yarn dlx prisma db pull ``` ```bash bunx --bun prisma db pull ``` This command updates your `schema.prisma` file with models representing your migrated tables, so you can start using [Prisma Client](/orm/prisma-client/setup-and-configuration/introduction) to query your data or [Prisma Migrate](/orm/prisma-migrate/getting-started) to manage future changes. Congratulations! You've successfully migrated your MySQL database to Prisma Postgres and configured your Prisma project. Your migration tutorial is now complete. 
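The shape of the introspected models depends entirely on your MySQL schema, so any concrete example here is a hedged sketch only. Introspecting the `user` and `post` tables from the example migration log above might yield models along these lines:

```prisma
// Illustrative sketch only — field names, types, and casing depend on your
// own migrated tables (pgloader's `quote identifiers` preserves case).
model user {
  id    Int     @id @default(autoincrement())
  email String  @unique
  name  String?
  post  post[]
}

model post {
  id        Int     @id @default(autoincrement())
  title     String
  content   String?
  published Boolean @default(false)
  authorId  Int
  user      user    @relation(fields: [authorId], references: [id])
}
```

After running `prisma generate`, each introspected model becomes available on Prisma Client (e.g. `prisma.user.findMany()`).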
For a comprehensive guide on getting started with Prisma and Prisma Postgres, see [start from scratch with Prisma and Prisma Postgres](/prisma-orm/quickstart/prisma-postgres). If you were already using Prisma ORM [#if-you-were-already-using-prisma-orm] In your `schema.prisma` file, change the `provider` in the `datasource` block from `mysql` to `postgresql`: ```prisma title="schema.prisma" datasource db { provider = "mysql" // [!code --] provider = "postgresql" // [!code ++] } ``` In the generated `.env` file, update `DATABASE_URL` to match your Prisma Postgres direct connection string that you received in [step 1](/prisma-postgres/import-from-existing-database-mysql#1-create-a-new-prisma-postgres-database): ```text title=".env" no-copy DATABASE_URL="postgres://USER:PASSWORD@db.prisma.io:5432/?sslmode=require" ``` Introspect your newly migrated Prisma Postgres database and generate Prisma Client: npm pnpm yarn bun ```bash npx prisma db pull ``` ```bash pnpm dlx prisma db pull ``` ```bash yarn dlx prisma db pull ``` ```bash bunx --bun prisma db pull ``` This command refreshes your Prisma models based on the new database schema. If you were using [Prisma Migrate](/orm/prisma-migrate/getting-started) before: * Delete your existing `migrations` folder in the `prisma` directory. * [Baseline your database](/orm/prisma-migrate/workflows/baselining#baselining-a-database) to begin creating new migrations. Congratulations! You've successfully migrated your MySQL database to Prisma Postgres and configured your Prisma project. Your migration tutorial is now complete. If you encounter any issues during the migration, please don't hesitate to reach out to us on [Discord](https://pris.ly/discord?utm_source=docs\&utm_medium=conclusion) or via [X](https://pris.ly/x?utm_source=docs\&utm_medium=conclusion).
# Import from PostgreSQL (/docs/prisma-postgres/import-from-existing-database-postgresql) This guide provides step-by-step instructions for importing data from an existing PostgreSQL database into Prisma Postgres. You can accomplish this migration in four steps: 1. Create a new Prisma Postgres database. 2. Export your existing data via `pg_dump`. 3. Import the previously exported data into Prisma Postgres via `pg_restore`. 4. Update your application code to query Prisma Postgres. In the third step, you will be using a [direct connection](/postgres/database/connecting-to-your-database) to securely connect to your Prisma Postgres database to run `pg_restore`. Prerequisites [#prerequisites] * The connection URL to your existing PostgreSQL database * A [Prisma Data Platform](https://console.prisma.io) account * Node.js 18+ installed * PostgreSQL CLI Tools (`pg_dump`, `pg_restore`) for creating and restoring backups Make sure your PostgreSQL tools match the Prisma Postgres version Prisma Postgres runs PostgreSQL 17. Your `pg_dump` and `pg_restore` tools need to be version 17 to ensure compatibility. You can check your version by running `pg_dump --version` or `pg_restore --version`. 1. Create a new Prisma Postgres database [#1-create-a-new-prisma-postgres-database] Follow these steps to create a new Prisma Postgres database: 1. Log in to [Prisma Data Platform](https://console.prisma.io/) and open the Console. 2. In a [workspace](/console/concepts#workspace) of your choice, click the **New project** button. 3. Type a name for your project in the **Name** field, e.g. **hello-ppg**. 4. In the **Prisma Postgres** section, click the **Get started** button. 5. In the **Region** dropdown, select the region that's closest to your current location, e.g. **US East (N. Virginia)**. 6. Click the **Create project** button. Once your database is provisioned, obtain your direct connection string: 1. Navigate to your active Prisma Postgres instance. 2. Click the **Connection Strings** tab in the project's sidenav. 3.
Click the **Create connection string** button. 4. In the popup, provide a **Name** for the connection string and click **Create**. 5. Copy the connection string starting with `postgres://`, this is your direct connection string. Save the connection string, as you'll need it in step 3. 2. Export data from your existing database [#2-export-data-from-your-existing-database] In this step, you're going to export the data from your existing database and store it in a `.bak` file on your local machine. Make sure to have the connection URL for your existing database ready, it should be [structured](/orm/reference/connection-urls) like this: ```text postgresql://USER:PASSWORD@HOST:PORT/DATABASE ``` Expand below for provider-specific instructions that help you determine the right connection string: * Make sure to select non-pooled connection string by switching off the **Connection pooling** toggle. * The `sslmode` has to be set to `require` and appended to your Neon database URL for the command to work. * The connection URL should look similar to this: ```text postgresql://USER:PASSWORD@YOUR-NEON-HOST/DATABASE?sslmode=require ``` * Use a database connection URL that uses [Supavisor session mode](https://supabase.com/docs/guides/database/connecting-to-postgres#supavisor-session-mode). 
* The connection URL should look similar to this: ```text postgres://postgres.apbkobhfnmcqqzqeeqss:[YOUR-PASSWORD]@aws-0-ca-central-1.pooler.supabase.com:5432/postgres ``` Next, run the following command to export the data of your PostgreSQL database (replace the `__DATABASE_URL__` placeholder with your actual database connection URL): ```bash pg_dump \ -Fc \ -v \ -d __DATABASE_URL__ \ -n public \ -f db_dump.bak ``` Here's a quick overview of the CLI options that were used for this command: * `-Fc`: Uses the custom format for backups, recommended for `pg_restore` * `-v`: Runs `pg_dump` in verbose mode * `-d`: Specifies the database connection string * `-n`: Specifies the target PostgreSQL schema * `-f`: Specifies the output name for the backup file Running this command will create a backup file named `db_dump.bak` which you will use to restore the data into your Prisma Postgres database in the next step. 3. Import data into Prisma Postgres [#3-import-data-into-prisma-postgres] In this section, you'll use your [direct connection string](/postgres/database/connecting-to-your-database) to connect to your Prisma Postgres instance and import data via `pg_restore`. Your direct connection string from step 1 should look like this: ```text postgres://USER:PASSWORD@db.prisma.io:5432/?sslmode=require ``` Use the backup file from **Step 2** to restore data into your Prisma Postgres database with `pg_restore` by running this command (replace `__USER__`, `__PASSWORD__` with the values from your direct connection string): ```bash pg_restore \ -h db.prisma.io \ -p 5432 \ -U __USER__ \ -d postgres \ -v \ ./db_dump.bak \ && echo "-complete-" ``` When prompted, enter the `__PASSWORD__` from your direct connection string. 
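`pg_restore` takes the pieces of the connection string as separate flags (`-h`, `-p`, `-U`). If you'd rather extract them programmatically than read them off by hand, Node's WHATWG `URL` class can split a Postgres connection URL. A minimal sketch, using a placeholder URL rather than real credentials:

```typescript
// Split a PostgreSQL connection URL into the parts pg_restore expects.
// The URL below is a placeholder; substitute your own direct connection string.
const conn = new URL("postgres://myuser:mypassword@db.prisma.io:5432/?sslmode=require");

const host = conn.hostname;                          // "db.prisma.io"
const port = conn.port;                              // "5432"
const user = decodeURIComponent(conn.username);      // "myuser"
const password = decodeURIComponent(conn.password);  // needed at the password prompt
const sslmode = conn.searchParams.get("sslmode");    // "require"

console.log(`pg_restore -h ${host} -p ${port} -U ${user} -d postgres -v ./db_dump.bak`);
```

`decodeURIComponent` matters here because special characters in database passwords are percent-encoded inside connection URLs.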
You can also use the full connection string format: ```bash pg_restore \ -d "postgres://USER:PASSWORD@db.prisma.io:5432/postgres?sslmode=require" \ -v \ ./db_dump.bak \ && echo "-complete-" ``` Once the command completes execution, you will have successfully imported the data from your existing PostgreSQL database into Prisma Postgres 🎉 To validate that the import worked, you can use [Prisma Studio](/guides/postgres/viewing-data#viewing-and-editing-data-in-prisma-studio). Either open it in the [Platform Console](https://console.prisma.io) by clicking the **Studio** tab in the left-hand sidenav in your project or run this command to launch Prisma Studio locally: npm pnpm yarn bun ```bash npx prisma studio ``` ```bash pnpm dlx prisma studio ``` ```bash yarn dlx prisma studio ``` ```bash bunx --bun prisma studio ``` 4. Update your application code to query Prisma Postgres [#4-update-your-application-code-to-query-prisma-postgres] Scenario A: You are already using Prisma ORM [#scenario-a-you-are-already-using-prisma-orm] If you're already using Prisma ORM, you need to update your database connection URL to point to your new Prisma Postgres instance. Update the `DATABASE_URL` in your `.env` file to match your Prisma Postgres direct connection string from step 1: ```text title=".env" DATABASE_URL="postgres://USER:PASSWORD@db.prisma.io:5432/?sslmode=require" ``` Then, re-generate Prisma Client so that the updated environment variable takes effect: npm pnpm yarn bun ```bash npx prisma generate ``` ```bash pnpm dlx prisma generate ``` ```bash yarn dlx prisma generate ``` ```bash bunx --bun prisma generate ``` Once this is done, you can run your application and it should work as before. For a complete guide on setting up Prisma ORM with Prisma Postgres from scratch, including driver adapter configuration and best practices, see the [Prisma ORM with Prisma Postgres quickstart](/prisma-orm/quickstart/prisma-postgres). 
Scenario B: You are not yet using Prisma ORM [#scenario-b-you-are-not-yet-using-prisma-orm] If you are not yet using Prisma ORM, you'll need to go through the following steps to use Prisma Postgres from your application: 1. Install the Prisma CLI and other required dependencies in your project 2. Introspect the database to generate a Prisma schema 3. Generate Prisma Client 4. Update the queries in your application to use Prisma ORM You can find the detailed step-by-step instructions for this process in this guide: [Add Prisma ORM to an existing project](/prisma-orm/add-to-existing-project/prisma-postgres). # FAQ (/docs/accelerate/more/faq) When should I enable static IP for Prisma Accelerate? [#when-should-i-enable-static-ip-for-prisma-accelerate] Enable static IP for Accelerate when your security setup requires IP allowlisting or if you're implementing firewalls that only permit access from trusted IPs, ensuring controlled and secure database connections. Learn more on [how to enable static IP for Accelerate in the Platform Console](/accelerate/static-ip). **What is a static IP?** A static IP address is an IPv4 or an IPv6 address that is fixed. Unlike dynamic IP addresses, which can change unpredictably, traffic from static IP addresses can be easily identified. > ℹ️ To enable static IP support for Accelerate within your existing or new project environment, your workspace will need to be on our **Pro** or **Business** plans. Take a look at the [pricing page](https://www.prisma.io/pricing#accelerate) for more information. Why do I sometimes see unexpected cache behavior? [#why-do-i-sometimes-see-unexpected-cache-behavior] Accelerate's cache performs best when it observes a higher load from a project. Many cache operations, such as committing data to cache and refreshing stale data, happen asynchronously.
When benchmarking Accelerate, we recommend doing so with loops or a load testing approach. This will mimic higher load scenarios better and reduce outliers from low-frequency operations. Prisma operations are sent to Accelerate over HTTP. As a result, the first request to Accelerate must establish an HTTP handshake and may incur additional latency. We're exploring ways to reduce this initial request latency in the future. What is the pricing of Accelerate? [#what-is-the-pricing-of-accelerate] You can find more details on our [Accelerate pricing page](https://www.prisma.io/pricing). VS Code does not recognize the $extends method [#vs-code-does-not-recognize-the-extends-method] If you add the Prisma Client extension for Accelerate to an existing project that is currently open in VS Code, the editor might not immediately recognize the `$extends` method. This might be an issue with the TypeScript server not yet recognizing the regenerated Prisma Client. To resolve this, you need to restart TypeScript. 1. In VS Code, open the Command Palette. You can do so by pressing F1 or selecting **View** > **Command Palette**. 2. Enter `typescript`, then select and run the **TypeScript: Restart TS server** command. VS Code should now recognize the `$extends` method. What regions are Accelerate's cache nodes available in? [#what-regions-are-accelerates-cache-nodes-available-in] Accelerate runs on Cloudflare's network and cache hits are served from Cloudflare's 300+ locations. You can find the regions where Accelerate's cache nodes are available here: [https://www.cloudflare.com/network/](https://www.cloudflare.com/network/). What regions is Accelerate's connection pool available in? [#what-regions-is-accelerates-connection-pool-available-in] When no cache strategy is specified or in the event of a cache miss, the Prisma Client query is routed through Accelerate's connection pool. Currently, queries can be routed through any chosen region among the 16 available locations.
Currently, the available regions are: * Asia Pacific, Mumbai (`ap-south-1`) * Asia Pacific, Seoul (`ap-northeast-2`) * Asia Pacific, Singapore (`ap-southeast-1`) * Asia Pacific, Sydney (`ap-southeast-2`) * Asia Pacific, Tokyo (`ap-northeast-1`) * Canada, Central (`ca-central-1`) * Europe, Frankfurt (`eu-central-1`) * Europe, Ireland (`eu-west-1`) * Europe, London (`eu-west-2`) * Europe, Paris (`eu-west-3`) * Europe, Stockholm (`eu-north-1`) * South America, Sao Paulo (`sa-east-1`) * US East, N. Virginia (`us-east-1`) * US East, Ohio (`us-east-2`) * US West, N. California (`us-west-1`) * US West, Oregon (`us-west-2`) You can also view the available regions when you're about to set up Accelerate or by visiting the **Settings** tab for Accelerate under the **Region** section in the Prisma Cloud Platform [dashboard](https://pris.ly/pdp). How does Accelerate know what region to fetch the cache from? [#how-does-accelerate-know-what-region-to-fetch-the-cache-from] Under the hood, Accelerate uses Cloudflare, which uses [Anycast](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/) for network addressing and routing. An incoming request will be routed to the nearest data center or "node" in their network that has the capacity to process the request efficiently. To learn more about how this works, we recommend looking into [Anycast](https://www.cloudflare.com/learning/cdn/glossary/anycast-network/). How can I invalidate a cache on Accelerate? [#how-can-i-invalidate-a-cache-on-accelerate] You can invalidate the cache on-demand via the [`$accelerate.invalidate` API](/accelerate/reference/api-reference#accelerateinvalidate) if you're on a [paid plan](https://www.prisma.io/pricing#accelerate), or you can invalidate your entire cache, at the project level, a maximum of five times a day. This limit is set based on [your plan](https://www.prisma.io/pricing#accelerate). You can manage this via the Accelerate configuration page.
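Accelerate's cache follows a read-through pattern: a hit is served from the cache, a miss falls through to the database and populates the cache, and invalidation drops an entry so the next read refetches. To make that pattern concrete, here is a minimal in-process sketch — illustrative only; Accelerate's managed cache is distributed and HTTP-based, and looks nothing like this internally:

```typescript
// Minimal read-through cache sketch with TTL and on-demand invalidation.
type Entry = { value: unknown; expiresAt: number };

class ReadThroughCache {
  private store = new Map<string, Entry>();

  // On a hit, serve from the cache; on a miss, fall through to the source
  // (the database, in Accelerate's case) and populate the cache.
  get<T>(key: string, ttlMs: number, fetchFromSource: () => T): T {
    const entry = this.store.get(key);
    if (entry && entry.expiresAt > Date.now()) return entry.value as T;
    const value = fetchFromSource();
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  }

  // Invalidation drops the entry, so the next read refetches fresh data.
  invalidate(key: string): void {
    this.store.delete(key);
  }
}

// Usage: the second read is served from the cache (the source is not hit again).
let sourceReads = 0;
const cache = new ReadThroughCache();
const read = () =>
  cache.get("user:1", 60_000, () => {
    sourceReads++;
    return { id: 1 };
  });
read();
read();
console.log(sourceReads); // 1
cache.invalidate("user:1");
read();
console.log(sourceReads); // 2
```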
What is Accelerate's consistency model? [#what-is-accelerates-consistency-model] Accelerate does not have a consistency model. It is not a distributed system where nodes need to reach a consensus, because data is only stored in the cache node(s) closest to the user. Since cached data doesn't propagate to other nodes, Accelerate by design doesn't need a consistency model. Accelerate implements a [read-through caching strategy](https://www.prisma.io/dataguide/managing-databases/introduction-database-caching#read-through) that is particularly suitable for read-heavy workloads. How is Accelerate different from other caching tools, such as Redis? [#how-is-accelerate-different-from-other-caching-tools-such-as-redis] * Accelerate is a *specialized* cache that allows you to optimize data access in code at the query level with a cache strategy. On the other hand, tools such as Redis and Memcached are *general-purpose* caches designed to be adaptable and flexible. * Accelerate is a managed service that reduces the time, risk, and engineering effort of building and maintaining a cache service. * By default, Accelerate is globally distributed, reducing the latency of your queries. Other cache tools would require additional configuration to make them available globally. When should I not use Accelerate's caching features? [#when-should-i-not-use-accelerates-caching-features] Accelerate is a global data cache and connection pool that allows you to optimize data access in code at the query level. While caching with Accelerate can greatly boost the performance of your app, it may not always be the best choice for your use case. Accelerate's global cache feature may not be a good fit for your app if: * Your app is exclusively used within a specific region and both your application server and database are situated in that same region on the same network.
For example, database queries will likely be much faster if your application server and database are in the same region and network. However, if your application server is in a different region or network from your database, Accelerate will speed up your queries because the data will be cached in the closest data center to your application. * You *only* need a general-purpose cache. Accelerate is a connection pooler and a *specialized cache* that only caches your database query responses in code. A general-purpose cache, such as Redis, would allow you to cache data from multiple sources, such as external APIs, which Accelerate currently doesn't support. If general-purpose caching interests you, please share your feedback with us via our [Discord](https://pris.ly/discord?utm_source=docs\&utm_medium=inline_text). * Your application data *always* needs to be up-to-date on retrieval, making it difficult to establish a reasonable cache strategy. Even without using Accelerate's global cache, you can still greatly benefit from Accelerate by using its connection pool, especially in serverless or edge functions, where it is difficult to manage and scale database connections. You can learn more about the serverless challenge [here](/orm/prisma-client/setup-and-configuration/databases-connections#the-serverless-challenge). Can I use Accelerate with other ORMs/query builders/drivers? [#can-i-use-accelerate-with-other-ormsquery-buildersdrivers] No. We currently do not have any plans for supporting other ORMs/query builders or drivers. However, if you're interested in support for other libraries, feel free to reach out and let us know in our [Discord](https://pris.ly/discord?utm_source=docs\&utm_medium=inline_text) community in the `#help-and-questions` channel. What is the maximum allowed value for the ttl parameter when configuring cacheStrategy? 
[#what-is-the-maximum-allowed-value-for-the-ttl-parameter-when-configuring-cachestrategy] The [Time-to-live](/accelerate/caching) (`ttl`) parameter can be set for up to a *year*. However, it's important to note that items within the cache may be evicted if they are not frequently accessed. Based on our experimentation, we’ve seen cache items persist for around 18 hours. While items may remain in the cache for an extended period if they are actively accessed, there is no guarantee. > **Note**: Even frequently accessed items may occasionally be evicted from the cache. It's unlikely for an item to survive in the cache for a month or longer, regardless of its activity level. Why doesn’t Accelerate fall back to the direct connection string during a service disruption? [#why-doesnt-accelerate-fall-back-to-the-direct-connection-string-during-a-service-disruption] In the rare event of a service disruption, falling back to a direct connection would bypass the connection pool. This could potentially deplete the database's available connections and cause other issues at the database level. If there is a service disruption, it's recommended to check the [status page](https://pris.ly/data-platform-status). You can reach out to one of Prisma's [support channels](/console/more/support) for assistance. > **Note:** Some edge function runtime environments may not support direct connections with Prisma ORM. For further details, refer to our [Edge functions documentation](/orm/prisma-client/deployment/edge/overview). Are the queries within an interactive transaction counted separately for billing? [#are-each-of-the-queries-within-an-interactive-transaction-counted-separately-for-billing] Yes, [interactive transactions](/orm/prisma-client/queries/transactions#interactive-transactions) are billed based on the individual operations within the transaction. There is no charge for the start, commit, or rollback of the transaction itself. 
For example, in the following query, there are two billable queries: ```ts await prisma.$transaction(async (tx) => { await tx.user.deleteMany({ where: { name: "John Doe" } }); await tx.user.createMany({ data }); }); ``` However, when using the [`$transaction` API for sequential client operations](/orm/prisma-client/queries/transactions#sequential-operations), regardless of the number of queries within the array, it counts as only one billable query. For example: ```ts await prisma.$transaction([ prisma.user.deleteMany({ where: { name: "John Doe" } }), prisma.user.createMany({ data }), ]); ``` If you don't need [interactive transactions](/orm/prisma-client/queries/transactions#interactive-transactions), you can save costs and improve performance by using [sequential operations transactions](/orm/prisma-client/queries/transactions#sequential-operations). Sequential operations transactions perform better on Accelerate because they execute in one round-trip to the database, while interactive transactions require separate round-trips for start, commit, and each individual operation on the transaction. Can I increase my Accelerate query duration and response size limits? [#can-i-increase-my-accelerate-query-duration-and-response-size-limits] Yes, you can increase your Accelerate limits based on your subscription plan. Here are the configurable limits: | Limit | Free | Starter | Pro Plan | Business Plan | | ------------------------------------ | ---------------- | ---------------- | ---------------- | ---------------- | | **Query timeout** | Up to 10 seconds | Up to 10 seconds | Up to 20 seconds | Up to 60 seconds | | **Interactive transactions timeout** | Up to 15 seconds | Up to 15 seconds | Up to 30 seconds | Up to 90 seconds | | **Response size** | Up to 5 MB | Up to 5 MB | Up to 10 MB | Up to 20 MB | Check the [pricing page](https://www.prisma.io/pricing#accelerate) for more details on the available plans and their corresponding limits. 
While you can increase these limits based on your subscription plan, it's *still* recommended to optimize your database operations. [Learn more in our troubleshooting guide.](/postgres/error-reference) How long does it take to invalidate a cache query result? [#how-long-does-it-take-to-invalidate-a-cache-query-result] As the cache needs to be cleared globally, it is difficult to provide a specific time frame. However, the cached data is eventually consistent and typically propagates to all PoPs within a few seconds. In very rare cases, it may take longer. Here is a [demo app](https://pris.ly/test-cache-invalidation) to test the time it takes to invalidate a cache query result. What is the difference between Invalidate and Revalidate? [#what-is-the-difference-between-invalidate-and-revalidate] **Invalidate**: The cache entry is deleted, and new data will be fetched on the next request, causing a cache miss. This removes stale data but may lead to slower responses until the cache is repopulated. **Revalidate**: The cache entry is updated proactively, ensuring the next request uses fresh data from the cache. This keeps the cache valid and maintains faster response times by avoiding cache misses. What is on-demand cache invalidation? [#what-is-on-demand-cache-invalidation] [On-demand cache invalidation](/accelerate/caching) lets applications instantly update specific cached data when it changes, instead of waiting for regular cache refresh cycles. This keeps information accurate and up-to-date for users. When should I use the cache invalidate API? [#when-should-i-use-the-cache-invalidate-api] The [cache invalidate API](/accelerate/caching) is essential when data consistency cannot wait for the cache’s standard expiration or revalidation. Key use cases include: * **Content updates**: When critical changes occur, such as edits to a published article, product updates, or profile modifications, that need to be visible immediately. 
* **Inventory management**: In real-time applications, like inventory or booking systems, where stock levels, availability, or reservation statuses must reflect the latest information. * **High-priority data**: For time-sensitive data, like breaking news or urgent notifications, where it’s essential for users to see the most current information right away. Using on-demand cache invalidation in these scenarios helps keep only the necessary data refreshed, preserving system performance while ensuring accurate, up-to-date information for users. How does Accelerate count queries for billing? [#how-does-accelerate-count-queries-for-billing] Accelerate counts queries at the Prisma Client invocation level. A single Prisma query may translate into multiple SQL statements under the hood, but it will only count as one query for billing purposes. This ensures straightforward, predictable billing that reflects the Prisma Client usage rather than the complexity of the underlying SQL operations. Queries are counted regardless of whether they are served from the cache or the database. Even if a query is retrieved from the cache, it still counts toward your query limit. How do I switch from GitHub login to email and password login? [#how-do-i-switch-from-github-login-to-email-and-password-login] If you previously signed up using GitHub and want to switch to email and password login, follow these steps: 1. Verify Your GitHub Email Address [#1-verify-your-github-email-address] * Check the primary email address associated with your GitHub account (e.g., from your GitHub profile or notification settings). 2. Create a New Email/Password Account [#2-create-a-new-emailpassword-account] * Go to the email/password sign-up page. * Use the **same email address** linked to your GitHub account to create the new account. * Our system will automatically connect your new email/password account to your existing data. 3. 
Test Your Login [#3-test-your-login] * Log out and try logging in with your email and the password you just created. > **Note**: If you encounter any issues, please contact our support team for help linking your accounts. # Feedback (/docs/accelerate/more/feedback) You can submit any feedback about Accelerate in our [Discord server](https://pris.ly/discord?utm_source=docs\&utm_medium=intro_text). # Known limitations (/docs/accelerate/more/known-limitations) Below are descriptions of known limitations when using Accelerate. If you encounter any additional ones, please share them with us via [Discord](https://pris.ly/discord?utm_source=docs\&utm_medium=intro_text). Cannot cache raw queries [#cannot-cache-raw-queries] At the moment, it is not possible to cache the responses of [raw queries](/orm/prisma-client/using-raw-sql/raw-queries). Not compatible with the fluent API [#not-compatible-with-the-fluent-api] Client Extensions (which are used in Accelerate) currently do not correctly forward the [fluent API](/orm/prisma-client/queries/relation-queries#fluent-api) types. We hope to get a fix into Client Extensions soon. Not compatible with extremely heavy or long-running queries [#not-compatible-with-extremely-heavy-or-long-running-queries] Accelerate is designed to work with high-performance, low-latency queries. It is not intended for use with extremely heavy or long-running queries that may cause performance issues or resource contention. While limits are configurable, we recommend optimizing your queries to ensure they fit within the recommended guidelines. For queries that cannot be optimized or pared down, we recommend one of two solutions: 1. **Use the read replica extension**: The Prisma ORM [read replica extension](https://www.npmjs.com/package/@prisma/extension-read-replicas) allows you to set up two different connections: a `primary` and a `replica`. You can set up your Accelerate connection as the `primary` and then a direct connection as the `replica`. 
Any queries that are resource-intensive or long-running can then be routed to the `replica`, while the `primary` (your Accelerate connection) will handle normal queries. **Please note** that this solution requires you both to set up a direct connection and to use the full generated Prisma Client (i.e. generated without `--no-engine`). 2. **Separate analytics queries**: Our preferred solution is to move your analytics queries into a separate application. This separate application can then use a direct connection so that it can run heavy queries without impacting the performance or cost of your Accelerate-powered application. If you have a use case that requires running extremely heavy or long-running queries with Prisma Accelerate, please reach out to us. Not compatible with direct IPv4 addresses in MongoDB connection strings [#not-compatible-with-direct-ipv4-addresses-in-mongodb-connection-strings] Accelerate does not support direct IPv4 addresses in MongoDB connection strings. When an IPv4 address is provided, Accelerate converts it to an IPv6 format to route through its NAT gateway. This conversion may cause the connection string to be considered invalid due to the formatting of the port value. **Workaround**: To resolve this issue, create a DNS record that points to your IPv4 address and use that DNS record in your connection string instead of the direct IP. Example [#example] * **IPv4 connection string** (not supported): `mongodb://user:password@192.168.1.100:27017/db_name` * **DNS record connection string** (supported): `mongodb://user:password@my-database.example.com:27017/db_name` For additional details on Accelerate’s IPv6-first design, refer to our [blog post](https://www.prisma.io/blog/accelerate-ipv6-first). # Troubleshooting (/docs/accelerate/more/troubleshoot) When working with Accelerate, you may encounter errors, often highlighted by specific error codes, during development and operations. 
It is important to understand the meaning of these errors, why they occur, and how to resolve them in order to ensure the smooth operation of your applications. This guide aims to provide insights and steps to troubleshoot specific error codes encountered with Accelerate. P6009 (ResponseSizeLimitExceeded) [#p6009-responsesizelimitexceeded] This error is triggered when the response size from a database query exceeds the configured query response size limit. We've implemented this restriction to safeguard your application performance, as retrieving data over 5MB can significantly slow down your application due to multiple network layers. Transmitting more than 5MB of data is typical when conducting ETL (Extract, Transform, Load) operations. However, for other scenarios such as transactional queries, real-time data fetching for user interfaces, bulk data updates, or aggregating large datasets for analytics outside of ETL contexts, it should generally be avoided. These use cases, while essential, can often be optimized to work within the configured query response size limit, ensuring smoother performance and a better user experience. Possible causes for P6009 [#possible-causes-for-p6009] Transmitting images/files in response [#transmitting-imagesfiles-in-response] This error may arise if images or files stored within your table are being fetched, resulting in a large response size. Storing assets directly in the database is generally discouraged because it significantly impacts database performance and scalability. In addition to performance, it makes database backups slow and significantly increases the cost of storing routine backups. **Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, consider storing the image or file in a BLOB store like [Cloudflare R2](https://developers.cloudflare.com/r2/), [AWS S3](https://aws.amazon.com/pm/serv-s3/), or [Cloudinary](https://cloudinary.com/). 
These services allow you to store assets optimally and return a URL for access. Instead of storing the asset directly in the database, store the URL, which will substantially reduce the response size. Over-fetching of data [#over-fetching-of-data] In certain cases, a large number of records or fields are unintentionally fetched, which results in exceeding the configured query response size limit. This could happen when the [`where`](/orm/reference/prisma-client-reference#where) clause in the query is incorrect or entirely missing. **Suggested solution:** Configure the query response size limit to be larger. If the limit is still exceeded, double-check that the `where` clause is filtering data as expected. To prevent fetching too many records, consider using [pagination](/v6/orm/prisma-client/queries/pagination). Additionally, use the [`select`](/orm/reference/prisma-client-reference#select) clause to return only the necessary fields, reducing the response size. Fetching a large volume of data [#fetching-a-large-volume-of-data] In many data processing workflows, especially those involving ETL (Extract-Transform-Load) processes or scheduled CRON jobs, there's a need to extract large amounts of data from data sources (like databases, APIs, or file systems) for analysis, reporting, or further processing. If you are running an ETL/CRON workload that fetches a huge chunk of data for analytical processing then you might run into this limit. **Suggested solution:** Configure the query response size limit to be larger. If the limit is exceeded, consider splitting your query into batches. This approach ensures that each batch fetches only a portion of the data, preventing you from exceeding the size limit for a single operation. P6004 (QueryTimeout) [#p6004-querytimeout] This error occurs when a database query fails to return a response within the configured query timeout limit. 
The query timeout limit includes the duration of waiting for a connection from the pool, network latency to the database, and the execution time of the query itself. We enforce this limit to prevent unintentional long-running queries that can overload system resources. > The time for Accelerate's cross-region networking is excluded from the configured query timeout limit. Possible causes for P6004 [#possible-causes-for-p6004] This error could be caused by numerous reasons. Some of the prominent ones are: High traffic and insufficient connections [#high-traffic-and-insufficient-connections] If the application is receiving very high traffic and there are not a sufficient number of connections available to the database, then the queries would need to wait for a connection to become available. This situation can lead to queries waiting longer than the configured query timeout limit for a connection, ultimately triggering a timeout error if they do not get serviced within this duration. **Suggested solution**: Review and possibly increase the `connection_limit` specified in the connection string parameter when setting up Accelerate in a platform environment. This limit should align with your database's maximum number of connections. By default, the connection limit is set to 10 unless a different `connection_limit` is specified in your database connection string. Long-running queries [#long-running-queries] Queries may be slow to respond, hitting the configured query timeout limit even when connections are available. This could happen if a very large amount of data is being fetched in a single query or if appropriate indexes are missing from the table. **Suggested solution**: Configure the query timeout limit to be larger. If the limit is exceeded, identify the slow-running queries and fetch only the necessary data. Use the `select` clause to retrieve specific fields and avoid fetching unnecessary data. 
Additionally, consider adding appropriate indexes to improve query efficiency. You might also isolate long-running queries into separate environments to prevent them from affecting transactional queries. Database resource contention [#database-resource-contention] A common yet challenging issue is when other services operating on the same database perform heavy analytics or data processing tasks, significantly consuming database resources. These operations can monopolize database connections and processing power, leading to a scenario where even simple queries cannot be executed in a timely manner. This "busy" or "noisy" database environment can cause queries that are typically fast to run slowly or even timeout, particularly during periods of high activity from other services. Users often rely on CPU and memory usage metrics to gauge database load, which can be misleading. While these are important indicators, they might not fully represent the database's operational state. Direct metrics like the number of reads, writes, and wait times offer a clearer view of the database's performance and should be monitored closely. A noticeable degradation in these metrics, especially in the absence of changes to the queries or data model, suggests that external pressures are affecting database performance. **Suggested solution**: If normally quick queries are intermittently slow or timing out without any modifications to them, it's probable that competing queries are exerting pressure on the same database tables. To diagnose this, adopt monitoring tools or leverage your database's inherent capabilities to observe reads, writes, and wait times. Such monitoring will unveil activity patterns or spikes that align with the observed performance dips. Moreover, it's crucial to periodically scrutinize and refine essential queries and verify that tables are properly indexed. 
This proactive approach minimizes the vulnerability of these queries to slowdowns caused by competing workloads. Considerations for P6009 and P6004 errors [#considerations-for-p6009-and-p6004-errors] For runtimes that support Prisma ORM natively, you could consider creating two `PrismaClient` instances: one with the Accelerate connection string (prefixed with `prisma://`) and the other with the direct database connection string (prefixed with `postgres://`, `mysql://`, etc.). The main idea behind this approach is to bypass Accelerate for certain specific queries. However, please note that the available connections would be split between both of your `PrismaClient` instances. It's crucial to understand the implications of managing multiple instances, particularly with regard to direct database connections. Utilizing a `PrismaClient` instance with a direct database connection string means that this connection will interact directly with your database. This approach requires careful consideration because the direct connections and those managed by Accelerate share the same underlying database connection pool. This can lead to competition for resources, potentially affecting the performance and availability of your database services. Additionally, direct connections could have a significant impact on your database's performance and availability. Operations that consume a considerable amount of resources could potentially degrade the service for other users or processes that rely on the same database. If your application's runtime environment supports Prisma ORM natively and you're considering this strategy to circumvent P6009 and P6004 errors, you might create two `PrismaClient` instances: 1. An instance using the Accelerate connection string (prefixed with `prisma://`) for general operations. 2. Another instance with the direct database connection string (e.g., prefixed with `postgres://`, `mysql://`, etc.) 
for specific operations anticipated to exceed the configured query timeout limit or to result in responses larger than the configured query response size limit. ```ts import { PrismaClient } from "@prisma/client"; import { withAccelerate } from "@prisma/extension-accelerate"; export const prisma = new PrismaClient({ datasourceUrl: process.env.DIRECT_DB_CONNECTION, }); export const prismaAccelerate = new PrismaClient({ datasourceUrl: process.env.ACCELERATE_CONNECTION, }).$extends(withAccelerate()); ``` This setup allows you to strategically direct certain operations through the direct connection, mitigating the risk of encountering the aforementioned errors. However, this decision should be made with a comprehensive understanding of the potential consequences and an assessment of whether your database infrastructure can support this additional load without compromising overall performance and availability. > Also see [**why doesn’t Accelerate fall back to the direct connection string during a service disruption?**](/accelerate/more/faq#why-doesnt-accelerate-fall-back-to-the-direct-connection-string-during-a-service-disruption) P6008 (ConnectionError|EngineStartError) [#p6008-connectionerrorenginestarterror] This error indicates that Prisma Accelerate cannot establish a connection to your database, potentially due to several reasons. Possible causes for P6008 [#possible-causes-for-p6008] Database not publicly accessible [#database-not-publicly-accessible] If your database is within a VPC or access is limited to specific IP addresses, you might encounter this error if static IP is not enabled for Accelerate or if the static IPs are not permitted in your database firewall. **Suggested solution:** [Enable static IP for Accelerate](/accelerate/static-ip) and configure your database firewall to allow access from the provided static IP addresses. Unreachable database host/port [#unreachable-database-hostport] If the database’s server address (hostname) and port are incorrect or unreachable, you may encounter this error. 
**Suggested solution:** Verify the hostname/port of the database connection string that was provided while creating the Prisma Accelerate project. Additionally, attempt to connect to the database using a database GUI tool (e.g., [Prisma Studio](https://www.prisma.io/studio), [TablePlus](https://tableplus.com/), or [DataGrip](https://www.jetbrains.com/datagrip/)) for further investigation. Incorrect username/password/database name [#incorrect-usernamepassworddatabase-name] This error can happen when the wrong credentials are provided to Prisma Accelerate, preventing it from establishing a connection to your database. **Suggested solution:** Verify the correctness of your database's username, password, and name in the connection string provided to Prisma Accelerate. Ensure that these credentials match those required by your database. Testing the connection using a direct database GUI tool can also help in confirming if the provided credentials are correct. Database taking too long to respond [#database-taking-too-long-to-respond] If the database is taking too long to respond to the connection request, Prisma Accelerate may time out and throw this error. This could happen if the database is not active or is waking up from sleep mode. **Suggested solution:** Verify that the database is active and reachable. If the database is in sleep mode, try to wake it by sending a request using a database GUI tool or via the database's management console. P5011 (TooManyRequests) [#p5011-toomanyrequests] This error occurs when Prisma Accelerate detects a high volume of requests that surpasses allowable thresholds. It acts as a protective measure to safeguard both Prisma Accelerate and your underlying database from excessive load. 
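A common client-side safeguard against tripping request thresholds like P5011 is to retry failed operations with exponential backoff and jitter rather than immediately. A minimal sketch of that pattern (the helper names are invented for this example and are not part of Prisma's API):

```typescript
// Exponential backoff sketch: double the wait after each failed attempt,
// capped so the delay never grows unbounded. Names are illustrative.
function backoffDelayMs(attempt: number, baseMs = 100, capMs = 10_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

async function withRetries<T>(op: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (e) {
      lastError = e;
      // Random jitter spreads retries out so many clients don't hammer
      // the service in lockstep after a shared failure.
      const delay = backoffDelayMs(attempt) * (0.5 + Math.random() / 2);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

A call such as `withRetries(() => prisma.user.findMany())` would then retry a transient failure with growing pauses instead of a tight loop.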
Possible causes for P5011 [#possible-causes-for-p5011] Aggressive retry loops [#aggressive-retry-loops] If your application retries queries immediately or with minimal delay, especially after receiving certain errors, the rapid accumulation of requests can surpass the threshold. **Suggested solution:** * Implement an exponential backoff strategy. Rather than retrying immediately or with a fixed delay, gradually increase the delay period after each failed attempt. * This allows the system time to recover and reduces the likelihood of overwhelming Prisma Accelerate and your database. Sudden traffic spikes [#sudden-traffic-spikes] Unanticipated traffic surges (for example, during product launches, flash sales, or viral growth events) can cause the threshold to be met and result in `P5011`. **Suggested solution:** * Consider proactive scaling strategies for both Prisma Accelerate and your database. * Monitor traffic and resource usage. If you anticipate a surge, please contact [support](/console/more/support) for capacity planning and potential configuration adjustments. Prolonged or planned high workloads [#prolonged-or-planned-high-workloads] Certain processes, such as bulk data imports, ETL operations, or extended CRON jobs, can generate continuous high query volume over time. **Suggested solution:** * Use batching or chunking techniques to break large operations into smaller parts. * Establish throttling or scheduling to distribute the load more evenly. Other errors [#other-errors] Error with MySQL (Aiven): "We were unable to process your request. Please refresh and try again." [#error-with-mysql-aiven-we-were-unable-to-process-your-request-please-refresh-and-try-again] Issue [#issue] When using an Aiven MySQL connection string that includes the `?ssl-mode=REQUIRED` parameter, you may encounter the following error: ``` We were unable to process your request. Please refresh and try again. 
``` Cause [#cause] The `ssl-mode=REQUIRED` parameter is incompatible with Accelerate, which leads to connection issues. Suggested solution [#suggested-solution] To resolve this error, remove the `?ssl-mode=REQUIRED` parameter from your MySQL connection string. Example [#example] * Original connection string: `mysql://username:password@host:port/database?ssl-mode=REQUIRED` * Updated connection string: `mysql://username:password@host:port/database` # API Reference (/docs/accelerate/reference/api-reference) The Accelerate API reference documentation is based on the following schema: ```prisma model User { id Int @id @default(autoincrement()) name String? email String @unique } ``` All examples are based on the `User` model. cacheStrategy [#cachestrategy] With the Accelerate extension for Prisma Client, you can use the `cacheStrategy` parameter for model queries and use the [`ttl`](/accelerate/caching) and [`swr`](/accelerate/caching) parameters to define a cache strategy for Accelerate. The Accelerate extension requires Prisma Client version `4.10.0` or higher. Options [#options] The `cacheStrategy` parameter takes an option with the following keys: | Option | Example | Type | Required | Description | | ------ | ---------- | ---------- | -------- | ----------- | | `swr` | `60` | `Int` | No | The stale-while-revalidate time in seconds. | | `ttl` | `60` | `Int` | No | The time-to-live time in seconds. | | `tags` | `["user"]` | `String[]` | No | The `tag` serves as a variable to control the invalidation of specific queries within your application. 
It is an optional array of strings to [invalidate](/accelerate/reference/api-reference#accelerateinvalidate) the cache, with each tag containing only alphanumeric characters and underscores, and a maximum length of 64 characters. | | Examples [#examples] Add a caching strategy to the query, defining a 60-second stale-while-revalidate (SWR) value, a 60-second time-to-live (TTL) value, and a cache tag of `"emails_with_alice"`: ```ts highlight=7:11;normal await prisma.user.findMany({ where: { email: { contains: "alice@prisma.io", }, }, cacheStrategy: { // [!code highlight] swr: 60, // [!code highlight] ttl: 60, // [!code highlight] tags: ["emails_with_alice"], // [!code highlight] }, // [!code highlight] }); ``` Supported Prisma Client operations [#supported-prisma-client-operations] The following is a list of all read query operations that support `cacheStrategy`: * [`findUnique()`](/orm/reference/prisma-client-reference#findunique) * [`findUniqueOrThrow()`](/orm/reference/prisma-client-reference#finduniqueorthrow) * [`findFirst()`](/orm/reference/prisma-client-reference#findfirst) * [`findFirstOrThrow()`](/orm/reference/prisma-client-reference#findfirstorthrow) * [`findMany()`](/orm/reference/prisma-client-reference#findmany) * [`count()`](/orm/reference/prisma-client-reference#count) * [`aggregate()`](/orm/reference/prisma-client-reference#aggregate) * [`groupBy()`](/orm/reference/prisma-client-reference#groupby) The `cacheStrategy` parameter is not supported on any write operations, such as `create()`. withAccelerateInfo [#withaccelerateinfo] Any query that supports the `cacheStrategy` can append `withAccelerateInfo()` to wrap the response data and include additional information about the Accelerate response. 
To retrieve the status of the response, use:

```ts
const { data, info } = await prisma.user
  .count({
    cacheStrategy: { ttl: 60, swr: 600 },
    where: { myField: "value" },
  })
  .withAccelerateInfo();

console.dir(info);
```

Notice the `info` property of the response object. This is where the request information is stored. Return type [#return-type] The `info` object is of type `AccelerateInfo` and follows the interface below:

```ts
interface AccelerateInfo {
  cacheStatus: "ttl" | "swr" | "miss" | "none";
  lastModified: Date;
  region: string;
  requestId: string;
  signature: string;
}
```

| Property | Type | Description |
| -------------- | ------------------------------------ | ----------- |
| `cacheStatus`  | `"ttl" \| "swr" \| "miss" \| "none"` | The cache status of the response. `ttl` indicates a cache hit within the `ttl` duration and no database query was executed. `swr` indicates a cache hit within the `swr` duration and the data is being refreshed by Accelerate in the background. `miss` indicates that both `ttl` and `swr` have expired and the database query was executed by the request. `none` indicates that no cache strategy was specified and the database query was executed by the request. |
| `lastModified` | `Date`   | The date the response was last refreshed. |
| `region`       | `String` | The data center region that received the request. |
| `requestId`    | `String` | Unique identifier of the request. Useful for troubleshooting. |
| `signature`    | `String` | The unique signature of the Prisma operation. |

$accelerate.invalidate [#accelerateinvalidate] You can invalidate the cache using the [`$accelerate.invalidate` API](/accelerate). To invalidate cached query results on-demand, a paid plan is required. Each plan has specific limits on the number of cache tag-based invalidations allowed per day, though there are no limits on calling the `$accelerate.invalidate` API itself. See our [pricing for more details](https://www.prisma.io/pricing#accelerate). Example [#example] To invalidate the query below:

```ts
await prisma.user.findMany({
  where: {
    email: {
      contains: "alice@prisma.io",
    },
  },
  cacheStrategy: {
    swr: 60,
    ttl: 60,
    tags: ["emails_with_alice"], // [!code highlight]
  },
});
```

You need to provide the cache tag in the `$accelerate.invalidate` API:

```ts
try {
  await prisma.$accelerate.invalidate({ // [!code highlight]
    tags: ["emails_with_alice"], // [!code highlight]
  }); // [!code highlight]
} catch (e) {
  if (e instanceof Prisma.PrismaClientKnownRequestError) {
    // The .code property can be accessed in a type-safe manner
    if (e.code === "P6003") {
      console.log("The cache invalidation rate limit has been reached. Please try again later.");
    }
  }
  throw e;
}
```

You can invalidate up to 5 tags per call. $accelerate.invalidateAll [#accelerateinvalidateall] You can invalidate the entire cache using the `$accelerate.invalidateAll` API.
Example [#example-1] To invalidate the query below:

```ts
await prisma.user.findMany({
  where: {
    email: {
      contains: "alice@prisma.io",
    },
  },
  cacheStrategy: {
    swr: 60,
    ttl: 60,
    tags: ["emails_with_alice"], // [!code highlight]
  },
});
```

Just call the `$accelerate.invalidateAll` API:

```ts
try {
  await prisma.$accelerate.invalidateAll(); // [!code highlight]
} catch (e) {
  if (e instanceof Prisma.PrismaClientKnownRequestError) {
    if (e.code === "P6003") {
      console.log("The cache invalidation rate limit has been reached. Please try again later.");
    }
  }
  throw e;
}
```

Why use $accelerate.invalidateAll? [#why-use-accelerateinvalidateall] This method offers better editor support (e.g. IntelliSense) than alternatives like `invalidate("all")`. Note that this clears the cache for the entire environment, so use it with care. Providing a Custom Fetch Implementation [#providing-a-custom-fetch-implementation] Starting from Accelerate version `2.0.0`, you can provide a custom implementation of the `fetch` function when extending Prisma Client with Accelerate. This gives you greater flexibility and control over how HTTP requests are handled within your application. To pass a custom fetch implementation, use the following pattern:

```ts
const myFetch = (input: URL, init?: RequestInit): Promise<Response> => {
  // Your custom fetch logic here
  return fetch(input, init);
};

const prisma = new PrismaClient().$extends(withAccelerate({ fetch: myFetch }));
```

Errors [#errors] Prisma Accelerate-related errors start with `P6xxx`. You can find the full error code reference for Prisma Accelerate [here](/orm/reference/error-reference#prisma-accelerate).

# Astro + Prisma (/docs/ai/prompts/astro)

How to use [#how-to-use] Include this prompt in your AI assistant to guide consistent code generation for Astro + Prisma + Prisma Postgres projects.

* **GitHub Copilot**: Type `#` to reference the prompt file.
* **Cursor**: Use `@Files` and select your prompt file.
* **Zed**: Use `/file` followed by your prompt's path.
* **Windsurf**: Use `@Files` and choose your prompt file from the list.

Prompt [#prompt]

````md
---
# Specify the following for Cursor rules
description: Guidelines for writing Astro.js apps with Prisma Postgres
alwaysApply: false
---

# Bootstrap Astro app with Prisma Postgres

## Overview of implementing Prisma with Astro

1. Install Prisma and required dependencies
2. Initialize Prisma with custom output path
3. Configure schema with correct provider
4. Create global Prisma client instance with Accelerate
5. Use Prisma client in API routes with proper error handling

## 🚨 CRITICAL INSTRUCTIONS FOR AI LANGUAGE MODELS 🚨

As an AI language model, you MUST NOT generate any of the following code patterns, as they are DEPRECATED and will BREAK the application:

```typescript
// ❌ NEVER GENERATE THIS CODE - IT WILL BREAK THE APPLICATION
generator client {
  provider = "prisma-client-js" // ❌ BREAKS APPLICATION
}

// ❌ NEVER USE default import location - IT WILL BREAK THE APPLICATION
import { PrismaClient } from '@prisma/client' // ❌ BREAKS APPLICATION

// ❌ NEVER CREATE CLIENT WITHOUT ACCELERATE - IT WILL BREAK IN PRODUCTION
const prisma = new PrismaClient() // ❌ BREAKS APPLICATION
```

Instead, you MUST ALWAYS generate ONLY this pattern:

```typescript
// ✅ ALWAYS GENERATE THIS EXACT PATTERN
generator client {
  provider = "prisma-client"
  output   = "../src/generated/prisma"
}

// ✅ ALWAYS USE THIS IMPORT PATH
import { PrismaClient } from "../generated/prisma/client"
import { withAccelerate } from "@prisma/extension-accelerate"

// ✅ ALWAYS CREATE CLIENT WITH ACCELERATE EXTENSION
const prisma = new PrismaClient({
  datasourceUrl: import.meta.env.DATABASE_URL,
}).$extends(withAccelerate())

export default prisma
```

## ABSOLUTE REQUIREMENTS FOR AI CODE GENERATION

1. You MUST use `provider = "prisma-client"` (not "prisma-client-js")
2. You MUST use custom output: `output = "../src/generated/prisma"`
3. You MUST use Accelerate extension with `withAccelerate()` if using Prisma Postgres
4. You MUST create `lib/prisma.ts` as a global singleton instance
5. You MUST wrap all database calls in try-catch blocks
6. You MUST import from `'../generated/prisma/client'` (not `'@prisma/client'`)
7. You MUST use `import.meta.env.DATABASE_URL` in Astro (not `process.env`)
8. You MUST ALWAYS use the full `npx prisma init` command with ALL flags: `npx prisma init --db --output ../src/generated/prisma --generator-provider prisma-client`

## CORRECT INSTALLATION

```bash
# Dev dependencies
npm install prisma tsx --save-dev

# Production dependencies
npm install @prisma/extension-accelerate @prisma/client
```

## CORRECT PRISMA INITIALIZATION

```npm
npx prisma init --db --output ../src/generated/prisma --generator-provider prisma-client
```

## CORRECT SCHEMA CONFIGURATION

```prisma
generator client {
  provider = "prisma-client"
  output   = "../src/generated/prisma"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model YourModel {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}
```

## CORRECT GLOBAL PRISMA CLIENT

**src/lib/prisma.ts**:

```typescript
import { PrismaClient } from "../generated/prisma/client";
import { withAccelerate } from "@prisma/extension-accelerate";

const prisma = new PrismaClient({
  datasourceUrl: import.meta.env.DATABASE_URL,
}).$extends(withAccelerate());

export default prisma;
```

## CORRECT API ROUTE IMPLEMENTATION

All API routes MUST follow this pattern with proper error handling:

```typescript
import type { APIRoute } from "astro";
import prisma from "../../../lib/prisma";

export const GET: APIRoute = async () => {
  try {
    const data = await prisma.yourModel.findMany();
    return new Response(JSON.stringify(data), {
      status: 200,
      headers: { "Content-Type": "application/json" },
    });
  } catch (error) {
    console.error("Error:", error);
    return new Response(JSON.stringify({ error: "Failed to fetch data" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
};

export const POST: APIRoute = async ({
request }) => {
  try {
    const body = await request.json();

    // Validate required fields
    if (!body.requiredField) {
      return new Response(JSON.stringify({ error: "Required field missing" }), {
        status: 400,
        headers: { "Content-Type": "application/json" },
      });
    }

    const result = await prisma.yourModel.create({
      data: body,
    });
    return new Response(JSON.stringify(result), {
      status: 201,
      headers: { "Content-Type": "application/json" },
    });
  } catch (error) {
    console.error("Error:", error);
    return new Response(JSON.stringify({ error: "Failed to create data" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
};
```

## CORRECT USAGE IN ASTRO PAGES

Server-side data fetching in Astro pages:

```astro
---
import prisma from '../lib/prisma'

// Fetch data on the server
const data = await prisma.yourModel.findMany()
---
<ul>
  {data.map(item => (
    <li>{item.name}</li>
  ))}
</ul>
```

## AI MODEL VERIFICATION STEPS

Before generating any code, you MUST verify:

1. Are you using `provider = "prisma-client"` (not "prisma-client-js")? If not, STOP and FIX.
2. Are you using `output = "../src/generated/prisma"`? If not, STOP and FIX.
3. Are you importing from `'../generated/prisma/client'`? If not, STOP and FIX.
4. Are you using `withAccelerate()` extension? If not, STOP and FIX.
5. Are you wrapping database operations in try-catch? If not, STOP and FIX.
6. Are you using `import.meta.env` (not `process.env`)? If not, STOP and FIX.
7. Did you use `--generator-provider prisma-client` (NOT `postgresql`)? If not, STOP and FIX.

## CONSEQUENCES OF INCORRECT IMPLEMENTATION

If you generate code using:

- `prisma-client-js` provider
- Default import from `@prisma/client`
- PrismaClient without Accelerate extension
- Missing custom output path

The implementation will:

1. Break in production
2. Cause connection pool exhaustion
3. Fail to maintain proper database connections
4. Result in import errors
5. Cause performance issues

## USEFUL COMMANDS

```npm
npx prisma generate  # Regenerate client after schema changes
```

```npm
npx prisma migrate dev --name X  # Create and apply migration
```

```npm
npx prisma studio  # Open visual database editor
```

```npm
npx prisma db push  # Push schema without migration
```

## AI MODEL RESPONSE TEMPLATE

When asked about Prisma + Astro implementation, you MUST:

1. ONLY use code from this guide
2. NEVER suggest deprecated approaches
3. ALWAYS use the exact patterns shown above
4. VERIFY your response against the patterns shown here
5. ALWAYS include error handling in API routes
6. ALWAYS use the global prisma instance from `lib/prisma.ts`

Remember: There are NO EXCEPTIONS to these rules.
````

Running the application [#running-the-application] Get your application running locally in three quick steps:

**1. Generate the Prisma Client:**

```bash
npx prisma generate --no-engine
```

**2. View your database in Prisma Studio:**

```bash
npm run db:studio
```

Prisma Studio opens at `localhost:5555`, where you can inspect your `User` table and see the test user stored in your database.

**3. Start your Astro development server:**

```bash
npm run dev
```

Visit the URL shown in your terminal (Astro's dev server defaults to `http://localhost:4321`) to see your Astro application live, displaying your first user fetched directly from your Prisma Postgres database!

# Next.js + Prisma (/docs/ai/prompts/nextjs)

Prerequisites [#prerequisites] Before using this prompt, you need to create a new Next.js project:

```bash
npx create-next-app@latest my-app
cd my-app
```

When prompted, select the following recommended options:

* **TypeScript**: Yes
* **ESLint**: Yes
* **Tailwind CSS**: Yes (optional)
* **`src/` directory**: No
* **App Router**: Yes
* **Turbopack**: Yes (optional)
* **Import alias**: Use default (`@/*`)

Once your Next.js project is created, you can use the prompt below with your AI assistant to add Prisma and Prisma Postgres.

How to use [#how-to-use] Include this prompt in your AI assistant to guide consistent code generation for Next.js + Prisma + Prisma Postgres projects.

* **GitHub Copilot**: Type `#` to reference the prompt file.
* **Cursor**: Use `@Files` and select your prompt file.
* **Zed**: Use `/file` followed by your prompt's path.
* **Windsurf**: Use `@Files` and choose your prompt file from the list.

Video Tutorial [#video-tutorial] Watch this step-by-step walkthrough showing this prompt in action: