Development Workflow Integration


#1

My overall goal is to get Prisma to generate a client that multiple other microservices can then import and use within Docker Compose.

Unfortunately, standard practice per the docs is to run the Prisma CLI on the host machine rather than inside Docker. This creates a mismatch between the Prisma endpoint that the CLI can reach and the endpoint that is reachable by other services running on the same Docker Compose network as the Prisma service.
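For context, a minimal docker-compose.yml along these lines (the service names and image tag are illustrative, not my actual config) shows why the endpoints diverge: inside the Compose network the server resolves by service name at http://prisma:4466, while the host-run CLI only sees http://localhost:4466 through the published port.

```yaml
services:
  prisma:
    image: prismagraphql/prisma:1.30
    ports:
      - "4466:4466"    # the host-run CLI reaches the server at localhost:4466
  api:
    build: ./api
    environment:
      # inside the Compose network the service name resolves; localhost does not
      PRISMA_ENDPOINT: http://prisma:4466
```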

In the short term, I have resolved this by manually editing the generated Prisma client after each deploy (see "Prisma binding unable to connect to prisma server. request to http://localhost:4466/ failed, reason: connect ECONNREFUSED 127.0.0.1:4466").

There are several routes that I see as potentially viable to address this problem such that my workflow is less tedious and hacky.

  1. Run the CLI within the Prisma container.
    In this scenario, I would need to create a Dockerfile in my Prisma server repository (the one with prisma.yml and datamodel.prisma) that adds npm and installs the CLI (a comment in one of the server deployment scripts indicated that the base image is java-alpine). Then a folder could be mounted in the container so changes to datamodel.prisma on the host are reflected in the container without a rebuild, and have another volume that both my Prisma server service and my Prisma client services mount and from which the latter import. I could then docker-compose run prisma /bin/bash -c prisma deploy --watch and these changes should automatically be propagated to the clients (unless there are data loss migrations, but I haven’t used Prisma enough to know what happens when deploy uses the watch flag and data loss would occur; worst-case these rarer instances could be manually deployed, still within the container).
    I personally like this option because it means I could reuse a similar Dockerfile for CD in order to build and publish a private npm package containing my generated Prisma client, which my deployed microservices could then import.
    My problem with this option is that installing all of Node/npm just to run a CLI seems heavy-handed, but I suppose the image could be slimmed down for deployment (e.g., RUN the commands to install npm, RUN prisma deploy, publish the private package, then uninstall Node/npm).

  2. Use the Prisma server API from within the Docker network. For local development, this seems easier since I wouldn't need to create a Dockerfile; I could simply run a wget command to send the deploy mutation. This wouldn't (I don't believe, but correct me if I'm wrong) allow me to watch changes, but it's not the end of the world if I have to run that command on every redeploy. For CD, I would still need to add instructions somewhere to publish a private npm package, but that seems viable.
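A rough sketch of what option 2 might look like. The mutation fields, service/stage names, and the management endpoint path below are assumptions to verify against the Prisma management API docs, not confirmed details:

```shell
#!/bin/sh
# Build a GraphQL deploy-mutation payload (field names are assumptions; check
# them against the management API schema before using this for real).
PAYLOAD='{"query":"mutation { deploy(input: { name: \"service\", stage: \"stage\" }) { errors { description } } }"}'

echo "$PAYLOAD"

# Then POST it from inside the Compose network, where the server is local, e.g.:
# docker-compose exec prisma wget -q -O - \
#   --header 'Content-Type: application/json' \
#   --post-data "$PAYLOAD" \
#   http://localhost:4466/management
```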

Are there any flaws with either of these approaches before I actually spend the time to dive into them? Is one preferable for any reasons I haven’t anticipated?

Thanks in advance.


#2

Hi @Californian

You can manually instantiate the Prisma class with a custom endpoint like so; we do the same thing in the default instance we expose:

export const db = new Prisma({
  endpoint: 'http://mycustomendpoint/service/stage',
  secret: process.env.PRISMA_SECRET,
});

Here is how I am doing it in one of my personal projects: https://github.com/javascript-af/javascript-af/blob/a27f7033aa6e4168cfdf7180ee50a65dc41f0ef7/packages/backend/src/apolloServer.ts#L6

So, in conclusion: instantiate the Prisma class yourself instead of using the default instance that we provide.
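Tying this back to the Docker networking issue in #1, the custom endpoint can be driven by an environment variable so the same code works both inside the Compose network and on the host. A minimal sketch; the env var name, the "prisma" hostname, and the import path are assumptions, not confirmed details:

```typescript
// Resolve the Prisma endpoint from the environment, falling back to the
// Compose-internal service name (the "prisma" hostname is an assumption).
function prismaEndpoint(): string {
  return process.env.PRISMA_ENDPOINT ?? "http://prisma:4466/service/stage";
}

// Then instantiate the generated client with it, e.g.:
// import { Prisma } from "./generated/prisma-client-lib";
// export const db = new Prisma({
//   endpoint: prismaEndpoint(),
//   secret: process.env.PRISMA_SECRET,
// });
```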

Let me know if you have any other questions.

Thanks,
Harshit


#3

Thank you so much! That worked after clearing my system of Prisma (very manually, I should note: I had to sudo find / -name "*prisma*" and remove all potentially relevant files, run docker system prune --volumes and docker rmi $(docker images -a -q), then reinstall 1.30).