Deploying Graphcool to Kubernetes


#1

Has anyone deployed their local Graphcool service to Kubernetes? The local deployment creates and runs three Docker images/containers, but the project for the service itself does not include the Dockerfiles involved. Has anyone done a project where they were able to deploy these images to Kubernetes instead of just running them locally or in AWS?


#2

#3

Also see https://github.com/graphcool/framework/pull/1324


#4

I’m also thinking a lot about deploying a Prisma-based application on Kubernetes. We have an existing architecture with several services running on a k8s cluster. The current system is suffering from our hand-rolled GraphQL API, which serves our clients. We would love to switch to Prisma in order to replace our own data management / query engine. Before diving straight into implementing a possible deployment strategy, I would love to hear your thoughts on this:

First of all: Am I wrong in saying that we have everything we need when adapting this guide to Kubernetes? We “only” have to port it to the Kubernetes primitives. I could imagine the following scenario:

+---------------+         +------------------+
|               |         |                  |
|   MySQL Pod   |    +---->    Prisma Pod    |
|               |    |    |                  |
+-------^-------+    |    +--------^---------+
        |            |             |
+-------v-------+    |    +--------v---------+
|               |    |    |                  |
|    Service    <----+    |     Service      |
|               |         |                  |
+---------------+         +--------^---------+
                                   |
                                   |
                                   |
                                   |
+----------------------------------v---------+
|                                            |
|           Own Prisma Application           |
|                                            |
+---------------------^----------------------+
                      |
+---------------------v----------------------+
|                                            |
|                  Ingress                   |
|                                            |
+---------------------^----------------------+
                      |
                      |
                      |
                      |
+---------------------v----------------------+
|                                            |
|                   World                    |
|                                            |
+--------------------------------------------+
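The two Service boxes in the diagram could each be a plain ClusterIP Service. A minimal sketch for the Prisma one (the `prisma` name, namespace, and `app: prisma` label are assumptions for illustration, not taken from the guide):

```yaml
# Hypothetical internal Service fronting the Prisma pod from the diagram.
apiVersion: v1
kind: Service
metadata:
  name: prisma
  namespace: prisma
spec:
  type: ClusterIP          # only reachable inside the cluster
  selector:
    app: prisma            # must match the labels on the Prisma pod
  ports:
    - port: 4466           # Prisma's default port
      targetPort: 4466
```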

With this in place, it would be possible to configure the ~/.prisma/config.yml as described in the guide. But before we are able to communicate with the cluster, we have to kubectl port-forward every time we want to issue actions via the Prisma CLI. Not the most beautiful solution, but it could work until the CLI supports k8s natively.
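The port-forward step might look like this (assuming the Service is named `prisma`, lives in a `prisma` namespace, and listens on the default port 4466):

```shell
# Forward the Prisma port from the cluster to localhost
# so the Prisma CLI can talk to http://localhost:4466
kubectl port-forward --namespace prisma svc/prisma 4466:4466
```

This has to run in a separate terminal for as long as the CLI needs the connection.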

What do you think about this? Do you think that this could work?

I’m looking forward to hearing / reading your thoughts :slight_smile:


#5

Thanks to @andre for providing this tutorial to deploy Prisma to Kubernetes: https://www.prismagraphql.com/docs/tutorials/cluster-deployment/kubernetes-aiqu8ahgha


#6

Having difficulties getting a finished deployment of Prisma on K8s:

For the API I have created, what is the best way to use it in a K8s deployment?

  1. Create an API container, then use Services to communicate with the Prisma server?
    or
  2. Clone the API code into the Prisma server container?

#7

Your Prisma deployment should be separate from your actual API deployment:

  1. Deploy a Prisma cluster as described in the tutorial
  2. Deploy your Prisma service (a.k.a. your data model) as described in the tutorial.
  3. Create the Kubernetes deployment definition for your API and pass the Prisma service connection information (endpoint and secret) to the prisma-binding within your API via environment variables.
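Step 3 could be sketched like this (the `my-api` names, image, port, and the `prisma-credentials` Secret are all placeholders; the endpoint assumes a Prisma Service named `prisma` in the `prisma` namespace):

```yaml
# Hypothetical Deployment for the API layer, wired to Prisma via env vars.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: my-registry/my-api:latest
          ports:
            - containerPort: 4000
          env:
            - name: PRISMA_ENDPOINT
              value: http://prisma.prisma.svc.cluster.local:4466/my-service/dev
            - name: PRISMA_SECRET
              valueFrom:
                secretKeyRef:       # keep the secret out of the manifest
                  name: prisma-credentials
                  key: secret
```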

#8

Managed to do as described, but I’m stuck on the Services part. In the environment variables of the API deployment I used the endpoint definition as suggested earlier, together with the secret, but it seems my API times out when communicating with the Prisma server.

How best can one check where the problem really resides when it comes to service discovery in Kubernetes?


#9

Hey @wilbrodTC - Can you shed some light on your cluster structure? Are all components in the same Kubernetes namespace? If not, make sure that you have defined the whole DNS entry, like your-prisma-service.your-namespace.svc.cluster.local, as the environment variable. You can find more information about how the Kubernetes DNS works here.
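One quick way to verify the DNS entry from inside the cluster is a throwaway pod (assuming a Service named `prisma` in the `prisma` namespace; substitute your own names):

```shell
# Resolve the Prisma Service's cluster DNS name from a temporary busybox pod
kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup prisma.prisma.svc.cluster.local
```

If the lookup fails, the endpoint URL in your API's environment variables is pointing at a name the cluster DNS cannot resolve.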


#10
  1. I set up a cluster on GKE with 3 nodes
  2. Followed your tutorial on the Prisma deployment on GKE up to the port-forwarding section; I then deployed the Prisma service successfully
  3. Created a K8s manifest for my API, deployed its pod, and exposed its service
  4. On my browser I can view my API, but when I run a query to the Prisma server via the browser, it times out.

#11

@wilbrodTC There is something wrong with your PRISMA_ENDPOINT URL. In which namespace is your API running? The Prisma service is running in the prisma namespace, and I guess your API is deployed to a separate namespace. If this is the case, you have to make sure to use the full hostname, which would be:

http://prisma.prisma.svc.cluster.local:4466/<your-service-name>/<your-stage>


#12

TL;DR

@andre I have an Azure Kubernetes cluster and a Postgres database instance with a Prisma management container/pod. The Prisma management container/pod has a load balancer object, and its external IP is exposed on the web as http://<ip address>/serviceName/stageName. Everything works as expected; however, this probably will not be my production configuration. As I better understand the mechanics of Kubernetes, it might be better to leverage the internal DNS, as explained in this thread, to interact with the management API, for example: prisma deploy --env .env.dev.

Also, I am using the prisma-nexus “SDL/code first” approach to generate the schema.graphql SDL API to be exposed in the public-facing GraphQL server. I am able to develop my API locally using this configuration, but I’m not 100% sure of the best direction to go from here. I’m really surprised there is so little information on the best way to deploy a graphql-yoga or Apollo GraphQL server via Kubernetes. (I have a basic serverless server on Now https://github.com/GoodFaithParadigm8/serverless-graphql/blob/master/index.js, but I’m not sure how that architecture translates to Kubernetes objects.)

Basic Express app, routes, port mapping, and corresponding Dockerfile:
https://github.com/GoodFaithParadigm8/docker-complex/blob/master/server/index.js#L71

Graphql-Yoga/apollo-server-express and port:
https://github.com/prisma/graphql-yoga/blob/master/src/index.ts#L73

TODO:

  1. Build a Docker Container/Image to host the generated server
  2. Make that graphql server highly available
  3. Build application to consume that API with Apollo Client

#3. With Apollo Client, everything is pretty straightforward: build all the assets and put the artifacts on a CDN (or maybe containerize it and host it in the Kubernetes cluster, although I don’t understand the benefit of that at the moment).

#2. Making the GraphQL server highly available is also pretty straightforward: an Ingress controller out in front with a static IP address, scaling as needed, eventually programmatically as time and effort allow.

#1. has to be solved first. As I build out v1, and assuming a future v2 of my API, there will be several iterations of that Docker container, which in itself doesn’t pose many challenges. The lingering questions I have, as they pertain specifically to Kubernetes, are:

  1. Do I need to build that image from an nginx image?
  2. Would it need any internal port mapping through a Service? E.g. to generate a new client SDK (as of now the externalIP configuration is insecure; how does this work using the Kubernetes internal DNS, as the prisma-client uses it to expose the Prisma management API, as shown here: https://github.com/prisma/nexus/blob/develop/examples/nexus-prisma/src/generated/prisma-client/index.ts#L883)?
  3. Using labels for route authorization inside Kubernetes, do we build a bunch of microservices to achieve this, or is it more effective to implement this in the client?
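On question 1: a Node server like graphql-yoga doesn’t need nginx in its image; nginx is only useful as a separate reverse proxy or for serving static assets. A hedged sketch of a Dockerfile for the generated server (base image tag, port, and entry file are assumptions):

```dockerfile
# A plain Node base image is enough to host a graphql-yoga / Apollo server.
FROM node:10-alpine
WORKDIR /app

# Install only production dependencies, using the lockfile for reproducibility
COPY package*.json ./
RUN npm ci --only=production

COPY . .

# The port the server listens on (graphql-yoga defaults to 4000)
EXPOSE 4000
CMD ["node", "index.js"]
```

Port mapping to the outside world would then happen through a Service and the Ingress, not inside the image.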

Also TODO:
SSO - Requirement for multiple apps across horizontal domains

I have read somewhere that Kubernetes treats its cluster like one big localhost, which makes sense, but it doesn’t exactly clarify a best practice for configuration, especially when using an exotic technology stack such as this one.

Open to suggestions and/or clarifications.