How to architect Prisma Cloud as microservices

prisma-cloud
prisma

#1

Hello!
Our team is trying to architect a GraphQL API using Prisma Cloud as our database, but we are a bit stuck on how best to structure it. We would like to follow a microservices-based architecture where business logic is delegated to services that can function on their own (the share-nothing philosophy). One way we thought of accomplishing this is to split the application into services that each have their own Prisma database and to stitch their schemas together (e.g. a user service, a payments service, etc.), essentially a database-per-service pattern.
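
Roughly, this sketch is the kind of stitching we mean: each service exposes its own schema over its own database, and a thin gateway stitches them into one graph. The schemas, resolvers, and the use of @graphql-tools/stitch below are made up for illustration, not our actual code:

```ts
// Rough sketch only: hypothetical per-service schemas stitched into one gateway.
import { makeExecutableSchema } from '@graphql-tools/schema'
import { stitchSchemas } from '@graphql-tools/stitch'

// Users service: owns its own database and only knows about users.
const usersSchema = makeExecutableSchema({
  typeDefs: /* GraphQL */ `
    type User {
      id: ID!
      email: String!
    }
    type Query {
      user(id: ID!): User
    }
  `,
  resolvers: {
    Query: {
      user: (_root: unknown, args: { id: string }) => ({ id: args.id, email: 'demo@example.com' }),
    },
  },
})

// Payments service: only stores a userId reference, never a full User.
const paymentsSchema = makeExecutableSchema({
  typeDefs: /* GraphQL */ `
    type Payment {
      id: ID!
      userId: ID!
      method: String!
    }
    type Query {
      paymentsForUser(userId: ID!): [Payment!]!
    }
  `,
  resolvers: {
    Query: { paymentsForUser: () => [] },
  },
})

// Gateway: clients see a single graph, but the data lives in separate services.
export const gatewaySchema = stitchSchemas({
  subschemas: [{ schema: usersSchema }, { schema: paymentsSchema }],
})
```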

However, we run into the issue of nested filters. For instance, if we want to get all users whose payment method is a credit card, we wouldn't really be able to do that, since the users service would only reference a paymentId and vice versa. The alternative is to centralise the database (a single Prisma schema) and share it among the different microservices.
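
To make that concrete: against a single Prisma schema this kind of query is a one-liner with a relation filter, but once users and payments live in separate databases there is nothing to filter across. This is Prisma Client syntax with made-up User/Payment models, just to show the shape of the query:

```ts
// Sketch only: assumes one Prisma schema with hypothetical User and Payment
// models, where User has a to-many `payments` relation.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

// "All users whose payment method is a credit card" as a single nested filter.
async function usersPayingByCard() {
  return prisma.user.findMany({
    where: {
      payments: { some: { method: 'CREDIT_CARD' } },
    },
  })
}
```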

We are mainly looking for advice on how best to approach this.

Thank you!


#2

Hi there!

What is the reason you are trying to go for a microservices-based architecture?

Personally I think you can go a long way with a centralised database. I mean, it could be backed by Amazon RDS, which is pretty high-performance and fault-tolerant.

Could you describe your needs a bit further? :slight_smile:

Do you have millions of users or a team of hundreds of engineers, or are you a young startup?

If you do some heavy processing/machine learning/big data stuff, sure - run that as separate services and plug it in through an API Gateway, but I don't see why you wouldn't store most things in a centralised database until your service really booms. If/when it does, you'll probably have the capital that comes with that growth to tackle those issues.

Here are two articles that resonate a lot with me personally.

I stand strongly against premature optimisation - I prefer shipping features for the people using the services!


#3

Hi! Thank you very much for the advice! Those articles were very helpful and seriously made me rethink our decision. We are a small team of remote developers across different timezones (at a Series A startup), which makes it hard to communicate well enough to build a monolithic application without stepping on each other's toes. Another challenge is that our current monolithic REST API was built over its lifetime by 20 different developers (many of them offshore on a low budget) and handed down to our current team. The application was not well architected and its dependencies were not clearly modularised, so it is very difficult to reason about.

Our current team is very proficient with devops, and we want to separate the application into different domains so we can all work independently and then come together to integrate the services. The plan is to migrate away from our current, poorly implemented RDS database (not Aurora) to Prisma (with Aurora).

Also, to answer my own question: nested filters like the one I mentioned are mainly needed for business analysis purposes, and a better approach for those kinds of queries would be BigQuery, which we already use. So it would only be a matter of keeping our new database integrated with it.
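
Concretely, the idea is that a question like "all users paying by credit card" gets answered in the warehouse rather than in the transactional databases. Something along these lines, where the dataset, table, and column names are made up:

```ts
// Sketch only: hypothetical analytics tables, using the @google-cloud/bigquery client.
import { BigQuery } from '@google-cloud/bigquery'

const bigquery = new BigQuery()

async function usersPayingByCard() {
  // Joining users and payments in BigQuery keeps this analytical query
  // out of the per-service transactional databases.
  const [rows] = await bigquery.query({
    query: `
      SELECT u.id, u.email
      FROM analytics.users AS u
      JOIN analytics.payments AS p ON p.user_id = u.id
      WHERE p.method = 'CREDIT_CARD'
    `,
  })
  return rows
}
```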


#4

There’s no silver bullet! Always pros and cons. And they’re hard to know up front.

Here are some of the things I personally dislike spending time on with microservices:

  • Debugging - like tracing messages/logs through several apps and trying to understand where the issue lies.
  • Devops. In general. I am not a wizard and I don't have a desire to become one.
  • Developing features that span across several projects/services… and then deploying in unison without breaking shit.
  • Code sharing. Managing common libraries etc for the business, making sure your change of the lib didn’t break another app, etc.
  • Piping/joining data together between services. You end up with a lot of complexity, data duplication, and/or slow requests.
  • Service communication is hard. Event driven vs RPC vs HTTP requests between services, etc…
  • E2E / service integration testing. It’s messy. Need to deploy your whole infrastructure into a test cluster. And ideally you’d want to spin that up on demand so you can work on one app in isolation.

With some diligence you can modularise your monolith into different domains. That still makes it easier to share code and to deploy than with separate projects, and once something is clearly independent and/or needs to be scaled independently, you break it out. It doesn't have to be 1 monolith or 150 microservices either; there are lots of in-betweens.
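
To give a rough idea of what I mean by modularising into domains (purely illustrative names, not a prescription): one deployable app and one database, but each domain hidden behind its own module interface, so other domains call that interface instead of reaching into its tables.

```ts
// Rough sketch of a modular monolith: one deployable app, one shared database,
// each domain behind its own module boundary. All model/field names are made up.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

// Users domain: the only module that touches user tables directly.
export const usersService = {
  getUser: (id: string) => prisma.user.findUnique({ where: { id } }),
}

// Payments domain: depends on the users domain through its interface,
// which makes it a candidate to break out later if it ever needs to scale alone.
export const paymentsService = {
  chargeUser: async (userId: string, amountCents: number) => {
    const user = await usersService.getUser(userId)
    if (!user) throw new Error(`Unknown user: ${userId}`)
    return prisma.payment.create({
      data: { userId, amountCents, method: 'CREDIT_CARD' },
    })
  },
}
```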

For the business analysis purposes I'd continue using BigQuery or Redshift or something, as you're doing, so those queries don't slow the app down.


#5

We were definitely afraid of the things you mentioned, so we opted for a smaller number of larger services (~10). For example, the users service could be one application that manages user information as well as authentication logic, payments could be another application, etc. We think this is a good middle ground that will let us move much more quickly. We always have the option later on to break those up into smaller pieces as the need arises.

We really appreciate your input on this! It really helped us think through our decisions.