August 21, 2018

How We're Constantly Improving the Performance of Prisma

Fast and predictable performance is crucial for software to succeed. Therefore, ensuring great performance is one of our key goals. This post discusses the practices and tools used by our engineering team to make sure that our software meets our high performance standards.


⚠️ This article is outdated as it relates to Prisma 1 which is now deprecated. To learn more about the most recent version of Prisma, read the documentation. ⚠️

Constantly evaluating performance

We take performance very seriously. Since we started working on Prisma, we have adopted many practices and tools that help us constantly evaluate and optimize the performance of the software we build.

Profiling and benchmarking are part of the engineering process

To ensure the stability of our software, we run a unit test suite every time new features are introduced. This prevents regression bugs and guarantees that our software behaves the way it is expected to.

Because performance and stability are equally important to us, we employ similar mechanisms to ensure great performance. With every code change, we test performance heavily by running an extensive benchmarking suite. This benchmarking suite exercises a variety of operations (e.g. relational filters and nested mutations) to cover all aspects of Prisma. The results are carefully observed and features are optimized where needed.

Sometimes even minor code changes can have a very negative impact on the performance of an application. Catching these regressions by hand is very difficult, and without any sort of profiling tool it is almost impossible. The benchmarking suite gives us an automated way to identify such issues and is absolutely crucial for avoiding accidentally introduced performance penalties.
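To make this concrete, here is a minimal sketch of what a benchmark run of this kind could look like. This is not our actual benchmarking suite; it assumes a Prisma 1 service running locally at http://localhost:4466 with a hypothetical datamodel containing User and Post types, and simply measures the average latency of two representative operations, a relational filter and a nested mutation:

```scala
import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

// A minimal, hypothetical benchmark harness (not Prisma's actual suite).
object QueryBenchmark {

  // Assumed endpoint of a locally running Prisma 1 service.
  val endpoint = "http://localhost:4466"

  // Two representative operations: a relational filter and a nested mutation.
  val operations: Map[String, String] = Map(
    "relationalFilter" ->
      """{"query": "query { users(where: { posts_some: { published: true } }) { id name } }"}""",
    "nestedMutation" ->
      """{"query": "mutation { createUser(data: { name: \"Alice\", posts: { create: [{ title: \"Hello\" }] } }) { id } }"}"""
  )

  // Sends one GraphQL request and returns the HTTP status code.
  def post(body: String): Int = {
    val connection = new URL(endpoint).openConnection().asInstanceOf[HttpURLConnection]
    connection.setRequestMethod("POST")
    connection.setRequestProperty("Content-Type", "application/json")
    connection.setDoOutput(true)
    connection.getOutputStream.write(body.getBytes(StandardCharsets.UTF_8))
    val status = connection.getResponseCode
    connection.disconnect()
    status
  }

  def main(args: Array[String]): Unit = {
    val iterations = 100
    operations.foreach { case (name, body) =>
      (1 to 10).foreach(_ => post(body)) // warm up the JVM before measuring
      val start = System.nanoTime()
      (1 to iterations).foreach(_ => post(body))
      val avgMs = (System.nanoTime() - start) / iterations / 1e6
      println(f"$name%-17s average: $avgMs%.2f ms over $iterations runs")
    }
  }
}
```

The actual suite covers many more operations and observes how they behave as load increases, as described above.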

Using FlameGraphs to identify expensive code paths

FlameGraphs are an important tool in our profiling activities. We use them to visualize expensive code paths (in terms of memory or CPU usage) which we can then optimize. FlameGraphs are extremely helpful not only for identifying low-hanging fruit for quick performance gains but also for surfacing problematic areas that are more deeply ingrained in our codebase.

An example of how we reduced memory allocation by 40%

Here is an example to illustrate how the benchmarking suite and FlameGraphs helped us identify and fix an issue that ultimately led to a 40% reduction in memory allocation for certain code paths.

  1. After introducing a code change, the data from our benchmarking suite showed that a certain code path was slowing down notably as load increased.
  2. To identify the part of the code causing the performance degradation, we looked at the FlameGraph visualization.
  3. The FlameGraph showed that a Calendar instance was consuming a lot of memory during the execution of a certain code path (the width of the purple areas in the FlameGraph of the Calendar instance indicates how much memory is occupied by the Calendar).
  4. Further debugging showed that the Calendar was instantiated inside a hot path, which caused the high memory usage.
  5. In this case, the fix was simple: the Calendar instantiation could just be moved out of the hot path (see the sketch after this list).
  6. The fix reduced the memory allocation by 40%.
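The actual change can be found in the PR mentioned below, but the general pattern is easy to illustrate. Here is a simplified sketch (with hypothetical helper names, not the actual Prisma code) of a Calendar being instantiated inside a hot path and of the reused-instance version:

```scala
import java.util.{Calendar, TimeZone}

// Simplified illustration of the pattern, not the actual Prisma code.
object TimestampHelper {

  // Before: a new Calendar is created on every call. When this runs in a hot
  // path (e.g. once per row of a large result set), the allocations add up.
  def yearOfSlow(epochMillis: Long): Int = {
    val calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"))
    calendar.setTimeInMillis(epochMillis)
    calendar.get(Calendar.YEAR)
  }

  // After: the Calendar is created once and reused. Since java.util.Calendar
  // is not thread-safe, the reused instance is kept per thread.
  private val calendarPerThread: ThreadLocal[Calendar] = new ThreadLocal[Calendar] {
    override def initialValue(): Calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"))
  }

  def yearOfFast(epochMillis: Long): Int = {
    val calendar = calendarPerThread.get()
    calendar.setTimeInMillis(epochMillis)
    calendar.get(Calendar.YEAR)
  }
}
```

The behavior stays the same, but the repeated allocations disappear from the hot path, which is exactly the kind of difference that shows up as narrower frames in a FlameGraph.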

To learn more about the details of this issue, you can check out the PR that fixed it.

Keep an eye out for our engineering blog. The articles on the engineering blog will cover our performance optimizations and other deeply technical topics in extensive detail.

Increasing performance in Prisma 1.14

With the latest 1.14 and 1.15-beta releases of Prisma, we're introducing a number of concrete performance improvements. These improvements are the result of a period in which we invested heavily in identifying the most expensive parts of our software and optimizing them as much as possible.

A common pattern we see in our optimization work is that identifying the exact part of the codebase causing a performance penalty is a lot more time-consuming than actually fixing it (which can often be done with minimal changes to the code). The above example with the Calendar instance is a good illustration of that.

If you're curious, here are a few more PRs that brought notable performance gains through rather small changes to our codebase:

Future performance improvements

Our vision to build a data layer that uses GraphQL as a universal abstraction for all databases is an extremely ambitious technical goal. Some of its benefits are: easy data access on the application layer (similar to an ORM but without its limitations), simple data modeling and migrations, an out-of-the-box realtime layer for your database (similar to RethinkDB), cross-database workflows and a lot more.

These benefits provide enormous productivity boosts in modern application development and are impossible to achieve without a dedicated team building such a data layer full-time. Working on this project as a company enables us to invest heavily in specialized optimization techniques that product-focused companies could never afford to build into their data access layer by hand.

In upcoming releases, we're planning to work on new features specifically designed for better performance. This includes a smart caching system, support for pre-computed views, as well as support for many more databases, each with its own strengths and query capabilities.
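To give a rough idea of what result caching means at the data layer, here is a deliberately naive sketch of a query-result cache keyed by the raw query string (all names are hypothetical; the planned caching system will have to handle concerns this sketch ignores, such as invalidation on writes and size bounds):

```scala
import scala.collection.concurrent.TrieMap

// Deliberately naive: results are keyed by the raw query string and never
// invalidated. A real cache needs invalidation on writes, size bounds, etc.
class NaiveQueryCache(executeAgainstDatabase: String => String) {
  private val cache = TrieMap.empty[String, String]

  def run(query: String): String =
    cache.getOrElseUpdate(query, executeAgainstDatabase(query))
}

object NaiveQueryCacheExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical executor standing in for a real database round trip.
    val cachedExecutor = new NaiveQueryCache(query => s"""{"data": "result of $query"}""")

    println(cachedExecutor.run("{ users { id } }")) // misses the cache, hits the database
    println(cachedExecutor.run("{ users { id } }")) // served from the cache
  }
}
```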


