
Scaling Flow Production Tracking Enterprise Docker

Warning:

Local installations of Flow Production Tracking are no longer offered. This documentation is intended only for those with existing instances of Shotgun Enterprise Docker. See the list of our current offerings.

Note:

This article is intended for Flow Production Tracking System Administrators. It explains how to scale Flow Production Tracking to sustain the growing needs of your Studio.

Flow Production Tracking topology

Upon initial setup, Flow Production Tracking usually runs in a single-server or standalone-database-server configuration. These configurations are usually enough for small and medium studios making casual use of their Flow Production Tracking instances. As your workflows evolve and the size of your projects increases, this may no longer be enough.

Flow Production Tracking's containerized architecture allows it to scale easily. There are typically three parts of Flow Production Tracking that can come under heavy load and require scaling:

  • The application module, which handles and dispatches requests to the different components and sends formatted answers back to clients;
  • The transcoding module, which converts your media into web-playable content;
  • The database server, which handles complex queries on your production data.
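As a rough sketch, the three modules above can be pictured as services in a compose-style file. The service and image names here are illustrative placeholders, not the actual names shipped with your deployment:

```shell
# Illustrative only: write a compose-style sketch of the three modules.
# Image and service names are placeholders, not the real distribution names.
cat > /tmp/fpt-topology-sketch.yml <<'EOF'
services:
  app:          # application module: dispatches requests, formats replies
    image: fpt-app:latest        # placeholder image name
  transcode:    # transcoding module: converts media to web-playable content
    image: fpt-transcode:latest  # placeholder image name
  db:           # database server: handles queries on your production data
    image: fpt-db:latest         # placeholder image name
EOF
```

In a single-server configuration all three services run on one host; scaling means moving or duplicating one of them onto additional hosts.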

Before thinking about scaling Flow Production Tracking, it is important to identify what is causing performance issues for your instance.

Identifying the bottleneck

While Flow Production Tracking may eventually need to be scaled, it is important to understand what the issue is before addressing it. Often, optimizing your workflows and your queries will prevent the need to change the topology.

The following article, Diagnosing performance issues for Flow Production Tracking Enterprise Docker, will help you determine the bottlenecks in your instance.

Scaling the application module

You have determined that Flow Production Tracking is application bound. This means that your application server(s) cannot process requests and replies any faster, but that the database could take more load.

The application module can be scaled by introducing an additional host, which will share the load with the existing ones. There are three main steps to add a new application server:

  1. Set up the host
  2. Run additional application containers
  3. Set up load balancing
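The three steps above can be sketched as follows. This is a hedged outline only: the container name, image name, and port are placeholders; the real values come from your existing deployment:

```shell
# Illustrative sketch: record the commands for bringing up a new app host.
# Container name, image name, and port are placeholders for your own values.
cat > /tmp/new-app-host-notes.sh <<'EOF'
# 1. Set up the host: install Docker and copy over your existing
#    Flow Production Tracking configuration.
# 2. Run an additional application container (placeholder image/port):
docker run -d --name fpt-app-2 -p 8080:8080 fpt-app:latest
# 3. Add the new host to your load balancer pool (see "Load balancing").
EOF
```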

Specializing the application container

One way to improve the web users' experience is to dedicate a server to an application container for web requests. This allows users to have a responsive experience, even when your pipeline is putting heavy load on Flow Production Tracking.

See Application container customization for more details on how to specialize containers.

Load balancing

There is no standard way of load balancing the traffic between your multiple application servers. We recommend using a strategy your Studio IT team is familiar with. Load balancing traffic for Flow Production Tracking is usually not very intensive. You can use a hardware solution, but a software solution on standard hardware (like NGINX or HAProxy) will be able to sustain the load. A simple round-robin load balancing strategy is more than acceptable.
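As a minimal sketch, here is what a round-robin setup could look like with NGINX, which distributes requests across upstream servers round-robin by default. The hostnames and ports are placeholders for your own application servers:

```shell
# Write an NGINX config sketch: default round-robin across two app servers.
# Hostnames and ports are placeholders for your own topology.
cat > /tmp/fpt-lb.conf <<'EOF'
upstream fpt_app {
    server app1.studio.internal:8080;  # first application server
    server app2.studio.internal:8080;  # second application server
}
server {
    listen 80;
    location / {
        proxy_pass http://fpt_app;     # NGINX round-robins by default
        proxy_set_header Host $host;
    }
}
EOF
```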

One thing you will want to consider is decoupling web requests from API ones, as explained above. If you decide to go that way, it is useful to know that all script and API requests use the /api/ route to talk to Flow Production Tracking. A simple path-based routing strategy will allow you to direct all the API traffic to a given set of servers while directing web traffic to another set.
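Building on the NGINX example above, a sketch of that split could look like this. Again, the upstream hostnames are placeholders; only the /api/ route comes from the source:

```shell
# Sketch: route /api/ traffic to dedicated API servers, everything else
# to the web-facing servers. Hostnames and ports are placeholders.
cat > /tmp/fpt-split.conf <<'EOF'
upstream fpt_web { server web1.studio.internal:8080; }
upstream fpt_api { server api1.studio.internal:8080; }
server {
    listen 80;
    location /api/ {            # all script and API requests use /api/
        proxy_pass http://fpt_api;
    }
    location / {                # interactive web traffic
        proxy_pass http://fpt_web;
    }
}
EOF
```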

Scaling the database

The Flow Production Tracking database can only scale vertically for now. This means that the only way to increase the database server's capacity is to upgrade the hardware. A database is usually I/O bound. Adding faster disks and increasing the amount of memory are the easiest ways to address performance issues.

Also, you'll want to make sure that your database server is properly configured.
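As an illustration, assuming the database is PostgreSQL, the memory-related settings are typically the first ones to review after a hardware upgrade. The values below are generic starting points only, not Flow Production Tracking recommendations; consult the database configuration article above for the settings that apply to your instance:

```shell
# Illustrative postgresql.conf fragment; values depend on your hardware
# and are generic starting points, not Flow Production Tracking guidance.
cat > /tmp/fpt-db-tuning.conf <<'EOF'
shared_buffers = 8GB            # commonly around 25% of system RAM
effective_cache_size = 24GB     # estimate of OS file cache available
work_mem = 64MB                 # per-sort memory for complex queries
random_page_cost = 1.1          # lower cost suits SSD-backed storage
EOF
```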

Scaling the transcoding service

You have determined that transcoding jobs are queuing up, and that users must wait a long time before seeing thumbnails and being able to play back media. Scaling the transcoding service is easy. If the load on the server is not too high, you can simply scale the number of workers running on the host.

If the load on the transcoding host is already high, it is better to introduce a new transcoding server in the topology.

  1. Set up the new host.
  2. Properly configure the transcode workers for your transcoding service and start multiple workers.

If properly configured, the new workers will automatically start processing jobs.
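Both scaling options can be sketched as below. The container names, image name, and queue-host variable are placeholders; use whatever mechanism your deployment uses to start worker containers and point them at the transcoding queue:

```shell
# Sketch of scaling transcode workers; all names here are placeholders.
cat > /tmp/fpt-transcode-scale.sh <<'EOF'
# Option 1: same host has spare capacity, so run additional workers.
docker run -d --name transcode-worker-2 fpt-transcode:latest
docker run -d --name transcode-worker-3 fpt-transcode:latest
# Option 2: new host; point workers at the shared queue so that,
# once properly configured, they automatically start processing jobs.
docker run -d --name transcode-worker-1 \
    -e QUEUE_HOST=fpt-server.studio.internal \
    fpt-transcode:latest
EOF
```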
