Migrate from SE Classic to SE Docker

Warning:

Local installations of Flow Production Tracking are no longer offered. This documentation is intended only for those with existing instances of Flow Production Tracking Enterprise Docker.

This article will help Flow Production Tracking System Administrators migrate from the Classic architecture to the Docker one.

Proposed approach

Even before migrating to a container architecture, Flow Production Tracking was modular. This means that every component could be run on a dedicated host, allowing for different topologies.

We propose putting this characteristic to use while migrating to the Docker architecture.

1. Migrating the database

You need to change the owner of the functions in the database; if this is not done, the next Flow Production Tracking upgrade will fail. The database name and user should look like this: com_teststudio_shotgun.

How to check functions ownership

  • Connect to the database with psql and run \df+ (see the example below).
  • Check whether the Owner of the functions is the same as the database user. It is usually something like com_teststudio_shotgun.
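
For example, assuming a local PostgreSQL instance and the usual naming (the host and user here are placeholders to adapt to your setup), the check could look like this:

 psql -h localhost -U com_teststudio_shotgun -d com_teststudio_shotgun
 com_teststudio_shotgun=> \df+ public.*

If the Owner column shows a different user (for example, postgres), fix the ownership as described below.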

How to fix

You will need to run the following query in a psql shell connected to the Flow Production Tracking database, as a user with SUPERUSER rights such as shotgun.

This query produces the list of commands to run. Replace com_teststudio_shotgun with your database user.

 select 'alter function ' || nsp.nspname || '.' || p.proname || '(' || oidvectortypes(p.proargtypes) || ') owner to com_teststudio_shotgun;'
 from pg_proc p
 join pg_namespace nsp on p.pronamespace = nsp.oid
 where nsp.nspname = 'public';

Copy and paste the resulting commands into the same psql shell to execute them.
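
The generated statements will look like the following (the function name and signature here are hypothetical; yours will come from your schema):

 alter function public.example_function(integer) owner to com_teststudio_shotgun;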

2. Migrating the application module

Start an application container

You first need a Flow Production Tracking App container running on a CentOS 7 host. Follow this procedure if this is not yet the case.
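
To confirm the app container is up before going further, you can run the following from the directory containing your docker-compose.yml:

 sudo docker-compose ps

The app service should be listed with a State of Up.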

Database

Set the POSTGRES_XXX environment variables in all containers to point to your existing database. The database name and user should look like this: com_teststudio_shotgun.

Also update the PGXXX environment variables in the DBOPS container.
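
As a sketch, the database-related sections of docker-compose.yml could look like this; the exact POSTGRES_XXX keys and the db.example.com host are assumptions to adapt, while the PGXXX names are the standard libpq variables:

 services:
   app:
     environment:
       # Point the app at the existing database (keys are assumed; check your compose file)
       - POSTGRES_HOST=db.example.com
       - POSTGRES_DB=com_teststudio_shotgun
       - POSTGRES_USER=com_teststudio_shotgun
   dbops:
     environment:
       # Standard libpq variables used for database operations
       - PGHOST=db.example.com
       - PGDATABASE=com_teststudio_shotgun
       - PGUSER=com_teststudio_shotgun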

Media

Mount your existing media folder in the app container and set the SHOTGUN_USER_ID environment variable to match the UID of the user owning the files on the host. This is usually the shotgun user.
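
For example, assuming the media files are owned by the shotgun user on the host, look up its UID:

 id -u shotgun

then set SHOTGUN_USER_ID to the printed value (for instance, SHOTGUN_USER_ID=1001) in the app service's environment section.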

If your files and storage folders are not under media, edit the app volumes section of your docker-compose.yml to look like this:

 volumes:  
  - /mnt/shotgun_files:/media/files  
  - /mnt/shotgun_storage:/media/storage  

RabbitMQ

If you are already using RabbitMQ for notifications, copy your existing bunny.yml next to your docker-compose.yml and map it in the app container volumes section.

 volumes:  
  - ./bunny.yml:/var/rails/shotgun/current/config/bunny.yml 

LDAP

If LDAP was previously configured for authentication, copy your existing net_ldap.yml next to your docker-compose.yml and map it in the app container volumes section.

 volumes:  
  - ./net_ldap.yml:/var/rails/shotgun/current/config/net_ldap.yml 

Restart the containers

After these changes, restart the containers:

 sudo docker-compose up -d 

3. Migrating the transcoding service

First, make sure the Flow Production Tracking transcoding service is correctly set up and running using the containers. You can choose to run the service against the same database as Flow Production Tracking, or against a dedicated one.
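
If you point the transcoder at the same database, its compose file could look like the sketch below; the service name and variable keys are assumptions to check against your transcoder's actual configuration:

 services:
   transcoder:
     environment:
       # Reuse the same database coordinates as the app (keys are assumed)
       - POSTGRES_HOST=db.example.com
       - POSTGRES_DB=com_teststudio_shotgun
       - POSTGRES_USER=com_teststudio_shotgun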
