Migrating from an existing hosted site to a local site
Local installations of Flow Production Tracking are no longer offered. This documentation is intended only for those with existing instances of Flow Production Tracking Enterprise Docker.
This procedure explains how to migrate a Flow Production Tracking hosted site to a local Flow Production Tracking instance.
Requirements
This procedure assumes that:
- For the media, the host running the shotgun-app containers can reach AWS S3 endpoints.
- Your media storage is properly configured and has enough storage capacity to store all your media.
- Flow Production Tracking Enterprise (SE) Docker is already running and properly configured.
Step 0. Create a ticket with Flow Production Tracking Support
Our support team is there to help you with the migration process. If you have not already done so, please open a ticket with Flow Production Tracking Support; they will guide you through the process.
Step 1. Restore the database
Download the DB snapshot from S3 using the credentials and path we supply. You will need to install the AWS CLI and configure it with those credentials. We will provide the necessary <bucket> and <prefix> values alongside the credentials.
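If the AWS CLI is not yet configured on the host, a minimal setup might look like the following (the default profile is used here for simplicity, and the region is illustrative; substitute the credentials from your Support ticket):
aws configure set aws_access_key_id <ACCESS_KEY_ID>
aws configure set aws_secret_access_key <SECRET_KEY_ID>
aws configure set region us-west-2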
aws s3 cp s3://<bucket>/<prefix>/db_backup /opt/shotgun/se/production/db_backup --recursive
Restore the hosted database on your Flow Production Tracking instance.
docker-compose stop app emailnotifier
docker-compose exec dbops dropdb shotgun
docker-compose exec dbops createdb shotgun
docker cp <BACKUP_NAME> $(docker-compose ps -q dbops):/db_backup/
docker-compose exec dbops pg_restore --no-owner --verbose -Fc --no-acl --jobs 4 --dbname shotgun /db_backup/<backup_file>.dump
docker-compose run --rm app rake admin:init_database
docker-compose run --rm app rake admin:reset_shotgun_admin_password
docker-compose start app emailnotifier
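As an optional sanity check (not part of the official procedure), you can confirm that the restored database is reachable and contains tables:
docker-compose exec dbops psql -d shotgun -c '\dt'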
Step 2. Configure access to AWS S3
This step is optional. Configuring S3 access allows you to retrieve the media you previously uploaded to your hosted instance. Your local Flow Production Tracking cluster will be functional without the media, but your instance won't be able to display the thumbnails, filmstrips, and movies that were added while your hosted instance was in production.
Configure Flow Production Tracking to use AWS S3
vi /opt/shotgun/se/production/services.yml
Use this template and replace ACCESS_KEY_ID and SECRET_KEY_ID with the values provided by Flow Production Tracking Support. If you don't have these keys, please ask for them in your Support ticket.
aws:
  auth:
    app:
      access_key: ACCESS_KEY_ID
      secret_key: SECRET_KEY_ID
  s3:
    sg-media-usor-01:
      region: us-west-2
    sg-media-tokyo:
      region: ap-northeast-1
    sg-media-ireland:
      region: eu-west-1
For all your shotgun-app container(s), map the services.yml file into the container by adding it as a volume in the docker-compose.yml file.
app:
  volumes:
    - ./media:/media
    - /opt/shotgun/se/production/services.yml:/var/rails/shotgun/current/config/services.yml
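Before restarting, you can optionally check that the edited file still parses; docker-compose config validates docker-compose.yml and prints the merged configuration:
docker-compose config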
Restart the app container
docker-compose up -d app
You should now be able to see the media from S3 in your local Flow Production Tracking instance, typically at http://<your_site>.com.
Download the media
Only for Flow Production Tracking SE Docker v7.4.x and later.
docker-compose run --rm app script/sync_s3_media.rb
This will run for hours if the hosted site has a lot of media.
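Because the sync can take hours, you may prefer to run it detached from your terminal and keep a log. This is just one way to do it (the log file name is arbitrary):
nohup docker-compose run --rm app script/sync_s3_media.rb > sync_s3_media.log 2>&1 &
tail -f sync_s3_media.log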
Remove S3 access
When all the media has been downloaded, you can remove the services.yml file and remove the host's access to S3.
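A minimal way to do this, assuming the volume mapping shown earlier: delete or comment out the services.yml line under the app service's volumes in docker-compose.yml, remove /opt/shotgun/se/production/services.yml, then recreate the container so the change takes effect:
docker-compose up -d app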
Troubleshooting
There is a 403 error in Flow Production Tracking while browsing media after configuring AWS S3.
If you get 403 errors on the web page, double-check the server time. It must be accurate to the minute; the AWS API is very strict about this. A classic option is to use ntpdate.
sudo yum install ntpdate -y && sudo ntpdate pool.ntp.org
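To check the remaining clock drift without changing the time, ntpdate can also be run in query-only mode (the -q flag queries the server without setting the clock):
ntpdate -q pool.ntp.org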
The media downloading script failed. Can it be run again?
Yes, this script is idempotent. If it is rerun after failure, media that has already been downloaded will be skipped.
What is happening during the download process?
Files are downloaded locally to your media server, and the reference to each file is updated in the Flow Production Tracking database. Files are not removed from S3 during that process.
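If you want to follow the download's progress, one simple (unofficial) option is to watch the size of the local media directory, which corresponds to the ./media volume mapped earlier (run from the directory containing docker-compose.yml):
watch -n 60 du -sh ./media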
Database Error
When you see this error:
dropdb: database removal failed: ERROR: database "shotgun" is being accessed by other users
DETAIL: There is 1 other session using the database.
run the following to terminate the open sessions:
docker-compose run dbops sh
# psql
psql (9.6.7, server 9.6.6)
Type "help" for help.
shotgun=# SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'shotgun'
AND pid <> pg_backend_pid();
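Once the sessions are terminated, exit psql (\q) and the container shell, then retry the drop from Step 1:
docker-compose exec dbops dropdb shotgun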