Project Server Configuration
A project server eases collaboration and simplifies project management: instead of each Flame, Flare, or Flame Assist instance creating its own project data locally, all project data is stored on a centralized project server.
To create a stable and performant environment, follow the guidelines on the maximum number of concurrent users at the end of this topic.
In this topic, we create a headless project server instance to be used in a network of Flame and Burn instances that all use the same project server and networked storage.
You will:
- Set up a project server instance on AWS.
- Connect to the instance through the command line.
- Add additional storage for the project metadata.
- Configure the instance as a project server.
- Configure the instance to use the networked storage.
Recommended Instance Type
The project server does not require a GPU since it is not used to decode media.
Instance Type | CPU | RAM |
---|---|---|
r5.xlarge | 16 vCPUs | 32 GB |
For details on the various AWS EC2 instance types, see AWS instance documentation.
Recommended Number of Clients per Project Server
Autodesk recommends a maximum of five clients per project server to minimize network congestion and issues with storage quality of service. This can be in any combination of Autodesk Flame Family products and Burn nodes, such as three Flames and two Burn nodes.
Based on your instance type configuration, storage throughput, and other factors, this guideline can vary. Work with your system integrator to design a workgroup with capabilities matching your production requirements.
As an example, the following minimal setup can be expected to serve three Flame instances and two Burn node instances.
- Media storage: EBS volume
- Project storage: EBS gp3 volume
- Project server: r5.xlarge instance
On the other hand, the following more expansive setup could serve up to eight Flame instances and eight Burn node instances:
- Media storage: Weka or FSx for OpenZFS
- Project storage: Striped EBS
- Project server: A more powerful instance with a 50Gbps network
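As a rough sketch of what "Striped EBS" project storage can look like, the commands below stripe two attached EBS volumes into a single XFS file system with mdadm. The device names /dev/nvme1n1 and /dev/nvme2n1 and the mount point are assumptions; confirm the devices with lsblk and adapt the commands to your instance and storage layout.
# Assumed device names for two attached EBS volumes; confirm with lsblk before running.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
# Format the striped array and mount it at the project storage location (assumed to be /var/opt/Autodesk, as in the single-volume setup described later).
sudo mkfs.xfs /dev/md0
sudo mount /dev/md0 /var/opt/Autodesk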
Set Up a Project Server Instance on AWS
From the Autodesk Project Server AMI you created, deploy the project server instance with the following parameters:
Instance type: At least r5.xlarge. The project server does not require a GPU since it is not used to decode media. For details on the various AWS EC2 instance types, see AWS instance documentation.
Storage: You must attach two volumes to this instance: one for the OS and software, and another for the project storage.
Set the system volume to 20 GB.
Attach to your project server instance the storage required to store the project metadata shared by your Flame and Burn instances. The project metadata storage requirements are:
- Volume Type: gp3
- Size (GiB): 500
- IOPS: 3000
- Throughput (MiB/s): set to default (125 MiB/s)
- To prevent deletion of important project metadata, set the project volume to not delete on instance termination.
Important: This metadata is critical to Flame Family projects. Do not use ephemeral storage to store this metadata or you will lose critical information when you restart the project server. Always use persistent storage.
Security Groups:
- Project Server
- IGMP-multicast
Other settings should be adjusted according to your requirements, but do configure the following:
- Network
- Subnet
- Domain join directory
- IAM role
For best performance, all instances in your deployment (Flame, Burn, Project Server, NAS) must be in the same subnet/availability zone.
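If you prefer to deploy from the AWS CLI instead of the console, the following is a minimal sketch of a launch command that applies the parameters above. The AMI ID, key pair name, subnet ID, and security group IDs are placeholders, and the /dev/sda1 and /dev/sdf device names are assumptions; adjust everything to your environment.
# Sketch only: launch the project server with the system and project volumes described above.
aws ec2 run-instances \
  --image-id <project-server-ami-id> \
  --instance-type r5.xlarge \
  --key-name <keypair-name> \
  --subnet-id <subnet-id> \
  --security-group-ids <project-server-sg-id> <igmp-multicast-sg-id> \
  --block-device-mappings '[
    {"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 20, "VolumeType": "gp3"}},
    {"DeviceName": "/dev/sdf",  "Ebs": {"VolumeSize": 500, "VolumeType": "gp3", "Iops": 3000, "Throughput": 125, "DeleteOnTermination": false}}
  ]'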
When you create the instance, the option Instance volume deletion on termination controls what happens to the storage when you terminate the instance. If the volume is not deleted on termination, you stop paying for the terminated instance but continue paying for the storage.
To avoid paying for storage you no longer need, you can have the instance volume deleted automatically when the instance is terminated.
- When you launch an instance, open the storage tab.
- Select Delete on termination.
Before selecting Delete on termination, understand that data stored on the instance volume is permanently deleted when the instance is terminated.
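If the instance is already running, the Delete on termination setting can also be inspected and changed from the AWS CLI. The following is a sketch, assuming the project volume is attached as /dev/sdf and that you substitute your own instance ID; set the flag to true instead if you do want the volume removed with the instance.
# Keep the project volume when the instance is terminated (DeleteOnTermination=false).
aws ec2 modify-instance-attribute \
  --instance-id <instance-id> \
  --block-device-mappings '[{"DeviceName": "/dev/sdf", "Ebs": {"DeleteOnTermination": false}}]'
# Verify the current setting for each attached volume.
aws ec2 describe-instances \
  --instance-ids <instance-id> \
  --query 'Reservations[].Instances[].BlockDeviceMappings[].[DeviceName,Ebs.DeleteOnTermination]' \
  --output table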
Configure the Project Server Instance
Once you have created the project server instance, configure it using the following steps.
- Connect to the Project Server instance
From your local machine, connect to the instance through ssh. Use the flameadmin account and the keypair of the instance.
ssh -i <keypair> flameadmin@<INSTANCE PUBLIC IP>
- Grow the project metadata storage
You set the size of the metadata storage when you created the instance. Here you resize the file system to fully use the allocated metadata storage.
In a shell, enter the following.
sudo xfs_growfs /var/opt/Autodesk
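To confirm that the file system now spans the full allocated volume, you can check the size reported at the mount point; a quick check:
# The Size column should now report roughly the 500 GiB allocated to the project volume.
df -h /var/opt/Autodesk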
- Assign a new machine ID
In a shell, run the command to assign a new machine ID to your instance:
dbus-uuidgen | sudo tee /etc/machine-id
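As a quick sanity check, you can print the machine ID back to confirm the new value was written:
# Displays the freshly generated machine ID.
cat /etc/machine-id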
- Stop Autodesk Backburner and Autodesk Stone & Wire services
In a shell, enter the following to stop the Autodesk Backburner and Autodesk Stone & Wire services.
sudo systemctl stop adsk_backburner
sudo systemctl stop adsk_sw
- Set the hostname of the project server
- Connect to the instance.
- Set the hostname of the instance.
sudo hostnamectl set-hostname --static <Your-host-name>
- Add the instance to the identity management system, if you have one.
Follow the instructions for your identity management system.
- Configure Backburner Manager to run without servers
In a shell, enter the following.
cat << EOF | sudo /opt/Autodesk/backburner/backburnerConfig
y
n
EOF
- Restart Autodesk Backburner and Autodesk Stone & Wire services
In a shell, enter the following to restart the Autodesk Backburner and Autodesk Stone & Wire services.
sudo systemctl start adsk_backburner
sudo systemctl start adsk_sw
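To confirm that both services came back up, you can query their state with systemctl; a quick check:
# Both commands should print "active" once the services are running again.
systemctl is-active adsk_backburner
systemctl is-active adsk_sw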
- Attach the NAS mount point
In a workgroup configuration, your media is stored on a network attached storage (NAS). You've configured the NAS previously, and now you mount it on the instance to use it as your media storage.
In a shell, run the following commands.
Install autofs.
sudo dnf install autofs
Create the directory where the media is stored.
sudo mkdir /mnt/StorageMedia
Redirect to the NAS. Use the command sudo vi /etc/auto.direct and add the following line to the file:
/mnt/StorageMedia -rw,noatime,nodiratime,bg nas:/mnt/nas/StorageMedia
Note: The above is an example of an NFS exported file system from server nas. Different shared storage solutions will use slightly different syntax. You must adapt this line to your setup.
Set up the automount. Use the command sudo vi /etc/auto.master and add the following line at the end of the file:
/- auto.direct
Enable and start autofs.
sudo systemctl enable autofs
sudo systemctl start autofs
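To verify the automount, you can trigger it by accessing the mount point and then confirm that the NAS export is mounted. A quick check, assuming the /mnt/StorageMedia path configured above:
# Accessing the directory triggers autofs to mount the NAS export.
ls /mnt/StorageMedia
# The NAS export should now appear in the list of mounted file systems.
df -h /mnt/StorageMedia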
- Create the media storage
Set up the NAS mount point according to your setup.
Refer to Create a NAS Controller.
- Configure user accounts
What you do depends on whether or not you are using an identity management system.
You have an identity management system
If you are using an identity management system, now is the time to add the instance to the system. The procedure to do so is outside the scope of this document; refer to the documentation of your identity management system.
To connect to the project server through ssh, use flameadmin.
You don't have an identity management system
If you are not using an identity management system, you must now create on this instance the accounts of all the users that will access the project server. This means creating accounts for all the Burn and Flame users. For this you need each user account's user name, user ID, and user group as they are defined on the Flame instances.
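To collect these values, you can run the id command for each user on a Flame instance; it reports the user ID (uid), primary group (gid), and user name that must be reproduced on the project server. For example (output values are illustrative only):
# Run on the Flame instance. Example output: uid=2001(<username>) gid=2001(users) groups=2001(users),10(wheel)
id <username>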
From the root account, open a shell.
Create the user account, making sure to correctly enter the account's <user id>, <user group>, and <username> as they are defined on the Flame instances.
sudo useradd -u <user id> -g <user group> <username>
Set the user's password.
sudo passwd <username>
Follow the onscreen instructions.
Once passwd is done, give the user administrative privileges:
sudo usermod -aG wheel <username>
The user can now use the sudo command to run commands usually reserved for the root account.
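If you have many users to create, the same commands can be wrapped in a small loop driven by a list of accounts. This is only a sketch, assuming a users.txt file with one "<username> <user id> <user group>" entry per line that matches the accounts on your Flame instances; passwords are still set interactively.
# Sketch: create each account listed in users.txt (format: <username> <user id> <user group>).
while read -r name uid group; do
    sudo useradd -u "$uid" -g "$group" "$name"   # create the account with the IDs used on Flame
    sudo usermod -aG wheel "$name"               # grant administrative privileges
    sudo passwd "$name" < /dev/tty               # set the password interactively from the terminal
done < users.txt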