This topic details some of the steps required to create a NAS accessible to your Autodesk Flame, Autodesk Burn, and project server instances. Use these steps as a baseline for setting up your own NAS controller instance.
The sections Configure the NAS for Networked Framestore and Attaching the NAS to an Instance contain information that will help you create and attach your NAS to an Autodesk Flame Family instance.
Using the base Rocky Linux 8.5 image from Autodesk, deploy a NAS controller instance with the following parameters:
Instance type: At least c5n.9xlarge. The NAS controller does not require a GPU since it's not used to decode media.
For details on the various AWS EC2 instance types, see AWS instance documentation.
Storage: 50 GB for the system disk is enough. Project data should be stored on an attached disk array or cloud NAS.
Security Group:
NAS
IGMP-multicast
Set the other settings according to your requirements.
After launching the NAS Controller instance, set the following:
Attach to your NAS instance the storage required to create the framestore shared by your Flame and Burn instances.
The recommendation is to use arrayed storage with disks capable of 3000 IOPS and 250 MB/s throughput. AWS recommends ST1 EBS volumes, as ST1 is optimized for sequential read/write workloads. In testing, the array achieved a nominal throughput of 900+ MB/s locally, and 350-450 MB/s through NFS.
You're looking for high throughput and IOPS for large file operations: this storage holds the media files created by the Flame instances connected to the NAS controller.
You can also use services such as WekaIO to get optimized data performance and throughput. The following instructions are only applicable if you're setting up your own storage.
After launching your NAS Controller instance, you must:
Follow the instructions below. Usually, you can simply copy-paste the commands from the document to the shell.
Display the disks available to the NAS instance. These are the disks that you've previously attached to your instance.
The actual output from the command could be different, but you're looking for the presence of four disks similar to nvme1n1, nvme2n1, nvme3n1, and nvme4n1.
env LANG=en_US.iso8859-1 lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:4 0 10G 0 disk
|-nvme0n1p1 259:5 0 1G 0 part /boot
`-nvme0n1p2 259:6 0 9G 0 part
|-centos-root 253:0 0 8G 0 lvm /
`-centos-swap 253:1 0 1G 0 lvm [SWAP]
nvme1n1 259:0 0 500G 0 disk
nvme2n1 259:1 0 500G 0 disk
nvme3n1 259:2 0 500G 0 disk
nvme4n1 259:3 0 500G 0 disk
Partition the drives.
sudo parted -s -- /dev/nvme1n1 mklabel gpt mkpart primary 0% 100%
sudo parted -s -- /dev/nvme2n1 mklabel gpt mkpart primary 0% 100%
sudo parted -s -- /dev/nvme3n1 mklabel gpt mkpart primary 0% 100%
sudo parted -s -- /dev/nvme4n1 mklabel gpt mkpart primary 0% 100%
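The four parted invocations differ only in the device name, so they can be collapsed into a loop. This is a sketch; adjust the device list to match your own lsblk output.

```shell
# Partition each data disk with a single GPT partition spanning the drive.
# Device names assume the lsblk output shown above.
for dev in nvme1n1 nvme2n1 nvme3n1 nvme4n1; do
    sudo parted -s -- /dev/$dev mklabel gpt mkpart primary 0% 100%
done
```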
Create a physical volume (PV) for each partition.
sudo pvcreate /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
Create a volume group (VG) containing all four PVs. In our example, we name the volume vg00.
sudo vgcreate vg00 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
Get the total physical extent (PE) of the volume group.
sudo vgdisplay vg00 | grep "Total PE"
Total PE 511996
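To avoid transcribing the number by hand, the extent count can be captured into a shell variable. This is a sketch that assumes the vgdisplay output format shown above.

```shell
# Parse the extent count out of the "Total PE" line of vgdisplay.
TOTAL_PE=$(sudo vgdisplay vg00 | awk '/Total PE/ {print $3}')
echo "$TOTAL_PE"
```

You can then pass -l "$TOTAL_PE" to lvcreate in the next step.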
Using the Total PE value, set the size of the logical volume (LV). The lvcreate syntax is 'lvcreate -l <Total-PE> -i <number-of-PVs> -I <stripe-size-in-KB> -n <LV-name> <VG-name>'. Here we create a logical volume named lvol1.
sudo lvcreate -l 511996 -i 4 -I 32 -n lvol1 vg00
Create the filesystem on the storage. We will use XFS.
Determine the allocation group size (agsize) to use. First create a file system with 128 allocation groups to retrieve a base agsize.
sudo mkfs.xfs -d agcount=128 -f /dev/vg00/lvol1
meta-data=/dev/vg00/lvol1 isize=512 agcount=128, agsize=4095968 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=524283904, imaxpct=5
= sunit=8 swidth=32 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=256000, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Multiply the agsize by 4096 and subtract 32768 to obtain the agsize to use. In our example, the result is 4095968 * 4096 - 32768 = 16777052160. Use this value to create the filesystem.
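The arithmetic can be done directly in the shell. This sketch assumes the agsize value of 4095968 blocks reported by the first mkfs.xfs run above.

```shell
# agsize in blocks from the first mkfs.xfs run, times the 4096-byte
# block size, minus 32768 per the formula above.
AGSIZE=$((4095968 * 4096 - 32768))
echo "$AGSIZE"   # 16777052160
```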
sudo mkfs.xfs -d agsize=16777052160 -f /dev/vg00/lvol1
Create the mountpoint.
sudo mkdir /mnt/nas
Make the mount persistent by adding it to /etc/fstab.
sudo vi /etc/fstab
/dev/vg00/lvol1 /mnt/nas xfs rw,noatime,inode64,nofail 0 0
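If you prefer not to edit the file interactively, the same line can be appended with tee, mirroring the style used for /etc/exports later in this document.

```shell
# Append the mount entry to /etc/fstab non-interactively.
echo '/dev/vg00/lvol1 /mnt/nas xfs rw,noatime,inode64,nofail 0 0' | sudo tee --append /etc/fstab
```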
Mount the volume.
sudo mount /mnt/nas
Create the directory for the shared framestore. Also set appropriate ownership and permissions.
sudo mkdir /mnt/nas/StorageMedia
sudo chmod 777 /mnt/nas/StorageMedia
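The chmod above opens the directory to everyone. If you prefer tighter permissions, ownership can be granted to a dedicated group instead; this is a sketch, and flameusers is a hypothetical group name you should adapt to your site.

```shell
# Hypothetical group-based ownership; flameusers is an example name only.
sudo groupadd -f flameusers
sudo chown root:flameusers /mnt/nas/StorageMedia
# setgid bit so files created here inherit the group.
sudo chmod 2775 /mnt/nas/StorageMedia
```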
Export the mount points so other hosts have access. Restrict the exports to hosts in your VPC by using its CIDR. In the following example, replace 172.16.128.0/22 with your CIDR.
echo '/mnt/nas/StorageMedia 172.16.128.0/22(rw,async)' | sudo tee --append /etc/exports
sudo exportfs -va
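From the NAS controller itself, you can verify that the export took effect. showmount is part of the nfs-utils package.

```shell
# Confirm the export line made it into /etc/exports...
grep 'StorageMedia' /etc/exports
# ...and that the NFS server is currently exporting it.
showmount -e localhost
```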
In a workgroup configuration, your media is stored on centralized network-attached storage (NAS). You configured the NAS previously; now you mount it on the instance to use it as your media storage.
In a shell, run the following commands.
Install autofs.
sudo yum install autofs
Create the directory where the media is stored.
sudo mkdir /mnt/StorageMedia
Redirect to the NAS. Use the command sudo vi /etc/auto.direct and add the following line to the file:
/mnt/StorageMedia -rw,noatime,nodiratime,bg nas:/mnt/nas/StorageMedia
where nas is the hostname of the NAS controller instance. Different shared storage solutions use slightly different syntax; you must adapt this line to your setup.
Set up the automount. Use the command sudo vi /etc/auto.master and add the following line at the end of the file.
/- auto.direct
Enable and start autofs.
sudo systemctl enable autofs
sudo systemctl start autofs
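autofs mounts on first access, so simply listing the directory both triggers and verifies the mount.

```shell
# Accessing the path triggers the automount; df then shows the NFS mount.
ls /mnt/StorageMedia
df -h /mnt/StorageMedia
```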