Reference: Create a NAS Controller
This topic details some of the steps required to create a NAS accessible to your Autodesk Flame, Autodesk Burn, and project server instances. Use these steps as a baseline for setting up your own NAS controller instance.
The sections Configure the NAS for Networked Media Storage and Attaching the NAS to an Instance contain information that will help you create and attach your NAS to an Autodesk Flame Family instance.
Set up the NAS controller instance
Using the base Rocky Linux AMI from Autodesk, deploy a NAS controller instance with the following parameters:
Instance type: At least c5n.9xlarge. The NAS controller does not require a GPU, since it is not used to decode media. For details on the various AWS EC2 instance types, see the AWS instance documentation.
Storage: 50 GB for the system disk is enough. Project data should be stored on an attached disk array or cloud NAS.
Security Groups:
- NAS
- IGMP-multicast
Other settings should be adjusted according to your requirements, but do configure the following:
- Network
- Subnet
- Domain join directory
- IAM role
Configure the NAS Controller Instance
After launching the NAS Controller instance, do the following:
- Create the media storage
- Configure the NAS for networked media storage
Create the Media Storage
Attach to your NAS instance the storage required to hold the media shared by your Flame and Burn instances.
The recommendation is to use arrayed storage with disks capable of 3000 IOPS and 250 MB/s throughput. AWS recommends ST1 EBS volumes, as ST1 is optimized for sequential read/write workloads. Such an array achieves a nominal throughput of 900+ MB/s locally, and in the 350-450 MB/s range through NFS.
You are looking for high throughput and IOPS for large file operations: this storage holds the media files created by the Flame instances connected to the NAS controller.
You can also use services such as WekaIO to get optimized data performance and throughput. The following instructions are only applicable if you are configuring your own storage.
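If you provision your own ST1 volumes, the AWS CLI can script the creation and attachment. A minimal sketch only: the availability zone, instance ID, and device names below are placeholders (assumptions), not values from this guide.

```shell
# Sketch: provision four 500 GB ST1 volumes and attach them to the
# NAS controller instance. AZ, INSTANCE_ID, and the device letters
# are placeholders to replace with your own values.
AZ="us-east-1a"
INSTANCE_ID="i-0123456789abcdef0"

for dev in f g h i; do
    # Create the volume and capture its ID.
    vol=$(aws ec2 create-volume \
        --volume-type st1 --size 500 \
        --availability-zone "$AZ" \
        --query VolumeId --output text)
    # Wait until the volume is ready, then attach it.
    aws ec2 wait volume-available --volume-ids "$vol"
    aws ec2 attach-volume --volume-id "$vol" \
        --instance-id "$INSTANCE_ID" --device "/dev/sd$dev"
done
```

Attached volumes typically surface on Nitro instances as NVMe devices (nvme1n1 and so on), regardless of the /dev/sdX device name requested.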
Configure the NAS for Networked Media Storage
After launching your NAS Controller instance, you must:
- Create the media storage out of volumes that you created earlier.
- Make the media storage available to the other workgroup instances.
Follow the instructions below. Usually, you can simply copy-paste the commands from the document to the shell.
Create your media storage and make it available to other instances
Display the disks available to the NAS instance. These are the disks that you've previously attached to your instance.
The actual output from the command could be different, but you are looking for the presence of four disks similar to nvme1n1, nvme2n1, nvme3n1, and nvme4n1.
env LANG=en_US.iso8859-1 lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:4    0 953.9G  0 disk
|-nvme0n1p1 259:5    0     2G  0 part /boot/efi
|-nvme0n1p2 259:7    0    32G  0 part [SWAP]
|-nvme0n1p3 259:8    0 919.9G  0 part /
nvme1n1     259:0    0   500G  0 disk
nvme2n1     259:1    0   500G  0 disk
nvme3n1     259:2    0   500G  0 disk
nvme4n1     259:3    0   500G  0 disk
Partition the drives.
sudo parted -s -- /dev/nvme1n1 mklabel gpt mkpart primary 0% 100%
sudo parted -s -- /dev/nvme2n1 mklabel gpt mkpart primary 0% 100%
sudo parted -s -- /dev/nvme3n1 mklabel gpt mkpart primary 0% 100%
sudo parted -s -- /dev/nvme4n1 mklabel gpt mkpart primary 0% 100%
Create a physical volume (PV) for each partition.
sudo pvcreate /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
Create a volume group (VG) containing all four PVs. In our example, we name the volume vg00.
sudo vgcreate vg00 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
Get the total physical extent (PE) of the volume group.
sudo vgdisplay vg00 | grep "Total PE"
  Total PE              511996
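If you script these steps, the Total PE value can be captured into a variable instead of copy-pasted. A small sketch; the sample line below mirrors the vgdisplay output shown above.

```shell
# On the NAS itself you would pipe vgdisplay directly:
#   total_pe=$(sudo vgdisplay vg00 | awk '/Total PE/ {print $3}')
# Here we parse the sample output to illustrate the extraction.
sample='  Total PE              511996'
total_pe=$(echo "$sample" | awk '/Total PE/ {print $3}')
echo "$total_pe"    # 511996
```

The variable can then be passed to lvcreate as -l "$total_pe".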
With the total PE count, set the size of the logical volume (LV). The lvcreate syntax is 'lvcreate -l <Total-PE> -i <#PV> -I <stripe-size> -n <LV-name> <VG-name>'. Here we create a logical volume named lvol1.
sudo lvcreate -l 511996 -i 4 -I 32 -n lvol1 vg00
Create the filesystem on the storage. We will use XFS.
Determine the allocation group size (agsize) to use. First create a file system with 128 allocation groups to retrieve a base agsize.
sudo mkfs.xfs -d agcount=128 -f /dev/vg00/lvol1
meta-data=/dev/vg00/lvol1        isize=512    agcount=128, agsize=4095968 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524283904, imaxpct=5
         =                       sunit=8      swidth=32 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=256000, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Multiply the agsize by 4096 and subtract 32768 to obtain the agsize to use. In our example, the result is 4095968 * 4096 - 32768 = 16777052160. Use this value to create the file system.
sudo mkfs.xfs -d agsize=16777052160 -f /dev/vg00/lvol1
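The agsize arithmetic can be checked in the shell; base_agsize is the value reported by the agcount=128 probe run.

```shell
# agsize (bytes) = base agsize (blocks) * block size - 32768
base_agsize=4095968   # agsize reported by the agcount=128 run
block_size=4096       # bsize reported by mkfs.xfs
agsize=$(( base_agsize * block_size - 32768 ))
echo "$agsize"        # 16777052160
```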
Create the mountpoint.
sudo mkdir /mnt/nas
Make the mount persistent by adding it to /etc/fstab.
sudo vi /etc/fstab
/dev/vg00/lvol1 /mnt/nas xfs rw,noatime,inode64,nofail
Mount the volume.
sudo mount /mnt/nas
Create the directory for the shared media storage. Also set appropriate ownership and permissions.
sudo mkdir /mnt/nas/StorageMedia
sudo chmod 777 /mnt/nas/StorageMedia
Export the mount point so other hosts have access. Restrict the export to hosts in your VPC by using its CIDR. In the following example, replace 172.16.128.0/22 with your CIDR.
echo '/mnt/nas/StorageMedia 172.16.128.0/22(rw,async)' | sudo tee --append /etc/exports
sudo exportfs -va
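When scripting the export, keeping the VPC CIDR in a single variable avoids editing the literal in several places. A sketch, using the example CIDR from this guide:

```shell
# Build the /etc/exports entry from a CIDR variable.
cidr="172.16.128.0/22"   # replace with your VPC's CIDR
line="/mnt/nas/StorageMedia ${cidr}(rw,async)"
echo "$line"
# On the NAS you would then append it and re-export:
#   echo "$line" | sudo tee --append /etc/exports
#   sudo exportfs -va
```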
Attaching the NAS to an Instance
In a workgroup configuration, your media is stored on a network attached storage (NAS), a centralized storage. You configured the NAS previously, and now you mount it on the instance to use it as your media storage.
In a shell, run the following commands.
Install autofs.
sudo yum install autofs
Create the directory where the media is stored.
sudo mkdir /mnt/StorageMedia
Redirect to the NAS. Use the command sudo vi /etc/auto.direct and add the following line to the file:
/mnt/StorageMedia -rw,noatime,nodiratime,bg nas:/mnt/nas/StorageMedia
Note: The above is an example of an NFS-exported file system from the server nas. Different shared storage solutions use slightly different syntax; you must adapt this line to your setup.
Set up the automount. Use the command sudo vi /etc/auto.master and add the following line at the end of the file:
/- auto.direct
Enable and start autofs.
sudo systemctl enable autofs
sudo systemctl start autofs