This topic explains how to create an Amazon Machine Image (AMI) from scratch, using CentOS 7.6, to run an instance loaded with an Autodesk Flame Family product: Flame, Flare, or Flame Assist. With Flame Family 2023.1 Update, creating an instance with Rocky Linux 8.5 is simpler; that workflow is described in a separate topic.
You need to create an Open Virtualization Archive (OVA) with only a basic Operating System (OS) installation. You then use this OVA to create the AMI that holds the software to run on an AWS instance. These instructions are only a baseline for you to adapt to your configuration.
To follow the instructions, you must have:
A fully configured AWS account, with access to an access key.
A local machine running Linux (from here on referred to as the "local machine").
The AWS CLI installed and configured on your local machine.
VirtualBox installed on your local machine.
The instructions below refer to VirtualBox 6.1.26 (Linux).
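Before starting, you can confirm that the prerequisite tools are available on your local machine. This is a minimal sketch; adjust the binary names if your installation differs:

```shell
# Check that the prerequisite command-line tools are on the PATH.
for tool in aws VBoxManage ssh; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

If any tool is reported MISSING, install and configure it before continuing.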
To create an AMI compatible with an Autodesk Flame Family cloud deployment, you first create a base OS-only AMI, then install the software on an instance launched from it, and finally convert that instance to the Flame AMI.
When creating an AMI from scratch, the first step is to create a basic OS-only AMI from the Autodesk CentOS 7.6 iso. You then install the application-specific packages (DKU, Flame) on the new image.
You need the Autodesk CentOS 7.6 ISO as the source OS, available from your Autodesk Account.
Copy the source iso to your local machine.
In VirtualBox, create a new VM with the following settings.
Memory Size: Keep as-is.
Virtual hard disk:
Storage:
ISO: Autodesk CentOS 7.6
Once the VM is created, finalize the VM settings:
Start the VM.
Select Autodesk Flame Workstation (manual partitioning).
Partition the disk into /boot (1G) and / (the rest of the disk). Do NOT create a swap partition.
Perform the rest of the OS installation normally.
When the user creation screen appears, create a user named flameadmin:
Once the installation is complete, shut down the VM (File > Close).
In Settings > Storage, delete the IDE Controller but keep the SATA Controller.
Start the VM.
Log in with the user flameadmin.
Disable the root password:
sudo passwd -l root
Check if your network interface is enabled:
ip -br addr show | grep UP
If an IP address appears next to UP, your network interface is enabled. If there is no IP address, run the following command, replacing <interface> with the name of your interface (for example, enp0s3):
sudo ifup <interface>
Install cloud support packages and configure keypair login:
sudo yum install cloud-init cloud-utils-growpart -y
sudo sed -i 's/name: centos/name: flameadmin/' /etc/cloud/cloud.cfg
sudo sed -i 's/gecos: Cloud User/gecos: Flame Administrator/' /etc/cloud/cloud.cfg
echo "preserve_hostname: true" | sudo tee --append /etc/cloud/cloud.cfg
sudo sed -i 's/^PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^ChallengeResponseAuthentication yes/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
Shut down the VM.
Export the VM (File > Export Appliance). Make sure to disable the Write Manifest File option.
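You can also export from the command line instead of the VirtualBox GUI. A sketch, assuming a hypothetical VM name of FlameBase; omitting the --manifest flag leaves the manifest out of the archive:

```shell
# Hypothetical VM name and output file -- adjust to your setup.
VM_NAME="FlameBase"
OVA_FILE="flame-base-centos76.ova"
# Build the export command; no --manifest flag, so no manifest is written.
CMD="VBoxManage export ${VM_NAME} --output ${OVA_FILE}"
echo "${CMD}"   # review, then run: eval "${CMD}"
```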
You now import your OVA into AWS and convert it to an AMI. Follow the instructions below; for more details, see the AWS documentation.
Upload the OVA file you created to an AWS S3 Bucket.
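The upload can be done with the AWS CLI. A sketch with hypothetical bucket and file names:

```shell
# Hypothetical bucket and file names -- replace with your own.
BUCKET="my-flame-import-bucket"
OVA_FILE="flame-base-centos76.ova"
CMD="aws s3 cp ${OVA_FILE} s3://${BUCKET}/${OVA_FILE}"
echo "${CMD}"   # review, then run: eval "${CMD}"
```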
On your local machine, create a text file with the following content, and save it as vmimport-trust-policy.json:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "vmie.amazonaws.com" },
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:Externalid": "vmimport"
}
}
}
]
}
On your local machine, create a text file with the following content, and save it as vmimport-role-policy.json. Make sure to replace <BUCKET_NAME> on lines 16 and 17 with the name of the S3 bucket used in step 1.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:DeleteObject",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::<BUCKET_NAME>",
"arn:aws:s3:::<BUCKET_NAME>/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"iam:CreateRole",
"iam:PutRolePolicy",
"sts:AssumeRole",
"ec2:CancelConversionTask",
"ec2:CancelExportTask",
"ec2:CreateImage",
"ec2:CreateInstanceExportTask",
"ec2:CreateTags",
"ec2:DeleteTags",
"ec2:ImportInstance",
"ec2:ImportVolume",
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:TerminateInstances",
"ec2:ImportImage",
"ec2:ImportSnapshot",
"ec2:CancelImportTask",
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource": "*"
}
]
}
On your local machine, create a text file with the following content and save it as containers.json. Make sure to edit these placeholders:
<DESCRIPTION> on line 3 with a description of the AMI.
<BUCKET_NAME> on line 6 with your S3 bucket name.
<OVA_FILE_NAME> on line 7 with the file name of the OVA file you created.
[
{
"Description": "<DESCRIPTION>",
"Format": "ova",
"UserBucket": {
"S3Bucket": "<BUCKET_NAME>",
"S3Key": "<OVA_FILE_NAME>"
}
}
]
Create the required role and policy. On your local machine, run the following commands:
aws iam create-role --role-name vmimport --assume-role-policy-document file://vmimport-trust-policy.json
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://vmimport-role-policy.json
Give the vmimport role the necessary permissions. In the AWS Web Console, do the following:
In IAM Management Console > Roles > search for vmimport.
Click vmimport.
Under Permissions, click Add Permissions > Attach Policies.
Attach the AmazonEC2FullAccess and AmazonS3FullAccess policies.
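If you prefer the CLI over the web console, the same managed policies can be attached with aws iam attach-role-policy. A sketch, run once per policy:

```shell
# Attach the two managed policies to the vmimport role from the CLI.
for POLICY in AmazonEC2FullAccess AmazonS3FullAccess; do
  CMD="aws iam attach-role-policy --role-name vmimport --policy-arn arn:aws:iam::aws:policy/${POLICY}"
  echo "${CMD}"   # review, then run: eval "${CMD}"
done
```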
Import the VM. On your local machine, enter the following command in a shell. Make sure to edit these placeholders:
<YOUR_AWS_REGION> with your AWS region identifier, such as us-east-1.
<YOUR_DESCRIPTION> with a description of the image.
aws ec2 import-image --region <YOUR_AWS_REGION> --description "<YOUR_DESCRIPTION>" --disk-containers file://containers.json --role-name "vmimport"
The import process can take a while; you can monitor its progress with:
aws ec2 describe-import-image-tasks
Once the process is "completed", the new AMI is available under EC2 > AMIs.
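The import-image command prints an ImportTaskId (import-ami-...); you can poll that specific task and show only its status. A sketch with a hypothetical task ID:

```shell
# Hypothetical task ID -- use the ImportTaskId returned by import-image.
TASK_ID="import-ami-0123456789abcdef0"
CMD="aws ec2 describe-import-image-tasks --import-task-ids ${TASK_ID} --query 'ImportImageTasks[0].Status' --output text"
echo "${CMD}"   # review, then run: eval "${CMD}"
```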
To create a Flame AMI, start from the base AMI created in the previous section.
Upload the DKU and the Flame family product tar files that you plan on installing to an S3 bucket.
You can also upload the tar files to the instance directly with scp later, but using an S3 bucket means you won't have to upload the files again should you need them later.
Launch an instance from the base AMI created in the previous section. You need an instance with a GPU to be able to install the NVIDIA GRID drivers; for this purpose, any G4dn or G5 instance will do.
From your local machine, connect to the instance through ssh. Use the flameadmin account and the keypair that you selected when you finalized the VM settings.
ssh -i <keypair> flameadmin@<INSTANCE PUBLIC IP>
Create a temporary work folder for later use:
mkdir -p /tmp/provisioning
Apply some basic configuration settings:
sudo sed -i 's/SELINUX=\(disabled\|enforcing\|permissive\)/SELINUX=disabled/' /etc/selinux/config
sudo sed -i 's/GRUB_CMDLINE_LINUX="[^"]*/& net.ifnames=0/' /etc/default/grub
sudo sed -i '/^GRUB_TERMINAL_OUTPUT=/d' /etc/default/grub
sudo tee --append /etc/default/grub << EOF
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200"
EOF
Set time-related features.
sudo timedatectl set-local-rtc 0
echo "server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4" | sudo tee --append /etc/chrony.conf
sudo systemctl start chronyd
sudo systemctl enable chronyd
Get the list of time zones with the following command.
timedatectl list-timezones
From the list of time zones, locate the one you need such as America/New_York and use it to set the time zone on the instance.
sudo timedatectl set-timezone <Your-Timezone>
Install required tools.
cd /tmp/provisioning
sudo yum install wget gcc kernel-devel-$(uname -r) -y
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo rpm -i epel-release-latest-7.noarch.rpm
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Set your credentials for the AWS CLI, based on your secret keypair.
export AWS_ACCESS_KEY_ID=<YOUR KEY ID>
export AWS_SECRET_ACCESS_KEY=<YOUR SECRET KEY VALUE>
export AWS_DEFAULT_REGION=<YOUR REGION>
Do not use aws configure or store your credentials on the instance, as this would include your credentials in the final AMI!
Install the DKU, replacing <DKU_TAR> with the full path of the tar file uploaded to S3 in step 1.
cd /tmp/provisioning
aws s3 cp <DKU_TAR> dku.tar
tar xf dku.tar
cd `tar tf dku.tar | head -1 | cut -d "/" -f 1`
sudo ./INSTALL_DKU --silentgeneric
sudo systemctl stop httpd
sudo systemctl disable httpd
Prepare the NVIDIA driver installation.
sudo tee --append /etc/modprobe.d/blacklist.conf << EOF
blacklist vga16fb
blacklist nouveau
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
EOF
sudo sed -i 's/GRUB_CMDLINE_LINUX="[^"]*/& modprobe.blacklist=nouveau/' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo depmod -a
sudo dracut --force
Reboot the instance:
sudo reboot
After a few minutes, reconnect to your instance from your local machine:
ssh -i <keypair> flameadmin@<INSTANCE PUBLIC IP>
Again, set your credentials for the AWS CLI, based on your secret keypair.
export AWS_ACCESS_KEY_ID=<YOUR KEY ID>
export AWS_SECRET_ACCESS_KEY=<YOUR SECRET KEY VALUE>
export AWS_DEFAULT_REGION=<YOUR REGION>
Do not use aws configure or store your credentials on the instance, as this would include your credentials in the final AMI!
Install the NVIDIA GRID driver.
cd /tmp/provisioning
aws s3 cp s3://ec2-linux-nvidia-drivers/grid-12.4/NVIDIA-Linux-x86_64-460.106.00-grid-aws.run nvidia-install-script
sudo init 3
sudo /bin/sh nvidia-install-script -s --install-libglvnd
The previous commands download and install NVIDIA GRID drivers version 460.106.00 from AWS S3 storage.
For more information, see AWS NVIDIA GRID drivers documentation.
Optimize the NVIDIA GPU settings according to your instance type.
For a G4dn instance, enter the following settings on the command line.
sudo nvidia-persistenced
sudo nvidia-smi -ac 5001,1590
For a G5 instance, use the following.
sudo nvidia-persistenced
sudo nvidia-smi -ac 6250,1710
If you launch an instance from this AMI on an instance type other than the ones above, you need to edit these values. To optimize the GPU for other instance types, see the AWS documentation.
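The memory,graphics clock pair passed to nvidia-smi -ac must be one the GPU actually supports. You can list the supported combinations on the running instance (a sketch; requires the NVIDIA driver to be loaded):

```shell
# List the clock pairs the GPU supports, then pick a pair for nvidia-smi -ac.
CMD="nvidia-smi -q -d SUPPORTED_CLOCKS"
# Show the application clocks currently applied.
CHECK="nvidia-smi --query-gpu=clocks.applications.memory,clocks.applications.graphics --format=csv"
echo "${CMD}"     # review, then run: eval "${CMD}"
echo "${CHECK}"   # review, then run: eval "${CHECK}"
```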
Remove the current X config file:
sudo rm -f /etc/X11/xorg.conf
Create a new /etc/X11/xorg.conf file with the following contents:
Section "ServerLayout"
Screen 0 "Screen Default 0" 0 0
# Screen 1 "Screen Dummy Dual GPU Screen" Relative "Screen Default 0" 0 0
Identifier "XFree86 Configured"
EndSection
Section "Files"
FontPath "unix/:7100"
EndSection
Section "Module"
Load "glx" # OpenGL X protocol interface
Load "extmod" # Misc. required extensions
EndSection
Section "ServerFlags"
Option "VTSysReq" "on"
Option "DontVTSwitch" "on"
Option "DontZoom" "on"
Option "AutoAddDevices" "True"
Option "AutoAddGPU" "false"
EndSection
Section "InputClass"
Identifier "Keyboard0"
MatchIsKeyboard "on"
Option "XkbModel" "pc105"
Option "XkbCompat" "basic+misc"
Option "XkbLayout" "us"
Option "XkbOptions" "terminate:ctrl_alt_bksp"
EndSection
Section "InputClass"
Identifier "ERASER"
MatchDriver "wacom"
MatchProduct "eraser|Eraser|ERASER"
Option "Mode" "Absolute"
EndSection
Section "InputClass"
Identifier "cursor"
MatchDriver "wacom"
MatchProduct "cursor|Cursor|CURSOR"
EndSection
Section "InputClass"
Identifier "pad"
MatchDriver "wacom"
MatchProduct "pad|Pad|PAD"
Option "Mode" "Absolute"
EndSection
Section "InputClass"
Identifier "touch"
MatchDriver "wacom"
MatchProduct "touch|Touch|TOUCH|Finger|finger|FINGER"
Option "Touch" "off"
EndSection
Section "Monitor"
Identifier "Generic EDID"
VendorName "---"
ModelName "NVIDIA Generic EDID"
EndSection
Section "Device"
Identifier "Dual GPU"
Driver "nvidia"
BoardName "Unknown"
#Dual GPU BusId "PCI:XX:YY:ZZ"
Option "UseDisplayDevice" "none"
Option "Interactive" "False"
Option "HardDPMS" "False"
EndSection
Section "Monitor"
Identifier "Generic 1920x1200 EDID"
VendorName "---"
ModelName "NVIDIA Generic 1920x1200 EDID"
HorizSync 30.0 - 95.0
VertRefresh 50.0 - 180.0
EndSection
Section "Screen"
Identifier "Screen Dummy Dual GPU Screen"
Device "Dual GPU"
Monitor "Generic 1920x1200 EDID"
DefaultDepth 24
SubSection "Display"
Virtual 1920 1200
Depth 24
EndSubSection
EndSection
Section "Monitor"
Identifier "Generic Monitor"
EndSection
Section "Device"
Identifier "NVIDIA Generic"
Driver "nvidia"
Option "Overlay" "on"
Option "HardDPMS" "False"
Option "Interactive" "False"
Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222"
EndSection
Section "Screen"
Identifier "Screen Default 0"
Device "NVIDIA Generic"
Monitor "Generic Monitor"
DefaultDepth 24
SubSection "Display"
Depth 24
Option "FlatPanelProperties" "Dithering=Disabled"
EndSubSection
EndSection
Install the Wacom driver:
cd /tmp/provisioning
wget https://github.com/linuxwacom/input-wacom/releases/download/input-wacom-0.45.0/input-wacom-0.45.0.tar.bz2
tar xf input-wacom-0.45.0.tar.bz2
cd input-wacom-0.45.0
./configure
make && sudo make install
sudo tee --append /etc/X11/xorg.conf.d/99-wacom-pressure2k.conf << EOF
Section "InputClass"
Identifier "Wacom pressure compatibility"
MatchDriver "wacom"
Option "Pressure2K" "true"
EndSection
EOF
Install Flame family applications, replacing the <... TAR FILE> tags with the actual paths of the files uploaded in step 1.
If installing Flame.
cd /tmp/provisioning
aws s3 cp <FLAME TAR FILE> flame.tar
tar xf flame.tar
cd `tar tf flame.tar | head -1 | cut -d "/" -f 1`
sudo ./INSTALL_FLAME --keepxorg --noui --noagreement
sudo sed -i 's/^Audiodevice ALSA/#Audiodevice ALSA/' /opt/Autodesk/flame*/cfg/init.cfg
sudo sed -i 's/^#Audiodevice PulseAudio/Audiodevice PulseAudio/' /opt/Autodesk/flame*/cfg/init.cfg
echo "export DL_VM_ENVIRONMENT=true" | sudo tee --append /etc/environment
If installing Flare.
cd /tmp/provisioning
aws s3 cp <FLARE TAR FILE> flare.tar
tar xf flare.tar
cd `tar tf flare.tar | head -1 | cut -d "/" -f 1`
sudo ./INSTALL_FLARE --keepxorg --noui --noagreement
sudo sed -i 's/^Audiodevice ALSA/#Audiodevice ALSA/' /opt/Autodesk/flare*/cfg/init.cfg
sudo sed -i 's/^#Audiodevice PulseAudio/Audiodevice PulseAudio/' /opt/Autodesk/flare*/cfg/init.cfg
echo "export DL_VM_ENVIRONMENT=true" | sudo tee --append /etc/environment
If installing Flame Assist.
cd /tmp/provisioning
aws s3 cp <FLAME_ASSIST TAR FILE> flameassist.tar
tar xf flameassist.tar
cd `tar tf flameassist.tar | head -1 | cut -d "/" -f 1`
sudo ./INSTALL_FLAMEASSIST --keepxorg --noui --noagreement
sudo sed -i 's/^Audiodevice ALSA/#Audiodevice ALSA/' /opt/Autodesk/flameassist*/cfg/init.cfg
sudo sed -i 's/^#Audiodevice PulseAudio/Audiodevice PulseAudio/' /opt/Autodesk/flameassist*/cfg/init.cfg
echo "export DL_VM_ENVIRONMENT=true" | sudo tee --append /etc/environment
Clean up:
sudo sed -i 's/UUID=.*$/UUID=/' /opt/Autodesk/cfg/network.cfg
sudo sed -i 's/ID=.*$/ID=/' /opt/Autodesk/sw/cfg/sw_storage.cfg
dbus-uuidgen | sudo tee /etc/machine-id
sudo find /opt/Autodesk/ -type f \( -name '*.log' -o -name '*.log.[1-9]*' \) -exec rm -f {} \;
sudo sed -i 's|^\([# \\t]*\)Scope=224.0.0.1|Scope=239.0.0.1|' /opt/Autodesk/cfg/network.cfg
sudo sed -i 's/224.0.0.1/239.0.0.1/g' /opt/Autodesk/*/bin/verifyBurnConn
echo "net.ipv4.conf.all.force_igmp_version=2" | sudo tee --append /etc/sysctl.conf
sudo shred -u ~/.aws* /etc/ssh/*_key /etc/ssh/*_key.pub ~/.*history
cd && sudo rm -fr /tmp/provisioning
Clear the command line history and shut down the instance.
history -c
sudo shutdown now
You can now convert this instance image to a Flame AMI.
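From your local machine, the conversion can be done with aws ec2 create-image once the instance is stopped. A sketch with a hypothetical instance ID and image name:

```shell
# Hypothetical values -- use your actual instance ID and a descriptive name.
INSTANCE_ID="i-0123456789abcdef0"
IMAGE_NAME="flame-2023-centos76"
CMD="aws ec2 create-image --instance-id ${INSTANCE_ID} --name ${IMAGE_NAME}"
echo "${CMD}"   # review, then run: eval "${CMD}"
```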
You can now use this AMI to launch Flame-ready instances.