This topic explains how to create an Amazon Machine Image (AMI) from scratch, using CentOS 7.6, to run an instance as a project server for an Autodesk Flame Family cloud deployment. With Flame Family 2023.1 Update, creating an instance with Rocky Linux 8.5 is simpler, and is described here.
You need to create an Open Virtualization Archive (OVA) with only a basic Operating System (OS) installation. You then use this OVA to create the AMI that holds the software to run on an AWS instance. These instructions are only a baseline for you to adapt to your configuration.
To follow the instructions, you must have:
A fully configured AWS account, with an access key ID and secret access key.
A local machine running Linux (from here on referred to as the "local machine").
The AWS CLI installed and configured on your local machine.
VirtualBox installed on your local machine.
The instructions below refer to VirtualBox 6.1.26 (Linux).
To create an AMI for a project server, follow the steps in the sections below.
When creating an AMI from scratch, the first step is to create a basic OS-only AMI from the Autodesk CentOS 7.6 ISO. You then install the application-specific packages on the new image.
You need the Autodesk CentOS 7.6 ISO as the source OS, available from your Autodesk Account.
Copy the source ISO to your local machine.
In VirtualBox, create a new virtual machine (VM) with the following settings.
Memory Size: Keep as-is.
Virtual hard disk:
Storage:
ISO: Autodesk CentOS 7.6
Once the VM is created, finalize the VM settings:
Start the VM.
Select Install CentOS 7.
Partition the disk into /boot (1G) and / (the rest of the disk). Do NOT create a swap partition.
Perform the rest of the OS installation normally.
When the user creation screen appears, create a user named flameadmin:
Once the installation is complete, shut down the VM (File > Close).
In Settings > Storage, delete the IDE Controller but keep the SATA Controller.
Start the VM.
Log in with the user flameadmin.
Disable the root password:
sudo passwd -l root
Check if your network interface is enabled:
ip -br addr show | grep UP
If an IP address appears next to UP, your network interface is enabled. If no IP address is shown, run the following command, replacing <interface> with the name of your interface, such as enp0s3:
sudo ifup <interface>
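The check above behaves like this on sample output (the interface names and addresses below are illustrative, not from a live system):

```shell
# Sample 'ip -br addr show' output piped through the same grep; an address next
# to UP means the interface is configured (values here are made up):
printf 'lo               UNKNOWN        127.0.0.1/8\nenp0s3           UP             10.0.2.15/24\n' | grep UP
# prints only the enp0s3 line, which carries an IP address
```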
Install cloud support packages and configure keypair login:
sudo yum install cloud-init cloud-utils-growpart -y
sudo sed -i 's/name: centos/name: flameadmin/' /etc/cloud/cloud.cfg
sudo sed -i 's/gecos: Cloud User/gecos: Flame Administrator/' /etc/cloud/cloud.cfg
echo "preserve_hostname: true" | sudo tee --append /etc/cloud/cloud.cfg
sudo sed -i 's/^PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^ChallengeResponseAuthentication yes/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
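To see what the two sshd_config edits change, here is a dry run on a throwaway two-line sample file (not the real /etc/ssh/sshd_config):

```shell
# Dry run of the two sed commands on a sample file to show their effect:
tmp=$(mktemp)
printf 'PasswordAuthentication yes\nChallengeResponseAuthentication yes\n' > "$tmp"
sed -i 's/^PasswordAuthentication yes/PasswordAuthentication no/' "$tmp"
sed -i 's/^ChallengeResponseAuthentication yes/ChallengeResponseAuthentication no/' "$tmp"
cat "$tmp"   # both settings now read "no": key-based login only
rm -f "$tmp"
```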
Shut down the VM.
Export the VM. Make sure to disable Write Manifest File.
You now import your OVA into AWS and convert it to an AMI. Follow the instructions below; for more details, see the AWS documentation.
Upload the OVA file that you created to an AWS S3 Bucket.
On your local machine, create a text file with the following content, and save it as vmimport-trust-policy.json:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "vmie.amazonaws.com" },
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:Externalid": "vmimport"
}
}
}
]
}
On your local machine, create a text file with the following content, and save it as vmimport-role-policy.json. Make sure to replace <BUCKET_NAME> on lines 16 and 17 with the name of the S3 bucket used in step 1.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:DeleteObject",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::<BUCKET_NAME>",
"arn:aws:s3:::<BUCKET_NAME>/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"iam:CreateRole",
"iam:PutRolePolicy",
"sts:AssumeRole",
"ec2:CancelConversionTask",
"ec2:CancelExportTask",
"ec2:CreateImage",
"ec2:CreateInstanceExportTask",
"ec2:CreateTags",
"ec2:DeleteTags",
"ec2:ImportInstance",
"ec2:ImportVolume",
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:TerminateInstances",
"ec2:ImportImage",
"ec2:ImportSnapshot",
"ec2:CancelImportTask",
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource": "*"
}
]
}
On your local machine, create a text file with the following content and save it as containers.json. Make sure to edit these placeholders:
<DESCRIPTION> on line 3 with a description of the AMI.
<BUCKET_NAME> on line 6 with your S3 bucket name.
<OVA_FILE_NAME> on line 7 with the file name of the OVA file you created.
[
{
"Description": "<DESCRIPTION>",
"Format": "ova",
"UserBucket": {
"S3Bucket": "<BUCKET_NAME>",
"S3Key": "<OVA_FILE_NAME>"
}
}
]
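As a sketch, you can fill in the placeholders with sed and confirm the result is still valid JSON before importing. The description, bucket name, and OVA file name below are examples only, not real resources:

```shell
# Substitute the three placeholders (example values) and validate the JSON:
sed -e 's/<DESCRIPTION>/Base OS for Flame project server/' \
    -e 's/<BUCKET_NAME>/my-flame-import-bucket/' \
    -e 's/<OVA_FILE_NAME>/flame-base.ova/' \
    containers.json > containers.filled.json
python3 -m json.tool containers.filled.json   # fails if the edit broke the JSON
```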
Create the required role and policy. On your local machine, run the following commands:
aws iam create-role --role-name vmimport --assume-role-policy-document file://vmimport-trust-policy.json
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://vmimport-role-policy.json
Give the vmimport role the necessary permissions. In the AWS Web Console, do the following:
In the IAM Management Console, go to Roles and search for vmimport.
Click vmimport.
Under Permissions, click Add Permissions > Attach Policies.
Attach the AmazonEC2FullAccess and AmazonS3FullAccess policies.
Import the VM. On your local machine, enter the following command in a shell. Make sure to edit these placeholders:
<YOUR_AWS_REGION> with your AWS region identifier, such as us-east-1.
<YOUR_DESCRIPTION> with a description of the image.
aws ec2 import-image --region <YOUR_AWS_REGION> --description "<YOUR_DESCRIPTION>" --disk-containers file://containers.json --role-name "vmimport"
The import process can take a while; you can monitor its progress with:
aws ec2 describe-import-image-tasks
Once the process is "completed", the new AMI is available under EC2 > AMIs.
You do not install an NVIDIA driver on this instance, so there is no need for a G4dn instance here. An r5.xlarge is a good fit.
A project server AMI differs from a Flame AMI mainly in the software installed and in not requiring a GPU. To create a project server AMI, you start from the base AMI created in the previous section.
Upload the DKU and the Flame distribution tar files that you plan on installing to an S3 bucket.
You can also upload the tar files to the instance directly with scp later, but using an S3 bucket means you won't have to upload the files again should you need them later.
Launch an instance from the base AMI created in the previous section. You don't need an instance with a GPU.
You must attach two volumes to this instance: one for the OS and software, and another for the project storage. Set the system volume to 20 GB, and the project server volume to 1 GB.
From your local machine, connect to the instance through ssh. Use the flameadmin account and the keypair that you selected when you finalized the VM settings.
ssh -i <keypair> flameadmin@<INSTANCE PUBLIC IP>
Create a temporary work folder for later use.
mkdir -p /tmp/provisioning
Apply some basic configuration settings.
sudo sed -i 's/SELINUX=\(disabled\|enforcing\|permissive\)/SELINUX=disabled/' /etc/selinux/config
sudo sed -i 's/GRUB_CMDLINE_LINUX="[^"]*/& net.ifnames=0/' /etc/default/grub
sudo sed -i '/^GRUB_TERMINAL_OUTPUT=/d' /etc/default/grub
sudo tee --append /etc/default/grub << EOF
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200"
EOF
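To see what the GRUB_CMDLINE_LINUX edit does, here it is applied to a sample line rather than the real file; in the replacement, `&` stands for the whole matched text, so `net.ifnames=0` is appended inside the quotes:

```shell
# Demonstration of the GRUB_CMDLINE_LINUX sed edit on a sample line:
echo 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"' | \
  sed 's/GRUB_CMDLINE_LINUX="[^"]*/& net.ifnames=0/'
# -> GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0"
```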
Set time-related features.
sudo timedatectl set-local-rtc 0
echo "server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4" | sudo tee --append /etc/chrony.conf
sudo systemctl start chronyd
sudo systemctl enable chronyd
Get the list of time zones with the following command.
timedatectl list-timezones
From the list of time zones, locate the one you need, such as America/New_York, and use it to set the time zone on the instance.
sudo timedatectl set-timezone <Your-Timezone>
Install the required tools:
cd /tmp/provisioning
sudo yum install wget gcc kernel-devel-$(uname -r) -y
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo rpm -i epel-release-latest-7.noarch.rpm
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Set your credentials for the AWS CLI, using your access key ID and secret access key.
export AWS_ACCESS_KEY_ID=<YOUR KEY ID>
export AWS_SECRET_ACCESS_KEY=<YOUR SECRET KEY VALUE>
export AWS_DEFAULT_REGION=<YOUR REGION>
Do not use aws configure or store your credentials on the instance, as this would include your credentials in the final AMI!
Install the DKU, replacing <DKU_TAR> with the full path of the tar file uploaded to S3 in step 1.
cd /tmp/provisioning
aws s3 cp <DKU_TAR> dku.tar
tar xf dku.tar
cd `tar tf dku.tar | head -1 | cut -d "/" -f 1`
sudo ./INSTALL_DKU --silentgeneric --keepnvdriver
sudo systemctl stop httpd
sudo systemctl disable httpd
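The `cd` line above works because the first entry in a tar listing is the archive's top-level folder. Demonstrated here with a throwaway sample archive (the directory name is made up):

```shell
# Build a small sample archive and derive its extraction directory name:
mkdir -p dku_demo_dir && touch dku_demo_dir/INSTALL_DKU
tar cf dku-demo.tar dku_demo_dir
tar tf dku-demo.tar | head -1 | cut -d "/" -f 1
# -> dku_demo_dir
rm -rf dku_demo_dir dku-demo.tar
```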
Enable IGMPv2.
echo "net.ipv4.conf.all.force_igmp_version=2" | sudo tee --append /etc/sysctl.conf
Create the project storage mountpoint.
sudo mkdir /var/opt/Autodesk
sudo mkfs.xfs /dev/nvme1n1
sudo mount /dev/nvme1n1 /var/opt/Autodesk
sudo tee --append /etc/fstab << EOF
UUID=$(sudo xfs_admin -u /dev/nvme1n1 | awk '{print $3}') /var/opt/Autodesk xfs defaults 0 0
EOF
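The fstab entry above is assembled from the output of `xfs_admin -u`, which prints a line of the form `UUID = <uuid>`; `awk '{print $3}'` takes the third field. Shown here with a made-up sample UUID rather than a real device:

```shell
# How the fstab line is built from the xfs_admin output (sample UUID only):
sample='UUID = 1b2c3d4e-5f60-7a8b-9c0d-e1f203040506'
uuid=$(echo "$sample" | awk '{print $3}')
echo "UUID=$uuid /var/opt/Autodesk xfs defaults 0 0"
# -> UUID=1b2c3d4e-5f60-7a8b-9c0d-e1f203040506 /var/opt/Autodesk xfs defaults 0 0
```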
Download the NVIDIA installer and install the project server software. Make sure to replace <CIDR_of_the_VPC> with your VPC CIDR.
cd /tmp/provisioning
aws s3 cp s3://ec2-linux-nvidia-drivers/grid-12.4/NVIDIA-Linux-x86_64-460.106.00-grid-aws.run .
chmod +x ./NVIDIA-Linux-x86_64-460.106.00-grid-aws.run
aws s3 cp <FLAME TAR FILE> flame.tar
tar xf flame.tar
cd `tar tf flame.tar | head -1 | cut -d "/" -f 1`
sudo ./INSTALL_PROJECTSERVER --imageprep --backburner --cidr <CIDR_of_the_VPC> --dataroot /var/opt/Autodesk --nvinst /tmp/provisioning/NVIDIA-Linux-x86_64-460.106.00-grid-aws.run
The previous commands download and install NVIDIA GRID drivers version 460.106.00 from AWS S3 storage.
For more information, see AWS NVIDIA GRID drivers documentation.
Clear the command line history and shut down the instance.
history -c
sudo shutdown now
You can now convert this instance image to a project server AMI.
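As a sketch, one way to do the conversion is with the AWS CLI against the stopped instance; the instance ID and image name below are placeholders, and the EC2 console's Actions > Image > Create image works as well:

```shell
# Create the AMI from the stopped instance (placeholders: fill in your values):
aws ec2 create-image --instance-id <INSTANCE_ID> \
    --name "flame-project-server" \
    --description "Autodesk Flame project server"
```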
You can now use this AMI to launch a project server instance.