The Terraform scripts will create the following infrastructure on AWS:
- a Virtual Private Cloud (VPC), a public subnet, and two private subnets
- two EC2 instances - `xnat_web` and `xnat_cserv` - for the web server and the XNAT Container Service, respectively
- an RDS instance - `xnat_db` - for managing the PostgreSQL database
- an EFS volume used to store data uploaded to XNAT; this volume is mounted on both the web server and the Container Service server
- security groups to manage access to the servers
We have found that we need to use a `db.t3.medium` instance for the database, an `m4.xlarge` instance for the Container Service, and a `t3.large` instance for the web server, to prevent the site from crashing when uploading data or running containers.
Note, however, that this assumes only a single user is running analyses. If you have multiple users, you may need a larger instance type for the Container Service, as this is where the plugin containers run. In particular, the FastSurfer pipeline requires a lot of memory and may fail if the instance type is too small.
You can change the instance types by setting the `ec2_instance_types` variable in your `xnat-aws/provision/terraform.tfvars` file, e.g.:
```hcl
ec2_instance_types = {
  "xnat_web"   = "t3.large"
  "xnat_db"    = "db.t3.medium"
  "xnat_cserv" = "m4.xlarge"
}
```
Alternatively, you could use a GPU-enabled instance for the Container Service and run the GPU version of the FastSurfer pipeline (see the `run_fastsurfer_gpu` command). However, this will significantly drive up the costs.
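As a sketch, such a change might look like the following; the GPU instance type used here is an assumption, not a tested configuration, so check availability and pricing in your region:

```hcl
# Sketch: GPU-enabled Container Service instance.
# "g4dn.xlarge" is an assumed instance type - verify availability and cost.
ec2_instance_types = {
  "xnat_web"   = "t3.large"
  "xnat_db"    = "db.t3.medium"
  "xnat_cserv" = "g4dn.xlarge"
}
```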
You may also have to increase the amount of RAM reserved for Java (and thus XNAT) in the Ansible configuration. In the file `xnat-aws/configure/group_vars/xnat.yml`, modify the `java_mem` variable, e.g.:
```yaml
java_mem:
  Xms: "512M"
  Xmx: "16G"
  MetaspaceSize: "300M"
```
## Notes on the infrastructure that is created
We create a public and a private subnet in a single availability zone, and this is where all resources are deployed. We also create a second private subnet in a second availability zone, but nothing is deployed there. This is due to a requirement of RDS that subnets be defined in at least two availability zones, even if the database instance is only deployed in one.
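This is why the `availability_zones` variable takes a list of (at least) two zones even for a single-zone deployment. As an illustration, assuming the default `eu-west-2` region, it might look like:

```hcl
# Two AZs satisfy the RDS subnet group requirement;
# resources are only deployed in the first zone.
availability_zones = ["eu-west-2a", "eu-west-2b"]
```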
We create a security group for each instance - the web server, database, and container service.
The web server security group allows SSH, HTTP, and HTTPS access from the IP address from which Terraform was run (i.e. your own IP address). Access is restricted for security reasons.
SSH access is required to configure the server using Ansible.
The database security group only allows access to port 5432 (for connecting to the database). Access is limited to the web server only - all other connections will be refused.
The Container Service security group allows SSH access from the IP address from which Terraform was run. It also allows access to port 2376 (for the Container Service) from the web server only.
SSH access is required to configure the server using Ansible.
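As an illustration of this pattern, here is a minimal sketch of a Terraform rule that restricts database access to the web server's security group; the resource names are hypothetical, not the ones used in the modules:

```hcl
# Sketch: allow PostgreSQL (port 5432) ingress to the database
# only from the web server's security group.
resource "aws_security_group_rule" "db_from_web" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.xnat_db.id  # hypothetical name
  source_security_group_id = aws_security_group.xnat_web.id # hypothetical name
}
```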
HTTP access to the web server can be extended to other IP addresses through the `extend_http_cidr` variable. For example, to allow access from all IP addresses, in the file `xnat-aws/provision/terraform.tfvars`:
```hcl
extend_http_cidr = [
  "0.0.0.0/0",
]
```
Similarly, SSH access to the web server and the Container Service server can be extended to other IP addresses through the `extend_ssh_cidr` variable:
```hcl
extend_ssh_cidr = [
  "0.0.0.0/0",
]
```
However, extending access to all IP addresses is not recommended.
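If you do need wider access, a safer option is to extend access only to a known range, such as your institution's network. For example (the CIDR below is a placeholder):

```hcl
# Placeholder range - replace with your organisation's actual CIDR block.
extend_http_cidr = [
  "203.0.113.0/24",
]
```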
First, set the necessary variables. Copy the file `xnat-aws/provision/terraform.tfvars_sample` to `xnat-aws/provision/terraform.tfvars`. You shouldn't need to change any values, but may do so if you wish, e.g. to use a `t3.large` EC2 instance for the web server.
```sh
cd provision
cp terraform.tfvars_sample terraform.tfvars
```
Then, to create the infrastructure on AWS, run the following commands from within the `xnat-aws/provision` directory:
```sh
terraform init
terraform apply
```
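If you would like to review the changes before they are made, you can optionally run `terraform plan` after `terraform init`:

```sh
terraform plan
```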
After running `terraform apply`, the following outputs will be printed:

- `ansible_install_xnat`: the command to run to configure the infrastructure with Ansible
- `xnat_web_url`: the URL of the web server for logging into XNAT
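These values can be re-printed at any time by running `terraform output` from the same directory, e.g.:

```sh
terraform output xnat_web_url
```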
See `xnat-aws/configure/README.md` for notes on running the XNAT installation.
To destroy the infrastructure, type:

```sh
terraform destroy
```
As part of the setup, we provide an AppStream 2.0 instance to access the files stored on the EFS volume. This allows exploring the files used and generated by XNAT and running external software on the data. By default, the AppStream image has FSL installed.
The AppStream image is only created if the `create_appstream` variable is set to `true`. When running `terraform apply`, you will be prompted to enter a value for this variable. Alternatively, you can use `terraform apply -var 'create_appstream=true'` to skip the prompt.
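You can also set the variable in your `terraform.tfvars` file so that it persists across runs:

```hcl
create_appstream = true
```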
## Requirements

| Name | Version |
|------|---------|
| terraform | >= 0.15 |
| terraform | >= 1.1.4 |
| aws | >= 5.30.0 |
## Providers

| Name | Version |
|------|---------|
| aws | 5.60.0 |
| local | 2.5.1 |
## Modules

| Name | Source | Version |
|------|--------|---------|
| appstream | github.com/HealthBioscienceIDEAS/terraform-aws-IDEAS-appstream | n/a |
| database | ./modules/database | n/a |
| efs | ./modules/efs | n/a |
| get_ami | ./modules/get_ami | n/a |
| get_my_ip | ./modules/get_my_ip | n/a |
| setup_vpc | terraform-aws-modules/vpc/aws | n/a |
| web_server | ./modules/web-server | n/a |
## Resources

| Name | Type |
|------|------|
| aws_key_pair.key_pair | resource |
| aws_security_group_rule.appstream_allow_all_outgoing | resource |
| local_file.ansible-hosts | resource |
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| as2_desired_instance_num | Number of instances to use for the AppStream image | number | 1 | no |
| as2_image_name | Name of the AppStream image | string | "IDEAS-FSL-AmazonLinux2-EFSMount-2023-08-30" | no |
| as2_instance_type | Instance type to use for the AppStream image | string | "stream.standard.medium" | no |
| availability_zones | AZs to use for deploying XNAT | list(string) | [...] | no |
| aws_region | AWS region to use for deploying XNAT | string | "eu-west-2" | no |
| create_appstream | Whether to create an AppStream image | bool | false | no |
| ec2_instance_types | Instance type to use for each server | map(any) | {...} | no |
| extend_http_cidr | The CIDR blocks to grant HTTP access to the web server, in addition to your own IP address | list(string) | [] | no |
| extend_https_cidr | The CIDR blocks to grant HTTPS access to the web server, in addition to your own IP address | list(string) | [] | no |
| extend_ssh_cidr | CIDR blocks servers should permit SSH access from, in addition to your own IP address | list(string) | [] | no |
| instance_os | OS to use for the instance - will determine the AMI to use | string | "rocky9" | no |
| instance_private_ips | Private IP addresses for each instance | map(any) | {...} | no |
| root_block_device_size | Storage space on the root block device (GB) | number | 30 | no |
| smtp_private_ip | Private IP address to use for the SMTP mail server | string | "192.168.56.101" | no |
| subnet_cidr_blocks | CIDR blocks for the subnets | map(any) | {...} | no |
| vpc_cidr_block | CIDR block for the VPC | string | "192.168.0.0/16" | no |
## Outputs

| Name | Description |
|------|-------------|
| ansible_install_xnat | Run this command from the xnat-aws/configure directory to install and configure XNAT. |
| xnat_web_url | Once XNAT has been installed and configured, the web server will be accessible at this URL. |