Using Terraform with multiple providers in multiple phases to deploy and configure VMware Cloud on AWS
Gilles Chekroun
Lead VMware Cloud on AWS Solutions Architect
---
With the recent development of new VMware Terraform providers for NSX-T and VMware Cloud on AWS, we now have the possibility to write code that fully automates the deployment and configuration of infrastructure across AWS, VMC, NSX-T and vSphere.
Architecture
This code architecture is split into 3 phases, and the output of one phase is used as input for the next.
The code is organized using Terraform modules. The first phase uses the AWS provider combined with the VMC provider.
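Based on the module sources and state-file paths that appear later in the post ("../VPCs", "../EC2s", "../SDDC", "../NSX", "../../phaseN.tfstate"), the repository layout is roughly as follows; the exact folder names are my assumption, not taken from the original repository:

phase1.tfstate
phase2.tfstate
phase3.tfstate
Phase1/
  main/    (providers, backend and module calls for AWS and VMC)
  VPCs/    (AWS VPC, subnets, security groups, S3 endpoint)
  EC2s/    (EC2 instance used later for GOVC)
  SDDC/    (VMC SDDC resource and outputs)
Phase2/
  main/    (NSX-T provider, remote state from Phase 1)
  NSX/     (segments, groups, gateway policies)
Phase3/
  main/    (vSphere provider, remote state from Phases 1 and 2)
  vSphere/ (template data sources and VM clones)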
Terraform AWS provider and the VPC module
We will start with a terraform.tfvars file to hold our secret parameters.
// VMC Credentials
vmc_token = "Your VMC API Token"
// AWS Credentials
access_key  = "Your AWS Access Key"
secret_key  = "Your AWS Secret Key"
AWS_account = "Your AWS Account Number"
// ORG ID
my_org_id = "Your VMC ORG ID"
The variables.tf file will hold different parameters like the VPC subnet ranges and the AWS region/key-pair.
variable "AWS_region" {default = "us-west-2"} variable "VMC_region" {default = "US_WEST_2"} variable "key_pair" {default = "my-oregon-key" } /*================ Subnets IP ranges =================*/ variable "My_subnets" { default = { SDDC_Mngt = "10.10.10.0/23" SDDC_def = "192.168.1.0/24" VPC1 = "172.201.0.0/16" Subnet10-vpc1 = "172.201.10.0/24" Subnet20-vpc1 = "172.201.20.0/24" Subnet30-vpc1 = "172.201.30.0/24" } }
Phase 1 is simple. We will set up our providers with all secret parameters and call different modules to create the VPC, the EC2 instances and our SDDC.
An important point is also how we set the location of the state file. In this example, the state file will stay local, but it can also be stored in AWS S3 or another remote location.
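If you prefer a remote backend, the standard Terraform S3 backend works the same way; a sketch with placeholder bucket and key names:

terraform {
  backend "s3" {
    bucket = "my-terraform-state"   # placeholder bucket name
    key    = "vmc/phase1.tfstate"   # placeholder object key
    region = "us-west-2"
  }
}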
provider "aws" { access_key = var.access_key secret_key = var.secret_key region = var.AWS_region } provider "vmc" { refresh_token = var.vmc_token } terraform { backend "local" { path = "../../phase1.tfstate" } } /*================ Create AWS VPCs The VPCs and subnets CIDR are set in "variables.tf" file =================*/ module "VPCs" { source = "../VPCs" vpc1_cidr = var.My_subnets["VPC1"] Subnet10-vpc1 = var.My_subnets["Subnet10-vpc1"] region = var.AWS_region } /*================ Create EC2s =================*/ module "EC2s" { source = "../EC2s" VM-AMI = var.VM_AMI Subnet10-vpc1 = module.VPCs.Subnet10-vpc1 Subnet10-vpc1-base = var.My_subnets["Subnet10-vpc1"] GC-SG-VPC1 = module.VPCs.GC-SG-VPC1 key_pair = var.key_pair } /*================ Create SDDC =================*/ module "SDDC" { source = "../SDDC" my_org_id = var.my_org_id SDDC_Mngt = var.My_subnets["SDDC_Mngt"] SDDC_def = var.My_subnets["SDDC_def"] customer_subnet_id = module.VPCs.Subnet10-vpc1 VMC_region = var.VMC_region AWS_account = var.AWS_account }
The goal of the AWS provider is to create the VMware Cloud on AWS attached VPC and a subnet that we will link to the SDDC.
This subnet ID is one of the parameters needed for the VMC provider, so we include it in our Terraform module outputs:

output "Subnet10-vpc1" {value = aws_subnet.Subnet10-vpc1.id}
VPC Module
In Phase 1, the AWS provider creates the VPC, the subnet, an IGW, an S3 endpoint, all security groups and an EC2 instance that we will use later in Phase 3. Below is the VPC.tf module:
variable "vpc1_cidr" {} variable "Subnet10-vpc1" {} variable "region" {} /*================ VPCs =================*/ resource "aws_vpc" "vpc1" { cidr_block = var.vpc1_cidr enable_dns_support = true enable_dns_hostnames = true tags = { Name = "GCTF-VPC1" } } /*================ IGWs =================*/ resource "aws_internet_gateway" "vpc1-igw" { vpc_id = aws_vpc.vpc1.id tags = { Name = "GCTF-VPC1-IGW" } } /*================ Subnets in VPC1 =================*/ # Get Availability zones in the Region data "aws_availability_zones" "AZ" {} resource "aws_subnet" "Subnet10-vpc1" { vpc_id = aws_vpc.vpc1.id cidr_block = var.Subnet10-vpc1 map_public_ip_on_launch = true availability_zone = data.aws_availability_zones.AZ.names[0] tags = { Name = "GCTF-Subnet10-vpc1" } } /*================ default route table VPC1 =================*/ resource "aws_default_route_table" "vpc1-RT" { default_route_table_id = aws_vpc.vpc1.default_route_table_id lifecycle { ignore_changes = [route] # ignore any manually or ENI added routes } route { cidr_block = "0.0.0.0/0" gateway_id = aws_internet_gateway.vpc1-igw.id } tags = { Name = "GCTF-RT-VPC1" } } /*================ Route Table association =================*/ resource "aws_route_table_association" "vpc1_10" { subnet_id = aws_subnet.Subnet10-vpc1.id route_table_id = aws_default_route_table.vpc1-RT.id } /*================ Security Groups =================*/ resource "aws_security_group" "GC-SG-VPC1" { name = "GC-SG-VPC1" vpc_id = aws_vpc.vpc1.id tags = { Name = "GCTF-SG-VPC1" } #SSH and all PING ingress { description = "Allow SSH" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "Allow all PING" from_port = -1 to_port = -1 protocol = "icmp" cidr_blocks = ["0.0.0.0/0"] } ingress { description = "Allow iPERF3" from_port = 5201 to_port = 5201 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } resource "aws_default_security_group" "default" { vpc_id = aws_vpc.vpc1.id ingress { description = "Default SG for VPC1" from_port = 0 to_port = 0 protocol = "-1" self = true } ingress{ description = "Include EC2 SG in VPC1 default SG" from_port = 0 to_port = 0 protocol = "-1" security_groups = ["${aws_security_group.GC-SG-VPC1.id}"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } tags = { Name = "Default VPC1-SG" } } /*================ S3 Gateway end point =================*/ resource "aws_vpc_endpoint" "s3" { vpc_id = aws_vpc.vpc1.id service_name = "com.amazonaws.${var.region}.s3" route_table_ids = [aws_default_route_table.vpc1-RT.id] } /*================ Outputs variables for other modules to use =================*/ output "VPC1_id" {value = aws_vpc.vpc1.id} output "Subnet10-vpc1" {value = aws_subnet.Subnet10-vpc1.id} output "GC-SG-VPC1" {value = aws_security_group.GC-SG-VPC1.id}
Terraform VMC provider and the SDDC Module
With my colleague and friend Nico Vibert, we ran quite a few tests during early access of this provider and, besides giving feedback to the development team, we also requested updates and new features.
Nico wrote a detailed post on 1-node SDDC creation.
I will emphasize the use of the input parameters from the AWS provider and the outputs from the SDDC module needed for Phase 2.
module "SDDC" { source = "../SDDC" my_org_id = var.my_org_id # ORG ID from secrets SDDC_Mngt = var.My_subnets["SDDC_Mngt"] # Management IP range SDDC_def = var.My_subnets["SDDC_def"] # Default SDDC Segment customer_subnet_id = module.VPCs.Subnet10-vpc1 # VPC attached subnet VMC_region = var.VMC_region # AWS region AWS_account = var.AWS_account # Your AWS account }The Output of this module will look like:
output "proxy_url" {value = trimsuffix(trimprefix(vmc_sddc.TF_SDDC.nsxt_reverse_proxy_url, "https://"), "/sks-nsxt-manager")} output "vc_url" {value = trimsuffix(trimprefix(vmc_sddc.TF_SDDC.vc_url, "https://"), "/")} output "cloud_username" {value = vmc_sddc.TF_SDDC.cloud_username} output "cloud_password" {value = vmc_sddc.TF_SDDC.cloud_password}
- proxy_url is needed for the NSX-T provider. Here I trim the "https://" prefix and the "/sks-nsxt-manager" suffix.
- vc_url is our SDDC vCenter URL. Here I also trim the "https://" prefix.
- cloud_username and cloud_password are self-explanatory.
After a terraform apply, this is the output we get:
Phase 2 uses the state file of Phase 1 and sets its own state file, as coded below:

terraform {
  backend "local" {
    path = "../../phase2.tfstate"
  }
}

# Import the state from phase 1 and read the outputs
data "terraform_remote_state" "phase1" {
  backend = "local"
  config = {
    path = "../../phase1.tfstate"
  }
}

Reading the phase1.tfstate file allows us to get the "host" variable needed for the NSX-T provider. Please note the format of the host parameter below:
provider "nsxt" { host = data.terraform_remote_state.phase1.outputs.proxy_url vmc_token = var.vmc_token allow_unverified_ssl = true enforcement_point = "vmc-enforcementpoint" }Nico did also some tests with the NSX T provider here.
NSX-T Module
module "NSX" { source = "../NSX" Subnet12 = var.VMC_subnets["Subnet12"] Subnet12gw = var.VMC_subnets["Subnet12gw"] Subnet12dhcp = var.VMC_subnets["Subnet12dhcp"] Subnet13 = var.VMC_subnets["Subnet13"] Subnet13gw = var.VMC_subnets["Subnet13gw"] Subnet13dhcp = var.VMC_subnets["Subnet13dhcp"] Subnet14 = var.VMC_subnets["Subnet14"] Subnet14gw = var.VMC_subnets["Subnet14gw"] }
Creating a segment:
resource "nsxt_policy_segment" "segment12" { display_name = "segment12" description = "Terraform provisioned Segment" connectivity_path = "/infra/tier-1s/cgw" transport_zone_path = data.nsxt_policy_transport_zone.TZ.path subnet { cidr = var.Subnet12gw dhcp_ranges = [var.Subnet12dhcp] } }
Creating a group:
resource "nsxt_policy_group" "group12" { display_name = "tf-group12" description = "Terraform provisioned Group" domain = "cgw" criteria { ipaddress_expression { ip_addresses = [var.Subnet12] } } }
Reading existing resources
Since a VMware Cloud SDDC comes preconfigured with an NSX-T Tier-0 router and two Tier-1 gateways (MGW and CGW), we need to import these resources into our code. For now, we use the terraform import capability.
Use:
- terraform import module.NSX.nsxt_policy_gateway_policy.mgw mgw/default
and
- terraform import module.NSX.nsxt_policy_gateway_policy.cgw cgw/default
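Before running these commands, the resources being imported must already be declared in the code. The CGW declaration is shown below; for the MGW, a minimal stub could look like this (the "default"/"LocalGatewayRules" values mirror the CGW example and are my assumption for the MGW side):

resource "nsxt_policy_gateway_policy" "mgw" {
  category     = "LocalGatewayRules"
  display_name = "default"   # pre-created default policy (assumed)
  domain       = "mgw"
  # MGW rules go here after the import
}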
Once this is imported, we can create FW rules under each resource.
Below is a CGW example:
/*========
CGW rules
=========*/
resource "nsxt_policy_gateway_policy" "cgw" {
  category     = "LocalGatewayRules"
  description  = "Terraform provisioned Gateway Policy"
  display_name = "default"
  domain       = "cgw"

  # New rules below
  # Order in code below is order in GUI
  rule {
    action                = "ALLOW"
    destination_groups    = [
      "/infra/tier-0s/vmc/groups/connected_vpc",
      "/infra/tier-0s/vmc/groups/s3_prefixes"
    ]
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "VMC to AWS"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope                 = ["/infra/labels/cgw-cross-vpc"]
    services              = []
    source_groups         = []
    sources_excluded      = false
  }
  rule {
    action                = "ALLOW"
    destination_groups    = []
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "AWS to VMC"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope                 = ["/infra/labels/cgw-cross-vpc"]
    services              = []
    source_groups         = [
      "/infra/tier-0s/vmc/groups/connected_vpc",
      "/infra/tier-0s/vmc/groups/s3_prefixes"
    ]
    sources_excluded      = false
  }
  rule {
    action                = "ALLOW"
    destination_groups    = []
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "Internet out"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope                 = ["/infra/labels/cgw-public"]
    services              = []
    source_groups         = [
      nsxt_policy_group.group12.path,
      nsxt_policy_group.group13.path
    ]
    sources_excluded      = false
  }
  # Default rule
  rule {
    action                = "DROP"
    destination_groups    = []
    destinations_excluded = false
    direction             = "IN_OUT"
    disabled              = false
    display_name          = "Default VTI Rule"
    ip_version            = "IPV4_IPV6"
    logged                = false
    profiles              = []
    scope                 = ["/infra/labels/cgw-vpn"]
    services              = []
    source_groups         = []
    sources_excluded      = false
  }
}

Once applied, the outcome will show in our SDDC as:
The output of this module is simply the created segment names, so we can use them in the last phase with the vSphere provider:
output "segment12_name" {value = nsxt_policy_segment.segment12.display_name} output "segment13_name" {value = nsxt_policy_segment.segment13.display_name}Now that we have our SDDC deployed and FW rules created together with Logical segments and groups, we can use standard Terraform vSphere provider to create VMs in our environment.
Similar to Phase 2, we will have a separate state file for Phase 3 and read the Phase 1 and Phase 2 parameters.
terraform {
  backend "local" {
    path = "../../phase3.tfstate"
  }
}

# Import the state from phase 1 and 2 and read the outputs
data "terraform_remote_state" "phase1" {
  backend = "local"
  config = {
    path = "../../phase1.tfstate"
  }
}

data "terraform_remote_state" "phase2" {
  backend = "local"
  config = {
    path = "../../phase2.tfstate"
  }
}

At the start, our vCenter is empty.
No VMs, no Templates, only the Management part.
Since the vSphere provider does not yet support the Content Library, we need a way to create/import a template.
For this, I will use two simple OVA files and prepare them for import using GOVC.
An important point is that the GOVC machine needs access to an ESXi host for provisioning. Trying from an external device will not work unless you are on VPN or Direct Connect.
The easiest way is to use an EC2 instance in the attached VPC, which reaches our SDDC directly via the ENI.
This is a simple Shell script that will import the OVA to our vCenter.
#!/usr/bin/env bash
export GOVC_URL="https://vcenter.sddc-44-231-118-110.vmwarevmc.com/sdk"
export GOVC_USERNAME="cloudadmin@vmc.local"
export GOVC_PASSWORD="supersecret.;)"
export GOVC_INSECURE=true

govc about

# extract VM specs with . . .
# govc import.spec ./vmc-demo.ova | python -m json.tool > vmc-demo.json
# govc import.spec ./photoapp-u.ova | python -m json.tool > photoapp-u.json
# and update Network

govc import.ova -dc="SDDC-Datacenter" -ds="WorkloadDatastore" -pool="Compute-ResourcePool" -folder="Templates" -options=./vmc-demo.json ./vmc-demo.ova
govc import.ova -dc="SDDC-Datacenter" -ds="WorkloadDatastore" -pool="Compute-ResourcePool" -folder="Templates" -options=./photoapp-u.json ./photoapp-u.ova

I am sure this can be optimized, but I simply had no time.
The OVA files were uploaded to the EC2 machine, and we need to extract the OVA parameters into a JSON file and update the networking info.
This can be done simply with the commands:
govc import.spec ./vmc-demo.ova | python -m json.tool > vmc-demo.json
govc import.spec ./photoapp-u.ova | python -m json.tool > photoapp-u.json

Edit the "Network" under "NetworkMapping" and assign an existing segment.
{ "DiskProvisioning": "flat", "IPAllocationPolicy": "dhcpPolicy", "IPProtocol": "IPv4", "InjectOvfEnv": false, "MarkAsTemplate": true, "Name": null, "NetworkMapping": [ { "Name": "prod-cgw-network-1", "Network": "segment12" } ], "PowerOn": false, "WaitForIP": false }
Deploy OVA with GOVC
At this stage we are ready to upload our templates.
Our Templates are now in vCenter
Time to clone and create our VMs. For that, the following vSphere provider code is used:

variable "data_center" {}
variable "cluster" {}
variable "workload_datastore" {}
variable "compute_pool" {}
variable "Subnet12" {}
variable "Subnet13" {}

data "vsphere_datacenter" "dc" {
  name = var.data_center
}

data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_datastore" "datastore" {
  name          = var.workload_datastore
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  name          = var.compute_pool
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network12" {
  name          = var.Subnet12
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "network13" {
  name          = var.Subnet13
  datacenter_id = data.vsphere_datacenter.dc.id
}

/*=================================================================
Get Templates data (templates uploaded with GOVC from EC2 instance)
==================================================================*/
data "vsphere_virtual_machine" "demo" {
  name          = "vmc-demo"
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_virtual_machine" "photo" {
  name          = "photoapp-u"
  datacenter_id = data.vsphere_datacenter.dc.id
}

# ================================================
resource "vsphere_virtual_machine" "vm1" {
  name             = "terraform-testVM"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 1024
  guest_id         = "other26xLinuxGuest"
  network_interface {
    network_id = data.vsphere_network.network12.id
  }
  disk {
    label = "disk0"
    size  = 20
  }
  clone {
    template_uuid = data.vsphere_virtual_machine.demo.id
  }
}

resource "vsphere_virtual_machine" "vm2" {
  name             = "terraform-photo"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 1024
  guest_id         = "ubuntu64Guest"
  network_interface {
    network_id = data.vsphere_network.network13.id
  }
  disk {
    label            = "disk0"
    size             = 20
    thin_provisioned = false
  }
  clone {
    template_uuid = data.vsphere_virtual_machine.photo.id
  }
}

VMs are created and placed in the proper networking segment.
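For completeness, the Phase 3 root configuration that feeds these variables is not shown above. Here is a sketch of the provider block and module call using the remote-state outputs from the earlier phases; the datacenter, datastore and resource pool names come from the GOVC script, while the module path and cluster name are my assumptions:

provider "vsphere" {
  vsphere_server       = data.terraform_remote_state.phase1.outputs.vc_url
  user                 = data.terraform_remote_state.phase1.outputs.cloud_username
  password             = data.terraform_remote_state.phase1.outputs.cloud_password
  allow_unverified_ssl = true
}

module "vSphere" {
  source             = "../vSphere"              # assumed module path
  data_center        = "SDDC-Datacenter"
  cluster            = "Cluster-1"               # assumed default cluster name
  workload_datastore = "WorkloadDatastore"
  compute_pool       = "Compute-ResourcePool"
  # Segment names created in Phase 2, read from its state file
  Subnet12           = data.terraform_remote_state.phase2.outputs.segment12_name
  Subnet13           = data.terraform_remote_state.phase2.outputs.segment13_name
}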
Demo Videos
- Intro and Phase 1
- Phase 2
The complete code is published on my GitHub. The VMC and latest NSX-T providers are not yet public. Send me a note if you need early access.