Using Terraform to provision vSphere Templates with GOVC and AWS S3
Gilles Chekroun
Lead VMware Cloud on AWS Solutions Architect
---
Following my recent post about using Terraform for VMware Cloud on AWS provisioning, I had to provision OVA templates in my VMC vCenter so I could use the Terraform vSphere provider to clone and deploy VMs.
Since this requires access to ESXi inside VMware Cloud on AWS, it's not possible to do it from an external machine like my Mac over the internet.
Only a VPN connection or a Direct Connect will allow this but . . .
. . . it is possible to use an AWS EC2 instance in the attached VPC to do the provisioning, and that is the goal of this post.
AWS EC2 Deployment
Using the AWS Terraform provider, I am deploying a very simple EC2 instance without any initialisation.
I could do the provisioning at this point, but my code needs output parameters stored in the tfstate file.
What I need now is the Public IP and the Public DNS name of my EC2 instance.
These will be part of the Terraform output.
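The EC2 resource itself is not listed in this post. As a minimal sketch of what it could look like (the AMI, subnet and security group variables are my own illustration, not the exact code), together with the two outputs we will need later:

resource "aws_instance" "VM1" {
  ami                         = var.ami_id        # an Amazon Linux AMI (illustrative variable)
  instance_type               = "t2.micro"
  subnet_id                   = var.subnet_id     # subnet in the VPC attached to the SDDC
  key_name                    = "my-oregon-key"
  vpc_security_group_ids      = [var.sg_id]
  associate_public_ip_address = true
}

# Public IP and DNS name, used later for SSH and scp
output "VM1_IP" {
  value = aws_instance.VM1.public_ip
}
output "VM1_DNS" {
  value = aws_instance.VM1.public_dns
}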
Connect to the EC2 with SSH
Before we can connect to the EC2 and start provisioning, make sure your security group allows access on SSH port 22 and that the VPC has an Internet Gateway so we can download the GOVC software.
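If you still need such a rule, here is a minimal sketch of a security group that would do (resource and variable names are illustrative):

resource "aws_security_group" "allow_ssh" {
  name   = "allow-ssh"
  vpc_id = var.vpc_id

  # SSH in; in practice, restrict cidr_blocks to your own IP
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound traffic so wget and yum can reach the internet
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}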
In the code below I am using a dummy null_resource:

resource "null_resource" "dummy" {
  /*======================================
   Connect to EC2 with SSH and pem key-pair
  =======================================*/
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = var.VM1_IP
    private_key = file("~/AWS-SSH/my-oregon-key.pem")
  }

The connection type is SSH; the user is the standard ec2-user from AWS; the host is our EC2 public IP; and my pem key-pair is provided as private_key.
Local-exec and remote-exec
We will start with "remote-exec", which means that we are executing code on the remote machine - in our case the EC2:

  /*====================================
   execute code on the EC2 after creation
  ====================================*/
  provisioner "remote-exec" {
    inline = [
      # install GOVC: download and unzip
      "wget https://github.com/vmware/govmomi/releases/download/v0.20.0/govc_linux_amd64.gz",
      "gunzip govc_linux_amd64.gz",
      # rename
      "mv govc_linux_amd64 govc",
      "sudo chown root govc",
      "sudo chmod ug+r+x govc",
      "sudo mv govc /usr/local/bin/.",
      # install jq
      "sudo yum install jq -y",
      # install AWS CLI
      "sudo yum install awscli -y"
    ]
  }
Install GOVC, jq and awscli
Grab the binary from GitHub with wget and unzip it. Change its name and permissions.
Move it to a proper place like /usr/local/bin.
Use yum install for jq and awscli with the -y option so there are no interactive prompts.
jq is a neat utility for manipulating JSON. At this stage the EC2 is ready.
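For example, pulling one value out of the outputs.json file we will create below takes a single jq call:

# -r returns the raw string, without the surrounding JSON quotes
jq -r '.GOVC_vc_url.value' ./outputs.json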
With "local-exec" we are running code on the machine that runs Terraform (here, my Mac). Let's prepare a few files:
  /*=====================================
   execute code locally
  ======================================*/
  provisioner "local-exec" {
    command = <<EOT
cd ../../p3/main/
terraform output -state=../../phase1.tfstate -json > outputs.json
scp -o StrictHostKeyChecking=no -i ~/AWS-SSH/my-oregon-key.pem ./outputs.json ec2-user@${var.VM1_DNS}:
scp -o StrictHostKeyChecking=no -i ~/AWS-SSH/my-oregon-key.pem ./data.sh ec2-user@${var.VM1_DNS}:
scp -o StrictHostKeyChecking=no -i ~/AWS-SSH/my-oregon-key.pem -r ~/.aws ec2-user@${var.VM1_DNS}:
EOT
  }
Create the outputs.json file
The terraform output command will read the tfstate file and convert the outputs into an outputs.json file. We will use that file to grab the data we need to set the GOVC parameters.
Use secure copy (scp) to transfer the outputs.json file, the data.sh script and your AWS credentials. There are many ways to pass AWS credentials to an EC2 so we can read from S3. The best would be an IAM role associated with the EC2, but securely copying my credentials to that EC2 is not too bad.
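For reference, terraform output -json writes one object per output; the resulting outputs.json looks roughly like this (the values here are placeholders, not real credentials):

{
  "GOVC_vc_url": {
    "sensitive": false,
    "type": "string",
    "value": "https://vcenter.sddc-x-x-x-x.vmwarevmc.com"
  },
  "cloud_username": {
    "sensitive": false,
    "type": "string",
    "value": "cloudadmin@vmc.local"
  },
  "cloud_password": {
    "sensitive": true,
    "type": "string",
    "value": "********"
  }
}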
Setting GOVC parameters in data.sh script
#!/usr/bin/env bash
export GOVC_URL=$(cat ./outputs.json | jq -r '.GOVC_vc_url.value')
export GOVC_USERNAME=$(cat ./outputs.json | jq -r '.cloud_username.value')
export GOVC_PASSWORD=$(cat ./outputs.json | jq -r '.cloud_password.value')
export GOVC_INSECURE=true
echo $GOVC_URL
echo $GOVC_USERNAME
echo $GOVC_PASSWORD
govc about
# extract VM specs with . . .
# govc import.spec ./vmc-demo.ova | python -m json.tool > vmc-demo.json
# govc import.spec ./photoapp-u.ova | python -m json.tool > photoapp-u.json
# and update Network
govc import.ova -dc="SDDC-Datacenter" -ds="WorkloadDatastore" -pool="Compute-ResourcePool" -folder="Templates" -options=./vmc-demo.json ./vmc-demo.ova
govc import.ova -dc="SDDC-Datacenter" -ds="WorkloadDatastore" -pool="Compute-ResourcePool" -folder="Templates" -options=./photoapp-u.json ./photoapp-u.ova

Parsing the outputs.json file with jq is quite simple, and we can retrieve GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD easily.
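The spec file that govc import.spec generates is plain JSON, and "updating Network" means pointing its NetworkMapping entry at one of your SDDC logical segments. It looks roughly like this (the exact fields depend on the OVA and govc version, and the segment name is just an example):

{
  "DiskProvisioning": "flat",
  "IPAllocationPolicy": "dhcpPolicy",
  "MarkAsTemplate": false,
  "PowerOn": false,
  "Name": "vmc-demo",
  "NetworkMapping": [
    {
      "Name": "VM Network",
      "Network": "sddc-cgw-network-1"
    }
  ]
}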
OVA templates and JSON files are in AWS S3
We need to copy the files from the AWS S3 bucket before we can run GOVC and upload our templates to vCenter. For this we will run a "remote-exec" again to simply sync our S3 bucket to the local EC2. After that, we can launch the data.sh script to upload the OVA templates.
  /*====================================
   execute code on the EC2 again
  ====================================*/
  provisioner "remote-exec" {
    inline = [
      # sync our S3 bucket
      "aws s3 sync s3://terraform-ova .",
      # run the GOVC OVA import
      "./data.sh"
    ]
  }
}  # closes the null_resource opened above
Speed and No Data Charges
I am quite happy with the fact that the S3 endpoint of our VPC gives our EC2 direct access to S3 with no data charges (read or write), and the traffic from the EC2 to VMware Cloud on AWS crosses the ENI with no data charges as well!
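That S3 endpoint is a gateway VPC endpoint. As a sketch, assuming an Oregon (us-west-2) deployment and illustrative variable names:

resource "aws_vpc_endpoint" "s3" {
  vpc_id          = var.vpc_id
  service_name    = "com.amazonaws.us-west-2.s3"  # S3 gateway endpoint for Oregon
  route_table_ids = [var.route_table_id]
}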
Thank you for reading!
Here is a short video of the provisioning.