Connect VMware managed TGW to your AWS TGW in the same region using a "peering VPC"

Gilles Chekroun

Lead VMware Cloud on AWS Solutions Architect
In many designs we face customers that already have a TGW in a specific AWS region, with VPCs attached to it.
Adding an SDDC Group in the same region is problematic since AWS doesn't support TGW peering within a single region.

If the SDDC Group is in a different region, the VMC software (M15 for EA and M16 for GA) will support that, but it's a very rare case; so far my customers have their TGW in the same region.

At my last in-person re:Invent conference in Las Vegas in 2019, I talked to an AWS network engineer who indicated that we can do transitive routing via a VPC attached to two TGWs in the same region.
Yes, a VPC can be attached to up to 5 different TGWs in the same region.
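This dual attachment is the core of the trick. As a sketch in Terraform (resource names, variables, and IDs here are illustrative, not from the original article), attaching the same VPC to two TGWs is simply two attachment resources:

```hcl
# Hypothetical sketch: attach one "peering" VPC to two TGWs in the same region.
resource "aws_ec2_transit_gateway_vpc_attachment" "to_customer_tgw" {
  vpc_id             = aws_vpc.peering.id
  subnet_ids         = aws_subnet.peering[*].id
  transit_gateway_id = var.customer_tgw_id
}

resource "aws_ec2_transit_gateway_vpc_attachment" "to_vmware_tgw" {
  vpc_id             = aws_vpc.peering.id
  subnet_ids         = aws_subnet.peering[*].id
  transit_gateway_id = var.vmware_tgw_id # the vTGW shared with us via RAM
}
```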

The setup is quite simple, and the throughput via this "peering VPC" is excellent since all attachments are VPC attachments at 50 Gbps.

Nothing is required in the Peering VPC except one subnet in each AZ you want to connect - see below. Until AWS provides intra-region TGW peering and until we can add this functionality to our SDDC Group software, this is a very valid alternative.

VMC 1.12 release update

VMware decided to add a new feature in the 1.12 release that allows the customer to program static routes in the VMware managed TGW. This is a mandatory feature for adding a Security VPC, Transit VPC or Peering VPC.
A feature flag called nsxGroupL3ConnectivitySecurityVpc needs to be enabled.

Lab Setup

SDDC Grouping

The SDDC is running and is attached to an SDDC Group called "peeringVPC".

The SDDC is attached to the Group and the customer AWS account is linked.

When this is done, the VMC console shares the VMware Managed TGW with the customer account via AWS Resource Access Manager (RAM). The customer needs to accept the share.

The shared TGW will then appear in the customer console.
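If you automate the customer side, accepting the RAM share can also be done in Terraform (the share ARN variable below is a placeholder for the ARN shown in your RAM console):

```hcl
# Hypothetical: accept the vTGW resource share offered by the VMC account.
resource "aws_ram_resource_share_accepter" "vtgw_share" {
  share_arn = var.vmc_share_arn # ARN of the RAM share created by VMC
}
```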

AWS Setup


We will create 3 VPCs, each with its own CIDR:
  • VPC110
  • VPC120
  • Peering VPC


For each VPC we create 2 subnets in 2 AZs. The subnets are /24, using the .10 and .20 ranges as below.
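A minimal Terraform sketch of one such VPC follows; the CIDR values are hypothetical, since the article's actual ranges were shown in screenshots:

```hcl
# Hypothetical CIDRs for VPC110 and its two /24 subnets (one per AZ).
resource "aws_vpc" "vpc110" {
  cidr_block = "10.110.0.0/16"
}

resource "aws_subnet" "vpc110" {
  for_each          = { a = "10.110.10.0/24", b = "10.110.20.0/24" }
  vpc_id            = aws_vpc.vpc110.id
  cidr_block        = each.value
  availability_zone = "us-west-2${each.key}" # Oregon AZs, per the lab setup
}
```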


On VPC110 and VPC120 we will create EC2 instances in both AZs so we can ping them.

Customer TGW

The customer TGW is in the same region (Oregon).

We keep the route table Association and Propagation disabled, but in fact, Association could be left enabled.
Propagation should not be enabled on the Peering VPC attachment.
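In Terraform, those default route table behaviours are controlled per attachment; a sketch with illustrative values:

```hcl
# On the customer TGW, keep propagation off for the Peering VPC attachment;
# association may stay enabled. Variable names are illustrative.
resource "aws_ec2_transit_gateway_vpc_attachment" "peering" {
  vpc_id                                          = aws_vpc.peering.id
  subnet_ids                                      = aws_subnet.peering[*].id
  transit_gateway_id                              = var.customer_tgw_id
  transit_gateway_default_route_table_association = true
  transit_gateway_default_route_table_propagation = false
}
```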

Customer TGW Attachments

Every customer VPC is attached to the Customer TGW.
The Peering VPC is attached to BOTH the Customer TGW and the VMware TGW. See the AWS side and the VMC side below:

Static route on Customer VPC

We have added a static route pointing to the Peering VPC attachment. This will send the VMC-destined traffic back to the VMC side.
The other 2 routes are propagated from the VPC attachment associations.
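In Terraform, that static route on the customer TGW might look like the sketch below; the destination prefix is a placeholder, since the article's actual VMC-side CIDR was shown in a screenshot:

```hcl
# Hypothetical: send the VMC-side summary prefix toward the Peering VPC
# attachment on the customer TGW ("10.200.0.0/16" is a placeholder).
resource "aws_ec2_transit_gateway_route" "to_vmc" {
  destination_cidr_block         = "10.200.0.0/16"
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.peering.id
  transit_gateway_route_table_id = var.customer_tgw_route_table_id
}
```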

Customer VPCs Route Table

In this test, a simple "send everything to the TGW" policy translates to a default route pointing to the Customer TGW.
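That default route can be expressed in Terraform roughly as follows (variable names are illustrative):

```hcl
# Hypothetical: default route in a customer VPC route table toward the TGW.
resource "aws_route" "to_tgw" {
  route_table_id         = aws_vpc.vpc110.main_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  transit_gateway_id     = var.customer_tgw_id
}
```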

Peering VPC route Table

Here we need to try and summarise the routes.

I am using a summary prefix as a "global" representation for ALL the customer side, and a summary route to the VMware TGW for the return path.

VMC Setup

VMware TGW static route

Accept the Peering VPC attachment to the VMware TGW and, as a final step, send all routes to the VMware TGW.

To do that, we need SDDC release 1.12 or later, which allows us to program static routes and add the prefix to the "Allowed Prefixes" field.

VMware Side route table

SDDC FW rules

Open the proper CGW firewall rules to allow traffic to and from the vTGW.

Connectivity tests

VMC to Customer VPCs

From the VM in VMC, let's ping the EC2 instance in VPC110,

and also the one in VPC120, in the other AZ.

Customer VPCs to VMC

For that we need to add an IGW, say on VPC110, and allow only my home IP to reach the EC2 public IP:
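A hypothetical security group rule for this "home IP only" restriction, sketched in Terraform (the IP and security group variable are placeholders):

```hcl
# Hypothetical: allow ICMP (ping) to the EC2 instance from my home IP only.
resource "aws_security_group_rule" "icmp_from_home" {
  type              = "ingress"
  protocol          = "icmp"
  from_port         = -1
  to_port           = -1
  cidr_blocks       = ["203.0.113.10/32"] # placeholder home IP
  security_group_id = var.ec2_sg_id
}
```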

Log in to the EC2 instance on VPC110 and ping the VM back in VMC.

Performance and throughput

The VPC attachments are all 50 Gbps to each TGW.

The VMC hosts have a 25 Gbps interface. We will install an Ubuntu VM in VMC and a large EC2 instance (m5.24xlarge) in VPC110.

VMC side iperf3 Server

Set the MTU to 8500 and start the iperf3 server.
Packets larger than 8500 bytes that arrive at the transit gateway are dropped.

AWS side iperf3 Client

Check the MTU (default is 9001), set it to 8500, and use the command:

 iperf3 -c <server-IP> -P 30 -w 416K -V

Without any tuning: 12.1 Gbps sending and 12.1 Gbps receiving

AWS Side deployment with Terraform

You can find all the Terraform code for the AWS side here. We still don't have a Terraform provider for SDDC Grouping, so some manual setup needs to be done on the VMC side.

Thanks for reading.


  1. Thanks for this article. It's hugely disappointing that I can't do BGP between the vTGW and the TGW.

    Forcing customers to create static routes isn't very manageable. My customer has similar IP ranges on both sides, preventing easy supernetting, and has internet-based routes on both sides, so I'm left struggling to find a manageable solution.

    1. Unfortunately, AWS doesn't support BGP between TGWs even on native peering between regions. That's a limitation I agree, but this is what we have and static routes can also help you to do traffic engineering in a controlled way.

    2. In your case, BGP will not help if you have overlapping IPs on each side anyway.

  2. Nice post Gilles. Is this a supported design? It looks like transitive peering.

    1. Yes, it's supported until AWS proposes intra-region peering. Maybe soon. Let's see.



