VMware Transit Connect and SDDC Grouping

Gilles Chekroun
Lead VMware Cloud on AWS Solutions Architect


First of all, I need to say that this is one of my longest posts, with a lot of new information and features around networking with AWS and SDDCs together.

With the recent release of VMware Cloud on AWS version 1.12, a major networking feature is now available: the VMware Managed Transit Gateway, also known as VMware Transit Connect.
This post gives a detailed description of what that feature is and the networking capabilities it opens up.

VMware Transit Connect

Until now, the only way to connect an SDDC to a Transit Gateway was via VPN. Route-based VPN is described in a previous post here, and how to use PowerCLI for route-based VPN here.
By default, AWS creates 2 VPN tunnels and supports Equal Cost Multi-Path (ECMP); adding tunnels adds aggregate bandwidth. The VPN tunnels are terminated in the SDDC, which also supports ECMP, with a maximum of 4 tunnels.
With version 1.12, VMware now has the capability to connect Software-Defined Data Centers to a VMware Managed Transit Gateway with a native AWS Virtual Private Cloud attachment.
VMware Transit Connect delivers an easy-to-use, scalable and performant connectivity solution between VMware Cloud on AWS SDDCs within an SDDC Group.
It leverages the AWS Transit Gateway (TGW) to enable any-to-any, high-bandwidth (50 Gbps), low-latency connectivity between SDDC Group members in a single AWS region.
Additionally, VMware allows customer VPCs to be connected to that VMware Managed TGW as well.
On-premises communication is done via an AWS Direct Connect Gateway and a Transit VIF.


The goal of this post is to describe the steps needed to create an environment like the one in the diagram below:

SDDC Grouping

A new concept of "SDDC grouping" is now available. The idea is to group SDDCs from the same AWS region together and have an AWS Transit Gateway connecting them.
Each SDDC connects with a native VPC attachment to the VMware Managed TGW at 50 Gbps, depicted by the green lines in the diagram above.

Adding Customer VPCs

Additionally, the Customer can attach native AWS VPCs, depicted by the orange lines in the diagram above.


At first release, the VMware Managed TGW route tables are not accessible. Instead, simple ALLOW/DENY rules apply:
- SDDC to SDDC is allowed
- SDDC to VPC is allowed
- VPC to VPC is denied
- SDDCs to on-prem via Direct Connect Gateway is allowed
- VPCs to on-prem via Direct Connect Gateway is denied
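The ALLOW/DENY rules above can be summed up in a few lines of code. This is a minimal sketch for illustration only, not a VMware API; the attachment kind names are my own:

```python
# Sketch of the default Transit Connect ALLOW/DENY matrix listed above.
# Kinds: 'sddc', 'vpc', 'onprem' (on-premises via DX Gateway).
# These names are illustrative, not VMware identifiers.

def transit_connect_allowed(src: str, dst: str) -> bool:
    """Return True if traffic between two attachment kinds is allowed."""
    allowed_pairs = {
        frozenset({"sddc"}),            # SDDC to SDDC
        frozenset({"sddc", "vpc"}),     # SDDC to VPC (and back)
        frozenset({"sddc", "onprem"}),  # SDDC to on-prem via DXGW
    }
    return frozenset({src, dst}) in allowed_pairs

print(transit_connect_allowed("sddc", "vpc"))    # True
print(transit_connect_allowed("vpc", "vpc"))     # False
print(transit_connect_allowed("vpc", "onprem"))  # False
```

Note that the matrix is symmetric: traffic is allowed or denied per pair, not per direction.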

The VMware Managed TGW will not populate VPC route tables or SDDC firewall rules. That is the Customer's responsibility, and it is where granularity for traffic management can be established.

Attachment Rules

Since the VMware Managed TGW is an AWS TGW, certain connectivity rules must be followed:
- The AWS account used is the "Shadow Account" of the VMC Organisation
- SDDCs MUST be in the same AWS region (a TGW constraint)
- SDDCs MUST have non-overlapping management networks
- SDDCs SHOULD have non-overlapping NSX networks (overlapping networks will be rejected)

Step 1 - SDDCs Description

In our lab we have 2 SDDCs:
 - Management Networks:
        - and
 - NSX Networks
        - on both (oops - overlapping - but here on purpose)
        - 192.168.10.0/24 and 192.168.11.0/24 on SDDC1
        - 192.168.20.0/24 and 192.168.21.0/24 on SDDC2
- Virtual Machines:
        - DSL10 on segment 10 in SDDC1
        - DSL11 on segment 11 in SDDC1
        - DSL20 on segment 20 in SDDC2
        - DSL21 on segment 21 in SDDC2
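The non-overlap prerequisites can be checked up front with Python's ipaddress module. The lab prefixes below come from the list above; 192.168.100.0/24 is a hypothetical stand-in for the deliberately overlapping segment:

```python
# Check the SDDC Group prerequisite that NSX segments must not overlap.
import ipaddress

def overlapping_pairs(cidrs_a, cidrs_b):
    """Return the pairs of prefixes from the two SDDCs that overlap."""
    return [
        (a, b)
        for a in map(ipaddress.ip_network, cidrs_a)
        for b in map(ipaddress.ip_network, cidrs_b)
        if a.overlaps(b)
    ]

# 192.168.100.0/24 is a hypothetical overlapping segment for illustration.
sddc1 = ["192.168.10.0/24", "192.168.11.0/24", "192.168.100.0/24"]
sddc2 = ["192.168.20.0/24", "192.168.21.0/24", "192.168.100.0/24"]

for a, b in overlapping_pairs(sddc1, sddc2):
    print(f"rejected: {a} overlaps {b}")
```

Any pair reported here would be rejected by the SDDC Group, exactly like the overlapping segment in our lab.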

Create an SDDC Group

Under "ACTIONS" at the top right, there is a new "Create SDDC group" option.
Give your SDDC Group a name
Connect SDDCs
Acknowledge your choice
The SDDC Group now shows 3 members "CONNECTED", as described below.

SDDC Overview

There is now a new icon in the SDDC overview.

Step 2 - SDDCs Advertised and Learned routes

From SDDC2 we can see the advertised routes:
- Management Network
- All NSX networks
    Note that segment is advertised too (but it overlaps with the same segment in Terraform_SDDC1).
From Terraform_SDDC1's point of view, we have an overlapping segment, and it is rejected.

Step 3 - Create  SDDC Firewall rules

Having learned and advertised the SDDC routes doesn't mean that every network is connected. At this stage we need to create firewall rules to allow the traffic.
These rules are Compute Gateway rules and need to be applied to the "Direct Connect" interface. Yes, this is where the VPC attachment lives. It doesn't mean traffic goes over Direct Connect; internally, the same circuit is used.
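For readers who script their SDDCs, here is a sketch of what such a Compute Gateway rule could look like as an NSX Policy API payload. The API path, the group names and the "cgw-direct-connect" scope label are assumptions for illustration; verify the exact identifiers in your SDDC's API reference before using them:

```python
# Assumed NSX Policy API path for a CGW rule in VMC (verify in your SDDC):
RULE_PATH = "/policy/api/v1/infra/domains/cgw/gateway-policies/default/rules/to-sddc-group"

def cgw_rule(name, sources, destinations):
    """Build an ALLOW rule applied to the Direct Connect interface."""
    return {
        "display_name": name,
        "source_groups": sources,
        "destination_groups": destinations,
        "services": ["ANY"],
        "action": "ALLOW",
        # The VPC attachment traffic arrives on this interface;
        # the label below is an assumption for illustration.
        "scope": ["/infra/labels/cgw-direct-connect"],
    }

rule = cgw_rule("SDDC-Group-Traffic", ["ANY"], ["ANY"])
```

The payload would then be PUT to the rule path with the SDDC's API token; that step is omitted here.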

Check connectivity

The SDDCs are now connected.
Since I was very curious about the connection speed, I set up 2 Ubuntu VMs, one in each SDDC, set the MTU to 8500 (the maximum for TGW) and used iperf3 to burst traffic.
The physical interface of the AWS i3.metal host running our SDDC is 25 Gbps.
To my pleasant surprise, I got almost 12 Gbps transmit and 12 Gbps receive - not bad!

Step 4 - Add AWS customer account

To be able to connect Customer VPCs to the VMware Managed TGW, we need to indicate the Customer AWS account the TGW will be shared with.
Once done, the VMware Cloud portal shares the TGW as a resource with that account under AWS Resource Access Manager.
The Customer needs to accept that shared resource.
Another important point: the resource is shared in the same AWS region as the SDDCs (obviously), so make sure you're looking in the correct region.

After accepting the share, the Customer gets an acknowledgment and will be able to see the TGW in the AWS console.

Step 5 - Adding  AWS customer VPCs

At this step, adding VPCs simply means creating VPC attachments to the shared TGW.
Do the same for the second VPC, named VPC222.
The attachments will go to "pending acceptance", waiting for acceptance on the SDDC Group side.
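The same attachment can be created programmatically. This sketch only builds the parameters you would pass to the EC2 CreateTransitGatewayVpcAttachment API (e.g. via boto3); all IDs below are placeholders, not real resources:

```python
# Build parameters for EC2 CreateTransitGatewayVpcAttachment.
# All resource IDs here are placeholders for illustration.

def vpc_attachment_params(tgw_id, vpc_id, subnet_ids, name):
    """Parameters to attach one VPC to the shared VMware Managed TGW."""
    return {
        "TransitGatewayId": tgw_id,   # the TGW shared via Resource Access Manager
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,      # one subnet per AZ to attach
        "TagSpecifications": [{
            "ResourceType": "transit-gateway-attachment",
            "Tags": [{"Key": "Name", "Value": name}],
        }],
    }

params = vpc_attachment_params(
    "tgw-0123456789abcdef0",
    "vpc-0abc0abc0abc0abc0",
    ["subnet-0aaa0aaa0aaa0aaa0", "subnet-0bbb0bbb0bbb0bbb0"],
    "VPC222",
)
# With boto3: boto3.client("ec2").create_transit_gateway_vpc_attachment(**params)
```

After the call, the attachment sits in "pending acceptance" until accepted on the SDDC Group side, exactly as in the console flow.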

Accept customer VPCs attachments

Going back to the SDDC Group tab, we now need to accept the Customer VPC attachments.

After a few minutes, the attachments will be available and associated.
The SDDC Group will show the learned CIDRs of the connected VPCs.

Create appropriate route entries in the VPC route tables

Routes are propagated, but it is the Customer's responsibility to populate the VPC route tables.
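This Customer-side step can be scripted: for each CIDR learned from the SDDC Group, add a route in the VPC route table pointing at the TGW. The dicts below match the EC2 CreateRoute parameters; the IDs and CIDRs are placeholders for illustration:

```python
# Generate EC2 CreateRoute parameters, one per learned CIDR.
# Resource IDs below are placeholders, not real resources.

def tgw_routes(route_table_id, tgw_id, learned_cidrs):
    """Route entries sending each learned CIDR to the VMware Managed TGW."""
    return [
        {
            "RouteTableId": route_table_id,
            "DestinationCidrBlock": cidr,
            "TransitGatewayId": tgw_id,
        }
        for cidr in learned_cidrs
    ]

routes = tgw_routes(
    "rtb-0123456789abcdef0",
    "tgw-0123456789abcdef0",
    ["192.168.10.0/24", "192.168.11.0/24", "10.2.0.0/16"],
)
# With boto3: for r in routes: boto3.client("ec2").create_route(**r)
```

Remember that these routes only make the VPC side reachable; the SDDC firewall rules in the next step are still needed.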

Create appropriate FW rules on the SDDCs side

At this stage, we need to create firewall rules on the SDDCs' CGW to allow specific traffic to the VPCs.
To simplify the firewall rules, 3 new groups are automatically created:
- DXGW prefixes
- VPCs prefixes
- Other SDDCs prefixes

All connectivity is now in place!

Ping tests from the DSL10 VM in SDDC1

EC2 instances in the attached VPCs do not communicate with each other, as expected since VPC to VPC is denied.

Step 6 - Preparing for Direct Connect Gateway

In our lab we don't have access to a Transit VIF DX connection, so I will only show the DXGW preparation and BGP prefixes.
Remember that the AWS limit is 20 prefixes per TGW association.
On the AWS console, go to Direct Connect gateways and create a DXGW.
Give it a name and an ASN - make sure the ASNs don't overlap.
Copy the DXGW ID
and configure the VMC side.
At this stage, VMC will request a TGW association from the DXGW owner.
Accept the proposed TGW association.
More BGP prefixes can be added, and the association can be edited at any time to add prefixes.
It takes 5-10 minutes to update the connectivity prefixes.
After that, the status is "CONNECTED".
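The 20-prefix limit per TGW association mentioned above is easy to trip over, so it is worth validating the prefix list before submitting the configuration. A minimal sketch; the example prefixes are placeholders:

```python
# Validate a DXGW/TGW association's allowed-prefix list against the
# AWS quota of 20 prefixes per association mentioned above.
import ipaddress

MAX_DXGW_TGW_PREFIXES = 20

def validate_allowed_prefixes(prefixes):
    """Parse each CIDR (raising on bad syntax) and enforce the count limit."""
    parsed = [ipaddress.ip_network(p) for p in prefixes]
    if len(parsed) > MAX_DXGW_TGW_PREFIXES:
        raise ValueError(
            f"{len(parsed)} prefixes exceed the limit of {MAX_DXGW_TGW_PREFIXES}"
        )
    return parsed

validate_allowed_prefixes(["10.71.0.0/16", "192.168.10.0/24"])  # passes
```

Summarizing segments into larger aggregates (e.g. one /22 instead of four /24s) is the usual way to stay under the limit.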

Migrations for existing SDDCs with Direct Connect attachment

NO transitive routing can be done on the SDDC.

If the SDDC is already attached with a Direct Connect Private VIF to its Virtual Private Gateway, there is no way to use that connection to redirect traffic coming from other SDDCs or VPCs via the VMware Managed TGW.

We need to consider 2 cases depending on the type of Direct Connect connection the Customer owns.

Case 1: Customer has a Direct Connect Dedicated Connection 

With a Direct Connect Dedicated Connection, AWS provides 50 Public or Private Virtual Interfaces (VIFs) AND 1 Transit VIF.
In that case, the existing Private VIF can stay, and a NEW Transit VIF should be used via a Direct Connect Gateway with a TGW association to the VMware Managed TGW.

Case 2: Customer has a Direct Connect Hosted Connection 

With a Direct Connect Hosted Connection, AWS provides only ONE VIF. It can be a Private, a Public OR a Transit VIF.
Since we need a Transit VIF for the TGW, we will have to disconnect the existing Direct Connect Private VIF and replace it with a Transit VIF.


We are all very excited at VMware about the new networking capabilities that Transit Connect gives us.
We need to be careful with designs, and that will be the subject of future posts.

Thanks for reading.


