VMware Transit Connect and SDDC Grouping
Gilles Chekroun
Lead VMware Cloud on AWS Solutions Architect
---
First of all, I need to say that this is one of my longest posts, with a lot of new information and features around networking with AWS and SDDCs together.
With the recent release of VMware Cloud on AWS ver 1.12, a major networking feature is now available and that is the VMware Managed Transit Gateway, also known as VMware Transit Connect.
This post will go through a detailed description of what that feature is and the networking capabilities it opens.
VMware Transit Connect
Until now, the only way to connect an SDDC to a Transit Gateway was via VPN. Route-based VPN is described in a previous post here, and how to use PowerCLI for route-based VPN here.
By default, AWS creates 2 VPN tunnels per connection and supports Equal Cost Multi-Path (ECMP), so adding more tunnels adds aggregate bandwidth. The VPN tunnels terminate in the SDDC, where a maximum of 4 tunnels is supported. The VMware Cloud on AWS side also supports ECMP.
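For reference, the AWS side of such a route-based VPN attachment to a TGW can be scripted with boto3. This is only a minimal sketch; the region, ASN, public IP, and TGW ID below are placeholders for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

# Customer gateway representing the SDDC's public VPN endpoint (placeholder ASN/IP)
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
)

# VPN attachment to the Transit Gateway; AWS creates 2 tunnels per connection
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    TransitGatewayId="tgw-0123456789abcdef0",  # placeholder TGW ID
    Options={"StaticRoutesOnly": False},       # route-based (BGP), usable with ECMP
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```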
With ver 1.12, VMware now has the capability to connect Software Defined Data Centers with a native AWS Virtual Private Cloud attachment to a VMware Managed Transit Gateway.
VMware Transit Connect delivers an easy-to-use, scalable, and performant connectivity solution between VMware Cloud on AWS SDDCs within an SDDC Group.
It leverages the AWS Transit Gateway (TGW) to enable any-to-any high bandwidth (50 Gbps), low latency connectivity between SDDC Group members in a single AWS region.
Additionally, VMware allows customer VPCs to be connected to that VMware Managed TGW as well.
The on-premises communication is done via AWS Direct Connect Gateway and Transit VIF.
Description
The goal of this post is to describe the steps necessary to create an environment that looks like the diagram below:
SDDC Grouping
A new concept of "SDDC grouping" is now available. The idea is to
group SDDCs from the same AWS region together and have an AWS Transit
Gateway connecting them.
Each SDDC connects through a native VPC attachment to the VMware Managed TGW at 50 Gbps, depicted by the green lines in the diagram above.
Adding Customer VPCs
Additionally, the customer can attach native AWS VPCs, depicted by the orange lines in the diagram above.
Routing
In this first release, the VMware Managed TGW route tables are not accessible. Connectivity follows simple ALLOW or DENY rules:
- SDDC to SDDC and SDDC to VPC traffic is allowed
- VPC to VPC is denied
- SDDCs to on-prem via Direct Connect Gateway is allowed
- VPCs to on-prem via Direct Connect Gateway is denied
The VMware Managed TGW will not populate VPC route tables or SDDC FW rules. This is the customer's responsibility, and this is where granularity can be established for traffic management.
Attachment Rules
Since the VMware Managed TGW is an AWS TGW, we
need to follow certain rules for connectivity:
- The AWS account used is the "Shadow Account" of the VMC Organisation
- SDDCs MUST be in the same AWS region (as TGW rules dictate)
- SDDCs MUST have non-overlapping management networks
- SDDCs SHOULD have non-overlapping NSX networks (overlapping networks will be rejected)
Step 1 - SDDCs Description
In our lab we have 2 SDDCs:
- Management networks:
  - 10.10.10.0/23 and 10.20.0.0/23
- NSX networks:
  - 192.168.1.0/24 on both (oops - overlapping - but here on purpose)
  - 192.168.10.0/24 and 192.168.11.0/24 on SDDC1
  - 192.168.20.0/24 and 192.168.21.0/24 on SDDC2
- Virtual machines:
  - DSL10 on segment 10 in SDDC1
  - DSL11 on segment 11 in SDDC1
  - DSL20 on segment 20 in SDDC2
  - DSL21 on segment 21 in SDDC2
Create an SDDC Group
Under "ACTIONS" on the top-right, there is a new "Create SDDC
group" tab.
Give your SDDC Group a name
Connect SDDCs
Acknowledge your choice
The SDDC Group now has 3 members in the "CONNECTED" state, as shown below.
SDDC Overview
There is now a new icon in the SDDC overview.
Step 2 - SDDCs Advertised and Learned routes
Step 3 - Create SDDC Firewall rules
Having learned and advertised the SDDC routes doesn't mean that every network is connected. At this stage we need to create FW rules to allow the traffic.
These rules are Compute Gateway rules and need to be applied to the "Direct Connect" interface. Yes, this is where the VPC attachment lives. It doesn't mean the SDDC is connected to Direct Connect, but internally the same circuit is used.
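For those who prefer automation, such a rule can also be pushed through the NSX Policy API. The sketch below is illustrative only: the reverse-proxy URL, token, group paths, rule ID, and the cgw-direct-connect label are assumptions following the usual VMC NSX-T Policy API pattern, so verify them against your own SDDC.

```python
import requests

NSX_URL = "https://nsx-x-x-x-x.rp.vmwarevmc.com/vmc/reverse-proxy/api"  # placeholder
TOKEN = "<access-token>"  # placeholder CSP access token

rule = {
    "display_name": "SDDC1-to-SDDC2",
    "action": "ALLOW",
    "source_groups": ["/infra/domains/cgw/groups/sddc1-segments"],       # assumed group
    "destination_groups": ["/infra/domains/cgw/groups/sddc2-segments"],  # assumed group
    "services": ["ANY"],
    "scope": ["/infra/labels/cgw-direct-connect"],  # the "Direct Connect" interface
}

# Create (or overwrite) a CGW gateway firewall rule
resp = requests.put(
    f"{NSX_URL}/policy/api/v1/infra/domains/cgw/gateway-policies/default"
    "/rules/sddc1-to-sddc2",
    headers={"csp-auth-token": TOKEN},
    json=rule,
)
resp.raise_for_status()
```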
Check connectivity
SDDCs are now connected
Since I was very curious about the connection speed, I set up 2 Ubuntu images, one on each SDDC, set the MTU to 8500 (the maximum for TGW), and used iPerf3 to burst traffic. The physical interface of the AWS i3.metal host running our SDDC is 25 Gbps.
To my positive surprise, I got almost 12 Gbps transmit and 12 Gbps receive - not bad!
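A rough sketch of that test, wrapped in Python for consistency with the other examples here (the interface name, peer IP, and stream count are placeholders, not the exact settings used):

```python
import subprocess

IFACE = "ens160"        # placeholder interface name on the Ubuntu VM
PEER = "192.168.20.10"  # placeholder: iperf3 server address in the other SDDC

# Raise the MTU to 8500, the maximum supported across a Transit Gateway
subprocess.run(["sudo", "ip", "link", "set", IFACE, "mtu", "8500"], check=True)

# The far side runs: iperf3 -s
# Burst traffic with parallel streams for 30 seconds
subprocess.run(["iperf3", "-c", PEER, "-P", "8", "-t", "30"], check=True)
```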
Step 4 - Add AWS customer account
To be able to connect customer VPCs to the VMware Managed TGW, we need to indicate the customer AWS account with which the TGW will be shared.
Once done, the VMware Cloud portal will share the TGW as a resource with that account via AWS Resource Access Manager.
Another important point is that the resource is shared in the same AWS region where the SDDCs are (obviously), so make sure you're looking in the correct region.
After accepting the share, the customer gets an acknowledgment and will be able to see the TGW in the AWS console.
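The acceptance can also be scripted from the customer account with boto3 and AWS RAM; a minimal sketch (region is a placeholder):

```python
import boto3

# Must be the region where the SDDCs live (placeholder here)
ram = boto3.client("ram", region_name="eu-west-1")

# Find and accept the pending invitation for the VMware Managed TGW share
invites = ram.get_resource_share_invitations()["resourceShareInvitations"]
for invite in invites:
    if invite["status"] == "PENDING":
        ram.accept_resource_share_invitation(
            resourceShareInvitationArn=invite["resourceShareInvitationArn"]
        )
        print("Accepted:", invite["resourceShareName"])
```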
Step 5 - Adding AWS customer VPCs
At this step, adding VPCs is simply a matter of creating VPC attachments to the shared TGW.
The attachments will go into "pending acceptance", waiting for acceptance on the SDDC Group side.
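With boto3, the attachment creation looks like this (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",  # the shared VMware Managed TGW
    VpcId="vpc-0aaaabbbbccccdddd",
    SubnetIds=["subnet-0123456789abcdef0"],    # one subnet per AZ to serve
)
# Stays "pending" until accepted on the SDDC Group side
print(attachment["TransitGatewayVpcAttachment"]["State"])
```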
Accept customer VPCs attachments
Going back to the SDDC Group tab, we now need to accept the customer VPC attachments.
Routes are propagated, but it is the customer's responsibility to populate the VPC route tables.
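Populating a VPC route table is one create_route call per SDDC prefix. A minimal boto3 sketch using our lab segments (route table and TGW IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # the VPC's route table (placeholder)
TGW_ID = "tgw-0123456789abcdef0"          # the shared VMware Managed TGW

# Point the SDDC segments from our lab at the Transit Gateway
for cidr in ["192.168.10.0/24", "192.168.11.0/24",
             "192.168.20.0/24", "192.168.21.0/24"]:
    ec2.create_route(
        RouteTableId=ROUTE_TABLE_ID,
        DestinationCidrBlock=cidr,
        TransitGatewayId=TGW_ID,
    )
```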
Create appropriate FW rules on the SDDCs side
At this stage, we need to create FW rules on the SDDCs' CGW to allow specific traffic to the VPCs.
To simplify the FW rules, 3 new groups are automatically created:
- DXGW prefixes
- VPCs prefixes
- Other SDDCs prefixes
All connectivity is now done!
Ping tests from DSL10 VM in SDDC 1
EC2 instances in the attached VPCs don't communicate with each other (VPC to VPC is denied)
Step 6 - Preparing for Direct Connect Gateway
In our lab we don't have access to a Transit VIF DX connection. I will only show the DXGW preparation and BGP prefixes.
Remember that the AWS limit is 20 prefixes per TGW association.
On AWS console, go to Direct Connect gateways and create a DXGW.
Give it a name and an ASN - make sure the ASNs don't overlap.
More BGP prefixes can be added. The association can also be edited at any time to add prefixes.
After that, the status is "CONNECTED"
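The DXGW creation can also be scripted with boto3 (the name and ASN below are placeholders; pick an ASN that doesn't overlap):

```python
import boto3

dx = boto3.client("directconnect", region_name="eu-west-1")  # placeholder region

dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="vmc-dxgw",  # placeholder name
    amazonSideAsn=64512,                  # placeholder; must not overlap existing ASNs
)
print(dxgw["directConnectGateway"]["directConnectGatewayId"])
```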
Migrations for existing SDDCs with Direct Connect attachment
NO transitive routing can be done on the SDDC. If the SDDC is already attached with a Direct Connect Private VIF to its Virtual Private Gateway, there is no way to use that connection to redirect traffic coming from other SDDCs or VPCs via the VMware Managed TGW.
Case 1: Customer has a Direct Connect Dedicated Connection
With a Direct Connect Dedicated Connection, AWS
provides 50 Public or Private Virtual Interfaces (VIFs) AND
1 Transit VIF.
In that case, the existing Private
VIF can stay and a NEW Transit VIF should be used via a
Direct Connect Gateway with a TGW association to the
VMware Managed TGW.
Case 2: Customer has a Direct Connect Hosted Connection
With a Direct Connect Hosted Connection, AWS provides only ONE VIF. It can be a Private, a Public, OR a Transit VIF.
Since we need a Transit VIF for TGW, we will need to
disconnect the existing Direct Connect Private VIF and
replace it with a Transit VIF.
Conclusion
We are all very excited at VMware about the new networking capabilities that Transit Connect is giving us.
We need to be careful with designs, and that will be a subject for future posts.
Thanks for reading.