VMware Cloud on AWS: SDDC Design Considerations

Gilles Chekroun
Lead VMware Cloud on AWS Specialist
---
With the recent August 2019 release of VMware Cloud on AWS 1.8, a few interesting improvements are now available around the vSAN and Elastic vSAN storage capabilities.
The goal of this blog article is to recap the different SDDC design options, specifically around stretched and non-stretched clusters.

AWS EC2 Bare Metal Instances

As of now, the VMware Cloud on AWS Service is available with two types of EC2 bare metal instances from AWS:
The AWS i3.metal specs are:

- Intel Xeon E5-2686 v4 processors
- 36 cores
- 2.3 GHz
- 512 GiB RAM
- 15 TB NVMe flash
- 25 Gbps networking

The AWS R5.metal specs are:

- Intel Xeon Platinum 8000 series (Skylake-SP) processors
- 48 cores
- 2.5 GHz
- 768 GiB RAM
- EBS storage only (15-35 TiB)
- 14 Gbps EBS bandwidth
- 25 Gbps networking

Other instance types for specific use cases, such as GPU or high memory, will come later.

Elastic vSAN

Elastic vSAN, with R5.metal hosts, is a VMware Cloud on AWS cluster type that gives you a choice of storage capacity options ranging from 15 TiB to 35 TiB per host, in 5 TiB increments. This cluster type is suitable for workloads that require high storage capacity.
Elastic vSAN builds on automated provisioning and management of Amazon Elastic Block Store (EBS) volumes. R5.metal hosts and the Elastic vSAN solution are currently available in the Oregon, N. Virginia, Ohio and Frankfurt regions.
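
To make the sizing options concrete, here is a minimal sketch (helper names are mine, not an official VMware tool) that validates a per-host Elastic vSAN capacity choice and estimates the raw capacity of an R5.metal cluster, ignoring vSAN overheads such as FTT, slack space and dedupe/compression.

```python
# Minimal sketch (my own helper names, not an official VMware tool):
# validate an Elastic vSAN per-host capacity choice and estimate the raw
# capacity of an R5.metal cluster.

ELASTIC_VSAN_MIN_TIB = 15
ELASTIC_VSAN_MAX_TIB = 35
ELASTIC_VSAN_STEP_TIB = 5


def validate_per_host_capacity(tib: int) -> None:
    """Raise if the requested per-host EBS capacity is not a valid option."""
    if not ELASTIC_VSAN_MIN_TIB <= tib <= ELASTIC_VSAN_MAX_TIB:
        raise ValueError(f"per-host capacity must be {ELASTIC_VSAN_MIN_TIB}-{ELASTIC_VSAN_MAX_TIB} TiB")
    if (tib - ELASTIC_VSAN_MIN_TIB) % ELASTIC_VSAN_STEP_TIB:
        raise ValueError(f"per-host capacity must be in {ELASTIC_VSAN_STEP_TIB} TiB increments")


def raw_cluster_capacity_tib(hosts: int, per_host_tib: int) -> int:
    """Raw (pre-overhead) vSAN capacity of an R5.metal cluster."""
    validate_per_host_capacity(per_host_tib)
    return hosts * per_host_tib


# Example: a 6-node R5.metal cluster with 25 TiB per host -> 150 TiB raw.
print(raw_cluster_capacity_tib(6, 25))
```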

Multi-Availability Zones Stretched Cluster

This feature enables customers to deploy a single SDDC across two AWS Availability Zones to support critical applications that require high availability in the event of an AZ failure. In a Multi-AZ Stretched Cluster, vSAN guarantees synchronous writes across the two AZs, and logical networks are extended to support vMotion between AZs. In the event of an AZ failure, vSphere HA attempts to restart VMs in the surviving AZ. Customers choose the stretched cluster configuration at SDDC creation time; as detailed in the rules below, release 1.8 lifts the earlier single-cluster limitation on stretched SDDCs.
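
Because the stretched configuration can only be chosen at SDDC creation time, it is specified in the create-SDDC request. Below is a hedged sketch using the public VMC REST API as I understand it; the endpoint paths and field names (deployment_type, host_instance_type, region format) should be verified against the current API reference, and the org ID, token, region and host count are placeholders.

```python
# Hedged sketch only: creating a stretched (Multi-AZ) SDDC through the VMC
# REST API. Endpoint paths and field names are as I understand the public API
# and should be checked against the current API reference.
import requests

CSP_AUTH_URL = "https://console.cloud.vmware.com/csp/gateway/am/api/auth/api-tokens/authorize"
VMC_API = "https://vmc.vmware.com/vmc/api"
ORG_ID = "YOUR-ORG-ID"                # placeholder
REFRESH_TOKEN = "YOUR-CSP-API-TOKEN"  # placeholder

# Exchange the CSP API (refresh) token for a short-lived access token.
access_token = requests.post(CSP_AUTH_URL, params={"refresh_token": REFRESH_TOKEN}).json()["access_token"]
headers = {"csp-auth-token": access_token, "Content-Type": "application/json"}

sddc_config = {
    "name": "stretched-sddc-demo",
    "provider": "AWS",
    "region": "US_WEST_2",             # one of the supported regions
    "num_hosts": 6,                    # 3 + 3 spread across the two AZs
    "deployment_type": "MultiAZ",      # stretched cluster; can only be chosen at creation time
    "host_instance_type": "r5.metal",  # assumption: verify the exact enum value in the API docs
}

resp = requests.post(f"{VMC_API}/orgs/{ORG_ID}/sddcs", json=sddc_config, headers=headers)
print(resp.status_code, resp.json())
```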

SDDC Design Considerations 

As of SDDC release 1.8, the following rules apply (summarized in a small code sketch after the lists):

With Stretched Clusters

- You can have multiple R5 stretched clusters per SDDC
- You can have multiple i3 stretched clusters per SDDC
- You can mix i3 and R5 stretched clusters
- You can NOT mix stretched and non-stretched clusters in the same SDDC

With Non-Stretched Clusters

- You can mix i3 and R5 clusters.
- No migration from stretched to non-stretched (or vice versa) is possible today; a new SDDC is needed.
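
Here is a minimal sketch (my own naming, not an official check) that encodes the mixing rules above, handy for sanity-checking a planned SDDC layout.

```python
# Minimal sketch of the release 1.8 cluster-mixing rules described above.

def validate_sddc_design(clusters):
    """clusters: list of dicts such as {"instance": "i3", "stretched": True}."""
    stretched_flags = {c["stretched"] for c in clusters}
    if stretched_flags == {True, False}:
        raise ValueError("stretched and non-stretched clusters cannot be mixed in one SDDC")
    # i3 and R5 clusters may be mixed freely, stretched or not.
    return True


# Valid: two stretched clusters, one i3 and one R5.
validate_sddc_design([{"instance": "i3", "stretched": True},
                      {"instance": "r5", "stretched": True}])

# Invalid: stretched and non-stretched in the same SDDC (raises ValueError).
# validate_sddc_design([{"instance": "i3", "stretched": True},
#                       {"instance": "r5", "stretched": False}])
```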

Option 1 - R5 Stretched Clusters

SDDC release 1.8 and up; 8+8 nodes per cluster across two AZs; 20 clusters max; 300 nodes max.

Option 2 - R5 Non-Stretched Clusters

16 nodes per cluster in one AZ; 20 clusters max; 300 nodes max.

Option 3 - i3 Stretched Clusters

8+8 nodes per cluster across two AZs; 20 clusters max; 300 nodes max.

Option 4 - i3 Non-Stretched Clusters

16 nodes per cluster in one AZ; 20 clusters max; 300 nodes max.
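
A quick back-of-the-envelope check, using the limits quoted in the options above, shows that the per-SDDC host cap is what you hit first if every cluster is filled:

```python
# Quick arithmetic on the limits quoted above (assumed per SDDC):
# 16 hosts per cluster (8+8 when stretched), 20 clusters, 300 hosts.
MAX_HOSTS_PER_CLUSTER = 16
MAX_CLUSTERS_PER_SDDC = 20
MAX_HOSTS_PER_SDDC = 300

theoretical = MAX_HOSTS_PER_CLUSTER * MAX_CLUSTERS_PER_SDDC   # 320
print(min(theoretical, MAX_HOSTS_PER_SDDC))                   # 300 -> the host cap binds first
```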

Option 5 - Mix of i3 and R5 Non-Stretched Clusters

16 nodes per cluster in one AZ; variable storage according to the mix of i3 and R5 hosts (for example, production on i3, dev/test on R5); easy migration of workloads from cluster to cluster, since logical networks span all clusters in the SDDC; 20 clusters max; 300 nodes max.
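
To illustrate the "variable storage" point, here is a small sketch (helper name and rough figures are mine) that estimates raw capacity for a mixed i3/R5 SDDC:

```python
# Sketch only: raw storage estimate for a mixed i3/R5 non-stretched SDDC
# as in Option 5. Assumes roughly 15 TB of NVMe per i3.metal host (from the
# spec list above) and a chosen 15-35 TiB of EBS per R5.metal host; vSAN
# overheads are ignored and TB/TiB are mixed loosely.

I3_RAW_TB_PER_HOST = 15


def sddc_raw_storage(i3_hosts: int, r5_hosts: int, r5_tib_per_host: int) -> int:
    """Approximate raw capacity across all clusters in the SDDC."""
    return i3_hosts * I3_RAW_TB_PER_HOST + r5_hosts * r5_tib_per_host


# Example: production on 8 i3 hosts, dev/test on 4 R5 hosts at 20 TiB each -> 200.
print(sddc_raw_storage(8, 4, 20))
```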

Option 6 - Mix of i3 and R5 Stretched Clusters

SDDC release 1.8 and up; 8+8 nodes per cluster across two AZs; 20 clusters max; 300 nodes max.

Option 7 - Mix of i3 and R5 Non-Stretched Clusters in Multiple SDDCs


Option 8 - Stretched R5 and i3 and Non-Stretched R5 and i3 Clusters in Multiple SDDCs

