Cisco Hyperflex – #700-905 – Notes

Intro to Hyper-Convergence

Started out with local storage

Couldn’t expand

Moved to centralized storage

Server availability issues

Moved to virtualization and converged infrastructure for clusters

FlexPods, VersaStacks…chassis had limitations

Back to local storage

Scales similar to cloud

No limits

Intro to HX platform

Based on C-series

Wizard-based installer

Converged Nodes – Disk, network, cpu, memory

Data Platform

StorFS – Distributed File System

Springpath is the underlying backbone

Ensures copies of the data are always there

High-performance networking is used by the StorFS log-structured FS

Communication channel for VMs, management, etc.

FIs are the hardware management platform

HX software is not supported on standard (non-HX) C-Series servers

HX Installer is a VM (OVA) – requires existing vCenter

Expansion is easy and non-disruptive

CloudCenter

Intersight

Cisco Container Platform

HX Flavors and use cases

HX Edge

2-4 Nodes (ROBO) – No FI needed

HX

Converged

Compute-Only

Up to 32 converged nodes, plus up to 32 additional compute-only nodes

SFF Drives

Either all flash or 2.5" spinning drives

LFF Spinning – 6 or 8 TB

HX240 M5

6-12 LFF drives in one server

Caching and housekeeping drives are still SFF Flash in back

Housekeeping – 240 GB flash

Caching – 3.2 TB SSD

HX 3.5 introduced stretch cluster for LFF drives

HX Edge: 3-node, HX220-based, connected directly to ToR switches with no FIs

Managed through a centralized vCenter

Central backup site (LFF Cluster) with Veeam

3.0 introduced Hyper-V support

3.5 introduced LFF drive support, but no stretch cluster or Edge for Hyper-V

Scaling and Expanding a Deployment

Scales from 3 – 64 nodes

Node – Additional memory and compute

Converged Node – includes storage

3.5 added Hyper-V support and stretch clusters

Node must be compatible

Storage type

Size

Generation

Chassis size

Similar in RAM and compute

You can introduce M5 nodes to an M4 cluster, but not vice versa
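
A rough sketch of these compatibility checks in code form – the field names and example values are my own illustration for clarity, not an official HX schema:

    # Illustrative only: a rough encoding of the expansion-compatibility notes above.
    # Field names and example values are assumptions, not an official HX schema.
    EXISTING_CLUSTER = {"storage": "hybrid", "drive_size_tb": 1.8,
                        "generation": "M4", "chassis": "HX240"}

    def compatible(new_node, cluster=EXISTING_CLUSTER):
        """New node must match storage type, drive size, and chassis size,
        and may be a newer generation than the cluster, but not older."""
        if new_node["storage"] != cluster["storage"]:
            return False
        if new_node["drive_size_tb"] != cluster["drive_size_tb"]:
            return False
        if new_node["chassis"] != cluster["chassis"]:
            return False
        order = {"M4": 4, "M5": 5}
        return order[new_node["generation"]] >= order[cluster["generation"]]

    # An M5 node matching the cluster's storage and chassis can join an M4 cluster...
    print(compatible({"storage": "hybrid", "drive_size_tb": 1.8,
                      "generation": "M5", "chassis": "HX240"}))   # True
    # ...but a mismatched storage type (or an older generation) cannot.
    print(compatible({"storage": "all-flash", "drive_size_tb": 3.8,
                      "generation": "M4", "chassis": "HX240"}))   # False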

Use HX Installer to expand

Use config file

Provide creds

UCSM/Hypervisor/vCenter

Select cluster to expand

Select unassociated server from list

Provide IP info

Provide Name

Add VLANs

Start install

Go to HX Connect

View added node

Data is automatically distributed to new disks on new nodes

You can leverage numerous platforms for compute-only nodes

Must be connected to same domain

Must have HX Data Platform installed via Installer

Mounts HX storage via NFS

Disks are not added to shared pool (for datastore creation)

CVM on compute-only nodes requires

1 vCPU

512 MB RAM

Software Components Overview

StorFS – Distributed Log-structured file system

Writes sequentially, uses pointers – increases performance

Index has references to blocks for files across the distributed log

New blocks are written, old are deleted and housekeeping occurs.

File system index is stored in memory of a controller VM.

Runs on every node in HX cluster

Handles logging, caching, compression, and deduplication

Disks are passed through to CVM

Space is presented as NFS
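
A toy sketch of the log-structured idea described above: every write is appended sequentially to a log, an in-memory index keeps pointers to each block's latest position, and overwritten blocks are left behind for housekeeping to reclaim. Purely illustrative – this is not the StorFS implementation.

    # Toy log-structured store: append-only writes plus an in-memory index of pointers.
    class LogStructuredStore:
        def __init__(self):
            self.log = []      # append-only sequence of (key, data) records
            self.index = {}    # key -> position of the latest record in the log

        def write(self, key, data):
            self.log.append((key, data))          # always written sequentially (fast)
            self.index[key] = len(self.log) - 1   # pointer now references the new block

        def read(self, key):
            return self.log[self.index[key]][1]   # follow the pointer held in memory

        def housekeeping(self):
            # Old blocks that the index no longer points at are dropped.
            live = set(self.index.values())
            new_log, remap = [], {}
            for pos, record in enumerate(self.log):
                if pos in live:
                    remap[pos] = len(new_log)
                    new_log.append(record)
            self.index = {k: remap[p] for k, p in self.index.items()}
            self.log = new_log

    store = LogStructuredStore()
    store.write("vm1.vmdk/block7", b"old")
    store.write("vm1.vmdk/block7", b"new")   # overwrite appends; the old block becomes garbage
    store.housekeeping()
    print(store.read("vm1.vmdk/block7"))     # b'new'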

HX220

CVM needs 48 GB RAM and 10.8 GHz

HX240

CVM needs 72 GB RAM and 10.8 GHz

HX 240 LFF 78 GB RAM and 10.8 GHz

CVM handles cluster management

HX management

HX Connect management interface

HTML 5 GUI

Monitoring capability/Utilization monitoring

Replication

Clone

Upgrades

CVM CLI – not all commands are supported through GUI

CVM is an Ubuntu VM – connect to the CLI via SSH

stcli command
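
A minimal sketch of driving the CVM CLI remotely, assuming the third-party paramiko library (pip install paramiko) and stcli cluster info as a commonly used read-only command; the IP, username, and password are placeholders for your environment:

    # Sketch: run a read-only stcli command on a storage controller VM over SSH.
    # The CVM address and credentials below are placeholders, not defaults to rely on.
    import paramiko

    CVM_IP = "10.0.0.50"                     # management IP of any storage controller VM
    USERNAME, PASSWORD = "admin", "your-cvm-password"

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(CVM_IP, username=USERNAME, password=PASSWORD)

    stdin, stdout, stderr = client.exec_command("stcli cluster info")
    print(stdout.read().decode())            # dumps cluster configuration and state
    client.close()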

IOvisor (VIB) – responsible for data distribution

Captures data and sends it to any available node for caching

Not dependent on the CVM, so if a CVM fails, FS ops are redirected to an appropriate node

VAAI is used in CVM and Hypervisor

Allows direct FS ops on a Datastore (StorFS ops)

Allows for snaps and clones (much faster) using FS

Distributed File System

StorFS

When deploying

Select RF2 or RF3

Caching Tier

In all-flash systems

Not used for read cache

In all systems (all flash and hybrid), the write cache works the same way

Caches writes as they get distributed

De-stages writes

Split between active and passive

Active – caches data

Passive – moves data to capacity drives

Number of cache level segments depends on RF factor

2 for RF 2

3 for RF 3

Hybrid systems

Write cache still works the same

Caching drive is ALSO used for read caching

Frequently used

Recently used

VDI mode

Only caches most frequently accessed

Not most recently accessed
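
A conceptual sketch of the write path this section describes: writes land in as many cache segments as the replication factor, and the passive side of the cache later de-stages them to the capacity tier. Node names and data structures are invented for illustration only.

    # Conceptual write-path sketch: RF copies land in the write cache, then de-stage
    # to the capacity tier. Illustrative only; not how HX is actually implemented.
    RF = 3  # replication factor chosen at deploy time (RF2 or RF3)

    nodes = {f"node{i}": {"write_cache": [], "capacity": []} for i in range(1, 5)}

    def write_block(block, rf=RF):
        # The IO visor distributes each write to `rf` nodes' caching drives (active segment).
        targets = list(nodes)[:rf]
        for name in targets:
            nodes[name]["write_cache"].append(block)
        return targets

    def destage(node_name):
        # The passive segment moves cached blocks down to the capacity drives.
        n = nodes[node_name]
        n["capacity"].extend(n["write_cache"])
        n["write_cache"].clear()

    write_block("blk-42")
    for name in nodes:
        destage(name)
    print({name: n["capacity"] for name, n in nodes.items()})
    # On hybrid systems the same caching drive also serves reads (most frequently/
    # recently used blocks); that read-cache behavior is not modeled here.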

Hardware Component Overview

3 tier storage

Memory Cache – volatile

Controller VM, File system metadata

Cache Drive

SAS SSD, NVMe SSD or Optane

Capacity tier

All spinning or all flash

Hybrid (blue bezel) or all flash (orange bezel)

2 chassis types

HX220 (1U) M5 (Dual Intel Skylake Platinums)

10 Drives

Min 6, max 8 capacity drives

Cache drive is front mounted

M.2 drive holds the ESXi install (internal)

Housekeeping (logs, storage)

HX240 (2U) M5 (Dual Intel Skylake Platinums)

Cache is on back

Capacity and housekeeping on front

Min 6 up to 23 capacity drives

Up to 2 graphics cards

Memory channels

6 per proc

2 sticks per channel

Max 128 GB per stick

16, 32, 64, or 128 GB sticks

6 or 12 sticks per CPU for optimal performance

If M-type procs, 1.5 TB per CPU (3 TB total)
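
A quick back-of-the-envelope check of those memory figures (using the stated 6 channels per processor, 2 DIMMs per channel, and 128 GB maximum per DIMM):

    # DIMM math from the notes above.
    channels_per_cpu = 6
    dimms_per_channel = 2
    max_dimm_gb = 128
    cpus = 2

    per_cpu_tb = channels_per_cpu * dimms_per_channel * max_dimm_gb / 1024
    print(per_cpu_tb, "TB per CPU")          # 1.5 TB per CPU (needs M-type processors)
    print(per_cpu_tb * cpus, "TB per node")  # 3.0 TB total across both sockets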

1.2 TB or 1.8 TB for hybrid (cost effective)

960 GB or 3.8 TB for all flash (performance/density)

Network:

VIC1227

Dual 10G

VIC 1387

Dual 40G

FIs

6248s, 6296, 6332, 6332-16UP

UCSM

Other Notes:

Can you install HX without network?

No

Can you use install software as NTP server?

No. 2.0 and 2.1 disable it after 15 minutes

Can I install vCenter on HX?

Yes – with 4 nodes on earlier versions.

Should storage VLAN be layer 3?

No

Can you setup multiple VLANs in UCSM during install?

Yes, but you have to rename them

Are jumbo frames required?

No, but enable them

HX Tech Brief – Any App, Any Cloud, Any Scale

ROBO – HX Edge/Nexus

Private – HX/ACI (private cloud)

Intersight federates management

Edge leverages Intersight as a cloud witness

Public – Cisco Container Platform on top of HX for Kubernetes as a service, with HX on prem

CloudCenter – model apps/deploy consistently

AppDynamics – performance/application transaction visibility

CWOM – optimizes resource utilization for underlying infra

Tetration – workload protection – enforce versions at the host level

HX Edge 2-4 nodes (up to 2000 sites) – Intersight can deploy in parallel to multiple sites

HX – 32 converged nodes, plus up to 32 more compute-only nodes

Installation Notes:

Deploy/launch the Data Platform Installer – OVA – can be on an SE laptop

Root:Cisco123

Create new:

Customize your workflow

Run UCSM config (unless you have Edge, which has no FIs)

Check everything else

Create cluster

Cluster expansion:

In UCSM, validate the PID and make sure it's unassociated/has no service profile

In installer:

Supply UCSM name/creds

vCenter creds

Hypervisor creds

Select cluster to expand

Select server

Provide VLAN configs

Use ; for multiple VLANs

Enable iSCSI/FC if needed

For the management VLAN and data VLAN

Provide IP for esxi host

Provide IP for storage controller
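
A checklist-in-code of the inputs the expansion workflow asks for; every hostname, credential, VLAN, and IP below is a placeholder, and this structure is just for planning – it is not the installer's actual API:

    # Placeholder values only; the HX Installer collects these through its wizard.
    expansion_inputs = {
        "ucsm":       {"host": "ucsm.example.local", "user": "admin", "password": "***"},
        "vcenter":    {"host": "vcenter.example.local", "user": "administrator@vsphere.local", "password": "***"},
        "hypervisor": {"user": "root", "password": "***"},
        "cluster_to_expand": "HX-Cluster-01",
        "unassociated_servers": ["rack-unit-7"],   # must have no service profile in UCSM
        "vlans": {
            "management": "hx-inband-mgmt",
            "storage_data": "hx-storage-data",
            "vm_network": "vm-network-100;vm-network-101",  # ';' separates multiple VLANs
            "vmotion": "hx-vmotion",
        },
        "enable_iscsi_fc": False,
        "esxi_host_ip": "10.0.0.21",
        "storage_controller_ip": "10.0.0.121",
    }

    for section, values in expansion_inputs.items():
        print(section, "->", values)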
