Category Archives: Technology

InfoSec 101

Cloud Security Posture Management (CSPM): Provides organizations with the ability to monitor and manage the security posture of their cloud infrastructure, ensuring compliance with industry standards and regulations, detecting and addressing security threats, and preventing data breaches.

  • Palo Alto Prisma Cloud
  • CheckPoint (CloudGuard)
  • CSP Native
  • AWS Config
  • Azure Security Center
  • Cloud Security Command Center

Secure Access Service Edge (SASE) platform: A cloud-based platform that combines several security functions, such as secure web gateways, firewall as a service, cloud access security brokers (CASBs), and zero-trust network access (ZTNA), into a single, integrated solution. SASE platforms provide a scalable, flexible, and cost-effective way to achieve a secure service edge.

  • Palo Alto Prisma Access
  • zScaler
  • Cisco
  • Fortinet Secure SDWAN
  • Cato Networks
  • Netskope
  • Perimeter 81
  • Forcepoint

Software-Defined Perimeter (SDP): An approach to network security that uses a “zero-trust” model to secure access to applications and data. SDP solutions create a secure, isolated network connection between users and applications, effectively hiding them from the public internet and reducing the risk of cyber attacks.

  • Palo Alto Prisma Access
  • zScaler
  • Okta
  • Cisco
  • Perimeter 81
  • Appgate
  • Symantec

Web Application Firewall (WAF): A security solution that provides an additional layer of protection for web applications by monitoring and filtering incoming web traffic based on predefined rules. WAFs can help prevent a range of cyber attacks, such as SQL injection, cross-site scripting (XSS), and file inclusion attacks.

  • F5
  • Fortinet
  • Imperva
  • Cloudflare
  • Citrix
  • Sophos

Cloud Access Security Broker (CASB): A security solution that provides visibility and control over cloud applications and services. CASBs allow organizations to monitor user activity, detect and respond to security threats, and enforce policies for data protection and compliance.

  • Palo Alto Networks (Prisma)
  • Netskope
  • Cisco CloudLock
  • BitGlass
  • Forcepoint
  • Proofpoint
  • Microsoft Cloud App Security
  • AWS CloudTrail (limited)

Identity and Access Management (IAM): A set of technologies and policies that manage user identities and access to applications and data. IAM solutions can help organizations enforce strong authentication and authorization policies, manage user privileges, and monitor user activity.

  • Okta
  • OneLogin
  • Ping
  • SailPoint
  • Azure AD

Endpoint Detection and Response (EDR): A security solution that monitors endpoint devices, such as desktops, laptops, and servers, for signs of security threats. EDR solutions can detect and respond to malware infections, advanced persistent threats (APTs), and other security incidents.

  • TrendMicro
  • CrowdStrike
  • Palo Alto Networks
  • FireEye
  • CarbonBlack
  • Sophos
  • Symantec

User Entity And Behavioral Analytics (UEBA): A solution that analyzes patterns of user and entity behavior across various data sources, such as logs, network traffic, and user activity data, to establish a baseline of normal behavior. Once the baseline is established, the UEBA solution can identify deviations from normal behavior that may indicate a security threat, such as insider threats, compromised user credentials, or advanced persistent threats (APTs).

  • Splunk
  • Rapid7
  • Securonix
  • Teramind
  • Exabeam
  • Azure Sentinel

Data Loss Prevention (DLP): A security technology that focuses on protecting sensitive data from unauthorized disclosure or theft. DLP solutions can identify and monitor sensitive data as it moves through an organization’s network, such as personally identifiable information (PII), financial data, or intellectual property. DLP solutions can also enforce data protection policies, such as blocking or encrypting sensitive data, or alerting security teams when a policy violation occurs.

  • TrendMicro
  • Symantec
  • ForcePoint
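As a toy illustration of the kind of pattern matching a DLP engine performs when identifying sensitive data in motion or at rest, here is a sketch that flags files containing something shaped like a US Social Security number. The directory, file names, and pattern are made up for the example, and real DLP products go far beyond simple regex:

```shell
# Toy DLP-style scan: flag files whose contents look like they hold a US SSN.
# The directory, sample files, and pattern are illustrative only.
mkdir -p /tmp/dlp_demo
echo "customer ssn: 123-45-6789" > /tmp/dlp_demo/leak.txt
echo "nothing sensitive here" > /tmp/dlp_demo/clean.txt

# -r: recurse, -l: print only matching file names, -E: extended regex
grep -rlE '[0-9]{3}-[0-9]{2}-[0-9]{4}' /tmp/dlp_demo
```

A real product would layer policy actions (block, encrypt, alert) on top of detection like this.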

Antivirus and Antimalware: Install antivirus and antimalware software on all devices to prevent malware from infecting your network.

  • ArcticWolf
  • TrendMicro
  • Sophos
  • CarbonBlack

Password Policy: Establish a password policy that requires employees to use strong, unique passwords and change them periodically.

  • Active Directory
  • Okta
  • OneLogin
  • Azure AD
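To make the idea concrete, here is a toy shell sketch of the checks such a policy might encode (minimum length plus character classes). This is an illustration only; in practice you configure this in the directory service or IdP rather than writing it yourself:

```shell
# Toy password-policy check: at least 12 characters, one uppercase letter,
# and one digit. Real policies (AD, Okta, etc.) are configured in the product.
check_password() {
  p="$1"
  [ "${#p}" -ge 12 ] || { echo weak; return; }
  case "$p" in *[A-Z]*) ;; *) echo weak; return ;; esac
  case "$p" in *[0-9]*) ;; *) echo weak; return ;; esac
  echo ok
}

check_password 'Tr0ub4dor&3xtra'   # -> ok
check_password 'short'             # -> weak
```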

Access Control (SSO/MFA): Implement access controls to restrict access to sensitive data and systems to only those who need it.

  • Okta
  • Active Directory
  • OneLogin

Immutable Data Backup: Regularly back up your data to prevent data loss in case of a disaster or a security breach.

  • Rubrik
  • Cohesity
  • Veeam
  • CSP S3

Security Awareness Training: Educate your employees on security best practices, including how to identify and avoid phishing attacks and other scams.

  • KnowBe4
  • ProofPoint
  • MimeCast
  • ArcticWolf

Incident Response Plan: Develop an incident response plan that outlines the steps to be taken in the event of a security incident.

  • ArcticWolf
  • CrowdStrike
  • FireEye
  • Kroll
  • SecureWorks

Physical Security: Implement physical security measures such as security cameras, alarm systems, and access controls to protect your premises.

  • Verkada
  • Meraki
  • HID Global
  • Tyco

Vendor Management: Ensure that all third-party vendors have appropriate security controls in place when accessing your network.

  • InterVision vCISO
  • BitSight
  • OneTrust

Regular Security Audits: Regularly audit your security controls to identify and address vulnerabilities and ensure compliance with regulations.

  • PwC
  • Deloitte
  • KPMG

DevOps Security:

Code Analysis (SAST) – practice of using automated tools to analyze code and identify potential issues, such as security vulnerabilities, performance problems, and compliance violations.

  • VeraCode
  • GitLab
  • Snyk
  • CheckMarx
  • Coverity
  • Fortify

Code Analysis (DAST) – practice of using automated scanning techniques to identify common web application vulnerabilities such as SQL injection, cross-site scripting (XSS), and insecure authentication and authorization mechanisms.

  • Rapid7
  • Qualys
  • Veracode
  • Tenable
  • Acunetix

Runtime Protection (RASP) – detect and respond to security threats in real time from within the running application

  • Aqua
  • SysDig
  • Twistlock
  • VeraCode
  • Contrast

Policy Enforcement – automated enforcement of security and compliance policies across the pipeline

  • Twistlock
  • Snyk
  • CheckMarx
  • SonarQube
  • GitHub Actions

Threat Detection – identifying security threats in code and running workloads

  • CheckMarx
  • Veracode
  • SonarQube
  • Fortify

Vulnerability Scanning – process of identifying security vulnerabilities in software, networks, systems, or applications

  • Qualys
  • Rapid7
  • Tenable
  • Aqua
  • GitLab
  • Snyk
  • CheckMarx
  • TrendMicro

Container Scanning – process of analyzing the contents of a container image for known vulnerabilities, misconfigurations, and other security issues

  • GitLab
  • Snyk
  • CheckMarx
  • TrendMicro

SOC 2 Compliance. What, Why, and How.

What is SOC 2?

Service Organization Controls (SOC) 2 is an InfoSec Compliance Standard maintained by the American Institute of Certified Public Accountants (AICPA). It’s designed to test and demonstrate the cybersecurity of an organization. 

To get a SOC 2, companies must create a compliant cybersecurity program and complete an audit with an AICPA-affiliated CPA. The auditor reviews and tests the cybersecurity controls to the SOC 2 standard, and writes a report documenting their findings. 

The resulting SOC 2 report facilitates sales and vendor management by providing one document that sales teams can send to potential customers for review, instead of working through cybersecurity questionnaires.

5 Trust Services Criteria

  • Security (REQUIRED)
    • Guidelines on company management and culture, risk assessments, communication, control monitoring, and cybersecurity strategy
  • Availability – uptime of a vendor’s services
    • Controls include plans to maximize uptime and restore availability after an outage.
    • Business continuity, data recovery, and backup plans are all important controls for this criterion.
  • Processing Integrity – how a vendor processes the data it collects
    • Controls are meant to evaluate that data processing is performed consistently and that exceptions are handled appropriately.
      • It is challenging and laborious work to create the documentation needed to meet this criterion, because it requires SOC 2-specific content with detailed descriptions of how data is processed. (Almost all other content used in a SOC 2 audit has applications outside of SOC 2; this does not.)
  • Confidentiality – used to keep confidential business data confidential
    • This criterion expects vendors to identify and protect confidential data.
    • Example controls for confidentiality include encryption and data destruction.
  • Privacy – how personal information is kept private.
    • This criterion requires that vendors have a privacy policy, and that personal data is collected legally and stored securely.
    • SOC 2 Privacy is more applicable to Business-to-Consumer companies than to Business-to-Business companies.

How Do I Get There?

To achieve SOC 2 Type 2 compliance, you will need to implement and maintain strong controls over your systems and processes. This will typically involve the following steps:

  • Identify the systems and processes that need to be covered by your SOC 2 Type 2 compliance efforts.
  • Develop policies and procedures to ensure that these systems and processes are secure and meet the relevant standards.
  • Implement technical and organizational controls to support these policies and procedures.
  • Test and monitor your controls to ensure that they are effective.
  • Conduct an independent audit of your controls by a certified third-party auditor.
  • Obtain a report from the auditor indicating that your controls meet the relevant standards.

Achieving SOC 2 Type 2 compliance is an ongoing process, and you will need to continually review and update your controls to ensure that they remain effective.

What Can I Do To Prepare?

  • Determine Scope
    • SOC 2 is about demonstrating your commitment to security and improving customer confidence in your security program. You should include all services and products that you expect customers will have security concerns for. 
  • Identify and fill gaps
    • Evaluate your current cybersecurity program in comparison to the SOC 2 control set. Even companies with mature cybersecurity programs do not meet every single control from the get-go.
      • There are a number of administrative and technical security controls that are often overlooked prior to getting a SOC 2, and they can be sticking points that generate a lot of additional work before and during the audit process
  • Document – create and edit security policies and other documentation
    • Define access controls. Who is required to have it? What types of apps are required to use it, versus which ones are not? What authenticator apps are allowable?
      • Most controls need to have a policy and evidence your organization is sticking to the policy created for them. It’s a lot of work – but your company will become much more secure in the process. 
  • Modify internal procedures.

What Are The Benefits?

  • Demonstrating to customers that you have strong controls in place to protect their data can help you build trust with them and increase customer satisfaction.
  • Achieving SOC 2 Type 2 compliance can help you comply with regulatory requirements, such as the Payment Card Industry Data Security Standard (PCI DSS) and the Health Insurance Portability and Accountability Act (HIPAA).
  • Demonstrating that you have been independently audited and have met industry standards for information security can give your business a competitive advantage.
  • Achieving SOC 2 Type 2 compliance can also protect your business from potential liabilities, as it shows that you have taken reasonable steps to protect your customers’ data.
  • Finally, implementing strong controls as part of your SOC 2 Type 2 compliance efforts can help you improve the security and reliability of your systems, which can ultimately benefit your business by reducing the risk of data breaches, system failures, and other security incidents.

How Can We Help?

  • Perform a Gap Assessment – A gap assessment is crucial for taking stock of an existing cybersecurity program and finding gaps that need to be filled to get your company audit-ready.
  • Acquire and implement technical controls – if there’s a deficit, consultants help companies add the needed controls to improve security and ensure compliance.
  • Adjust policies and procedures – As we just mentioned, policies and procedures are likely not to be audit-ready until efforts are made to make them so.
  • Create content – The content that’s created is going to be key documentation for a SOC 2 audit. Policies, procedures, reports – they can write it and get it in place. 
  • Project manage – Virtual CISOs can project-manage the whole audit project. There’s something to be said for domain-expert project managers. 
  • Perform risk assessments – if this is not something that you were doing before you will now! Risk Assessments are mandatory for SOC 2 compliance, and a Virtual CISO can perform the assessment and write the report. 
  • Perform vendor evaluations – Vendor management is a part of every SOC 2 compliance program. If this is not already in practice at an organization, it can be valuable to outsource the activity to an expert. 
  • Perform “External Internal Audit” – Internal audits are necessary for SOC 2 compliance – they help make sure that your company is doing everything needed before the auditor catches you. Some firms don’t have an internal audit function, so an “External Internal Auditor” who is familiar with the standards and can keep the organization accountable is helpful.
  • Select an Auditor – A good Virtual CISO will know what makes a good SOC 2 auditor and can remove auditor selection from your plate. 
  • Advocate on your behalf with the Auditor – Your Virtual CISO will be with you for every audit call. They will advocate on your behalf, ensuring the auditor sets realistic compliance expectations for your organization. 

Reach out to us if you need help @

Minecraft 1.17 Java Error

I downloaded the new 1.17 jar and updated my server as I have done numerous times. I received the error below this time around:

Error: LinkageError occurred while loading main class net.minecraft.server.Main
java.lang.UnsupportedClassVersionError: net/minecraft/server/Main has been compiled by a more recent version of the Java Runtime (class file version 60.0), this version of the Java Runtime only recognizes class file versions up to 55.0

I updated my java version using the commands below.

sudo add-apt-repository ppa:linuxuprising/java
sudo apt update
sudo apt install oracle-java16-installer --install-recommends

After updating it, it started up with no problems.
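For reference, the class file versions in that error map directly to Java releases (major version minus 44), which is a quick way to work out which Java the jar needs:

```shell
# A class-file major version maps to a Java release as (version - 44):
# 60 -> Java 16 (what the 1.17 jar was compiled for)
# 55 -> Java 11 (what the server had installed)
echo $((60 - 44))   # prints 16
echo $((55 - 44))   # prints 11
```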

Terraform on Windows 101

Create a folder called “bin” in %USERPROFILE%

Start -> Run -> %USERPROFILE% -> create a folder called “bin”

Download Terraform
Save the .exe in the “bin” folder you created

Set Windows “PATH” Variable

System Properties -> Environment Variables
Highlight PATH
Click “Edit”
Click “New” and add the path to your “bin” folder (%USERPROFILE%\bin)

Create a user in AWS for Terraform

In AWS, go to IAM
Create a user called “terraform”
programmatic access only
Attach existing policies directly
Administrator access (proceed with caution!)
Copy the Access Key ID (save to a credentials store like KeePass, or an Excel spreadsheet for now)
Copy the Secret Access Key (save to credentials store)
Or download the .CSV and grab the values

Create a folder called .aws on your PC

Make sure to add a “.” at the end of the folder name (i.e. type “.aws.”) or Windows Explorer will throw an error – it strips the trailing dot when creating the folder

Create a credentials file

Create a new file called “credentials” in the .aws directory (remove the extension)
Using the ID and Key from above, make it look like this:
Line 1: [default]
Line 2: aws_access_key_id=your_key_id_here
Line 3: aws_secret_access_key=your_access_key_here
Save the file (again, make sure to remove the .txt extension or it won’t work)
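If you have Git Bash installed (it’s used later in this walkthrough anyway), the same file can be created from the shell, which sidesteps the hidden .txt extension problem entirely. The key values below are the same placeholders as above – substitute your real keys:

```shell
# Create ~/.aws/credentials from the shell; a heredoc avoids the editor
# silently adding a .txt extension. The two values are placeholders.
mkdir -p "$HOME/.aws"
cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id=your_key_id_here
aws_secret_access_key=your_access_key_here
EOF

cat "$HOME/.aws/credentials"   # sanity check
```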

Download and install Git for Windows

Create a folder called TF_Code for your working files

I created mine on my desktop

Open Git Bash, navigate to your working directory

cd desktop
cd TF_Code

Make the directory a Git repository

git init

Create a new file with VI

Line 1: provider "aws" {
Line 2: profile = "default"
Line 3: region = "us-west-2"
Line 4: }
Line 6: resource "aws_s3_bucket" "tf_course" {
Line 7: bucket = "tf-course-uniqueID"
Line 8: acl = "private"
Line 9: }
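Instead of typing it into VI, the same file can be written from Git Bash with a heredoc. One gotcha: make sure the quotes are straight ASCII quotes – curly quotes pasted from a blog or word processor will break Terraform’s parser. The bucket name is the same placeholder as above:

```shell
# Write main.tf from the shell; note the straight ASCII quotes.
cat > main.tf <<'EOF'
provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

resource "aws_s3_bucket" "tf_course" {
  bucket = "tf-course-uniqueID"
  acl    = "private"
}
EOF
```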

Commit the code

git add .
git commit -m "some commit message"

Try Terraform! (in Git Bash)

terraform init
Downloads and Initializes plugins

Apply the code

terraform apply
yes (to perform the actions)

Check your AWS account (S3), you should see a new S3 bucket!

Delete the bucket

terraform plan -destroy -out=example.plan
terraform apply example.plan

Your bucket will now be deleted!

To recreate the bucket, just run the ‘terraform apply’ command again, say yes, and…BOOM, your bucket is created again!

Hope that helps. Good luck and happy computing!

SSH from PuTTY to GCP Compute Engine

First off, if you are trying to securely connect to your enterprise production network and instances, there are better (safer) methods (architectures) to do this. OSLogin or federating your Azure AD for instance, might be more secure and scalable. I run a pointless website (this one) with nothing to really lose across a handful of instances. This is a hobby.

Second, I recently got a dose of humble pie when trying to use PuTTY on Windows to connect to an Ubuntu instance in GCP. I was generally using the gcloud command line to get my app running, but I got a wild hair up my ass this morning to try and just use PuTTY to avoid the step of logging into Google Cloud (via Chrome) for administration. I am fairly used to AWS, where I just create an instance, download the .pem file, convert it to a .ppk with PuTTYgen, and then use that along with the default login (ec2-user or ubuntu) to connect to my minecraft and web servers. GCP was a little different.

Once I read a few docs turned up by Google searches, the process became much clearer than it was from the GCP docs alone. Here is how I did it.

Download PuTTYgen if you don’t have it already.

Launch PuTTYgen.

Click on “Generate”. I used a 2048-bit RSA key.

Move your mouse around the box to generate a key.

In the “Key comment” field, replace the data there with the username you want to use to connect to your Compute Engine instance.

Copy the ENTIRE contents of the public key (the data in the “public key for pasting…”) box. It should end with the username you want to connect with if you scroll down.

Click on “Save Private Key” and select a location/path that is secure (and one that you will remember!).
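If you’d rather not click through PuTTYgen, roughly the same key pair can be generated with OpenSSH’s ssh-keygen. This is a sketch: “jonny” is the example username used later in this post, and the output path is arbitrary.

```shell
# Generate a 2048-bit RSA key pair with the username as the key comment.
# -N '' means no passphrase -- fine for a hobby box, questionable elsewhere.
ssh-keygen -t rsa -b 2048 -C jonny -f ./gcp_key -N ''

# The public key line (ending in the username) is what you paste into
# the instance's SSH Keys field.
cat ./gcp_key.pub
```

You would still convert the private key to a .ppk with PuTTYgen if you want to use PuTTY itself.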

Create a new Compute Engine instance or go to an existing instance. From the VM instances page, click on the instance name. In my case it was “minecraft001”.

At the top of the page, click on “Edit“.

Scroll almost all the way to the bottom and you will see an “SSH Keys” section.

Click on “show and edit”

Click on “+ Add Item”

Paste in the key data you copied from PuTTYgen from the step above.

  • You will notice that it extracts the username from your key on the left. This is the username you will use from PuTTY.

On the same page, click on “Save” at the bottom of the page.

On the VM instance details page, find the “External IP” section and copy the IP address (the cascaded window icon will add it to your buffer).

Now open or go back to your PuTTY client (not PuTTYgen).

Paste the IP address into your PuTTY client.

On the left side of the PuTTY client, scroll down to the “Connection” section and click the “+” to expand it

Click the “+” next to the “SSH” section

HIGHLIGHT the “Auth” section. Don’t expand it.

Click on “Browse…”

Find the Private Key file you saved from earlier (should have a .ppk file extension). Double click to select and use it.

Scroll back up and highlight the Session category.

From here you can either name your connection and save it under “saved sessions”…or just click the “Open” button.

It should make a connection to your Compute Instance and ask for a username. Supply the username you specified in the step above and voila! I used “jonny” in my example.

That’s it! Happy computing!

Pushing Docker Containers to GitHub

I recently went through the process of building a Dockerfile from scratch. I won’t get into the details of that process, but I did come across an error when trying to publish my package to GitHub Packages.

I tried to do a sudo docker push (my repo) and was thrown the error:

unauthorized: Your request could not be authenticated by the GitHub Packages service. Please ensure your access token is valid and has the appropriate scopes configured.

It’s pretty clear what needed to happen, but I thought my credentials would be enough since I wasn’t using a script per se. I used docker login and provided my username and password and tried the command again. Same error.

After doing some reading, I discovered that you need to pass a “Personal Access Token” as the password. I generated a PAT under Settings -> Developer Settings -> Personal Access Tokens. I gave the token access to the repo and to read and write packages. I then used docker login and passed the token string as the password. After that, I was able to use docker push to upload my image.

Minikube on VirtualBox on Ubuntu on VirtualBox

I recently needed a small lab environment to sharpen my Kubernetes skills. I set up Minikube on an Ubuntu VM running 18.04.4 LTS (bionic). This VM was created on my Windows desktop in VirtualBox. Confused yet? Some of the commands can leave your environment insecure, so do not do this in your production Internet-facing environment.

To get started, I downloaded and installed VirtualBox onto my Windows PC. I then created an Ubuntu 18.04 VM; make sure the number of vCPUs on your VM is greater than or equal to 2.

First step is to update your VM.

  • sudo apt-get update
  • sudo apt-get install apt-transport-https (if using 1.4 or earlier)
  • sudo apt-get upgrade

Install VirtualBox on your Ubuntu VM

  • sudo apt install virtualbox virtualbox-ext-pack

Download Minikube

  • wget

Make it executable

  • sudo chmod +x minikube-linux-amd64

Move it so it’s in your PATH

  • sudo mv minikube-linux-amd64 /usr/local/bin/minikube

Download kubectl

  • curl -LO -s

Make it executable

  • chmod +x ./kubectl
  • sudo mv ./kubectl /usr/local/bin/kubectl

Check that it’s working properly

  • kubectl version -o json

I received an error saying docker wasn’t in $PATH. You may or may not see this error.

Install docker

  • curl -fsSL | sh

Start Minikube

  • sudo minikube start --vm-driver=virtualbox

Start the Kubernetes Dashboard

  • minikube dashboard
  • minikube dashboard --url

If you want to view the dashboard remotely, you will need to run the following commands:

  • sudo kubectl proxy --address='' --disable-filter=true

You will get a message saying “Starting to serve on [::]:8001”

Hopefully this helps. If you get stuck or have a way to optimize this, please comment below.

Kudos to for helping me get started.

Cisco Hyperflex – #700-905 – Notes

Intro to Hyper-Convergence

Started out with local storage

Couldn’t expand

Moved to centralized storage

Server availability issues

Moved to virt and converged for clusters

FlexPods, VersaStacks…chassis had limitations

Back to local storage

Scales similar to cloud

No limits

Intro to HX platform

Based on C-series

Wizard-based installer

Converged Nodes – Disk, network, cpu, memory

Data Platform

StorFS – Distributed File System

Springpath is underlying backbone

Ensures copy data is always there

High performance networking is used by StorFS log structured FS

Communication channel for VMs and management etc

FIs are hardware management platform

HX Software is not supported on C-Series

HX Installer is a VM (OVA) – requires existing vCenter

Expansion is easy and non-disruptive

Cloud Center


Cisco Container Platform

HX Flavors and use cases

HX Edge

2-4 Nodes (ROBO) – No FI needed




Up to 32 converged nodes with up to an additional 32 compute only nodes

SFF Drives

Either all flash or 2.5 spinning drives

LFF Spinning – 6 or 8TB

HX240 M5

6-12 of LFF in one server

Caching and housekeeping drives are still SFF Flash in back

HK – 240GB Flash

Caching – 3.2 TB SSD

HX 3.5 introduced stretch cluster for LFF drives

HX Edge 3 node HX 220 based connected directly to ToR switches with no FI

Managed through a centralized vCenter

Central backup site (LFF Cluster) with Veeam

3.0 introduced Hyper-V support

3.5 introduced LFF Drives support but no stretch or edge for Hyper-V

Scaling and Expanding a Deployment

Scales from 3 – 64 nodes

Node – Additional memory and compute

Converged Node – includes storage

3.5 added Hyper-V support and stretch clusters

Node must be compatible

Storage type



Chassis size

Similar in ram and compute

You can introduce M5 to M4 but not vice versa

Use HX Installer to expand

Use config file

Provide creds


Select cluster to expand

Select unassociated server from list

Provide IP info

Provide Name


Start install

Go to HX Connect

View added node

Data is automatically distributed to new disks on new nodes

You can leverage numerous platforms for compute only node

Must be connected to same domain

Must have HX Data Platform installed via Installer

Mounts HX storage via NFS

Disks are not added to shared pool (for datastore creation)

CVM on compute-only nodes requires

1 vcpu

512 MB RAM

Software Components Overview

StorFS – Distributed Log-structured file system

Writes sequentially, uses pointers – increases performance

Index has references to blocks for files across the distributed log

New blocks are written, old are deleted and housekeeping occurs.

File system index is stored in memory of a controller VM.

Runs on every node in HX cluster

Logging, caching, compression, deduplication

Disks are passed through to CVM

Space is presented as NFS


CVM needs 48 GB RAM and 10.8 GHz


CVM needs 72 GB RAM and 10.8 GHz

HX 240 LFF 78 GB RAM and 10.8 GHz

CVM is doing cluster management

HX management

HX Connect management interface


Monitoring capability/Utilization monitoring




CVM CLI – not all commands are supported through GUI

CVM is an Ubuntu VM – connect to the CLI via SSH

stcli command

IOvisor (vib) – responsible for data distro

captures data and sends to any available node for caching

Not dependent on CVM so if CVM fails, fs ops will be directed to appropriate node

VAAI is used in CVM and Hypervisor

Allows direct FS ops on a Datastore (StorFS ops)

Allows for snaps and clones (much faster) using FS

Distributed File System


When deploying

Select RF2 or RF3

Caching Tier

In all-flash systems

Not used for read cache

In all systems

Write cache works the same way

All flash


Caches writes as it gets distributed

De-stages writes

Split between active and passive

Active – caches data

Passive – moves data to capacity drives

Number of cache level segments depends on RF factor

2 for RF 2

3 for RF 3

Hybrid systems

Write cache still works the same

Caching drive is ALSO used for read caching

Frequently used

Recently used

VDI mode

Only caches most frequently accessed

Not most recently accessed

Hardware component Overview

3 tier storage

Memory Cache – volatile

Controller VM, File system metadata

Cache Drive

SAS SSD, NVMe SSD or Optane

Capacity tier

All spinning or all flash

Hybrid (blue bezel) or all flash (orange bezel)

2 chassis types

HX220 (1U) M5 (Dual Intel Skylake Platinums)

10 Drives

Min 6, max 8 capacity

Cache drive is front mounted

M.2 drive installs esx (internal)

Housekeeping (logs, storage)

HX240 (2U) M5 (Dual Intel Skylake Platinums)

Cache is on back

Capacity and housekeeping on front

Min 6 up to 23 capacity drives

Up to 2 graphics cards

Memory channels

6 per proc

2 sticks per channel

Max 128 per stick

16, 32, 64, 128

6 or 12 sticks per CPU for optimal performance

If M type procs, 1.5TB per CPU (total 3TB)


1.8 TB for hybrid

(cost effective)

960 and 3.8 TB for all flash (performance/density)



Dual 10G

VIC 1387

Dual 40G


6248s, 6296, 6332, 6332-16UP


Other Notes:

Can you install HX without network?


Can you use install software as NTP server?

No. 2.0 and 2.1 disable it after 15 mins

Can I install vCenter on HX?

Yes. With 4 nodes with earlier versions.

Should storage VLAN be layer 3?


Can you setup multiple VLANs in UCSM during install?

Yes but you have to rename them

Are jumbo frames required?

No but enable them

HX Tech Brief – An App, Any Cloud, Any Scale

ROBO – HX Edge/Nexus

Private – HX/ACI (private cloud)

Intersight federates management

Edge leverages Intersight as cloud witness

Public – Cisco Container Platform on top of HX for Kubernetes aaS with HX in Prem

Cloud center – model apps/deploy consistently

App dynamics – performance/application transaction visibility

CWOM – optimizes resource utilization for underlying infra

Tetration – workload protection – enforce versions at the host level

HX Edge 2-4 nodes (up to 2000 sites) – Intersight can deploy in parallel to multiple sites

HX – 32 nodes -> 32 more compute nodes

Installation Notes:

Deploy/Launch Data platform installer – OVA – can be on a SE laptop


Create new:

Customize your workflow

Run UCSM config (unless you have Edge – no FIs)

Check everything else

Create cluster

Cluster expansion:

In UCSM, validate PID and make sure it’s unassociated (no profile)

In installer:

Supply UCSM name/creds

vCenter creds

Hypervisor creds

Select cluster to expand

Select server

Provide VLAN configs

Use ; for multiple VLANs

Enable iSCSI/FC if needed

For mgt VLAN and Data VLAN

Provide IP for esxi host

Provide IP for storage controller

NetApp A800 Specs

NetApp introduced its flagship model, the A800, along with ONTAP version 9.4, which included a number of enhancements. Some details are below.

48 internal NVMe drives – 4U

1.1 Million IOPS – 200-500 microsecond latency

25 GB/s sequential reads

Scales 12 or 24 nodes (SAN/NAS respectively)

12PB or 6.6M IOPS for SAN (effective)

24PB or 11.4M IOPS for NAS (effective)

Nearly 5:1 efficiency using dedupe, compression, compaction, and cloning

Support for NVMe-oF: connect your existing applications using FC to faster drives, increasing performance by 2x vs FC-connected devices.

FabricPool 2.0 allows for tiering cold data to Amazon, Azure or any object store such as StorageGRID. You can now move active filesystem data to the cloud and bring it back when needed.


SolidFire Install Notes and Guide

I had the pleasure of installing a 4-node SolidFire 19210 recently. While the system was being used strictly for Fibre Channel block devices to the hosts, there was a little bit of networking involved. There are diagrams below that detail most of what I have explained, if you want a TL;DR version.

The system came with 6 Dell 1U nodes. 4 of the nodes were full of disks. The other two had no disks but extra IO cards in the back (FC and 10Gb).

First step was to image the nodes to Return them To Factory Image (RTFI).  I used version 10.1 and downloaded the “rtfi” ISO image at the link below.

With the ISO you can create a bootable USB key:

  1. Use a Windows-based computer and format a USB key as FAT32 with the default block size. (A full format works better than a quick format.)
  2. Download UNetbootin from
  3. Install the program on your windows system
  4. Insert USB key before starting UNetBootin
  5. Start UNetBootin
  6. After the program opens select the following options:
  7. Select your downloaded image […]
  8. Select Type : USB drive
  9. Drive: Select your USB key
  10. Press okay and wait for key creation to complete (Can take 4 to 5 minutes)
  11. Ensure process completes then close UNetBootin once done
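If you would rather prep the key from a Linux box, dd works just as well. A hedged sketch; the ISO filename and device node below are placeholders, and you should double-check the device with lsblk first since dd overwrites it:

```shell
# Placeholders: adjust the ISO name and USB device for your system.
ISO=solidfire-rtfi-10.1.iso
DEV=/dev/sdX
# Print the write command rather than running it blindly here.
echo "dd if=$ISO of=$DEV bs=4M status=progress conv=fsync"
```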

Simply place the USB key in the back of the server and reboot. On boot, it will ask a couple of questions, start installing Element OS, and then shut itself down. It will then boot up into what they call the Terminal User Interface (TUI), which comes up in DHCP mode. If you have DHCP, it will pick up an IP address, and the TUI shows which IP it obtained; you can use your web browser to connect to that IP. Alternatively, using the TUI, you can set a static IP address for the 1GbBond interface. Once the IPs were set, I connected to the management interface on the nodes to continue my configs, although you can keep using the TUI. To connect to the management interface, go to https://IP_Address:442. For now, this is called the Node UI.

Using the Node UI, set the 10Gb IP information and hostname. The 10Gb network should be a private, non-routable network with jumbo frames enabled end to end. After setting the IPs, I rebooted the servers, then logged back in, set the cluster name on each node, and rebooted again. They came back up in a pending state. To create the cluster, I browsed to one of the nodes' IP addresses, which brought up the cluster creation wizard (this is NOT on port 442; it is on port 80). Using the wizard, I created the cluster. You will assign an “mVIP” and an “sVIP”. These are clustered IP addresses that sit in front of the node IPs: the mVIP for the 1GbBonds, and the sVIP for the 10GbBonds. The management interface lives at the mVIP, and storage traffic runs over the sVIP.
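Under the hood, the wizard drives the Element JSON-RPC API, so cluster creation can also be scripted by POSTing a request like the one below to a node's management IP (the IPs, credentials, and API version here are placeholders):

```json
{
  "method": "CreateCluster",
  "params": {
    "acceptEula": true,
    "mvip": "10.0.10.50",
    "svip": "10.10.10.50",
    "username": "admin",
    "password": "examplePassword",
    "nodes": ["10.0.10.51", "10.0.10.52", "10.0.10.53", "10.0.10.54"]
  },
  "id": 1
}
```

The endpoint would be something like https://10.0.10.51/json-rpc/10.1, matching the Element OS version you imaged.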

Once the cluster was created, we downloaded the mNode OVA. This is a VM that sends telemetry data to Active IQ, runs SNMP, handles logging, and performs a couple of other functions. Since we were using VMware, we used the image with “VCP” in the filename, as it includes the vCenter plugin.

Using this link and the link below, we were able to import the mNode into vCenter quickly. Once it had an IP, we used that IP to connect to port 9443 in a web browser and registered the plugin to vCenter with credentials.

I then connected to the mVIP and under Clusters–>FC Ports, I retrieved the WWPNs for zoning purposes.

Your SAN should now be ready for the most part. You will need to create some logins at the mVIP, along with volumes and Volume Access Groups. Once you zone your hosts in using single-initiator zoning, you should be able to scan the disk bus on your hosts and see your LUNs!
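Those volumes and Volume Access Groups can likewise be created through the Element API. A hedged sketch; the names, account ID, size, volume IDs, and WWPN below are placeholders, and the account is assumed to already exist (AddAccount creates one):

```json
{
  "method": "CreateVolume",
  "params": {
    "name": "esx-lun-01",
    "accountID": 1,
    "totalSize": 1099511627776,
    "enable512e": true
  },
  "id": 1
}
```

```json
{
  "method": "CreateVolumeAccessGroup",
  "params": {
    "name": "esx-cluster-01",
    "initiators": ["21:00:00:24:ff:aa:bb:cc"],
    "volumes": [1]
  },
  "id": 2
}
```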

As mentioned there were some network connections to deal with. My notes are below.

I am not going to cover the 1GbE connections. Those are really straightforward: one to each switch, make sure the ports are on the right VLAN, and the host does the NIC teaming. Done.

iDRAC is even easier. Nothing special. Just run a cable from the iDRAC port to your OOB switch and you are done with that too. IP address is set through the BIOS.

FC Gateways (F-nodes)

The FC gateways (F-nodes) have 4x 10Gb ports each: 2 onboard (like the S-nodes) and 2 on a card in Slot 1. Each node also has 2x dual-port FC cards.

From F-node1, we sent port0 (onboard) and port0 (card in slot 1) to switch0/port1 and port2. From F-node2, we sent port0 (onboard) and port0 (card in slot 1) to switch1/port1 and port2. We then created an LACP bundle with switch0/port1 and port2 and switch1/port1 and port2. One big 4 port LACP bundle.

Then, back to F-node1, we sent port1 (onboard) and port1 (card in slot 1) to switch0/port3 and port4. From F-node2 we sent port1 (onboard) and port1 (card in slot 1) to switch1/port3 and port4. We then created another LACP bundle with switch0/port3 and port4 and switch1/port3 and port4. Another big 4 port LACP bundle.

We set private network IPs on the 10GbBond interfaces on all nodes (S and F alike). Ensure jumbo frames are enabled throughout the network, or you may receive errors when trying to create the cluster (xDBOperation timeouts). Alternatively, you can drop the MTU on the 10GbBond interfaces to 1500 and test cluster creation to verify whether jumbo frames are the problem, but this is not recommended for a production config; it should only be used to rule out jumbo frame configuration issues.
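A quick way to test jumbo frames before building the cluster is a don't-fragment ping between nodes. A sketch, with the peer IP as a placeholder:

```shell
# Sanity-check jumbo frames end to end before building the cluster.
# With a 9000-byte MTU, the largest unfragmented ICMP payload is
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "$PAYLOAD"   # 8972
# From one node, ping a peer's 10Gb IP (placeholder address) with
# the don't-fragment flag set, Linux iputils syntax:
#   ping -M do -c 3 -s "$PAYLOAD" 10.10.10.52
# If that fails while -s 1472 succeeds, something in the path is
# dropping jumbo frames.
```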

As mentioned, each F-node has 2x Dual Port FC cards. From Node 1, Card0/Port0 goes to FC switch0/Port0. Card1/Port0 goes to FC switch0/Port1. Card0/Port1 goes to FC switch1/Port0 and Card1/Port1 goes to FC switch1/Port1.

Repeat this pattern for Node 2 but use ports 2 and 3 on the FC switches.

Storage Nodes (S-Nodes)

On all S-nodes, the config is pretty simple. Onboard 10Gb port0 goes to switch0 and onboard 10Gb port1 goes to switch1. Create an LACP bond across the two ports on each node.

If the environment has two trunked switches that appear as a single logical switch (either switch blades sharing a backplane, or software that makes them a “stacked” switch), you must use LACP (802.3ad) bonding. The two ports, one on each switch, must be in a single LACP trunk so that failover from one port to the other happens successfully. If you want to use LACP bonding, ensure the switch ports on both switches allow for trunking at the specific port level.
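On the switch side, the port-channel config looks roughly like the sketch below. This is Cisco NX-OS-flavored and purely illustrative; the channel number, interface names, and MTU value are placeholders, and on a vPC/MLAG pair the port-channel must also be a vPC member:

```
interface port-channel10
  switchport mode trunk
  mtu 9216

interface Ethernet1/1-2
  switchport mode trunk
  channel-group 10 mode active
  mtu 9216
```

Note that "mode active" is what makes this LACP; a static "mode on" bundle will not negotiate and will not fail over correctly in this design.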


There may be other ways to do this. If you have comments or suggestions, please leave them below. If I left something out, let me know.

Some of this was covered nicely in the Setup Guide and the FC Setup Guide, but not in enough detail. I had more questions and had to leverage good network connections to get answers.

Thanks for reading. Cheers.