In the spirit of trying to push myself to document things a bit more in a blog format: I was made aware of a really good resource for port diagrams pushed out by a colleague, Artur Krzywdzinski, over at his blog vmwaremine.com.
While these diagrams MAY be dated (always check the latest documentation), since the original post went up in 2014 (it has definitely been updated recently), he points to the originating KBs and makes the diagrams available in Visio format so you can update them as your needs dictate.
The info in these diagrams can be found on the Nutanix portal with a quick search, for example:
This diagram shows the common AOS ports for both ESXi and AHV hypervisors:
This is a really good image to include in any communication with an org's network/security team. It provides a visual understanding of each of the services, rather than just a list of TCP and/or UDP ports with an inbound or outbound designation, which can often get confused.
The below diagram shows the port diagram for AOS and the ESXi hypervisor:
Similarly, the below shows the port diagram for AOS and the AHV hypervisor:
Artur has a number of other useful port diagrams (Leap DR, Files, Objects, etc.), so please go check out his very well put together blog post.
I have really neglected my blogging and would like to start putting some more posts up. We will see if my work/life balance will allow for that. As for the current post, I just wanted to provide a quick update.
Since my last post, nearly 5 years ago, much has changed, both for me personally and for the industry. ScaleIO is no longer a "thing"; it's now an appliance and a converged widget from DellEMC called VxRack Flex. In my final year or so at DellEMC I did not see much traction for Flex as a product, so I can't really speak to the future of the product formerly known as ScaleIO, but the proof of commitment lies in the speed of development, which has always been at a snail's pace. I was a huge fan of ScaleIO once it made it to the 2.x line, but it was lacking some very basic functionality and was a nightmare from an operational perspective.
Given my personal passion for HCI that started 5+ years ago, about 18 months ago I decided to make a move to Nutanix. With that in mind, I will (hopefully) begin to put out some content focused on anything I find interesting in our product base. I will also likely start using this blog as a place to document/archive the details of any technical challenges I run into, or to discuss anything else that sparks my desire to express myself via this medium.
I hope to see more of you (I realize I’m just shouting into the void at the moment).
In my previous post, we set up our base OS images. In this post, we will walk through what it takes to get a ScaleIO SDS array up and running on Ubuntu 14.04.
First things first, let's make sure we have a supported version of the kernel on our Ubuntu machine. Go out to ftp://QNzgdxXix:Aw3wFAwAq3@ftp.emc.com/Ubuntu/2.0.5014.0/ to ensure there is a directory for your kernel version. I'm running 4.2.0-30; you can check your version by running:
$ uname -r
The FTP site provides the SDC kernel driver for Ubuntu, so it’s important that the hosts you want to use to consume the ScaleIO storage are at one of the supported kernel versions.
The version currently available is 2.0.0.0 as of this blog post.
Once downloaded, unzip the file somewhere; I just downloaded it into my "Downloads" folder on my Windows desktop. Go into the ScaleIO_2.0.0.0_Gateway_for_Linux_Download folder and, using something like WinSCP, copy the emc-scaleio-gateway_2.0-5014.0_amd64.deb file to the host you intend to use as your gateway (I just copied it into the home directory of my user account).
Now we log in to the host we’ll be using as our Gateway server. From the CLI, execute the following command:
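The command itself appears to have been lost in formatting. As a sketch (from memory, so check the deployment guide: the installer reads the admin password from an environment variable, and I'm assuming the .deb is sitting in your home directory), it looks something like:

```shell
# Set the gateway admin password via the installer's environment variable,
# then install the package copied over earlier
sudo GATEWAY_ADMIN_PASSWORD='<your_password>' dpkg -i ~/emc-scaleio-gateway_2.0-5014.0_amd64.deb
```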
I have another process utilizing port 80, so I needed to change the default ports that the Gateway listens on. To do that, you will need to edit the following fields in the gateway config, which can be found at /opt/emc/scaleio/gateway/conf/catalina.properties
http.port=443
ssl.port=80
While we're editing files, we should also edit the /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties file. These edits will allow us to download the SDC kernel drivers for our kernel during the deployment of the SDC.
Now we need to restart the gateway.
$ sudo service scaleio-gateway restart
At this point you can browse to your ScaleIO gateway at http://<your_gateway>:<your_port> and log in with the username "admin" and the password you set when installing the gateway.
Once logged in, we need to upload the installation tar files from the ScaleIO_2.0.0_UBUNTU_14.04_Download folder that was contained in the zip file.
At this point we need to create our deployment csv. You can get details on all the fields in the csv from the ScaleIO deployment guide that is included with the download. I am using a separate storage network (Infiniband) from my management network, so my csv looks like this (TablePress makes it look pretty, but it should be a plain .csv file):
Domain,Username,Password,Operating System,Is MDM/TB,MDM Mgmt IP,MDM IPs,MDM Name,perfProfileForMDM,Is SDS,SDS Name,SDS All IPs,SDS-SDS Only IPs,SDS-SDC Only IPs,Protection Domain,Fault Set,SDS Device List,StoragePool List,SDS Device Names,RFCache,RFCache Device List,RFCache Pool List,perfProfileForSDS,Is SDC,perfProfileForSDC,SDC Name
,root,password,linux,Master,10.0.0.11,10.1.1.11,MDM1,Default,No,,,,,,,,,,,,,,Yes,,SDC5
,root,password,linux,Slave,10.0.0.12,10.1.1.12,MDM2,Default,Yes,SDS1,10.1.1.12,,,domain1,fs1,/dev/sdb,pool1,,,,,,Yes,,SDC1
,root,password,linux,Slave,10.0.0.13,10.1.1.13,MDM3,Default,Yes,SDS2,10.1.1.13,,,domain1,fs2,/dev/sdb,pool1,,,,,,Yes,,SDC2
,root,password,linux,TB,10.0.0.14,10.1.1.14,TB1,Default,Yes,SDS3,10.1.1.14,,,domain1,fs3,/dev/sdb,pool1,,,,,,Yes,,SDC3
,root,password,linux,TB,10.0.0.15,10.1.1.14,TB2,Default,Yes,SDS4,10.1.1.15,,,domain1,fs4,/dev/sdb,pool1,,,,,,Yes,,SDC4
Now, from the Install tab, we can upload our .csv, select new installation, and click “Upload Installation csv”. Next you will walk through each phase of the installation.
If you've not fat-fingered anything, each step should be a success and you should end up with a topology that looks like this:
If you have done the appropriate pre-work and ensured you are running a supported kernel on all your SDCs, ScaleIO really is the easiest storage array to deploy. Within 5 minutes of installing the gateway, you will have an operable SDS array that, depending on the backing storage you use, can be as performant as anything on the market.
To manage the ScaleIO cluster, you can either use the CLI or install the management GUI. One FYI for Windows: ensure you are running the 64-bit version of the JRE, as the web-based installer only installs the 32-bit version.
In Part 3, we will begin to deploy OpenStack Mitaka on Ubuntu.
So, this will be a multi-part series documenting my adventures in deploying OpenStack Mitaka on Ubuntu 14.04, leveraging EMC's ScaleIO storage (along with some iSCSI sitting on ZFS). This post is meant to document how I've set up my environment and any issues I've encountered.
The environment:
I have 6 total servers in the environment all connected via Mellanox 20Gb/s Infiniband and 1Gb/s IP.
Server #1 – this is an older Dell Optiplex with 4 CPU cores and 16GB of RAM. I currently have Docker installed running a handful of containers supporting this site, as well as some internal apps. Additionally, this server is set up as the ScaleIO gateway.
Server #2 – this is a 2U Supermicro box with 12 CPU cores and 64GB of RAM. Currently this server is tasked to run the bulk of the supporting OpenStack services and databases. It also contains ~10TB of usable capacity on ZFS on Linux that is exported via NFS, and eventually via iSCSI.
Servers #3 through 6 – these are my Nova compute nodes, as well as my ScaleIO SDS nodes. Each contains 8 CPU cores and 32GB of RAM.
Each of these servers contains an identical base build at this point.
The Setup:
When installing Ubuntu, the only server role I've given them is OpenSSH server. Once the base install is complete, I install the opensm package in order to bring up the Subnet Manager on the IB network:
$ sudo apt-get install opensm
Once the OpenSM package is installed, I load the appropriate modules:
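The modprobe commands themselves appear to have been lost in formatting; given the module list added to /etc/modules below, loading them by hand would look like:

```shell
# Load the Mellanox IB driver, the userspace MAD interface, and IP-over-IB
sudo modprobe mlx4_ib
sudo modprobe ib_umad
sudo modprobe ib_ipoib
```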
I also add these modules to the existing /etc/modules file so it looks like this:
$ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
lp
rtc
mlx4_ib
ib_umad
ib_ipoib
Now that I have my modules loaded, I can set up my network interfaces for both my OpenStack provider network, and my IB storage network. I edit the /etc/network/interfaces file to this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The Management network interface
auto eth0
iface eth0 inet static
address 10.0.0.10/24
gateway 10.0.0.1
dns-nameservers 10.0.0.2
dns-search battledome.lcl
# The Provider network Interface
auto eth1
iface eth1 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
# The Infiniband Network
auto ib0
iface ib0 inet static
address 10.1.1.10/24
post-up echo connected > /sys/class/net/ib0/mode
post-up ifconfig ib0 mtu 65520
Once the /etc/network/interfaces file is ready, simply bring up the eth1 and ib0 interfaces to ensure they’re operational.
$ sudo ifup eth1 && sudo ifup ib0
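A quick sanity check that both interfaces actually came up (interface names and addresses match this environment):

```shell
# eth1 should show state UP; ib0 should show its 10.1.1.x address
ip addr show eth1
ip addr show ib0
```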
Next we want to update our packages using apt-get. DO NOT upgrade the Kernel just yet.
$ sudo apt-get update
$ sudo apt-get upgrade
Now, we want to update the kernel to the latest SUPPORTED kernel for ScaleIO. As of this post, the latest supported for Ubuntu 14.04 is 4.2.0-30. If you blindly do an apt-get dist-upgrade, you will get a newer kernel than what is supported. This is BAD; the ScaleIO kernel module for the SDCs will not load. To get around this, we will install the 4.2.0-30 kernel, then ensure that apt-get HOLDS at that level.
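The install command itself got lost in formatting; based on the package names in the hold command below, it would look something like:

```shell
# Install the specific 4.2.0-30 kernel packages instead of dist-upgrading
sudo apt-get install linux-headers-4.2.0-30-generic linux-headers-4.2.0-30 \
  linux-image-4.2.0-30-generic linux-image-extra-4.2.0-30-generic
```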
Once the kernel install completes, we will set apt-get to HOLD at this level
$ sudo apt-mark hold linux-headers-4.2.0-30-generic linux-headers-4.2.0-30 linux-image-4.2.0-30-generic linux-image-extra-4.2.0-30-generic
REBOOT
**NOTE: There doesn't appear to be a similar way to hold packages with the apt command. I know there's a way to hold packages with dpkg, but I haven't tested this with apt.
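For reference, the dpkg method mentioned above looks like this (untested with apt, as noted):

```shell
# Mark the kernel image package as held in dpkg's selections database
echo "linux-image-4.2.0-30-generic hold" | sudo dpkg --set-selections
# Verify the hold took effect
dpkg --get-selections | grep hold
```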
Installing components necessary for ScaleIO
There's not a lot in terms of prerequisites for ScaleIO on Ubuntu. We need to install Java 8, numactl, libaio, and bash-completion.
Do this on each MDM, SDS, and SDC node you plan to use in your environment.
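The install command for these prerequisites is missing above; a sketch of what I would run (package names are my assumption: on stock 14.04, OpenJDK 8 may require a PPA or backport, and the libaio runtime package is named libaio1):

```shell
# Install ScaleIO prerequisites on each MDM, SDS, and SDC node
sudo apt-get update
sudo apt-get install openjdk-8-jre-headless numactl libaio1 bash-completion
```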
Additionally, it appears that you need to enable the root account and enable it to login via SSH for the Gateway installer to work. I don’t believe this is a requirement for a manual install.
$ sudo passwd root
Allow root to login via ssh by modifying the /etc/ssh/sshd_config file with the following parameter:
PermitRootLogin yes
Restart the ssh service.
$ sudo service ssh restart
Repeat this for each node you will be using as an MDM, SDS, or SDC in your environment.
You are now ready to begin deploying ScaleIO. I will cover that in Part 2.
I've recently been acquiring servers and building up an OpenStack-on-ScaleIO lab. During this exercise I've been using bonded Ethernet and IPoIB interfaces in order to trunk the multiple VLANs I need.
On Ubuntu 16.04, the "ifenslave" and "vlan" packages are included in the base server install. I'm not sure if these packages are included by default on the desktop version; if they aren't, the instructions for adding them are included below with the 14.04-specific info.
Installing required modules
# apt-get install ifenslave vlan
Add the following modules in /etc/modules to load on reboot
# nano /etc/modules
Add these lines:
lp
rtc
loop
bonding
Edit the /etc/network/interfaces file
$ sudo nano /etc/network/interfaces
Add/modify these lines:
#First Ethernet port
auto eth0
iface eth0 inet manual
bond-master bond0
#Second Ethernet port
auto eth1
iface eth1 inet manual
bond-master bond0
#Create the bonded interface
auto bond0
iface bond0 inet static
address X.X.X.X/24
gateway X.X.X.X
dns-nameservers X.X.X.X
dns-search xyz.com
bond-mode 4
bond-miimon 100
bond-lacp-rate 1
bond-slaves eth0 eth1
bond-xmit-hash-policy layer3+4
At this point you need to either reboot, or do the following:
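The non-reboot option got lost in formatting; with ifupdown, cycling the affected interfaces would look something like this (interface names taken from the config above):

```shell
# Tear down the slaves and the bond, then bring them back up with the new config
# (ifdown may warn if an interface was not previously configured, hence the redirect)
sudo ifdown eth0 eth1 bond0 2>/dev/null
sudo ifup eth0 eth1 bond0
```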