Posts Tagged ‘Linux’

Site Recovery Manager Firewall Rules for Windows Server

April 29th, 2020

I have a hunch Google sent you here. Before we get to what you’re looking for, I’m going to digress a little. tl;dr folks, feel free to jump straight to the frown emoji below.

Since the Industrial Revolution, VMware has supported Microsoft Windows and SQL Server platforms to back datacenter and cloud infrastructure products such as vCenter Server, Site Recovery Manager, vCloud Director (recently rebranded to VMware Cloud Director), and so on. However, if you’ve been paying attention to product documentation and compatibility guides, you will have noticed support for Microsoft platforms diminishing in favor of easy-to-deploy appliances based on Photon OS and VMware Postgres (vPostgres). This is a good thing – spoken by a salty IT veteran with a strong Windows background.

2019 is where we really hit a brick wall. vCenter Server 6.7 is the last version that supports installation on Windows, and that support ended with Windows Server 2016 – there was never support for Windows Server 2019 (reference VMware KB 2091273 – Supported host operating systems for VMware vCenter Server installation). In vSphere 7.0, vCenter Server for Windows has been removed and support is not available. For more information, see Farewell, vCenter Server for Windows. Likewise, Microsoft SQL Server 2016 was the last SQL Server version supported as a vCenter Server database (matrix reference).

Site Recovery Manager (SRM) is in the same boat. It was born and bred on Windows and SQL Server back ends. But once again we find a Photon OS-based appliance with embedded vPostgres available, along with product documentation which highlights diminishing support for Microsoft Windows and SQL.

Taking a closer look at the documentation…

Compatibility Matrices for VMware Site Recovery Manager 8.2

Compatibility Matrices for VMware Site Recovery Manager 8.3

  • “Site Recovery Manager Server 8.3 supports the same Windows host operating systems that vCenter Server 7.0 supports.” SRM 8.3 supports vCenter Server 6.7 as well, so that should have been included here too, but it was left out (probably an oversight).
  • Supported host operating systems for VMware vCenter Server installation (2091273)
  • Takeaway: vCenter Server 7 cannot be installed on Windows. This implies SRM 8.3 supports no version of Windows Server for installation (this implication is not at all correct as SRM 8.3 ships as a Windows executable installation for vSphere 6.x environments). Not a great spot to be in since the Photon OS-based SRM appliance employs a completely different Storage Replication Adapter (SRA) than the Windows installation and not all storage vendors support both (yet).

Ignoring the labyrinth of supported product and platform compatibility matrices above, one may choose to forge ahead and install SRM on Windows Server 2019 anyway. I’ve done it several times in the lab, but there was one notable takeaway.

When I logged into the vSphere Client, the SRM plug-in was not visible. In my travels, there are a few reasons why this symptom can occur.

  • The SRM services are not started.
  • The logged on user account is not a member of the SRM Administrators group (yes, even super users like administrator@vsphere.local need to be added to this group for SRM management).
  • The Windows Firewall is blocking ports used to present the plug-in.

Wait, what? The Windows Firewall wasn’t typically a problem in the past. That is correct. The SRM installation does create four inbound Windows Firewall rules (none outbound) on Windows Server up through 2016. However, for whatever reason, the SRM installation does not create these needed firewall rules on Windows Server 2019. The lack of firewall rules allowing SRM-related traffic will block the plug-in. Reference Network Ports for Site Recovery Manager.

One obvious workaround here would be to disable the Windows Firewall, but what fun would that be? It may also violate IT security plans, trigger an audit, or require exception filings. Been there, done that (ish). Let’s dig a little deeper.

The four inbound Windows Firewall rules ultimately wind up in the Windows registry. A registry export of the four rules actually created by an SRM installation is shown below. Through trial and error I’ve found that importing the rules into the Windows registry with a .reg file results in broken rules, so for now I would not recommend that method.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\FirewallRules]
"{B37EDE84-6AC1-4F7D-9E42-FA44B8D928D0}"="v2.26|Action=Allow|Active=TRUE|Dir=In|App=C:\\Program Files\\VMware\\VMware vCenter Site Recovery Manager\\bin\\vmware-dr.exe|Name=VMware vCenter Site Recovery Manager|Desc=This feature allows connections using VMware vCenter Site Recovery Manager.|"
"{F6EAE3B7-C58F-4582-816B-C3610411215B}"="v2.26|Action=Allow|Active=TRUE|Dir=In|Protocol=6|LPort=9086|Name=VMware vCenter Site Recovery Manager - HTTPS Listener Port|"
"{F6E7BE93-C469-4CE6-80C4-7069626636B0}"="v2.26|Action=Allow|Active=TRUE|Dir=In|App=C:\\Program Files\\VMware\\VMware vCenter Site Recovery Manager\\external\\commons-daemon\\prunsrv.exe|Name=VMware vCenter Site Recovery Manager Client|Desc=This feature allows connections using VMware vCenter Site Recovery Manager Client.|"
"{66BF278D-5EF4-4E5F-BD9E-58E88719FA8E}"="v2.26|Action=Allow|Active=TRUE|Dir=In|Protocol=6|LPort=443|Name=VMware vCenter Site Recovery Manager Client - HTTPS Listener Port|"

The four rules needed can be created by hand in the Windows Firewall UI, configured centrally via Group Policy Object (GPO), or scripted with netsh or PowerShell. I’ve chosen PowerShell and created the script below for the purpose of adding the rules. Pay close attention: two of these rules are application-path specific. Change the drive letter and path to the applications as necessary or those two rules won’t work properly.

# Run this PowerShell script directly on a Windows based VMware Site
# Recovery Manager server to add four inbound Windows firewall rules
# needed for SRM functionality.
# Jason Boche
# http://boche.net
# 4/29/20

New-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager" -Description "This feature allows connections using VMware vCenter Site Recovery Manager." -Direction Inbound -Program "C:\Program Files\VMware\VMware vCenter Site Recovery Manager\bin\vmware-dr.exe" -Action Allow

New-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager - HTTPS Listener Port" -Direction Inbound -LocalPort 9086 -Protocol TCP -Action Allow

New-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager Client" -Description "This feature allows connections using VMware vCenter Site Recovery Manager Client." -Direction Inbound -Program "C:\Program Files\VMware\VMware vCenter Site Recovery Manager\external\commons-daemon\prunsrv.exe" -Action Allow

New-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager Client - HTTPS Listener Port" -Direction Inbound -LocalPort 443 -Protocol TCP -Action Allow

Test execution was a success.

PS S:\PowerShell scripts> .\srmaddwindowsfirewallrules.ps1


Name                  : {10ba5bb3-6503-44f8-aad3-2f0253c980a6}
DisplayName           : VMware vCenter Site Recovery Manager
Description           : This feature allows connections using VMware vCenter Site Recovery Manager.
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local

Name                  : {ea88dee8-8c96-4218-a23d-8523e114d2a9}
DisplayName           : VMware vCenter Site Recovery Manager - HTTPS Listener Port
Description           :
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local

Name                  : {a707a4b8-b0fd-4138-9ffa-2117c51e8ed4}
DisplayName           : VMware vCenter Site Recovery Manager Client
Description           : This feature allows connections using VMware vCenter Site Recovery Manager Client.
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local

Name                  : {346ece5b-01a9-4a82-9598-9dfab8cbfcda}
DisplayName           : VMware vCenter Site Recovery Manager Client - HTTPS Listener Port
Description           :
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local



PS S:\PowerShell scripts>

After logging out of the vSphere Client and logging back in, the Site Recovery plug-in loads and is available.

Feel free to use this script but be advised, as with anything from this site, it comes without warranty. Practice due diligence. Test in a lab first. Etc.

With virtual appliances being fairly mainstream at this point, this article probably won’t age well, but someone may end up here. Maybe me. It has happened before.

RHEL 7, open-vm-tools, and guest customization

August 9th, 2015

Update 5/26/18: For RHEL 7.2 and newer, be sure to read the 5/26/18 update below as some of the steps below are no longer necessary.

I spent some time this weekend working with vCloud Director 5.5.4 build 2831206 (on vSphere 6) and Red Hat Enterprise Linux vApp/guest customization. I’m not a *nix guru, but I’m comfortable enough with legacy RHEL 5 and 6 as I’ve worked with them quite a bit, particularly for vSphere applications and solutions such as vCloud Director, to provide just one example. Quite honestly, internet research or peer networking provides supplemental knowledge for whatever I can’t figure out. However, I hadn’t spent much time with RHEL 7. There are some new twists, and this blog post is an attempt to document what I’ve uncovered to answer questions and hopefully save myself some time in the future. If you’re in a hurry, skip to the “Tying It All Together” section at the end.

vSphere Templates and vCloud vApp Templates

When it comes to vSphere templates that I use myself, I’ll bake in commonly utilized software packages, patches, as well as tweaks and best practices. However, when it comes to shared vApp Templates in a vCloud Catalog, I employ more of a purist philosophy to minimize issues or questions raised regarding the DNA of the OS build I’m sharing with the organization which serves as their base starting point for their vApp. Aside from installing VMware Tools, my Windows 2012 R2 vApp is about as vanilla as it gets. The same can be said for my RHEL 5 and RHEL 6 vApps. When I applied that same approach to RHEL 7, that’s where some noticeable changes became apparent.

The RHEL 7 Minimal Install

The mere existence of this blog post stems from here. The default installation of RHEL 7 is a Minimal Install. While it’s not encumbered with extra software that may never be used depending on the server’s role, it’s also missing packages commonly installed in the past. Some of which are core dependencies in a virtualized datacenter. However, not knowing this, I gladly accepted the opportunity of a minimalist installation. And that’s exactly what I got.

VMware Tools

After completing a rather uneventful RHEL 7 installation, typically the first and last order of business is to install VMware Tools. Those who attempt it on RHEL 7 (as well as other newer versions of *nix such as CentOS 7) will be greeted with rather stern wording that VMware Tools should be avoided and that the OS-provided open-vm-tools should be used instead. VMware support of open-vm-tools (2073803) provides background information and detail, and outlines the benefits of open-vm-tools. It’s not that you can’t install VMware Tools on RHEL 7, you can, but VMware is not recommending it at this point. From the previously linked KB article:

VMware recommends using open-vm-tools redistributed by operating system vendors.

VMware fully supports…

VMware aids in the development of…

VMware does not recommend removing open-vm-tools redistributed by operating system vendors.

Those who choose to install VMware Tools anyway on a RHEL 7 Minimal Install will soon discover that they cannot do so without installing some additional supporting RHEL 7 packages. VMware Tools cannot be installed on RHEL 7 due to missing ifconfig (2075519) explains that net-tools is missing and must be installed as follows (you’ll need a yum repository; the next section covers that):

sudo yum install net-tools

I’d also argue that you’re going to need to install supporting PERL packages to execute the /usr/bin/vmware-config-tools.pl script because it’s also missing in a RHEL 7 Minimal Install. More on that a little later but for now, the other packages that are needed can be installed as follows:

# yum install perl gcc make kernel-headers kernel-devel -y

Creating A Local DVD Repository For YUM (from Red Hat)

Without a Red Hat subscription (I fall into this category), or the networking means to reach your subscription on the internet, you’ll need to rely on your RHEL 7 DVD or .iso to install necessary packages such as net-tools mentioned above. In order to access these packages, you’ll need to mount the DVD and create a local DVD repository.

Mounting the DVD:

mount /dev/cdrom /mnt/

Creating the local DVD repository is slightly more involved but the steps are easy to follow. Create the file /etc/yum.repos.d/dvd.repo. The file should contain the following text:

[rhel7-dvd]
name=rhel7-dvd
baseurl=file:///mnt
enabled=1
gpgcheck=0

The local DVD repository is now available and its existence can now be queried (note that it only remains available for as long as the RHEL 7 DVD is mounted):

yum repolist all

An example of installing a yum package with sudo was shown earlier (net-tools), although running as root does not require the use of sudo.
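
Incidentally, if the RHEL 7 installation media is sitting on the guest as an .iso file rather than an attached DVD device, the same repository approach works; loop mount the image instead (a quick sketch; the .iso path and filename here are placeholders):

mount -o loop /tmp/rhel-server-7.0-x86_64-dvd.iso /mnt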

Open VM Tools (from Red Hat)

Red Hat Enterprise Linux 7 Guest Operating System Installation Guide documents the process of installing open-vm-tools. Remember that open-vm-tools is distributed by the OS vendor, so everything you need in that respect is available from the RHEL 7 DVD and the local DVD repository created above. That said, installing open-vm-tools is straightforward:

# yum install open-vm-tools

Verify open-vm-tools has been installed in the guest:

# yum search open-vm-tools

With open-vm-tools installed, the guest now has the following vSphere feature functionality:

  • Synchronization of the guest OS clock with the virtualization platform
  • Enables the virtual infrastructure to perform graceful power operations (shut down) and file system quiescing of the virtual machine
  • Provides a heartbeat from guest to the virtualization infrastructure to support vSphere High Availability (HA)
  • Publishes information about the guest OS to the virtualization platform, including resource utilization and networking information
  • Provides a secure and authenticated mechanism to perform various operations within the guest OS from the virtualization infrastructure
  • Accepts additional plug-ins that can extend or customize open-vm-tools functionality

Guest customization and the deployPkg Tools Plug-in (from VMware)

Looking at the bulleted list above, a number of features are provided by open-vm-tools. Unfortunately guest customization isn’t one of them (guest customization is typically used in deploying templates in vSphere as well as deploying available vApps from a vCloud Director catalog). At this point if you attempt to clone a RHEL 7 guest with open-vm-tools, you’ll get the exact same VM over and over again with no unique guest customization. The last bullet speaks to a plug-in architecture for which a guest customization plug-in is available from VMware called the deployPkg Tools Plug-in.

Red Hat Enterprise Linux 7 Guest Operating System Installation Guide talks about the plug-in, and while it appears to provide the installation instructions, it’s missing a few required steps for installing the VMware Packaging Public Keys, so refer to Installing the deployPkg plug-in in a Linux virtual machine (2075048) for the correct process. In this process, yum will be used to install a package available via the internet from VMware instead of from the local DVD repository described previously.

Download the two VMware Packaging Public Keys from VMware at http://packages.vmware.com/tools/keys

Copy them to /tmp/ on the RHEL 7 guest

Import each of the two keys (that’s a double dash in front of import):

# rpm --import /tmp/VMWARE-PACKAGING-GPG-DSA-KEY.pub

# rpm --import /tmp/VMWARE-PACKAGING-GPG-RSA-KEY.pub

Create the yum repository by creating a file called /etc/yum.repos.d/vmware-tools.repo containing the following text:

[vmware-tools]
name = VMware Tools
baseurl = http://packages.vmware.com/packages/rhel7/x86_64/
enabled = 1
gpgcheck = 1

Execute the command

sudo yum install open-vm-tools-deploypkg

Followed by

sudo systemctl restart vmtoolsd
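
To double-check that both packages landed and that the service came back up cleanly, something like the following works (a quick sanity check, not a required step):

rpm -q open-vm-tools open-vm-tools-deploypkg
systemctl status vmtoolsd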

At this point, both open-vm-tools from Red Hat as well as open-vm-tools-deploypkg from VMware have been installed and guest customization should work and you’d be done, except…

RHEL 7 Guest Customization Fails Because The Minimal Install Is Missing PERL

Under the RHEL 7 Minimal Install, guest customization still does not produce unique VMs during a cloning process. Taking a look at /var/log/vmware-imc/toolsDeployPkg.log on the clone, I noticed the following:

Launching deployment /usr/bin/perl -I/tmp/.vmware/linux/deploy/scripts /tmp/.vmware/linux/deploy/scripts/Customize.pl /tmp/.vmware/linux/deploy/cust.cfg.

Command to exec : /usr/bin/perl

Customization command output:

Deploy error: Deployment failed. The forked off process returned error code.

Package deploy failed in DeployPkg_DeployPackageFromFile

The file /usr/bin/perl does not exist.

So then where is PERL? I already know the answer before I’m told: it doesn’t exist under a RHEL 7 Minimal Install.

[root@localhost ~]# whereis perl
perl:
[root@localhost ~]#

Install PERL from the RHEL 7 local DVD repository. This installation should be performed on the template or vApp before it’s placed into the catalog so that the resulting guest customization works (obviously it has little effect on a guest customization which has already failed):

# yum install perl gcc make kernel-headers kernel-devel -y

PERL is now installed and can be called upon for guest customization:

[root@localhost ~]# whereis perl
perl: /usr/bin/perl /usr/share/man/man1/perl.1.gz
[root@localhost ~]#

RHEL 7 Guest Agents

The RHEL 7 Minimal Install turned out to be a bit of a learning process. A more streamlined approach, if available, would be to utilize the Infrastructure Server base environment during the RHEL 7 installation instead of the Minimal Install. Infrastructure Server is going to automatically include PERL and net-tools. It’s also going to expose the ability to install the Guest Agents Add-On. It’s talked about in full in the Red Hat Enterprise Linux 7 Guest Operating System Installation Guide. Installing the Guest Agents includes open-vm-tools from the RHEL 7 DVD without the extra steps of manually creating the RHEL 7 local DVD repository.

While this is certainly more efficient, the one remaining caveat is that Guest Agents does not include the deployPkg Tools Plug-in from VMware. The plug-in will still need to be manually installed from the VMware repository if customization of the VM or vApp is required. For templates, this is almost always a necessity.

RHEL 7 Networking

One last note is that networking in RHEL 7 has seen some changes. For openers, legacy device names such as eth0, eth1, etc. are replaced by a profile name such as eno16780032 (the corresponding files reflect these name changes in /etc/sysconfig/network-scripts/). Menu-driven network configuration (previously accessed from setup) has been replaced by Network Manager, which is accessible via nmtui (Network Manager Text User Interface), nmcli (Network Manager Command Line Interface), or network scripts. Also recall from the top of the article that the old standby ifconfig will not be present under a Minimal Install – it requires the net-tools package. Last but not least, detected Ethernet adapters in a Minimal Install are not automatically enabled for use. Discovered Ethernet devices can be enabled during the initial RHEL 7 setup (I believe it’s under Hostname and Network), or after installation by running nmtui and checking the Automatically Connect box under the appropriate Edit a connection menu. A detected Ethernet adapter listing can be obtained at any time with nmcli d.
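
For those who prefer the command line over the nmtui menus, enabling a connection at boot can also be scripted with nmcli (a sketch; eno16780032 is just the example profile name from above and will differ in your environment):

nmcli con mod eno16780032 connection.autoconnect yes
nmcli con up eno16780032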

Although this article was specific to RHEL 7, open-vm-tools is available with the following operating systems, as documented in VMware support of open-vm-tools (2073803):

  • Fedora 19 and later releases
  • Debian 7.x and later releases
  • openSUSE 11.x and later releases
  • Recent Ubuntu releases (12.04 LTS, 13.10 and later)
  • Red Hat Enterprise Linux 7.0 and later releases
  • CentOS 7 and later releases
  • Oracle Linux 7 and later releases
  • SUSE Linux Enterprise 12 and later releases

RHEL 7 Templates – Tying It All Together

In the end, there are a few different Base Environment types available with varying steps for building a RHEL 7 image which supports guest customization in vSphere or vCloud Director.

Minimal Install (default)

  1. Choose Minimal Install Base Environment
  2. Enable Ethernet card to automatically connect (at install or later using nmtui)
  3. Add RHEL 7 local DVD repository
  4. Install net-tools
  5. Install PERL
  6. Install open-vm-tools
  7. Add yum repository for VMware (for RHEL 7.2 and newer, be sure to read the 5/26/18 update below as this step is no longer necessary)
  8. Install the deployPkg Tools Plug-in (for RHEL 7.2 and newer, be sure to read the 5/26/18 update below as this step is no longer necessary)
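
Once the local DVD repository and the VMware yum repository are in place, the package work in the Minimal Install steps above boils down to roughly the following (a sketch; for RHEL 7.2 and newer, skip the deploypkg line per the 5/26/18 update below):

yum install -y net-tools perl open-vm-tools
yum install -y open-vm-tools-deploypkg
systemctl restart vmtoolsd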

Infrastructure Server

  1. Choose Infrastructure Server Base Environment and Guest Agents Add-On (open-vm-tools will automatically be installed)(for RHEL 8.0 and newer, open-vm-tools is automatically installed with the default Base Environment of Server with GUI)
  2. Enable Ethernet card to automatically connect (at install or later using nmtui)
  3. Add yum repository for VMware (for RHEL 7.2 and newer, be sure to read the 5/26/18 update below as this step is no longer necessary)
  4. Install the deployPkg Tools Plug-in (for RHEL 7.2 and newer, be sure to read the 5/26/18 update below as this step is no longer necessary)

Clearly the Minimal Install route has more steps, while the Infrastructure Server route has fewer steps and is quicker. Regardless of Base Environment type, VMware does not recommend the installation of VMware Tools.

I’ve linked several resources throughout this article. Just about all of the information was available; it was merely a matter of finding and reading the relevant documentation, which isn’t always in one place. The only dot I had to connect on my own, which I didn’t see mentioned anywhere, was the lack of PERL needed by the deployPkg Tools Plug-in from VMware, as well as by the installation of VMware Tools on RHEL 7 (which isn’t recommended by VMware).

Update 8/22/15: vCloud Director guest customization is also problematic with CentOS 7, but with one additional hang-up. I’ve found several references on the internet with the workaround, including one I’ll link here from my good friend Bob Plankers: /etc/redhat-release must read Red Hat Enterprise Linux Server release 7.0 (Maipo).

Update 9/11/15: Brian Graf authored a nice piece yesterday titled Open-VM-Tools (OVT): The Future of VMware Tools for Linux which anyone who wound up here should find interesting.

Update 5/26/18: As noted by daVikes in the comment section below (thank you daVikes!), relevant updates have been made as of RHEL 7.2 which streamlines templates and guest customization even further. In short, deployPkg is included with open-vm-tools. Whether you choose to start with a minimal installation and install open-vm-tools afterwards, or you choose the Infrastructure Server and install Guest Agents (this installs open-vm-tools), deployPkg will be included with open-vm-tools. Do note that if choosing the Infrastructure Server method, downloading the two VMware Packaging Public Keys and creating the /etc/yum.repos.d/vmware-tools.repo yum repository is no longer necessary. I still make it a habit to follow the steps for Creating A Local DVD Repository For YUM.

Update 6/28/19: RHEL 8.0: open-vm-tools is automatically installed with the default Base Environment of Server with GUI

vCloud Director 5.6.4 Remote consoleproxy issues

June 12th, 2015

vCloud Director is a wonderful IaaS addition to any lab, development, or production environment. When it’s working properly, it is a very satisfying experience wielding the power of agility, consistency, and efficiency vCD provides. However, like many things tech with upstream and human dependencies, it can and does break. Particularly in lab or lesser maintained environments that don’t get all the care and feeding production environments benefit from. When it breaks, it’s not nearly as much fun.

This week I ran into what seemed like a convergence of issues with vCD 5.6.4 relating to the Remote Console functionality in conjunction with SSL certificates, various browser types, networking, and 32-bit Java. As is often the case, what I’m documenting here is really more for my future benefit, as there were a number of sparsely documented areas I covered which I won’t necessarily retain in memory for long. But as it goes with blogs and information sharing, sharing is caring.

The starting point was a functional vCD 5.6.4-2496071 environment on vSphere 5.5. Everything historically and to date was working normally, with the exception of the vCD console, which had recently stopped working in Firefox and Google Chrome browsers. Opening the console in either browser from seemingly any client workstation yielded the pop-out console window with toolbar buttons along the top, but there was no guest OS console painted in the main window area. It was blank. The status of the console would almost immediately change to Disconnected. I’ve dealt with permutations of this in the past and I verified all of the usual suspects: NTP, DNS, LDAP, storage capacity, 32-bit Java version, blocked browser plug-ins, etc. No dice here.

In Firefox, the vCD console status shows Disconnected while the Inspect Element console shows repeated failed attempts to connect to the consoleproxy address.

10:11:30.195 "10:11:30 AM [TRACE] mks-connection: Connecting to wss://172.16.21.151/902;cst-t3A6SwOSPRuUqIz18QAM1Wrz6jDGlWrrTlaxH8k6aYuBKilv/1mc7ap50x3sPiHiSJYoVhyjlaVuf6vKfvDPAlq2yukO7qzHdfUTsWvgiZISK56Q4r/4ZkD7xWBltn15s5AvTSSHKsVbByMshNd9ABjBBzJMcqrVa8M02psr2muBmfro4ZySvRqn/kKRgBZhhQEjg6uAHaqwvz7VSX3MhnR6MCWbfO4KhxhImpQVFYVkGJ7panbjxSlXrAjEUif7roGPRfhESBGLpiiGe8cjfjb7TzqtMGCcKPO7NBxhgqU=-R5RVy5hiyYhV3Y4j4GZWSL+AiRyf/GoW7TkaQg==--tp-B5:85:69:FF:C3:0A:39:36:77:F0:4F:7C:CA:5F:FE:B1:67:21:61:53--"1 debug.js:18:12

10:11:30.263 Firefox can't establish a connection to the server at wss://172.16.21.151/902;cst-t3A6SwOSPRuUqIz18QAM1Wrz6jDGlWrrTlaxH8k6aYuBKilv/1mc7ap50x3sPiHiSJYoVhyjlaVuf6vKfvDPAlq2yukO7qzHdfUTsWvgiZISK56Q4r/4ZkD7xWBltn15s5AvTSSHKsVbByMshNd9ABjBBzJMcqrVa8M02psr2muBmfro4ZySvRqn/kKRgBZhhQEjg6uAHaqwvz7VSX3MhnR6MCWbfO4KhxhImpQVFYVkGJ7panbjxSlXrAjEUif7roGPRfhESBGLpiiGe8cjfjb7TzqtMGCcKPO7NBxhgqU=-R5RVy5hiyYhV3Y4j4GZWSL+AiRyf/GoW7TkaQg==--tp-B5:85:69:FF:C3:0A:39:36:77:F0:4F:7C:CA:5F:FE:B1:67:21:61:53--.1 wmks.js:321:0

tail -f /opt/vmware/vcloud-director/logs/vcloud-container-debug.log |grep consoleproxy revealed:
2015-06-12 10:50:54,808 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x22c9c990 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61719]] |
2015-06-12 10:50:54,854 | DEBUG    | consoleproxy              | ReadOperation                  | IOException while reading data: java.io.IOException: Broken pipe |
2015-06-12 10:50:54,855 | DEBUG    | consoleproxy              | ChannelContext                 | Closing channel java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61719] |
2015-06-12 10:50:55,595 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0xd191a58 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61720]] |
2015-06-12 10:50:55,648 | DEBUG    | pool-consoleproxy-4-thread-289 | SSLHandshakeTask               | Exception during handshake: java.io.IOException: Broken pipe |
2015-06-12 10:50:56,949 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x3f0c025b [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61721]] |
2015-06-12 10:50:57,003 | DEBUG    | pool-consoleproxy-4-thread-301 | SSLHandshakeTask               | Exception during handshake: java.io.IOException: Broken pipe |
2015-06-12 10:50:59,902 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x1bda3590 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61723]] |
2015-06-12 10:50:59,959 | DEBUG    | pool-consoleproxy-4-thread-295 | SSLHandshakeTask               | Exception during handshake: java.io.IOException: Broken pipe |

In Google Chrome, the vCD console status shows Disconnected while the Inspect element console (F12) shows repeated failed attempts to connect to the consoleproxy address.

10:26:43 AM [TRACE] init: attempting ticket acquisition for vm vcdclient
10:26:44 AM [TRACE] plugin: Connecting vm
10:26:44 AM [TRACE] mks-connection: Connecting to wss://172.16.21.151/902;cst-f2eeAr8lNU6BTmeVelt9L8VKoe92kJJMxZCC2watauBV6/x…fmI8Xg==--tp-B5:85:69:FF:C3:0A:39:36:77:F0:4F:7C:CA:5F:FE:B1:67:21:61:53--
WebSocket connection to 'wss://172.16.21.151/902;cst-f2eeAr8lNU6BTmeVelt9L8VKoe92kJJMxZCC2watauBV6/x…fmI8Xg==--tp-B5:85:69:FF:C3:0A:39:36:77:F0:4F:7C:CA:5F:FE:B1:67:21:61:53--' failed: WebSocket opening handshake was canceled
10:26:46 AM [ERROR] mks-console: Error occurred: [object Event]
10:26:46 AM [TRACE] mks-connection: Disconnected [object Object]

tail -f /opt/vmware/vcloud-director/logs/vcloud-container-debug.log |grep consoleproxy revealed:
2015-06-12 10:48:35,760 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x55efffb3 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61675]] |
2015-06-12 10:48:39,754 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x3f123a13 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61677]] |
2015-06-12 10:48:42,658 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x7793f0a [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61679]] |

If you have acute attention to detail, you’ll notice the time stamps from the cell logs don’t correlate closely with the time stamps from the browser Inspect element console. Normally this would indicate time skew or an NTP issue, which does cause major headaches with functionality, but that’s by design here: my various screen captures and log examples aren’t from the exact same point in time. So it’s safe to move on.

Looking at the most recent vCloud Director For Service Providers installation documentation, I noticed a few things.

  1. Although I did upgrade vCD a few months ago to the most current build at the time, there’s a newer build available: 5.6.4-2619597
  2. Through repetition, I’ve gotten quite comfortable with the use of Java keytool and its parameters. However, additional parameters have been added to the recommended use of the tool. Noted going forward.
  3. VMware self signed certificates expire within three (3) months. Self signed certificates were in use in this environment. I haven’t noticed this behavior in the past, nor has it presented itself as an issue, but after a quick review, the self signed certificates generated a few months ago with the vCD upgrade had indeed expired recently.

At this point I was quite sure the expired certificates were the problem, although it seemed strange that the vCD portal was still usable while only the consoleproxy was giving me fits. So I went through the two-minute process of regenerating and installing new self signed certificates for both http and the consoleproxy. The vCD installation guide more or less outlines this process as it is the same for a new cell installation as it is for replacing certificates. VMware also has a few KB articles which address it as well (1026309, 2014237). For those going through this process, you should really note the keytool parameter changes/additions in the vCD installation guide.
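
As a side note, a quick way to check whether the certificate a cell is presenting has expired, without digging into the keystore with keytool, is an openssl query against each address (a sketch; substitute the http and consoleproxy addresses of your own cells):

echo | openssl s_client -connect 172.16.21.151:443 2>/dev/null | openssl x509 -noout -dates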

While I was at it, I also built a new replacement cell on a newer RHEL release (6.5), performed the database upgrades, extended the self signed certificate default expiration from three months to three years, and retired the older RHEL 6.4 cell. Fresh new cell. New certs. Ready to rock and roll.

Not so much. I still had the same problem with the console showing Disconnected. However, the Inspect element console in each browser was now indicating a new error message, which I don’t have handy at the moment, but basically it couldn’t talk to the consoleproxy address at all. I tried to ping the address and it was dead from a remote station’s point of view, although it was quite alive at the RHEL 6.5 command prompt. Peters Virtual Notes had this one covered, thankfully. According to https://access.redhat.com/site/solutions/53031, a small change is needed in the file /etc/sysctl.conf.

net.ipv4.conf.default.rp_filter = 1

must be changed to

net.ipv4.conf.default.rp_filter = 2
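
The change can be applied without a reboot by reloading the sysctl settings (a quick sketch; the -w form simply sets the same value immediately):

sysctl -w net.ipv4.conf.default.rp_filter=2
sysctl -p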

Success. Surely consoleproxy will work now. Unfortunately, it still did not want to work. Back to the java.io.IOException: Broken pipe SSL handshake issues, although the new certificate for vCD’s http address was registered and working fine (remembering again that each vCD cell has two IP addresses, one for http access and one for consoleproxy functionality – each requires a trusted SSL certificate or an exception).

The last piece of the puzzle was something I have never had to do in the past and that is to manually add an exception (Firefox) for the consoleproxy self signed certificate and install it (Google Chrome). For each browser, this is a slightly different process.

For Firefox, browse to the https:// address of the consoleproxy (don’t worry, nothing visible should be displayed when it does not receive a properly formatted request). The key here is to add an exception for the certificate associated specifically with the consoleproxy address.

Once this certificate exception is added, the consoleproxy certificate is essentially trusted and so is the IP address for the host and the console service it is supposed to provide.

To resolve the consoleproxy issue for Google Chrome, the process is slightly different. Ironically, I found it easiest to use Internet Explorer for this. Open Internet Explorer, and when you do so, be sure to right-click on the IE shortcut and choose Run as administrator (this is key in a moment). Browse to the https:// address of the consoleproxy (again, nothing visible should be displayed when it does not receive a properly formatted request). Continue to this website and then use the Certificate Error status message in the address bar to view the certificate being presented. The self signed consoleproxy certificate needs to be installed. Start this task using the Install Certificate button. This button is typically missing when launching IE normally, but it is revealed when launching IE with Run as administrator rights.

Browse for the location to install the self signed certificate. Tick the box Show physical stores. Drill down under Third-Party Root Certification Authorities. Install the certificate in the Local Computer folder. This folder is typically missing when launching IE normally but it is revealed when launching IE with Run as administrator rights.

Once this certificate is installed, the consoleproxy certificate is essentially trusted in Google Chrome. Just as with the Firefox remedy, the Java SSL handshake with the consoleproxy succeeds and the vCD remote console is rendered.

Note that for Google Chrome, there is another quick method to temporarily gain functional console access without installing the consoleproxy certificate via Internet Explorer.

  1. Open a Google Chrome browser and browse to the https:// address of the consoleproxy.
  2. When prompted with Your connection is not private, click the Advanced link.
  3. Click the Proceed to <console proxy IP address> (unsafe) link.
  4. Nothing will visibly happen except Google Chrome will now temporarily trust the consoleproxy certificate and the vCD remote console will function for as long as a Google Chrome tab remains open.
  5. Without closing Google Chrome, now continue into the vCD organization portal and resume business as usual with functional remote consoles.

On the topic of Google Chrome, internet searches will quickly reveal vCloud Director console issues with Google Chrome and NPAPI. VMware discusses this in the vCloud Director 5.5.2.1 Release Notes:

Attempts to open a virtual machine console on Google Chrome fail
When you attempt to open a virtual machine console on a Google Chrome browser, the operation fails. This occurs due to the deprecation of NPAPI in Google Chrome. vCloud Director 5.5.2.1 uses WebMKS instead of the VMware Remote Console to open virtual machine consoles in Google Chrome, which resolves this issue.

I was working with vCD 5.6.x, which leverages WebMKS in lieu of NPAPI, so the NPAPI issue was not relevant in this case, but if you are running into an NPAPI issue, Google offers How to temporarily enable NPAPI plugins here.

Update 8/8/15: Josiah points out a useful VMware forum thread which may help resolve this issue further when FQDNs are defined in DNS for remote console proxies or where multiple vCloud cells are installed in a cluster behind a front end load balancer, NAT/reverse proxy, or firewall.

Update 7/17/20: The VMware Cloud Director virtual appliance with embedded PostgreSQL database by default uses eth0 for the console proxy address along with port 8443, i.e. https://100.88.144.13:8443. This is the URL that must be trusted in order to open a VMware Cloud Director remote console without the dreaded Disconnected message. Find the address and port combination to trust in a Disconnected console browser window by pressing SHIFT + CTRL + J or F12, which opens the developer tools window. This information was previously published in VMware KB 2058496 Cannot connect to vCloud Director WebMKS console with Mozilla Firefox or Google Chrome, which has been taken down, but the cached version of the page still remains.

vCloud Director, RHEL 6.3, and Windows Server 2012 NFS

July 16th, 2013

One of the new features introduced in vCloud Director 5.1.2 is cell server support on the RHEL 6 Update 3 platform (you should also know that cell server support on RHEL 5 Update 7 was silently removed in the recent past – verify the version of RHEL in your environment using cat /etc/issue).  When migrating your cell server(s) to RHEL 6.3, particularly from 5.x, you may run into a few issues.

First is the lack of the libXdmcp package (required for vCD installation) which was once included by default in RHEL 5 versions.  You can verify this at the RHEL 6 CLI with the following command line:

yum search libXdmcp

or

yum list |grep libXdmcp

Not to worry, the package is easily installable by inserting/mounting the RHEL 6 DVD or .iso, copying the appropriate libXdmcp file to /tmp/ and running either of the following commands:

yum install /tmp/libXdmcp-1.0.3-1.el6.x86_64.rpm

or

rpm -i /tmp/libXdmcp-1.0.3-1.el6.x86_64.rpm

Update 6/22/18: It is really not necessary to point to a package file location or a specific version (this overly complicates the task) when a YUM repository is created. Also… the RHEL 7 Infrastructure Server base environment excludes the following packages required by vCloud Director 9.1 for Service Providers:

  • libICE
  • libSM
  • libXdmcp
  • libXext
  • libXi
  • libXt
  • libXtst
  • redhat-lsb

If the YUM DVD repository has been created and the RHEL DVD is mounted, install the required packages with the following one liner:

yum install -y libICE libSM libXdmcp libXext libXi libXt libXtst redhat-lsb
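
To confirm all eight packages actually landed, a quick rpm query does the trick (a simple sanity check):

rpm -q libICE libSM libXdmcp libXext libXi libXt libXtst redhat-lsb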

Next up is the use of Windows Server 2012 (or Windows 8) as an NFS server for vCloud Transfer Server Storage in conjunction with the newly supported RHEL 6.3.  Creating the path and directory for the Transfer Server Storage is performed during the initial deployment of vCloud Director using the command mkdir -p /opt/vmware/vcloud-director/data/transfer. When mounting the NFS export for Transfer Server Storage (either manually or via /etc/fstab: f.q.d.n:/vcdtransfer /opt/vmware/vcloud-director/data/transfer nfs rw 0 0), the mount command fails with the error message mount.nfs: mount system call failed. I ran across this in one particular environment and my search turned up Red Hat Bugzilla – Bug 796352.  In the bug documentation, the problem is identified as follows:

On Red Hat Enterprise Linux 6, mounting an NFS export from a Windows 2012 server failed due to the fact that the Windows server contains support for the minor version 1 (v4.1) of the NFS version 4 protocol only, along with support for versions 2 and 3. The lack of the minor version 0 (v4.0) support caused Red Hat Enterprise Linux 6 clients to fail instead of rolling back to version 3 as expected. This update fixes this bug and mounting an NFS export works as expected.

Further down in the article, Steve Dickson outlines the workarounds:

mount -o v3 # to use v3

or

Set the 'Nfsvers=3' variable in the "[ Server "Server_Name" ]"
section of the /etc/nfsmount.conf file
An example will be:
[ Server "nfsserver.lab.local" ]
Nfsvers=3

The first option works well at the command line but doesn’t lend itself to /etc/fstab syntax, so I opted for the second option, which is to establish a host name and NFS version in the /etc/nfsmount.conf file.  With this method, the mount is attempted as called for in /etc/fstab and, by reading /etc/nfsmount.conf, the mount operation succeeds as desired instead of failing at negotiation.
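
Once the mount succeeds, the negotiated NFS version can be confirmed with nfsstat, which ships with nfs-utils (a quick check; look for vers=3 in the output):

nfsstat -m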

There is a third option, which would be to avoid the use of /etc/fstab and /etc/nfsmount.conf altogether and instead establish a mount -o v3 command in /etc/rc.local, which is executed at the end of each RHEL boot process.  Although this may work, it feels a little sloppy in my opinion.

Lastly, one could install the kernel update (Red Hat reports the bug as fixed in kernel-2.6.32-280.el6). The kernel package update is located here.

Update 5/27/18: See also http://www.boche.net/blog/2012/07/03/creating-vcloud-director-transfer-server-storage-on-nfs/ for other new requirements when trying to mount NFS exports with RHEL 7.5.

Adding an IP Alias to the vCloud Director Cell Server

July 5th, 2012

Hola! Yo Soy Dora!  I hope you are having a great week and for those in the US, I hope your 4th of July holiday was fun and relaxing.

Here’s another “how to” for those not really familiar with Linux when standing up a vCloud Director infrastructure.  If you’re following the documentation, you’ll notice on page 13 of the vCloud Director Installation and Configuration Guide that two NICs or an IP alias are required to support two separate SSL connections on each vCloud Director cell server.  One IP is used for the vCloud Director HTTP service and the other is used for the console proxy service.  I’ve deployed both methods, multiple NICs and IP aliasing, for the VCD cell server.  Neither method has a distinct advantage over the other in terms of performance or other important metrics.  Where both the http and console proxy addresses are on the same subnet, I prefer to use the IP alias method to keep things a little cleaner, but using two NICs makes it more obvious how the VCD cell server is built and configured from a network standpoint.

To wrap some visualization around the two options, if you’re not familiar with Linux IP aliasing, you’d probably deploy each VCD cell server in a multihomed configuration with a minimum of two NICs and the two IP addresses required for VCD, one IP established for each of the required SSL connections.

[Diagram: VCD cell server with two NICs, one IP address per SSL connection]

The IP Alias method involves just a single NIC with two IP addresses on the same subnet sharing a common mask and default gateway for the two required SSL connections.  Don’t forget that with either method, without routed NFS on the network, each VCD cell server would likely have one additional NIC dedicated to an NFS network for vCloud Director Transfer Storage assuming the clustered cell configuration recommended for production and highly available cloud infrastructures.

[Diagram: VCD cell server with a single NIC carrying two IP addresses via an IP alias]

I think everyone knows how to install and configure a multihomed server, so this writing will focus on adding an IP alias to a NIC in RHEL 5 Update 7, or at least it will focus on how I learned to do it via the command line.  I’ll also show a second method to accomplish adding an IP alias through the GUI (X is enabled by default in RHEL 5.7).

Assuming RHEL 5 Update 7 is already installed with a NIC having an IP address 192.168.0.10, adding an additional IP address via an alias takes just a few steps via CLI.

  1. Use nano -w /etc/sysconfig/network-scripts/ifcfg-eth0 to edit the network configuration for eth0.  If it exists, remove the line GATEWAY=192.168.0.1 or comment it out by placing a hash (#) character at the beginning of the line like so: # GATEWAY=192.168.0.1 and then save and exit nano with CTRL+X.
  2. Make a copy of ifcfg-eth0 to use for the IP alias.  Do this with the command cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0:0
  3. Use nano -w /etc/sysconfig/network-scripts/ifcfg-eth0:0 to edit the network configuration for eth0:0.  Change DEVICE=eth0 to read DEVICE=eth0:0.  Change IPADDR=192.168.0.10 to read IPADDR=192.168.0.11.  Change ONBOOT=yes to read ONPARENT=yes and then save and exit nano with CTRL+X.
  4. Use nano -w /etc/sysconfig/network to add a commonly shared default gateway for eth0 and eth0:0.  Add the line GATEWAY=192.168.0.1 and then save and exit nano with CTRL+X.
  5. Restart networking with service network restart
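
For reference, after step 3 the resulting /etc/sysconfig/network-scripts/ifcfg-eth0:0 ends up looking something like the following (a sketch showing only the relevant lines; the netmask is assumed here and your addresses will differ):

DEVICE=eth0:0
IPADDR=192.168.0.11
NETMASK=255.255.255.0
ONPARENT=yes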

At this point, the Linux platform has a single NIC with two IP addresses and the installation of vCloud Director on this cell can begin.

A second method to accomplish the above would be through the GUI by running the Networking application in RHEL 5 Update 7.

Seen here, eth0 is already configured.  Click the New button to add an IP alias:

[Screenshot: Network Configuration tool showing eth0 already configured]

Select Ethernet connection, choose the existing NIC for eth0, assign the IP address, Subnet Mask, and Default Gateway for the alias, and then lastly click on the Activate button with eth0:1 highlighted.

[Screenshot: adding the eth0:1 alias with IP address, subnet mask, and default gateway, then activating it]

Once again, at this point, the Linux platform has a single NIC with two IP addresses and the installation of vCloud Director on this cell can begin.  Highlighted in yellow below is the IP alias or second IP address bound to eth0:

[Screenshot: eth0 with the second IP address (IP alias) highlighted]

I’ve found that the GUI approach obsoletes steps 1 and 4 from the CLI approach above.  Basically it strips out the steps where the Default Gateway configuration is moved from the individual ifcfg-eth0 network startup scripts to the centralized /etc/sysconfig/network location.  It further affirms the GATEWAY= entry may remain in each of the individual ifcfg-eth0 network startup scripts.  In the end, both methods work for a vCloud Director cell server; however, I imagine adding an additional NIC hard-wired to an access port not on the 192.168.0.0 subnet will have issues with a GATEWAY=192.168.0.1 in /etc/sysconfig/network.

Creating vCloud Director Transfer Server Storage on NFS

July 3rd, 2012

Six months ago I wrote an article about Expanding vCloud Director Transfer Storage on a local block storage device.  Today I take a step back and document the process of instantiating vCloud Director Transfer Storage on an NFS export, which is where all scalable VCD implementations in production should reside.  The process is not extremely difficult, but it can be difficult to remember the fine details if Linux is not your native OS.  Basically, run through the following steps on each VCD cell server in the server group before installing vCloud Director.  I’ll be performing these steps on a RHEL 5 Update 7 distribution.

First create the directory structure which the NFS export will be mounted to (the -p argument creates the entire path of directories as necessary):

mkdir -p /opt/vmware/vcloud-director/data/transfer

Update 5/27/18: I happened to notice with RHEL 7.5 (could impact earlier builds as well) that mounting NFS exports now requires nfs-utils. Install this from the local DVD repository for YUM using the command yum install nfs-utils.

As a verification that NFS and networking are configured properly, use the showmount -e command to list the exports from the NFS server:

[root@vcdcell1 transfer]# showmount -e tsfiles.techsol.local
Export list for tsfiles.techsol.local:
/isos (everyone)
/oracle (everyone)
/unix (everyone)
/vcdtransfer (everyone)
/vcdtransfer2 (everyone)
[root@vcdcell1 transfer]#

Next, mount the NFS export manually:

mount nfshost.fqdn.orip:/nfs_export_name /opt/vmware/vcloud-director/data/transfer

Finally, let’s make sure the NFS export auto mounts each time the cell is rebooted.  This is done by editing /etc/fstab

nano -w /etc/fstab

Add the following line to /etc/fstab:

nfshost.fqdn.orip:/nfs_export_name      /opt/vmware/vcloud-director/data/transfer       nfs     rw      0 0

Exit nano using CTRL + X. Save /etc/fstab when prompted.
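
Before moving on, the new /etc/fstab entry can be validated without a reboot (a quick check; unmount first if the export is still mounted from the manual step above):

umount /opt/vmware/vcloud-director/data/transfer
mount -a
df -h /opt/vmware/vcloud-director/data/transfer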

Proceed with the vCloud Director cell installation.  If using the mount path in the example above, it is safe and convenient to press Enter through the default prompt relating to the Transfer Server Storage installation path.

I’ll close by pointing out that although the Transfer Server Storage is used as a temporary holding tank for vApp and catalog media imports and exports, critical cell data is also stored in this repository.  If the Transfer Server Storage area is unavailable (i.e. issues with NFS or the network), the VCD cell will not function properly, yielding a range of symptoms such as not being able to authenticate at the provider or organization portals.

StarWind Releases iSCSI SAN Software Enhanced by VM Backup Technology

January 17th, 2012

Press Release:

New StarWind iSCSI SAN v5.8 and Hyper Backup Plug-in are a New Level of Data Protection

Burlington, MA – January 13, 2012 – StarWind Software Inc., an innovative provider of SAN software for iSCSI storage and VM Backup technology, today announced the release of the new StarWind iSCSI SAN v5.8 and Hyper-V Backup Plug-in. The iSCSI SAN software is enhanced by the powerful VM Backup technology that is included as a plug-in.

The Backup plug-in is built specifically for Hyper-V-based environments to provide fast backup and restore for Hyper-V virtual machines. The backup solution delivered by StarWind performs all operations at the Hyper-V host level, thus it requires no backup agents to be installed on virtual machines (Agentless Architecture).

Hyper-V Backup Plug-in makes fast backups and allows quick, reliable restore of both virtual machines and individual files. It utilizes advanced technologies for maximum disk space savings (Global Deduplication). This backup tool is integrated with the StarWind Centralized Management Console, which enables managing backup and storage from a single window.

Additionally, a new version of the HA plug-in is presented in StarWind iSCSI SAN v5.8 that allows the use of raw basic images to create HA targets. A new replication engine based on StarWind’s own technology instead of MS iSCSI transport delivers higher performance and reliability. This new engine permits the use of multiple network interfaces for synchronization and heartbeat.

To simplify the replacement of equipment and recovery from fatal failures, StarWind Software has implemented the ability to change the partner node to any other StarWind server without any downtime and on the fly. The synchronization engine is improved, and this version allows both nodes to sync automatically even in the case of a full blackout of both servers.

“With the release of StarWind iSCSI SAN v5.8 our company is happy to provide our customers with highly available storage and fast backup software developed by the same vendor,” said Artem Berman, Chief Executive Officer of StarWind Software. “Now small and medium-sized companies have an opportunity to achieve higher performance and absolute data protection.”

About StarWind Software Inc.
StarWind Software is a global leader in storage management and SAN software for small and midsize companies. StarWind’s flagship product is SAN software that turns any industry-standard Windows Server into a fault-tolerant, fail-safe iSCSI SAN. StarWind iSCSI SAN is qualified for use with VMware, Hyper-V, XenServer, and Linux and Unix environments. StarWind Software focuses on providing small and midsize companies with affordable, highly available storage technology which previously was only available in high-end storage hardware. Advanced enterprise-class features in StarWind include Automated HA Storage Node Failover and Failback (High Availability), Replication across a WAN, CDP and Snapshots, Thin Provisioning and Virtual Tape management.

Since 2003, StarWind has pioneered the iSCSI SAN software industry and is the solution of choice for over 30,000 customers worldwide in more than 100 countries, from small and midsize companies to governments and Fortune 1000 companies.

For more information on StarWind Software Inc., visit: www.starwindsoftware.com