Posts Tagged ‘Linux’

vCloud Director 5.6.4 Remote consoleproxy issues

June 12th, 2015

vCloud Director is a wonderful IaaS addition to any lab, development, or production environment. When it’s working properly, it is a very satisfying experience wielding the power of agility, consistency, and efficiency vCD provides. However, like many things in tech with upstream and human dependencies, it can and does break, particularly in lab or less maintained environments that don’t get all the care and feeding production environments benefit from. When it breaks, it’s not nearly as much fun.

This week I ran into what seemed like a convergence of issues with vCD 5.6.4 relating to the Remote Console functionality in conjunction with SSL certificates, various browser types, networking, and 32-bit Java. As is often the case, what I’m documenting here is really more for my own future benefit; I covered a number of sparsely documented areas that I won’t necessarily retain in memory for long. But as it goes with blogs and information sharing, sharing is caring.

The starting point was a functional vCD 5.6.4-2496071 environment on vSphere 5.5. Everything had historically been working normally with the exception of the vCD console, which had recently stopped working in Firefox and Google Chrome browsers. Opening the console in either browser from seemingly any client workstation yielded the pop out console window with toolbar buttons along the top, but no guest OS console was painted in the main window area. It was blank. The status of the console would almost immediately change to Disconnected. I’ve dealt with permutations of this in the past, so I verified all of the usual suspects: NTP, DNS, LDAP, storage capacity, 32-bit Java version, blocked browser plug-ins, etc. No dice here.
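For what it’s worth, most of those checks take only a minute or two on the cell itself. The commands below are a rough sketch; the host names are placeholders from my lab, so adjust accordingly:

# Confirm NTP is synchronized on the cell
ntpq -p

# Confirm DNS resolves the cell and vCenter names
nslookup vcd.lab.local
nslookup vcenter.lab.local

# Confirm the cell isn't out of disk space, including the transfer storage mount
df -h

# On the client workstation, confirm the 32-bit Java version in use
java -version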

In Firefox, the vCD console status shows Disconnected while the Inspect Element console shows repeated failed attempts to connect to the consoleproxy address.

10:11:30.195 "10:11:30 AM [TRACE] mks-connection: Connecting to wss://172.16.21.151/902;cst-t3A6SwOSPRuUqIz18QAM1Wrz6jDGlWrrTlaxH8k6aYuBKilv/1mc7ap50x3sPiHiSJYoVhyjlaVuf6vKfvDPAlq2yukO7qzHdfUTsWvgiZISK56Q4r/4ZkD7xWBltn15s5AvTSSHKsVbByMshNd9ABjBBzJMcqrVa8M02psr2muBmfro4ZySvRqn/kKRgBZhhQEjg6uAHaqwvz7VSX3MhnR6MCWbfO4KhxhImpQVFYVkGJ7panbjxSlXrAjEUif7roGPRfhESBGLpiiGe8cjfjb7TzqtMGCcKPO7NBxhgqU=-R5RVy5hiyYhV3Y4j4GZWSL+AiRyf/GoW7TkaQg==--tp-B5:85:69:FF:C3:0A:39:36:77:F0:4F:7C:CA:5F:FE:B1:67:21:61:53--"1 debug.js:18:12

10:11:30.263 Firefox can't establish a connection to the server at wss://172.16.21.151/902;cst-t3A6SwOSPRuUqIz18QAM1Wrz6jDGlWrrTlaxH8k6aYuBKilv/1mc7ap50x3sPiHiSJYoVhyjlaVuf6vKfvDPAlq2yukO7qzHdfUTsWvgiZISK56Q4r/4ZkD7xWBltn15s5AvTSSHKsVbByMshNd9ABjBBzJMcqrVa8M02psr2muBmfro4ZySvRqn/kKRgBZhhQEjg6uAHaqwvz7VSX3MhnR6MCWbfO4KhxhImpQVFYVkGJ7panbjxSlXrAjEUif7roGPRfhESBGLpiiGe8cjfjb7TzqtMGCcKPO7NBxhgqU=-R5RVy5hiyYhV3Y4j4GZWSL+AiRyf/GoW7TkaQg==--tp-B5:85:69:FF:C3:0A:39:36:77:F0:4F:7C:CA:5F:FE:B1:67:21:61:53--.1 wmks.js:321:0

tail -f /opt/vmware/vcloud-director/logs/vcloud-container-debug.log |grep consoleproxy revealed:
2015-06-12 10:50:54,808 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x22c9c990 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61719]] |
2015-06-12 10:50:54,854 | DEBUG    | consoleproxy              | ReadOperation                  | IOException while reading data: java.io.IOException: Broken pipe |
2015-06-12 10:50:54,855 | DEBUG    | consoleproxy              | ChannelContext                 | Closing channel java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61719] |
2015-06-12 10:50:55,595 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0xd191a58 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61720]] |
2015-06-12 10:50:55,648 | DEBUG    | pool-consoleproxy-4-thread-289 | SSLHandshakeTask               | Exception during handshake: java.io.IOException: Broken pipe |
2015-06-12 10:50:56,949 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x3f0c025b [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61721]] |
2015-06-12 10:50:57,003 | DEBUG    | pool-consoleproxy-4-thread-301 | SSLHandshakeTask               | Exception during handshake: java.io.IOException: Broken pipe |
2015-06-12 10:50:59,902 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x1bda3590 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61723]] |
2015-06-12 10:50:59,959 | DEBUG    | pool-consoleproxy-4-thread-295 | SSLHandshakeTask               | Exception during handshake: java.io.IOException: Broken pipe |

In Google Chrome, the vCD console status shows Disconnected while the Inspect element console (F12) shows repeated failed attempts to connect to the consoleproxy address.

10:26:43 AM [TRACE] init: attempting ticket acquisition for vm vcdclient
10:26:44 AM [TRACE] plugin: Connecting vm
10:26:44 AM [TRACE] mks-connection: Connecting to wss://172.16.21.151/902;cst-f2eeAr8lNU6BTmeVelt9L8VKoe92kJJMxZCC2watauBV6/x…fmI8Xg==--tp-B5:85:69:FF:C3:0A:39:36:77:F0:4F:7C:CA:5F:FE:B1:67:21:61:53--
WebSocket connection to 'wss://172.16.21.151/902;cst-f2eeAr8lNU6BTmeVelt9L8VKoe92kJJMxZCC2watauBV6/x…fmI8Xg==--tp-B5:85:69:FF:C3:0A:39:36:77:F0:4F:7C:CA:5F:FE:B1:67:21:61:53--' failed: WebSocket opening handshake was canceled
10:26:46 AM [ERROR] mks-console: Error occurred: [object Event]
10:26:46 AM [TRACE] mks-connection: Disconnected [object Object]

tail -f /opt/vmware/vcloud-director/logs/vcloud-container-debug.log |grep consoleproxy revealed:
2015-06-12 10:48:35,760 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x55efffb3 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61675]] |
2015-06-12 10:48:39,754 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x3f123a13 [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61677]] |
2015-06-12 10:48:42,658 | DEBUG    | consoleproxy              | SimpleProxyConnectionHandler   | Initiated handling for channel 0x7793f0a [java.nio.channels.SocketChannel[connected local=/172.16.21.151:443 remote=/172.31.101.6:61679]] |

If you have an acute attention to detail, you’ll notice the time stamps from the cell logs don’t correlate closely with the time stamps from the browser Inspect Element console. Normally that would indicate time skew or an NTP issue, which does cause major headaches with functionality, but that’s not the case here; my various screen captures and log examples simply weren’t taken at the exact same point in time. So it’s safe to move on.

Looking at the most recent vCloud Director For Service Providers installation documentation, I noticed a few things.

  1. Although I did upgrade vCD a few months ago to the most current build at the time, there’s a newer build available: 5.6.4-2619597
  2. Through repetition, I’ve gotten quite comfortable with the use of Java keytool and its parameters. However, additional parameters have been added to the recommended use of the tool. Noted going forward.
  3. VMware self signed certificates expire within three (3) months. Self signed certificates were in use in this environment. I hadn’t noticed this behavior in the past, nor had it presented itself as an issue, but after a quick review, the self signed certificates generated a few months ago with the vCD upgrade had indeed expired recently (a quick keytool check that surfaces the validity dates is sketched just below this list).
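The check itself is a one-liner against the cell’s keystore. The keystore path and password below are placeholders I’ve chosen for illustration, so substitute your own:

keytool -list -v -keystore /opt/vmware/vcloud-director/certificates.ks -storetype JCEKS -storepass mypassword | grep -i valid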

At this point I was quite sure the expired certificates were the problem, although it seemed strange the vCD portal was still usable while only the consoleproxy was giving me fits.  So I went through the two minute process of regenerating and installing new self signed certificates for both http and the consoleproxy.  The vCD installation guide more or less outlines this process as it is the same for a new cell installation as it is for replacing certificates.  VMware also has a few KB articles which address it (1026309, 2014237).  For those going through this process, you should really note the keytool parameter changes/additions in the vCD installation guide.
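For future me, the regeneration boils down to a couple of keytool commands against a single JCEKS keystore, one alias for http and one for consoleproxy. The sketch below is my own shorthand rather than a copy of the guide; the keystore path, password, and the 1095 day validity are values I’ve plugged in for illustration, so defer to the current installation guide for the exact recommended parameters.

keytool -keystore /opt/vmware/vcloud-director/certificates.ks -storetype JCEKS -storepass mypassword -genkey -keyalg RSA -keysize 2048 -validity 1095 -alias http
keytool -keystore /opt/vmware/vcloud-director/certificates.ks -storetype JCEKS -storepass mypassword -genkey -keyalg RSA -keysize 2048 -validity 1095 -alias consoleproxy

Once the keystore is rebuilt, the cell has to be pointed at it again (the installation guide covers that step) and the vmware-vcd service restarted.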

While I was at it, I also built a new replacement cell on a newer version of RHEL 6.5, performed the database upgrades, extended the self signed certificate default expiration from three months to three years, and I retired the older RHEL 6.4 cell. Fresh new cell. New certs. Ready to rock and roll.

Not so much. I still had the same problem with the console showing Disconnected. However, the Inspect Element console in each browser was now indicating a new error message, which I don’t have handy at the moment, but basically the browser couldn’t talk to the consoleproxy address at all. I tried to ping the address and it was dead from a remote station point of view, although it was quite alive at a RHEL 6.5 command prompt. Peters Virtual Notes had this one covered, thankfully. According to https://access.redhat.com/site/solutions/53031, a small change is needed in the file /etc/sysctl.conf.

net.ipv4.conf.default.rp_filter = 1

must be changed to

net.ipv4.conf.default.rp_filter = 2
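The edited file takes effect at the next boot; to apply the change to the running kernel without waiting for a reboot, reload sysctl (both commands below are standard sysctl usage):

# Re-read /etc/sysctl.conf
sysctl -p

# Or set the value directly for the running kernel
sysctl -w net.ipv4.conf.default.rp_filter=2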

Success. Surely consoleproxy would work now. Unfortunately, it still did not. Back to the java.io.IOException: Broken pipe SSL handshake issues, although the new certificate for vCD’s http address was registered and working fine (remembering again that each vCD cell has two IP addresses, one for http access and one for consoleproxy functionality – each requires a trusted SSL certificate or an exception).

The last piece of the puzzle was something I have never had to do in the past and that is to manually add an exception (Firefox) for the consoleproxy self signed certificate and install it (Google Chrome). For each browser, this is a slightly different process.

For Firefox, browse to the https:// address of the consoleproxy. Don’t worry, nothing visible should be displayed since the consoleproxy doesn’t render anything when it does not receive a properly formatted request. The key here is to add an exception for the certificate associated specifically with the consoleproxy address.

Once this certificate exception is added, the consoleproxy certificate is essentially trusted and so is the IP address for the host and the console service it is supposed to provide.

To resolve the consoleproxy issue for Google Chrome, the process is slightly different. Ironically, I found it easiest to use Internet Explorer for this. Open Internet Explorer, and when you do so, be sure to right click on the IE shortcut and Run as administrator (this is key in a moment). Browse to the https:// address of the consoleproxy. Again, nothing visible should be displayed when it does not receive a properly formatted request. Continue to this website and then use the Certificate Error status message in the address bar to view the certificate being presented. The self signed consoleproxy certificate needs to be installed. Start this task using the Install Certificate button. This button is typically missing when launching IE normally but it is revealed when launching IE with Run as administrator rights.

Browse for the location to install the self signed certificate. Tick the box Show physical stores. Drill down under Third-Party Root Certification Authorities. Install the certificate in the Local Computer folder. This folder is typically missing when launching IE normally but it is revealed when launching IE with Run as administrator rights.

Once this certificate is installed, the consoleproxy certificate is essentially trusted in Google Chrome. Just as with the Firefox remedy, the Java SSL handshake with the consoleproxy succeeds and the vCD remote console is rendered.

Note that for Google Chrome, there is another quick method to temporarily gain functional console access without installing the consoleproxy certificate via Internet Explorer.

  1. Open a Google Chrome browser and browse to the https:// address of the consoleproxy.
  2. When prompted with Your connection is not private, click the Advanced link.
  3. Click the Proceed to (unsafe) link.
  4. Nothing will visibly happen except Google Chrome will now temporarily trust the consoleproxy certificate and the vCD remote console will function for as long as a Google Chrome tab remains open.
  5. Without closing Google Chrome, now continue into the vCD organization portal and resume business as usual with functional remote consoles.

On the topic of Google Chrome, internet searches will quickly reveal vCloud Director console issues with Google Chrome and NPAPI. VMware discusses this in the vCloud Director 5.5.2.1 Release Notes:

Attempts to open a virtual machine console on Google Chrome fail
When you attempt to open a virtual machine console on a Google Chrome browser, the operation fails. This occurs due to the deprecation of NPAPI in Google Chrome. vCloud Director 5.5.2.1 uses WebMKS instead of the VMware Remote Console to open virtual machine consoles in Google Chrome, which resolves this issue.

I was working with vCD 5.6.x, which leverages WebMKS in lieu of NPAPI, so the NPAPI issue was not relevant in this case. But if you are running into an NPAPI issue, Google offers How to temporarily enable NPAPI plugins here.

Update 8/8/15: Josiah points out a useful VMware forum thread which may help resolve this issue further when FQDNs are defined in DNS for remote console proxies or where multiple vCloud cells are installed in a cluster behind a front end load balancer, NAT/reverse proxy, or firewall.

vCloud Director, RHEL 6.3, and Windows Server 2012 NFS

July 16th, 2013

One of the new features introduced in vCloud Director 5.1.2 is cell server support on the RHEL 6 Update 3 platform (you should also know that cell server support on RHEL 5 Update 7 was silently removed in the recent past – verify the version of RHEL in your environment using cat /etc/issue).  When migrating your cell server(s) to RHEL 6.3, particularly from 5.x, you may run into a few issues.

First is the lack of the libXdmcp package (required for vCD installation) which was once included by default in RHEL 5 versions.  You can verify this at the RHEL 6 CLI with the following command line:

yum search libXdmcp

or

yum list |grep libXdmcp

Not to worry, the package is easily installable by inserting/mounting the RHEL 6 DVD or .iso, copying the appropriate libXdmcp file to /tmp/ and running either of the following commands:

yum install /tmp/libXdmcp-1.0.3-1.el6.x86_64.rpm

or

rpm -i /tmp/libXdmcp-1.0.3-1.el6.x86_64.rpm
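Either way, a quick query confirms the package is in place:

rpm -q libXdmcp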

Next up is the use of Windows Server 2012 (or Windows 8) as an NFS server for vCloud Transfer Server Storage in conjunction with the newly supported RHEL 6.3.  Creating the path and directory for the Transfer Server Storage is performed during the initial deployment of vCloud Director using the command mkdir -p /opt/vmware/vcloud-director/data/transfer. When mounting the NFS export for Transfer Server Storage (either manually or via an /etc/fstab entry such as f.q.d.n:/vcdtransfer /opt/vmware/vcloud-director/data/transfer nfs rw 0 0), the mount command fails with the error message mount.nfs: mount system call failed. I ran across this in one particular environment and my search turned up Red Hat Bugzilla – Bug 796352.  In the bug documentation, the problem is identified as follows:

On Red Hat Enterprise Linux 6, mounting an NFS export from a Windows 2012 server failed due to the fact that the Windows server contains support for the minor version 1 (v4.1) of the NFS version 4 protocol only, along with support for versions 2 and 3. The lack of the minor version 0 (v4.0) support caused Red Hat Enterprise Linux 6 clients to fail instead of rolling back to version 3 as expected. This update fixes this bug and mounting an NFS export works as expected.

Further down in the article, Steve Dickson outlines the workarounds:

mount -o v3 # to use v3

or

Set the 'Nfsvers=3' variable in the "[ Server "Server_Name" ]"
section of the /etc/nfsmount.conf file
An example will be:
[ Server "nfsserver.lab.local" ]
Nfsvers=3

The first option works well at the command line but doesn’t lend itself to /etc/fstab syntax, so I opted for the second option, which is to establish a host name and NFS version in the /etc/nfsmount.conf file.  With this method, the mount is attempted as called for in /etc/fstab and, thanks to /etc/nfsmount.conf, the operation succeeds as desired instead of failing at version negotiation.
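Pulled together, the relevant pieces on the cell end up looking something like this (the host name and export below are the same placeholders used above, so substitute your own):

# /etc/nfsmount.conf
[ Server "nfsserver.lab.local" ]
Nfsvers=3

# /etc/fstab entry for the Transfer Server Storage
nfsserver.lab.local:/vcdtransfer /opt/vmware/vcloud-director/data/transfer nfs rw 0 0

# Mount everything in /etc/fstab and confirm the mount negotiated NFSv3
mount -a
nfsstat -m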

There is a third option, which would be to avoid the use of /etc/fstab and /etc/nfsmount.conf altogether and instead place a mount -o v3 command in /etc/rc.local, which is executed at the end of each RHEL boot process.  Although this may work, it feels a little sloppy in my opinion.

Lastly, one could install the kernel update (Red Hat reports the bug as fixed in kernel-2.6.32-280.el6). The kernel package update is located here.

Adding an IP Alias to the vCloud Director Cell Server

July 5th, 2012

Hola! Yo Soy Dora!  I hope you are having a great week and for those in the US, I hope your 4th of July holiday was fun and relaxing.

Here’s another “how to” for those not really familiar with Linux when standing up a vCloud Director infrastructure.  If you’re following the documentation, you’ll notice on page 13 of the vCloud Director Installation and Configuration Guide that two NICs or an IP alias are required to support two separate SSL connections on each vCloud Director cell server.  One IP is used for the vCloud Director HTTP service and the other is used for the console proxy service.  I’ve deployed both methods, multiple NICs and IP aliasing, for the VCD cell server.  Neither method has a distinct advantage over the other in terms of performance or other important metrics.  Where both the http and console proxy addresses are on the same subnet, I prefer the IP alias method to keep things a little cleaner, but using two NICs makes it more obvious how the VCD cell server is built and configured from a network standpoint.

To wrap some visualization around the two options: if you’re not familiar with Linux IP aliasing, you’d probably deploy each VCD cell server in a multihomed configuration with a minimum of two NICs and the two IP addresses required for VCD, one IP established for each of the required SSL connections.

[Diagram: VCD cell server with two NICs, one IP per required SSL connection]

The IP Alias method involves just a single NIC with two IP addresses on the same subnet sharing a common mask and default gateway for the two required SSL connections.  Don’t forget that with either method, without routed NFS on the network, each VCD cell server would likely have one additional NIC dedicated to an NFS network for vCloud Director Transfer Storage assuming the clustered cell configuration recommended for production and highly available cloud infrastructures.

[Diagram: VCD cell server with a single NIC and an IP alias serving both SSL connections]

I think everyone knows how to install and configure a multihomed server, so this writing will focus on adding an IP alias to a NIC in RHEL 5 Update 7, or at least it will focus on how I learned to do it via the command line.  I’ll also show a second method to accomplish adding an IP alias through the GUI (X is enabled by default in RHEL 5.7).

Assuming RHEL 5 Update 7 is already installed with a NIC having an IP address 192.168.0.10, adding an additional IP address via an alias takes just a few steps via CLI.

  1. Use nano -w /etc/sysconfig/network-scripts/ifcfg-eth0 to edit the network configuration for eth0.  If it exists, remove the line GATEWAY=192.168.0.1 or comment it out by placing a hash (#) character at the beginning of the line like so: # GATEWAY=192.168.0.1  Save and exit nano with CTRL+X.
  2. Make a copy of ifcfg-eth0 to use for the IP alias.  Do this with the command cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0:0
  3. Use nano -w /etc/sysconfig/network-scripts/ifcfg-eth0:0 to edit the network configuration for eth0:0.  Change DEVICE=eth0 to read DEVICE=eth0:0.  Change IPADDR=192.168.0.10 to read IPADDR=192.168.0.11.  Change ONBOOT=yes to read ONPARENT=yes (a finished example of this file is sketched after this list).  Save and exit nano with CTRL+X.
  4. Use nano -w /etc/sysconfig/network to add a commonly shared default gateway for eth0 and eth0:0.  Add the line GATEWAY=192.168.0.1  Save and exit nano with CTRL+X.
  5. Restart networking with service network restart
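For reference, the finished /etc/sysconfig/network-scripts/ifcfg-eth0:0 ends up looking something like this (BOOTPROTO and NETMASK are assumed values for a typical static configuration, so mirror whatever your ifcfg-eth0 uses):

DEVICE=eth0:0
BOOTPROTO=static
IPADDR=192.168.0.11
NETMASK=255.255.255.0
ONPARENT=yes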

At this point, the Linux platform has a single NIC with two IP addresses and the installation of vCloud Director on this cell can begin.

A second method to accomplish the above would be through the GUI by running the Networking application in RHEL 5 Update 7.

Seen here, eth0 is already configured.  Click the New button to add an IP alias:

[Screenshot: RHEL Network Configuration tool with eth0 already configured]

Select Ethernet connection, choose the existing NIC for eth0, assign the IP address, Subnet Mask, and Default Gateway for the alias, and then lastly click on the Activate button with eth0:1 highlighted.

[Screenshot: adding and activating the eth0:1 IP alias in the Network Configuration tool]

Once again, at this point, the Linux platform has a single NIC with two IP addresses and the installation of vCloud Director on this cell can begin.  Highlighted in yellow below is the IP alias or second IP address bound to eth0:

[Screenshot: Network Configuration tool with the eth0 IP alias highlighted]

I’ve found that the GUI approach makes steps 1 and 4 from the CLI approach above unnecessary.  Basically, it skips the step where the default gateway configuration is moved from the individual ifcfg-eth0 network startup scripts to the centralized /etc/sysconfig/network location.  It further affirms the GATEWAY= entry may remain in each of the individual ifcfg-eth0 network startup scripts.  In the end, both methods work for a vCloud Director cell server; however, I imagine adding an additional NIC hard wired to an access port not on the 192.168.0.0 subnet will have issues with a GATEWAY=192.168.0.1 entry in /etc/sysconfig/network.

Creating vCloud Director Transfer Server Storage on NFS

July 3rd, 2012

Six months ago I wrote an article about Expanding vCloud Director Transfer Storage on a local block storage device.  Today I take a step back and document the process of instantiating vCloud Director Transfer Storage on an NFS export which is where all scalable VCD implementations in production should reside.  The process is not extremely difficult but it can be difficult to remember the fine details if Linux is not your native OS.  Basically run through the following steps on each VCD cell server in the server group before installing vCloud Director.  I’ll be performing these steps on a RHEL 5 Update 7 distribution.

First create the directory structure which the NFS export will be mounted to (the -p argument creates the entire path of directories as necessary):

mkdir -p /opt/vmware/vcloud-director/data/transfer

Next, mount the NFS export manually:

mount nfshost.fqdn.orip:/nfs_export_name /opt/vmware/vcloud-director/data/transfer

Finally, let’s make sure the NFS export auto mounts each time the cell is rebooted.  This is done by editing /etc/fstab

nano -w /etc/fstab

Add the following line to /etc/fstab:

nfshost.fqdn.orip:/nfs_export_name      /opt/vmware/vcloud-director/data/transfer       nfs     rw      0 0

Exit nano using CTRL + X. Save /etc/fstab when prompted.
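Before proceeding, it doesn’t hurt to confirm the new fstab entry mounts cleanly:

# Mount everything listed in /etc/fstab and verify the transfer directory is backed by NFS
mount -a
df -h /opt/vmware/vcloud-director/data/transfer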

Proceed with the vCloud Director cell installation.  If using the mount path in the example above, it is safe and convenient to press Enter through the default prompt relating to the Transfer Server Storage installation path.

I’ll close by pointing out that although the Transfer Server Storage is used as a temporary holding tank for vApp and catalog media imports and exports, critical cell data is also stored in this repository.  If the Transfer Server Storage area is unavailable (i.e. issues with NFS or the network), the VCD cell will not function properly, yielding a range of symptoms such as not being able to authenticate at the provider or organization portals.

StarWind Releases iSCSI SAN Software Enhanced by VM Backup Technology

January 17th, 2012

Press Release:

New StarWind iSCSI SAN v5.8 and Hyper Backup Plug-in are a New Level of Data Protection

Burlington, MA – January 13, 2012 – StarWind Software Inc., an innovative provider of SAN software for iSCSI storage and VM Backup technology, today announced the release of new StarWind iSCSI SAN v5.8 and Hyper-V Backup Plug-in. The iSCSI SAN software is enhanced by the powerful VM Backup technology that is included as a plug-in.

The Backup Plug-in is built specifically for Hyper-V-based environments to provide fast backup and restore for Hyper-V virtual machines. The backup solution delivered by StarWind performs all operations at the Hyper-V host level, thus it requires no backup agents to be installed on virtual machines (Agentless Architecture).

Hyper-V Backup Plug-in makes fast backups and allows quick, reliable restore of both virtual machines and individual files. It utilizes advanced technologies for maximum disk space savings (Global Deduplication). This backup tool is integrated with the StarWind Centralized Management Console, which enables managing backup and storage from a single window.

Additionally, a new version of the HA plug-in is presented in StarWind iSCSI SAN v5.8 that allows the use of raw basic images to create HA targets. A new replication engine based on StarWind’s own technology instead of the MS iSCSI transport delivers higher performance and reliability. This new engine permits the use of multiple network interfaces for synchronization and heartbeat.

To simplify the replacement of equipment and recovery from fatal failures, StarWind Software has implemented the ability to change the partner node to any other StarWind server without any downtime and on the fly. The synchronization engine is improved, and this version allows both nodes to sync automatically even in the case of a full blackout of both servers.

“With the release of StarWind iSCSI SAN v5.8 our company is happy to provide our customers with highly available storage and fast backup software developed by the same vendor,” said Artem Berman, Chief Executive Officer of StarWind Software. “Now small and medium-sized companies have an opportunity to achieve higher performance and absolute data protection.”

About StarWind Software Inc.
StarWind Software is a global leader in storage management and SAN software for small and midsize companies. StarWind’s flagship product is SAN software that turns any industry-standard Windows Server into a fault-tolerant, fail-safe iSCSI SAN. StarWind iSCSI SAN is qualified for use with VMware, Hyper-V, XenServer, and Linux and Unix environments. StarWind Software focuses on providing small and midsize companies with affordable, highly available storage technology which previously was only available in high-end storage hardware. Advanced enterprise-class features in StarWind include Automated HA Storage Node Failover and Failback (High Availability), Replication across a WAN, CDP and Snapshots, Thin Provisioning and Virtual Tape management.

Since 2003, StarWind has pioneered the iSCSI SAN software industry and is the solution of choice for over 30,000 customers worldwide in more than 100 countries, from small and midsize companies to governments and Fortune 1000 companies.

For more information on StarWind Software Inc., visit: www.starwindsoftware.com

Collecting diagnostic information for VMware vCloud Director

December 12th, 2011

I’ve gone a few rounds with VMware vCloud Director in as many weeks recently.  I’ve got an upcoming blog post on a vCenter Proxy Service issue I’ve been dealing with but until I collect the remaining details on that, I thought I’d point out VMware KB 1026312 Collecting diagnostic information for VMware vCloud Director.  This knowledge base article details the steps required to collect the necessary support logs for both vCD versions 1.0 and 1.5.

The vmware-vcd-support script collects host log information as well as these vCloud Director logs. The script is located in the following folders:

For vCloud Director 1.0, run /opt/vmware/cloud-director/bin/vmware-vcd-support

For vCloud Director 1.5, run /opt/vmware/vcloud-director/bin/vmware-vcd-support

Once executed, the script will bundle the following log files from /opt/vmware/vcloud-director/logs/ into a .tgz tarball, saving it in the directory from which the script was run, provided there is enough storage available (a quick way to peek inside the resulting bundle is shown after the list):

  1. cell.log – Console output from the vCloud Director cell.
  2. diagnostics.log – Cell diagnostics log. This file is empty unless diagnostics logging is enabled in the local logging configuration.
  3. vcloud-container-info.log – Informational log messages from the cell. This log also shows warnings or errors encountered by the cell.
  4. vcloud-container-debug.log – Debug-level log messages from the cell.
  5. vcloud-vmware-watchdog.log – Informational log messages from the cell watchdog. It records when the cell crashes, is restarted, etc.
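Once the script finishes, the bundle can be sanity checked before uploading; the file name pattern below is an assumption on my part, so match whatever name the script actually reports:

# List the contents of the support bundle without extracting it
tar -tzf vmware-vcd-support-*.tgz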

On the subject of vCD log files, also mentioned in the KB article is VMware KB 1026815 Configuring logging for VMware vCloud Director.  The information in this article is useful for specifying the quantity and size of vCD log files to be maintained on the cell server.

Once the log files have been collected, you may analyze them offline or upload them to VMware’s FTP site in association with an SR by following VMware KB 1008525 Uploading diagnostic information to VMware.

Mostafa Khalil Makes Twitter Debut With VMware Nostalgia

December 7th, 2011

For the Twitter folks… (The Real) Mostafa Khalil (@MostafaVMW, VCDX #2) is now on Twitter.  I’d recommend following him as there are some amazing changes brewing on the vSphere storage horizon.  Hopefully he’ll privilege us on a semi-regular basis with bits from his great storage mind.

For the non Twitter folks…  Seven days ago, Mostafa posted the picture shown below.  It’s the Getting Started Guide for VMware Workstation 1.0 for Linux. It comes to us from the year 1999.

[Image: Getting Started Guide for VMware Workstation 1.0 for Linux, 1999]

Seeing this is enough to make a vEvangelist tear up.  I’d love to get my hands on this product at some point and take it for a spin.  Perhaps I’ll have a chance if the VMTN Subscription makes its return.  My VMware journey didn’t start until a year later with Workstation 2.0.2 for Windows.  Look at the file size – 5MB.

[Image: VMware Workstation 2.0.2 for Windows download, 5MB file size]