Posts Tagged ‘Linux’

StarWind Releases iSCSI SAN Software Enhanced by VM Backup Technology

January 17th, 2012

Press Release:

New StarWind iSCSI SAN v5.8 and Hyper Backup Plug-in are a New Level of Data Protection

Burlington, MA – January 13, 2012 – StarWind Software Inc., an innovative provider of SAN software for iSCSI storage and VM Backup technology, today announced the release of the new StarWind iSCSI SAN v5.8 and Hyper-V Backup Plug-in. The iSCSI SAN software is enhanced by the powerful VM Backup technology that is included as a plug-in.

The Backup plug-in is built specifically for Hyper-V-based environments to provide fast backup and restore of Hyper-V virtual machines. The backup solution delivered by StarWind performs all operations at the Hyper-V host level, so no backup agents need to be installed on the virtual machines (Agentless Architecture).

The Hyper-V Backup Plug-in performs fast backups and allows quick, reliable restores of both virtual machines and individual files. It utilizes advanced technologies for maximum disk space savings (Global Deduplication). The backup tool is integrated with the StarWind Centralized Management Console, which enables managing backup and storage from a single window.

Additionally, StarWind iSCSI SAN v5.8 includes a new version of the HA plug-in that allows the use of raw basic images to create HA targets. A new replication engine based on StarWind's own technology, rather than the MS iSCSI transport, delivers higher performance and reliability. This new engine permits the use of multiple network interfaces for synchronization and heartbeat.

To simplify equipment replacement and recovery from fatal failures, StarWind Software has implemented the ability to change the partner node to any other StarWind server on the fly, without any downtime. The synchronization engine is also improved, and this version allows both nodes to resynchronize automatically even after a full blackout of both servers.

“With the release of StarWind iSCSI SAN v5.8 our company is happy to provide our customers with highly available storage and fast backup software developed by the same vendor,” said Artem Berman, Chief Executive Officer of StarWind Software. “Now small and medium-sized companies have an opportunity to achieve higher performance and absolute data protection.”

About StarWind Software Inc.
StarWind Software is a global leader in storage management and SAN software for small and midsize companies. StarWind's flagship product is SAN software that turns any industry-standard Windows Server into a fault-tolerant, fail-safe iSCSI SAN. StarWind iSCSI SAN is qualified for use with VMware, Hyper-V, XenServer, Linux, and Unix environments. StarWind Software focuses on providing small and midsize companies with affordable, highly available storage technology which previously was only available in high-end storage hardware. Advanced enterprise-class features in StarWind include Automated HA Storage Node Failover and Failback (High Availability), Replication across a WAN, CDP and Snapshots, Thin Provisioning and Virtual Tape management.

Since 2003, StarWind has pioneered the iSCSI SAN software industry and is the solution of choice for over 30,000 customers worldwide in more than 100 countries, ranging from small and midsize companies to governments and Fortune 1000 companies.

For more information on StarWind Software Inc., visit: www.starwindsoftware.com

Collecting diagnostic information for VMware vCloud Director

December 12th, 2011

I've gone a few rounds with VMware vCloud Director in as many weeks recently.  I've got an upcoming blog post on a vCenter Proxy Service issue I've been dealing with, but until I collect the remaining details on that, I thought I'd point out VMware KB 1026312, Collecting diagnostic information for VMware vCloud Director.  This knowledge base article details the steps required to collect the necessary support logs for both vCD versions 1.0 and 1.5.

The vmware-vcd-support script collects host log information as well as these vCloud Director logs. The script is located in the following folders:

For vCloud Director 1.0, run /opt/vmware/cloud-director/bin/vmware-vcd-support

For vCloud Director 1.5, run /opt/vmware/vcloud-director/bin/vmware-vcd-support
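As a quick illustration, here is roughly what a collection run looks like on a vCD 1.5 cell (a hedged sketch: running as root or via sudo is assumed, and the working directory just needs enough free space to hold the resulting bundle):

  cd /tmp
  sudo /opt/vmware/vcloud-director/bin/vmware-vcd-support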

Once executed, the script will bundle the following log files from /opt/vmware/vcloud-director/logs/ into a .tgz tarball, saving it in the directory from which the script was run (provided there is enough storage available):

  1. cell.log – Console output from the vCloud Director cell.
  2. diagnostics.log – Cell diagnostics log. This file is empty unless diagnostics logging is enabled in the local logging configuration.
  3. vcloud-container-info.log – Informational log messages from the cell. This log also shows warnings or errors encountered by the cell.
  4. vcloud-container-debug.log – Debug-level log messages from the cell.
  5. vcloud-vmware-watchdog.log – Informational log messages from the cell watchdog. It records when the cell crashes, is restarted, etc.

On the subject of vCD log files, also mentioned in the KB article is VMware KB 1026815 Configuring logging for VMware vCloud Director.  The information in this article is useful for specifying the quantity and size of vCD log files to be maintained on the cell server.

Once the log files have been collected, you may analyze them offline or upload them to VMware’s FTP site in association with an SR by following VMware KB 1008525 Uploading diagnostic information to VMware.

Mostafa Khalil Makes Twitter Debut With VMware Nostalgia

December 7th, 2011

For the Twitter folks… (The Real) Mostafa Khalil (@MostafaVMW, VCDX #2) is now on Twitter.  I'd recommend following him, as there are some amazing changes brewing on the vSphere storage horizon.  Hopefully he'll favor us on a semi-regular basis with bits from his great storage mind.

For the non-Twitter folks…  Seven days ago, Mostafa posted the picture shown below.  It's the Getting Started Guide for VMware Workstation 1.0 for Linux. It comes to us from the year 1999.

SnagIt Capture

Seeing this is enough to make a vEvangelist tear up.  I’d love to get my hands on this product at some point and take it for a spin.  Perhaps I’ll have a chance if the VMTN Subscription makes its return.  My VMware journey didn’t start until a year later with Workstation 2.0.2 for Windows.  Look at the file size – 5MB.

SnagIt Capture

Expanding vCloud Director Transfer Server Storage

December 5th, 2011

Installing vCloud Director 1.5 can be like installing a VCR.  For the most part, you can get through it without reading the instructions.  However, there may be some advanced or obscure features (such as programming the clock or automatically recording a channel) which require knowledge you’ll only pick up by referring to the documentation.  Such is the case with vCD Transfer Server Storage.  Page 13 of the vCloud Director Installation and Configuration Guide discusses Transfer Server Storage as follows:

To provide temporary storage for uploads and downloads, an NFS or other shared storage volume must be accessible to all servers in a vCloud Director cluster. This volume must have write permission for root. Each host must mount this volume at $VCLOUD_HOME/data/transfer, typically /opt/vmware/vcloud-director/data/transfer. Uploads and downloads occupy this storage for a few hours to a day. Transferred images can be large, so allocate at least several hundred gigabytes to this volume.

This is the only VMware documentation I could find covering Transfer Server Storage.  A bit of extra information about Transfer Server Storage is revealed during the initial installation of the vCD cell, which basically states that at that point you should configure Transfer Server Storage to point to shared NFS storage for all vCD cells to use or, if there is just a single cell, local cell storage may be used:

If you will be deploying a vCloud Director cluster you must mount the shared transfer server storage prior to running the configuration script.  If this is a single server deployment no shared storage is necessary.
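For the clustered case, the mount itself is plain NFS work on each cell. A minimal sketch, assuming a hypothetical NFS export of nfs01:/export/vcd-transfer and the default transfer path (the export must grant write permission for root, e.g. no_root_squash, per the documentation excerpt above):

  mkdir -p /opt/vmware/vcloud-director/data/transfer
  mount -t nfs nfs01:/export/vcd-transfer /opt/vmware/vcloud-director/data/transfer
  # plus a matching /etc/fstab entry so the mount persists across reboots:
  # nfs01:/export/vcd-transfer  /opt/vmware/vcloud-director/data/transfer  nfs  defaults  0 0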

Transfer Server Storage is used for uploading and downloading (exporting) vApps.  A vApp is one or more virtual machines with associated virtual disks.  Small vApps in .OVF format will consume maybe 1GB (or potentially less depending on their contents).  Larger vApps could be several hundred GBs or beyond.  By default, Transfer Server Storage will draw capacity from /.  Lack of adequate Transfer Server Storage capacity will result in the inability to upload or download vApps (it could also imply you're out of space on /).  Long story short, if you skipped the brief instructions on Transfer Server Storage during your build of a RHEL 5 vCD cell, at some point you may run short on Transfer Server Storage and, even worse, run / out of available capacity.
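A quick way to see whether you are headed for that wall is to check where the transfer directory currently lives and how much space remains (assuming the default path); if both lines report the same filesystem, the transfer area is drawing from /:

  df -h / /opt/vmware/vcloud-director/data/transfer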

I ran into just such a scenario in the lab and thought I’d just add a new virtual disk with adequate capacity, create a new mount point, and then adjust the contents of /etc/profile.d/vcloud.sh (export VCLOUD_HOME=/opt/vmware/vcloud-director) to point vCD to the added capacity.  I quickly found out this procedure does not work.  The vCD portal dies and won’t start again.  I did some searching and wound up at David Hill’s vCloud Director FAQ which confirms the transfer folder cannot be moved (Chris Colotti has also done some writing on Transfer Server Storage here in addition to related content I found on the vSpecialist blog).  However, we can add capacity to that folder by creating a new mount at that folder’s location.

I was running into difficulties trying to extend /, so I collaborated with Bob Plankers (a Linux and virtualization guru who authors the blog The Lone Sysadmin) to identify the right steps, in order, to get the job done properly for vCloud Director.  Bob spent his weekend time helping me out in great detail, and for that I am thankful.  You rule, Bob!

Again, consider the scenario: there is not enough Transfer Server Storage capacity, or Transfer Server Storage has consumed all available capacity on /.  The following steps will grow an existing vCloud Director Cell virtual disk by 200GB and then extend the Transfer Server Storage by that amount.  The majority of the steps will be run via SSH, local console, or terminal (a consolidated command sketch follows the numbered list):

  1. Verify rsync is installed by typing rsync followed by Enter.  All vCD-supported versions of RHEL 5 (Updates 4, 5, and 6) should already have rsync installed.  If a minimalist version of RHEL 5 was deployed without rsync, execute yum install rsync to install it (RHN registration required).
  2. Gracefully shut down the vCD Cell.
  3. Now would be a good time to capture a backup of the vCD cell as well as the vCD database if there is just a single cell deployed in the environment.
  4. Grow the vCD virtual disk by 200 GB.
  5. Power the vCD cell back on and, at boot time, go into single-user mode by interrupting GRUB (press an arrow key to move the kernel selection).  Press 'a' to append boot parameters, append the word single to the end (separated by a space), and hit Enter.
  6. Use sudo fdisk /dev/sda to partition the new empty space:
    1. Enter ‘n’ (for new partition)
    2. Enter ‘p’ (for primary)
    3. Enter a partition number.  For a default installation of RHEL 5 Update 6, partitions 1 and 2 will be in use, so this new partition will likely be 3.
    4. First cylinder… it’ll offer a number, probably the first free cylinder on the disk. Hit enter, accept the default.
    5. Last cylinder… hit enter. It’ll offer you the last cylinder available. Use it all!
    6. Enter ‘x’ for expert mode.
    7. Enter ‘b’ to adjust the beginning sector of the partition.
    8. Enter the partition number (3 in this case).
    9. In this step align the partition to a multiple of 128.  It’ll ask for “new beginning of data” and have a default number. Take that default number and round it up to the nearest number that is evenly divisible by 128. So if the number is 401660, I take my calculator and divide it by 128 to get the result 3137.968. I round that up to 3138 then multiply by 128 again = 401664. That’s where I want my partition to start for good I/O performance, and I enter that.
    10. Now enter ‘w’ to write the changes to disk. It’ll likely complain that it cannot reread the partition table but this is safe to ignore.
  7. Reboot the vCD cell using shutdown -r now
  8. When the cell comes back up, we need to add that new space to the volume group.
    1. pvcreate /dev/sda3 to initialize it as an LVM physical volume. (If you used partition 4, it would be /dev/sda4.)
    2. vgextend VolGroup00 /dev/sda3 to grow the volume group.
  9. Now create a filesystem:
    1. lvcreate --size 199G --name transfer_lv VolGroup00 to create a logical volume 199 GB in size named transfer_lv. Adjust the numbers as needed. Notice we cannot use the entire space available due to slight overhead.
    2. mke2fs -j -m 0 /dev/VolGroup00/transfer_lv to create an ext3 filesystem on that logical volume.  The -j parameter indicates journaled, which is ext3.  The -m 0 parameter tells the OS to reserve 0% of the space for the superuser for emergencies. Normally it reserves 5%, which is a complete waste of 5% of your virtual disk.
  10. Now we need to mount the new filesystem somewhere temporary so we can copy the contents of /opt/vmware/vcloud-director/data/transfer into it first.  mount /dev/VolGroup00/transfer_lv /mnt will mount it on /mnt, which is a good temporary spot.
  11. Stop the vCloud Director cell service to close any open files or transactions in flight with service vmware-vcd stop.
  12. rsync -av /opt/vmware/vcloud-director/data/transfer/ /mnt to make an exact copy of what’s there. Mind the slashes, they’re important.
  13. Examine the contents of /mnt to be sure everything from /opt/vmware/vcloud-director/data/transfer was copied over properly.
  14. rm -rf /opt/vmware/vcloud-director/data/transfer/* to delete the file and directory contents in the old default location. If you mount over it, the data will still be there sucking up disk space but you won’t be able to see it (instead you’ll see lost+found). Make sure you have a good copy in /mnt!
  15. umount /mnt to unmount the temporary location.
  16. mount /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer (all one line) to mount it in the right spot.
  17. df -h to confirm the mount point is there and vCD data (potentially along with transient transfer storage files) is consuming some portion of it.
  18. To auto mount correctly on reboot:
    1. nano -w /etc/fstab to edit the filesystem mount file.
    2. At the very bottom add a new line (but no blank lines between) that looks like the rest, but with our new mount point. Use tab separation between the fields. It should look like this:
      /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer/ ext3 defaults 1 2
    3. Ctrl-X to quit, ‘y’ to save modified buffer, enter to accept the filename.
  19. At this time we can either start the vCD cell with service vmware-vcd start or reboot to ensure the new storage automatically mounts and the cell survives reboots. If after a reboot the vCD portal is unavailable, it’s probably due to a typo in fstab.
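For reference, here is a consolidated sketch of the commands from steps 8 through 19 above, assuming the new partition ended up as /dev/sda3 and the default volume group name VolGroup00 (verify both with fdisk -l and vgdisplay before running anything):

  pvcreate /dev/sda3                                                            # step 8: initialize the new partition for LVM
  vgextend VolGroup00 /dev/sda3                                                 # step 8: add it to the volume group
  lvcreate --size 199G --name transfer_lv VolGroup00                            # step 9: carve out the logical volume
  mke2fs -j -m 0 /dev/VolGroup00/transfer_lv                                    # step 9: ext3 with no reserved blocks
  mount /dev/VolGroup00/transfer_lv /mnt                                        # step 10: temporary mount point
  service vmware-vcd stop                                                       # step 11: quiesce the cell
  rsync -av /opt/vmware/vcloud-director/data/transfer/ /mnt                     # step 12: copy the existing transfer data
  rm -rf /opt/vmware/vcloud-director/data/transfer/*                            # step 14: only after verifying the copy in /mnt
  umount /mnt                                                                   # step 15
  mount /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer   # step 16: mount in the right spot
  df -h                                                                         # step 17: confirm the new mount
  # step 18: add the fstab line shown above, then service vmware-vcd start or reboot (step 19)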

This procedure, albeit a bit lengthy and detailed, worked well and was the easiest solution for my particular scenario.  There are some other approaches which would work to solve this problem.  One of them would be almost identical to the above but instead of extending the virtual disk of the vCD cell, we could add a new virtual disk with the required capacity and then mount it up.  Another option would be to build a new vCloud Director server with adequate space and then decommission the first vCD server.  This wasn’t an option for me because the certificate key files for the first vCD server no longer existed.

VMware Workstation & Fusion Christmas In August Sale!

August 2nd, 2011

30% off through August 4th! All boxed and shrink wrapped copies of VMware Workstation (for Windows & Linux) and VMware Fusion (for Mac) must go!  Hurry while supplies last!  Use promo code PREHOLSALE at checkout for your 30% discount.  Mention boche.net and it is likely that nothing additional will happen.


Force a Simple VMware vMA Password

June 27th, 2011

VMware ESXi is mainstream.  If you’ve ever deployed a VMware vMA appliance to manage ESXi (or heck, even ESX for that matter), you may have noticed the enforcement of a complex password policy for the vi-admin account.  For example, setting a password of password is denied because it is based on a dictionary word (in addition to other morally obvious reasons).


However, you can bend the complexity rules and force a simple password after the initial deployment using sudo.  You'll still be warned about the violation of the complexity policy, but with sudo the policy can be bypassed by a higher authority:

sudo passwd vi-admin


This tip isn't specific to VMware or the vMA appliance.  It is general *nix knowledge.  There is ample documentation available which discusses the password complexity mechanism in various versions of *nix.  Another approach to bypassing the complexity requirement would be to relax the requirement itself, but this would affect other local accounts potentially in use on the vMA appliance which may still require complex passwords.  Using sudo is faster and leaves the default complexity mechanism in place.
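For the curious, relaxing the requirement itself usually means editing the PAM configuration. This is a hedged illustration only: the file name varies by distro (/etc/pam.d/system-auth on RHEL-like systems, /etc/pam.d/common-password on SUSE/Debian-like systems), the values below are illustrative, and loosening them affects every local account, which is exactly why sudo passwd is the simpler route:

  # example pam_cracklib line with relaxed length/credit requirements (illustrative values)
  password  requisite  pam_cracklib.so  retry=3 minlen=6 dcredit=0 ucredit=0 lcredit=0 ocredit=0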

Tiny Core Linux and Operational Readiness

February 28th, 2011

When installing, configuring, or managing VMware virtual infrastructure, one of the steps which should be performed before releasing a host (back) to production is operational readiness testing.  One quite critical test is virtual infrastructure networking.  After all, what good is a running VM if it has no connectivity to the rest of the network?  Each ESX or ESXi host pNIC should be individually tested for internal and upstream connectivity, VLAN tagging functionality if in use (quite often it is), proper failover and failback, and jumbo frames at the guest level if used.
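On the jumbo frames point, an oversized ping with the don't-fragment bit set is a simple way to validate the path end to end. A hedged sketch, assuming a 9000-byte MTU and a hypothetical destination of 192.168.1.50 (BusyBox ping in minimal guests may not support -M, in which case test from a full Linux guest or from the host):

  vmkping -d -s 8972 192.168.1.50          # from an ESX/ESXi host, VMkernel path
  ping -M do -s 8972 -c 4 192.168.1.50     # from a Linux guest with iputils ping
  # 8972 = 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header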

There are several types of VMs or appliances which can be used to generate basic network traffic for operational readiness testing.  One that I’ve been using recently (introduced to me by a colleague) is Tiny Core Linux.  To summarize:

Tiny Core Linux is a very small (10 MB) minimal Linux GUI desktop. It is based on the Linux 2.6 kernel, BusyBox, Tiny X, and FLTK. The core runs entirely in RAM and boots very quickly. Also offered is Micro Core, a 6 MB image that is the console-based engine of Tiny Core. CLI versions of Tiny Core's programs allow the same functionality as Tiny Core's extensions, only starting from a console-based system.

TCL carries with it a few benefits, some of which are tied to its small stature:

  • The minimalist approach makes deployment simple.
  • At just 10MB, it’s extremely portable and boots fast.
  • As a Linux OS, it’s freely distributable without the complexities of licensing or activation.
  • It's compatible with VMware hardware version 7 and the Flexible or E1000 vNIC, making it a good network test candidate.
  • No installation is required.  It runs straight from an .ISO file or can boot from a USB drive.
  • Point and click GUI interface provides ease of use and configuration for any user.
  • When deployed with internet connectivity, it has the ability to download and install useful applications such as FileZilla or Firefox from an online repository.  There are tons of free applications in the repository.

As I mentioned before, deployment of TCL is pretty easy.  Create a VM shell with the following properties:

  • Other Linux (32-bit)
  • 1 vCPU
  • 256MB RAM
  • Flexible or E1000 vNIC
  • Point the virtual CD/DVD ROM drive to the bootable .ISO
  • No HDD or SCSI storage controller required

The first boot splash screen.  Nothing really exciting here other than optional boot options, which aren't required for the purposes of this article.  Press Enter to continue the boot process:

SnagIt Capture

After pressing Enter, the boot process is briefly displayed:

SnagIt Capture

Once booted, the first step would be to configure the network via the Panel applet at the bottom of the Mac-like menu:

SnagIt Capture

If DHCP is enabled on the subnet, an address will be automatically acquired by this point.  Otherwise, give eth0 a static TCP/IP configuration.  Name Servers are optional and not required for basic network connectivity unless you would like to test name resolution in your virtual infrastructure:

SnagIt Capture

Once TCP/IP has been configured, a Terminal can be opened up and a basic ping test can be started.  Change the IP address and vNIC portgroup to test different VLANs, but my suggestion would be to spawn multiple TCL instances, one per VLAN to be tested, because you'll need to vMotion the TCL VMs to each host being tested.  You don't want to be continuously modifying the TCP/IP configuration:

SnagIt Capture
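The test itself is nothing fancy; a minimal sketch, assuming a hypothetical gateway of 192.168.10.1 on the VLAN under test:

  ping 192.168.10.1     # leave it running and watch for drops during pNIC failover/failback testing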

What else of interest is in the Panel applet besides Network configuration?  Some ubiquitous items such as date/time configuration, disk and terminal services tools, and wallpaper configuration:

SnagIt Capture

The online application repository is packed with what seems like thousands of apps:

SnagIt Capture

After installing FileZilla, it’s available as an applet:

SnagIt Capture

FileZilla is fully functional:

SnagIt Capture

So I've only been using Tiny Core Linux as a network testing appliance, but clearly it has some other uses when paired with extensible applications.  A few other things I'll point out:

  1. TCL can be suspended in order to move it to other clusters (with compatible CPUs) so that both a host and a storage migration can be performed in a single step.  Once TCL reaches its destination cluster, unsuspend it.
  2. During my tests, TCL continued to run without issue after being severed from its boot .ISO.  This is possible because it boots into RAM, where it continues to run from that point on.

I've been watching Tiny Core Linux for several months, and the development efforts appear fairly aggressive, backed by an individual or group with a lot of talent and energy, which is good to see.  As of this writing, version 3.5 is available.  Give Tiny Core Linux a try.