Expanding vCloud Director Transfer Server Storage

December 5th, 2011 by jason

Installing vCloud Director 1.5 can be like installing a VCR.  For the most part, you can get through it without reading the instructions.  However, there may be some advanced or obscure features (such as programming the clock or automatically recording a channel) which require knowledge you’ll only pick up by referring to the documentation.  Such is the case with vCD Transfer Server Storage.  Page 13 of the vCloud Director Installation and Configuration Guide discusses Transfer Server Storage as follows:

To provide temporary storage for uploads and downloads, an NFS or other shared storage volume must be accessible to all servers in a vCloud Director cluster. This volume must have write permission for root. Each host must mount this volume at $VCLOUD_HOME/data/transfer, typically /opt/vmware/vcloud-director/data/transfer. Uploads and downloads occupy this storage for a few hours to a day. Transferred images can be large, so allocate at least several hundred gigabytes to this volume.

This is the only VMware documentation I could find covering Transfer Server Storage.  A bit more information is revealed during the initial installation of the vCD cell, which states that at that point you should configure Transfer Server Storage to point to shared NFS storage for all vCD cells to use, or, if there is just a single cell, local cell storage may be used:

If you will be deploying a vCloud Director cluster you must mount the shared transfer server storage prior to running the configuration script.  If this is a single server deployment no shared storage is necessary.
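For a multi-cell deployment, mounting the shared NFS volume on each cell before running the configuration script might look something like the lines below.  This is only a minimal sketch; the export path nfs-server:/export/vcd-transfer is a hypothetical name, so substitute your own NFS server and export:

    mkdir -p /opt/vmware/vcloud-director/data/transfer
    # hypothetical export shown; substitute your own NFS server and path
    mount -t nfs nfs-server:/export/vcd-transfer /opt/vmware/vcloud-director/data/transfer

An equivalent entry in /etc/fstab (again using the hypothetical export) keeps the mount in place across reboots:

    nfs-server:/export/vcd-transfer  /opt/vmware/vcloud-director/data/transfer  nfs  defaults  0 0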

Transfer Server Storage is used for uploading and downloading (exporting) vApps.  A vApp is one or more virtual machines with associated virtual disks.  Small vApps in .OVF format may consume 1GB or less depending on their contents.  Larger vApps could be several hundred GBs or beyond.  By default, Transfer Server Storage draws its capacity from /.  Lack of adequate Transfer Server Storage capacity will result in the inability to upload or download vApps (it could also mean you're out of space on /).  Long story short, if you skipped the brief instructions on Transfer Server Storage during your build of a RHEL 5 vCD cell, at some point you may run short on Transfer Server Storage and, even worse, run / out of available capacity.
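If you suspect you are already in this situation, a quick check of how much space the transfer directory is consuming relative to / will confirm it.  These are generic commands, nothing vCD-specific:

    df -h /
    du -sh /opt/vmware/vcloud-director/data/transfer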

I ran into just such a scenario in the lab and thought I’d just add a new virtual disk with adequate capacity, create a new mount point, and then adjust the contents of /etc/profile.d/vcloud.sh (export VCLOUD_HOME=/opt/vmware/vcloud-director) to point vCD to the added capacity.  I quickly found out this procedure does not work.  The vCD portal dies and won’t start again.  I did some searching and wound up at David Hill’s vCloud Director FAQ which confirms the transfer folder cannot be moved (Chris Colotti has also done some writing on Transfer Server Storage here in addition to related content I found on the vSpecialist blog).  However, we can add capacity to that folder by creating a new mount at that folder’s location.

I was running into difficulties trying to extend /, so I collaborated with Bob Plankers (a Linux and virtualization guru who authors the blog The Lone Sysadmin) to identify the right steps, in order, to get the job done properly for vCloud Director.  Bob spent his weekend time helping me out in great detail and for that I am thankful.  You rule, Bob!

Again, consider the scenario: There is not enough Transfer Server Storage capacity or Transfer Server Storage has consumed all available capacity on /.  The following steps will grow an existing vCloud Director Cell virtual disk by 200GB and then extend the Transfer Server Storage by that amount.  The majority of the steps will be run via SSH, local console or terminal:

  1. Verify rsync is installed. To verify, type rsync followed by enter. All vCD-supported versions of RHEL 5 (Updates 4, 5, and 6) should already have rsync installed.  If a minimalist version of RHEL 5 was deployed without rsync, execute yum install rsync to install it (RHN registration required).
  2. Gracefully shut down the vCD Cell.
  3. Now would be a good time to capture a backup of the vCD cell as well as the vCD database if there is just a single cell deployed in the environment.
  4. Grow the vCD virtual disk by 200 GB.
  5. Power the vCD cell back on and at boot time go into single user mode by interrupting GRUB (press an arrow key to move the kernel selection).  Use 'a' to append boot parameters. Append the word single to the end (use a space separator) and hit enter.
  6. Use # sudo fdisk /dev/sda to partition the new empty space:
    1. Enter ‘n’ (for new partition)
    2. Enter ‘p’ (for primary)
    3. Enter a partition number.  For a default installation of RHEL 5 Update 6, 1 and 2 will be in use so this new partition will likely be 3.
    4. First cylinder… it’ll offer a number, probably the first free cylinder on the disk. Hit enter, accept the default.
    5. Last cylinder… hit enter. It’ll offer you the last cylinder available. Use it all!
    6. Enter ‘x’ for expert mode.
    7. Enter ‘b’ to adjust the beginning sector of the partition.
    8. Enter the partition number (3 in this case).
    9. In this step align the partition to a multiple of 128.  It’ll ask for “new beginning of data” and have a default number. Take that default number and round it up to the nearest number that is evenly divisible by 128. So if the number is 401660, I take my calculator and divide it by 128 to get the result 3137.968. I round that up to 3138 then multiply by 128 again = 401664. That’s where I want my partition to start for good I/O performance, and I enter that.
    10. Now enter ‘w’ to write the changes to disk. It’ll likely complain that it cannot reread the partition table but this is safe to ignore.
  7. Reboot the vCD cell using shutdown -r now
  8. When the cell comes back up, we need to add that new space to the volume group.
    1. pvcreate /dev/sda3 to initialize it as an LVM volume. (If you used partition #4 then it would be /dev/sda4).
    2. vgextend VolGroup00 /dev/sda3 to grow the volume.
  9. Now create a filesystem:
    1. lvcreate --size 199G --name transfer_lv VolGroup00 to create a logical volume 199 GB in size named transfer_lv. Adjust the numbers as needed. Notice we cannot use the entire space available due to slight overhead.
    2. mke2fs -j -m 0 /dev/VolGroup00/transfer_lv to create an ext3 filesystem on that logical volume.  The -j parameter indicates journaled, which is ext3.  The -m 0 parameter tells the OS to reserve 0% of the space for the superuser for emergencies. Normally it reserves 5%, which is a complete waste of 5% of your virtual disk.
  10. Now we need to mount the filesystem somewhere where we can copy the contents of /opt/vmware/vcloud-director/data/transfer first.  mount /dev/VolGroup00/transfer_lv /mnt will mount it on /mnt which is a good temporary spot.
  11. Stop the vCloud Director cell service to close any open files or transactions in flight with service vmware-vcd stop.
  12. rsync -av /opt/vmware/vcloud-director/data/transfer/ /mnt to make an exact copy of what’s there. Mind the slashes, they’re important.
  13. Examine the contents of /mnt to be sure everything from /opt/vmware/vcloud-director/data/transfer was copied over properly.
  14. rm -rf /opt/vmware/vcloud-director/data/transfer/* to delete the file and directory contents in the old default location. If you mount over it, the data will still be there sucking up disk space but you won’t be able to see it (instead you’ll see lost+found). Make sure you have a good copy in /mnt!
  15. umount /mnt to unmount the temporary location.
  16. mount /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer (all one line) to mount it in the right spot.
  17. df -h to confirm the mount point is there and vCD data (potentially along with transient transfer storage files) is consuming some portion of it.
  18. To auto mount correctly on reboot:
    1. nano -w /etc/fstab to edit the filesystem mount file.
    2. At the very bottom add a new line (but no blank lines between) that looks like the rest, but with our new mount point. Use tab separation between the fields. It should look like this:
      /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer/ ext3 defaults 1 2
    3. Ctrl-X to quit, ‘y’ to save modified buffer, enter to accept the filename.
  19. At this time we can either start the vCD cell with service vmware-vcd start or reboot to ensure the new storage automatically mounts and the cell survives reboots. If the vCD portal is unavailable after a reboot, it's probably due to a typo in fstab (see the quick check sketched below).
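If you would rather verify the fstab entry before relying on a reboot, one quick sanity check (a sketch, not an official VMware procedure) is to unmount the new filesystem and let mount -a remount everything listed in /etc/fstab; any error at this point indicates a typo in the new line:

    umount /opt/vmware/vcloud-director/data/transfer
    mount -a
    df -h /opt/vmware/vcloud-director/data/transfer
    service vmware-vcd start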

This procedure, albeit a bit lengthy and detailed, worked well and was the easiest solution for my particular scenario.  There are some other approaches which would solve this problem.  One of them would be almost identical to the above, but instead of extending the virtual disk of the vCD cell, we could add a new virtual disk with the required capacity and then mount it up (a rough sketch of that variant follows).  Another option would be to build a new vCloud Director server with adequate space and then decommission the first vCD server.  This wasn't an option for me because the certificate key files for the first vCD server no longer existed.
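For the add-a-new-virtual-disk variant, the flow is largely the same; only the device name and partitioning differ.  A rough sketch, assuming the new disk appears as /dev/sdb and that the filesystem type is spelled out explicitly on the mount:

    fdisk /dev/sdb                   # create and align a single partition, /dev/sdb1
    pvcreate /dev/sdb1
    vgextend VolGroup00 /dev/sdb1    # or create a dedicated volume group for it
    lvcreate --size 199G --name transfer_lv VolGroup00
    mke2fs -j -m 0 /dev/VolGroup00/transfer_lv
    mount -t ext3 /dev/VolGroup00/transfer_lv /mnt

From there the rsync, remount, and fstab steps are identical to the procedure above.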


Comments

  1. Or you can just mount it to an NFS share and be done with it since eventually you may add another cell for load balancing and need it anyway. I’m just saying…. 🙂 Great write up Jason.

  2. jason says:

    Agreed Chris but not all customers have NFS or multiprotocol SANs at their disposal. I think the ability to move or migrate the location of the transfer storage within the vCD GUI after the fact would be a useful feature request.

  3. That is true. I would have to see if that has already been logged, but if not, yes, it could be a useful feature. This is also one of those things where I wonder how we will edit it on the appliance, for example. For that, a UI option may be required, so I will ask around next week when I am in Palo Alto.

  4. I have actually fired off the question to a few folks because I was also thinking about how one will configure the appliance to point to NFS. The nature of appliances is not to have to edit the core OS underneath, so some option may have to be in the UI for at least re-directing to NFS.

  5. jason says:

    I personally wouldn’t worry so much about the appliance unless that is VMware’s long-term direction for production use. I appreciate the follow-up.

  6. Erik Bussink says:

    Jason,

    It seems overly complicated to resize the partition to add extra space to the Transfer. You could go the NFS way for sure, but you could also just add a new vDISK to your vCD appliance, mount it next to the /opt/vmware/vcloud-director/data/transfer directory, stop vCD, copy the Cell files over, and then re-map it as /opt/vmware/vcloud-director/data/transfer and restart vCD.
    All the lvcreate, rsync, and single-user mode commands are nicely documented, but it seems you over-complicate things.

  7. Bob Plankers says:

    Erik,

    You’re correct in that you could add a volume, align it properly, and mount it without using LVM, which would make the steps slightly shorter. But your comment basically outlines all the other steps listed above, except in extreme brevity, and without the absolutely essential step of aligning the partitions.

    Someone coming from a Linux or UNIX background would have no problem doing what you just described. Someone coming from a Windows background would be absolutely lost. Even a simple task, like “copy the Cell files over,” is daunting. How in the heck do you do that? And even if they figured out cp, would they get any dot files, or hose the permissions in the process? This process is very future-proof, too, in that the methods used are generic enough to survive changes to the product.

    Our goal was to outline exactly what someone needs to do if they want to do this and do it right, and give them commands that help ensure that they’ll be successful. The reader might also learn a few things in the process, like how to enter single-user mode, use rsync, and/or deal with the crazy LVM thing Red Hat installed for them. That’s the sort of knowledge that’s handy later when they have to maintain the vCD host, or fix a problem.

  8. Erik Bussink says:

    You are right Bob.

    The more people know about these commands (single-user mode, rsync) the better.

    Thank you Jason & Bob.

  9. cwjking says:

    Interesting write up.
    I liked how you included some additional information on the “what the heck does this thing do” front. VMware doesn’t really say in their documentation. The irony is that I actually had a debate about this with someone on my team. For whatever reason they assumed it was only for ISOs and so on… However I am sure we know this is not the case. I personally would like to know in more detail the process involved with the NFS storage for the vCloud when doing imports and exports… Like scenarios when it actually uses it. I don’t see the NFS being used in the context of importing a VM that is already in vCenter as it would just move it to the appropriate tenant within vCenter (move or copy aka svmotion or a clone operation).

  10. jason says:

    There are three instances I know of where the transfer server (and its corresponding storage) is leveraged:
    -vApp imports
    -vApp exports
    -vApp linked clones across clusters where datastore presentation is inconsistent between clusters (typically the case with vSphere clusters outside of vCloud Director)

    I’ll ping some VMware vCloud people to see if they can add anything to this discussion.

  11. David Hill says:

    The NFS transfer storage is used for uploading OVFs and ISOs. If you have a single cell then you do not need the transfer LUN; simply using the local disk is fine. However, in a multi-cell environment the transfer LUN is for continuity in the event of losing a cell. If the cell running the task is lost, another cell will pick up that task and continue.

    Hope that helps clear it up.

  12. Transfer space is used essentially anytime something is directly imported or exported from vCD. Therefore SOME operations, as I explain in the clone wars post, will use the transfer space since they would be between clouds. vCC will also use it on import after the initial vApp is exported from the origin cloud.

    There really is no detail in the “process” used. It is pretty simple when you think about it: any export or import process will utilize that space.

    You are correct that CLONE operations issued to the same vCenter will not use it, but there is plenty that will, especially on large-scale installs where the import/export APIs are being called.

    Bottom line is if you export a VM and there is not enough space either locally on a VMDK of the Cell or on NFS, the export will fail, so expanding that or mounting to NFS large enough is the key. You have to make it large enough to contend with the largest possible export.

  13. To add to David’s comment: with multiple cells you MUST have a shared space so the cells know which one owns a given export/import operation. This can be NFS, and I think other options were added, but I have to check. This is NOT optional on multi-cell setups.

  14. Tom F says:

    Killer! The only thing was I could not extend the VMware disk. I had to create a new one. My disk became “sdb” and I used partition 1. The label was sdb1. My volume group was named vg_. The “mount” command needed ‘-t ext3’ added to it. Otherwise it was about verbatim.

    Thank you very much. I never would have figured this out and the VMware Professional Services guy did not mention this.

    I added a 128GB drive and could use 127GB of it.

    Tom F