Posts Tagged ‘SRM’

Dell Enterprise Manager Client Gets Linux Makeover

April 24th, 2015

Dell storage customers who have been watching the evolution of Enterprise Manager may be interested in the latest release, which was just made available.  Aside from adding support for the brand new SCv2000 Series Storage Centers, this release bundles Java Platform SE 7 Update 67 with the installation of both the Data Collector on Windows and the Client on Windows or Linux (a prerequisite Java installation is no longer required).  More notably, a Linux client has been introduced for the first time and runs on several Linux operating systems.  The Linux client is Java based and has the same look and feel as the Windows based client.  Some of the details about this release are below.

Enterprise Manager 2015 R1 Data Collector and Client management compatibility:

  • Dell Storage Center OS versions 5.5-6.6
  • Dell FS8600 versions 3.0-4.0
  • Dell Fluid Cache for SAN version 2.0.0
  • Microsoft System Center Virtual Machine Manager (SCVMM) versions 2012, 2012 SP1, and 2012 R2
  • VMware vSphere Site Recovery Manager versions 5.x (HCL), 6.0 (compatible)

Enterprise Manager 2015 R1 Client for Linux operating system requirements:

  • RHEL 6
  • RHEL 7
  • SUSE Linux Enterprise 12
  • Oracle Linux 6.5
  • Oracle Linux 7.0
  • 32-bit (x86) or 64-bit (x64) CPU
  • No support for RHEL 5 but I’ve tried it and it seems to work

Although the Enterprise Manager Client for Linux can be installed without a graphical environment, launching and using the client requires one.  For example, neither RHEL 6 nor RHEL 7 installs a graphical environment by default.  Installing a graphical environment is similar for RHEL 6 and RHEL 7 in that both require a yum repository, but the procedure differs slightly between versions.  There are several resources available on the internet which walk through the process.  I’ll highlight a few below.

Log in with root access.

To install a graphical environment for RHEL 6, create a yum repository and install GNOME or KDE by following the procedure here.

To install a graphical environment for RHEL 7, create a yum repository by following this procedure and install GNOME by following the procedure here.
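For reference, once a yum repository is in place, the desktop installs boil down to a package group install or two.  A minimal sketch, assuming stock RHEL media package groups (group names can vary slightly by point release):

# RHEL 6: install the X and GNOME desktop package groups, then start the GUI
yum groupinstall "X Window System" "Desktop" "Fonts"
startx

# RHEL 7: install the GNOME based graphical target and boot to it by default
yum groupinstall "Server with GUI"
systemctl set-default graphical.target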

Installing the Enterprise Manager Client is pretty straightforward.  Copy the RPM to a temporary directory on the Linux host and use rpm -U to install:

rpm -U dell-emclient-15.1.2-45.x86_64.rpm
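Note that rpm -U alone won’t resolve package dependencies.  If the install complains about missing libraries, yum can install the same local RPM and pull the dependencies from your configured repositories (a sketch, assuming the RPM sits in the current directory):

yum localinstall dell-emclient-15.1.2-45.x86_64.rpm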

Alternatively, download the client from the Enterprise Manager Data Collector using the following syntax as an example:

wget --no-check-certificate https://em1.boche.lab:3033/em/EnterpriseManager/web/apps/client/EmClient.rpm

rpm -U EmClient.rpm
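To confirm the package installed and to locate its launcher, a couple of rpm queries may help (the package name here is inferred from the RPM file name above and could differ between releases):

rpm -qi dell-emclient
rpm -ql dell-emclient | grep bin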

Once installed, launch the Enterprise Manager Client from the /var/lib/dell/bin/ directory:

cd /var/lib/dell/bin/

./Client

or

/var/lib/dell/bin/Client
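If the Linux host itself is headless, X11 forwarding over SSH may be enough to run the client remotely.  A sketch, assuming sshd on the host permits X11Forwarding and an X server runs on your workstation (the user and hostname are placeholders):

ssh -X user@em-client-host /var/lib/dell/bin/Client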

We’re rewarded with the Enterprise Manager 2015 R1 Client splash screen.  New in this release is the ability to immediately manage SCv2000 Series Storage Centers (the SCv2000 Series is the first Storage Center for which the web based management console has been retired).

Once logged in, it’s business as usual in a familiar UI.

Dell, and Compellent before it, has long offered a variety of options and integrations to manage Storage Center as well as popular platforms and applications.  The new Enterprise Manager Client for Linux extends that list of available management methods.

vSphere 5.1 Update 1 Update Sequence

May 6th, 2013

Not so long ago, VMware product releases were staggered.  Major versions of vSphere would launch at or shortly after VMworld in the fall, and all other products such as SRM, View, vCloud Director, etc. would rev on some other random schedule.  This was extremely frustrating for vEvangelists because we wanted to be on the latest and greatest platform but lack of compatibility with the remaining bolt-on products held us back.

While this was a wet blanket for eager lab rats, it was a major complexity for production environments.  VMware understood this issue, and at or around the vSphere 5.0 launch (someone correct me if I’m wrong here), all the development teams in Palo Alto synchronized their watches and rev’d product essentially at the same time.  This was great and it added much needed flexibility for production environment migrations.  However, it also introduced a challenge which didn’t really exist before by virtue of product release staggering – determining a clear and understandable order of product upgrades.  That is why in March of 2012, I looked at all the product compatibility matrices and came up with my own “cheat sheet” of product compatibility which would lend itself to an easy to follow upgrade path, at least for the components I had in my lab environment.

vSphere 5.1 Update 1 launched on 4/25/13 and along with it a number of other products were rev’d for compatibility.  To guide us on the strategic planning and tactical deployment of the new software bundles, VMware issued KB Article 2037630 Update sequence for vSphere 5.1 Update 1 and its compatible VMware products.


Not only does VMware provide the update sequencing information, but there also exists a complete set of links to specific product upgrade procedures and release notes which can be extremely useful for planning and troubleshooting.

The vCloud Suite continues to evolve providing agile and elastic infrastructure services for businesses around the globe in a way which makes IT easier and more practical for consumers but quite a bit more complex on the back end for those who must design, implement, and support it.  Visit the KB Article and give it 5 stars.  Let VMware know this is an extremely helpful type of collateral for those in the trenches.

VMworld 2012 Announcements – Part I

August 27th, 2012

VMworld 2012 is underway in San Francisco.  Once again, a record number of attendees is expected to gather at the Moscone Center to see what VMware and their partners are announcing.  From a VMware perspective, there is plenty.

Given the sheer quantity of announcements, I’m actually going to break them up into a few parts, this post being Part I.  Let’s start with the release of vSphere 5.1 and some of its notable features.

Enhanced vMotion – the ability to now perform a vMotion as well as a Storage vMotion simultaneously. In addition, this becomes an enabler to perform vMotion without the shared storage requirement.  Enhanced vMotion means we are able to migrate a virtual machine stored on local host storage, to shared storage, and then to local storage again.  Or perhaps migrate virtual machines from one host to another with each having their own locally attached storage only.  Updated 9/5/12 The phrase “Enhanced vMotion” should be correctly read as “vMotion that has been enhanced”.  “Enhanced vMotion” is not an actual feature, product, or separate license.  It is an improvement over the previous vMotion technology and included wherever vMotion is bundled.


Enhanced vMotion Requirements:

  • Hosts must be managed by same vCenter Server
  • Hosts must be part of same Datacenter
  • Hosts must be on the same layer-2 network (and same switch if VDS is used)

Operational Considerations:

  • Enhanced vMotion is a manual process
  • DRS and SDRS automation do not leverage enhanced vMotion
  • Max of two (2) concurrent Enhanced vMotions per host
  • Enhanced vMotions count against concurrent limitations for both vMotion and Storage vMotion
  • Enhanced vMotion will leverage multi-NIC when available

Next Generation vSphere Client a.k.a. vSphere Web Client – An enhanced version of the vSphere Web Client which has already been available in vSphere 5.0.  As of vSphere 5.1, the vSphere Web Client becomes the de facto standard client for managing the vSphere virtualized datacenter.  Going forward, single sign-on infrastructure management will converge into a unified interface which any administrator can appreciate.  vSphere 5.1 will be the last platform to include the legacy vSphere Client.  Although you may use this client day to day while gradually easing into the Web Client, understand that all future development from VMware and its partners now goes into the Web Client.  Plug-ins currently used today will generally still function with the legacy client (with support from their respective vendors) but they’ll need to be completely re-written on the vCenter Server side for the Web Client.  Aside from the unified interface, the architecture of the Web Client has scaling advantages as well.  As VMware adds bolt-on application functionality to the client, VMware partners will now have the ability to bring their own custom objects into the Web Client thereby extending that single pane of glass management to other integrations in the ecosystem.

Here is a look at that vSphere Web Client architecture:


Requirements:

  • Internet Explorer / Firefox / Chrome
  • others (Safari, etc.) are possible, but will lack VM console access

A look at the vSphere Web Client interface and its key management areas:


Where the legacy vSphere Client falls short and how the vSphere Web Client solves these issues:

  • Single Platform Support (Windows)
    • vSphere Web Client is Platform Agnostic
  • Scalability Limits
    • Built to handle thousands of objects
  • White Screen of Death
    • Performance
  • Inconsistent look and feel across VMware solutions
    • Extensibility
  • Workflow Lock
    • Pause current task and continue later right where you left off (this one is cool!)
    • Browser Behavior
  • Upgrades
    • Upgrade a single server-side component

vCloud Director 5.1

In the recent past, VMware aligned common application and platform releases to ease issues that commonly occurred with compatibility.  vCloud Director, the cornerstone of the vCloud Suite, is obviously central to how VMware will deliver infrastructure, applications, and *aaS now and into the future.  So what’s new in vCloud Director 5.1?  First, an overview of the vCloud Suite:


And a detailed list of new features:

  • Elastic Virtual Datacenters – Provider vDCs can span clusters leveraging VXLAN, allowing the distribution and mobility of vApps across infrastructure and growing the vCloud Virtual Datacenter
  • vCloud Networking & Security VXLAN
  • Profile-Driven Storage integration with user and storage provided capabilities
  • Storage DRS (SDRS) integration
    • Exposes a storage pod as a first class storage container (just like a datastore) making it visible in all workflows where a datastore is visible
    • Creation, modification, and deletion of storage pods not possible in vCD
    • Member datastore operations not permissible in vCD
  • Single level Snapshot & Revert support for vApps (create/revert/remove); integration with Chargeback
  • Integrated vShield Edge Gateway
  • Integrated vShield Edge Configuration
  • vCenter Single Sign-On (SSO)
  • New Features in Networking
    • Integrated Organization vDC Creation Workflow
    • Creates compute, storage, and networking objects in a single workflow
    • Edge Gateways are exposed at the Organization vDC level
    • Organization vDC networks replace Organization networks
    • Edge Gateways now support:
      • Multiple interfaces on an Edge Gateway
      • The ability to sub-allocate IP pools to an Edge Gateway
      • Load balancing
      • HA (not the same as vSphere HA)
        • Two edge VMs deployed in Active-Passive mode
        • Enabled at time of gateway creation
        • Can also be changed after the gateway has been completed
        • Gets deployed with first Organizational network created that uses this gateway
      • DNS Relay
        • Provides a user selectable checkbox to enable
        • If DNS servers are defined for the selected external network, DNS requests will be sent to the specified server. If not, then DNS requests will be sent to the default gateway of the external network.
      • Rate limiting on external interface
    • Organization networks replaced by Organization vDC Networks
      • Organization vDC Networks are associated with an Organization vDC
      • The network pool associated with Organization vDC is used to create routed and isolated Organization vDC networks
      • Can be shared across Organization vDCs in an Organization
    • Edge Gateways
      • Are associated with an Organization vDC and cannot be shared across Organization vDCs
      • Can be connected to multiple external networks
        • Multiple routed Organization vDC networks will be connected to the same Edge Gateway
      • External network connectivity for the Organization vDC Network can be changed after creation by changing the external networks to which the Edge Gateway is connected.
      • Allows IP pool of external networks to be sub-allocated to the Edge Gateway
        • Needs to be specified in case of NAT and Load Balancer
    • New Features in Gateway Services
      • Load balancer service on Edge Gateways
      • Ability to add multiple subnets to VPN tunnels
      • Ability to add multiple DHCP IP pools
      • Ability to add explicit SNAT and DNAT rules providing user with full control over address translation
      • IP range support in Firewall and NAT services
      • Service Configuration Changes
        • Services are configured on Edge Gateway instead of at the network level
        • DHCP can be configured on Isolated Organization vDC networks.
  • Usability Features
    • New default branding style
      • Cannot revert back to the Charcoal color scheme
      • Custom CSS files will require modification
    • Improved “Add vApp from Catalog” wizard workflow
    • Easy access to VM Quota and Lease Expirations
    • New dropdown menu that includes details and search
    • Redesigned catalog navigation and sub-entity hierarchy
    • Enhanced help and documentation links
  • Virtual Hardware Version 9
    • Supports features presented by HW9 (like 64 CPU support)
    • Supports Hardware Virtualization Calls
    • VT-x/EPT or AMD-V/RVI
    • Memory overhead increased, vMotion limited to like hardware
    • Enable/Disable exposed to users who have rights to create a vApp Template
  • Additional Guest OS Support
    • Windows 8
    • Mac OS 10.5, 10.6 and 10.7
  • Storage Independent of VM Feature
    • Added support for Independent Disks
    • Provides REST API support for actions on Independent Disks
      • As these consume disk space, the vCD UI was updated to show users where they are used:
        • Organizations List Page
          • A new Independent Disks count column is added.
        • Organization Properties Page
          • An Independent Disks tab is added to show all independent disks belonging to the vDC.
          • The tab is not shown if no independent disks exist in the vDC.
        • Virtual Machine Properties Page
          • In the Hardware tab -> Hard Disks section, attached independent disks are shown by their names and all fields for the disks are disabled as they are not editable.

That’s all I have time for right now.  As I said, there is more to come later on topics such as vDS enhancements, VXLAN, SRM, vCD Load Balancing, and vSphere Replication.  Stay tuned!

vSphere 5.0 Update 1 and Related Product Launches

March 16th, 2012

VMware has unveiled a point release update to several of their products tied to the vSphere 5 virtual cloud datacenter platform plus a few new product launches.

vCenter 5.0 Update 1 – Added support for new guest operating systems such as Windows 8, Ubuntu, and SLES 11 SP2, the usual resolved issues and bug fixes, plus some updates around vRAM licensing limits.  One other notable – no compatibility at this time with vSphere Data Recovery (vDR) 2.0 according to the compatibility matrix.

ESXi 5.0 Update 1 – Added support for new AMD and Intel processors, Mac OS X Server Lion, updated chipset drivers, resolved issues and bug fixes.  One interesting point to be made here is that according to the compatibility matrix, vCenter 5.0 supports ESXi 5.0 Update 1.  I’m going to stick with the traditional route of always upgrading vCenter before upgrading hosts as a best practice habit until something comes along to challenge that logic.

vCloud Director 1.5.1 – Added support for vSphere 5.0 Update 1 and vShield 5.0.1, plus RHEL 5 Update 7 as a supported server cell platform.  Enhancements were made around firewall rules, AMQP system notifications, log collection, chargeback retention, resolved issues, and added support for AES-256 encryption on Site-to-Site VPN tunnels (unfortunately no vSphere 5.0 Update 1 <-> vCloud Connector 1.5 support).  Oh yes, sometime over the past few months, VMware Marketing has quietly changed the acronym for vCloud Director from vCD to VCD.  We’ll just call that a new feature for 1.5.1 going forward.  I <3 the Marketing team.

Site Recovery Manager 5.0.1 – Added support for vSphere 5.0 Update 1 plus a “Forced Failover” feature which allows VM recovery in cases where storage arrays fail at the protected site which, in the past, led to unmanageable VMs which could not be shut down, powered off, or unregistered.  Added IP customization for some Ubuntu platforms.  Many bug fixes, oh yes.  VMware brought back an advanced feature which hasn’t been seen since SRM 4.1 which provided a configurable option, storageProvider.hostRescanCnt, allowing repeated host scans during testing and recovery.  This option was removed from SRM 5.0 but has been restored in the Advanced Settings menu in SRM 5.0.1 and can be particularly useful in troubleshooting a failed Recovery Plan.  Right-click a site in the Sites view, select Advanced Settings, then select storageProvider.  See KB 1008283.  Storage arrays certified on SRM 5.0 (e.g. Dell Compellent Storage Center) are automatically certified on SRM 5.0.1.

View 5.0.1 – Added support for vSphere 5.0 Update 1, new Connection Server, Agent, Clients, fixed known issues.  Ahh.. let’s go back to that new clients bit.  New bundled Mac OS X client with support for PCoIP!  I don’t have a Mac so those who would admit to calling me a friend will have to let me know how sharp that v1.4 Mac client is.  As mentioned in earlier release notes, Ubuntu got plenty of love this week, including a new View PCoIP version 1.4 client for Ubuntu Linux.  I might just have to deploy an Ubuntu desktop somewhere to test this client.  But wait, there’s more.  New releases of the View client for Android and iPad tablets.  The Android client adds fixes for Ice Cream Sandwich devices, security stuff, and updates for the Kindle Fire (I need to get this installed on my wife’s Fire).  The updated iPad client improves both connection times as well as external display support but for the most part Apple fans are flipping out simply over something shiny and new.  Lastly, VMware created a one stop shop web portal for all client downloads which can be fetched at http://www.vmware.com/go/viewclients/

vShield 5.0.1 – Again, added support for vSphere 5.0 Update 1, enhanced reporting and export options, new REST API calls, improved audit logs, simplified troubleshooting, improved vShield App policy management as well as HA enhancements, and enablement of Autodeploy through vShield VIB host modules downloadable from vShield Manager.

So… looking at the compatibility matrix with all of these new code drops, my lab upgrade order will look something like this:

1a. View 5.0 –> View 5.0.1

1b. vCD 1.5 –> VCD 1.5.1

1c. SRM 5.0 –> SRM 5.0.1

1d. vShield App/Edge/Endpoint 5.0 –> 5.0.1

1e. vDR 2.0 –> Go Fish

2. vSphere Client 5.0.1 (it’s really not an upgrade; it installs in parallel with other versions)

3. vCenter Server 5.0 –> vCenter Server 5.0 Update 1

4. Update Manager 5.0 –> Update Manager 5.0 Update 1

5. ESXi 5.0 –> ESXi 5.0 Update 1

There are a lot of versions in play here which weaves somewhat of a tangled web of compatibility touch points to identify before diving head first into upgrades.  I think VMware has done a great job this time around with releasing products that are, for the most part, compatible with other currently shipping products which provides more flexibility in tactical approach and timelines.  Add to that, some time ago they migrated the two dimensional .PDF based compatibility matrix into an online portal offering interactive input which customizes the set of results for the end user.  The only significant things missing in the vSphere 5.0U1 compatibility picture IMO are vCloud Connector, vDR, and based on the results from the compatibility matrix portal – vCenter Operations (the output showed no compatibility with vSphere 5.x, which didn’t look right to me).  I’ve taken the liberty of creating a component compatibility visual roadmap including most of the popular and currently shipping products for vSphere 5.0 and above.  If you’ve got a significant amount of infrastructure to upgrade, this may help you get the upgrade order sorted out quickly.  One last thing – Lab Manager and ESX customers should pay attention to the Island of Misfit Toys.  In early 2013 the Lab Manager ride comes coasting to a stop.  Lab Manager and ESX customers should be formulating solid migration plans with an execution milestone coming soon.


VMware vCenter as a vCloud Director vApp

February 27th, 2012

The way things work out, I tend to build a lot of vCenter Servers in the lab.  Or at least it feels like I do.  I need to test this.  A customer I’m meeting with wants to specifically see that.  I don’t want to taint or impact an existing vCenter Server which may already be dedicated to something else having more importance.  VMware Site Recovery Manager is a good example.  Each time I bring up an environment I need a pair of vCenter Servers which may or may not be available.  Whatever the reason, I’ve reached the point where I don’t need to experience the build process repeatedly.

The Idea

A while ago, I had stood up a private cloud for the Technical Solutions/Technical Marketing group at Dell Compellent.  I saved some time by leveraging that cloud environment to quickly provision platforms I could install vCenter Server instances on.  vCenter Servers as vApps – fantastic use case.  However, the vCenter installation process is lengthy enough that I wanted something more in terms of automated cookie cutter deployment which I didn’t have to spend a lot of time on.  What if I took one of the Windows Server 2008 R2 vApps from the vCD Organization Catalog, deployed it as a vApp, bumped up the vCPU and memory count, installed the vSphere Client, vCenter Server, licenses, a local MS SQL Express database, and the Dell Compellent vSphere client plug-in (download|demo video), and then added that vApp back to the vCD Organization Catalog?  Perhaps not such a supported configuration by VMware or Microsoft, but could I then deploy that vApp as future vCenter instances?  Better yet, build a vApp consisting of a pair of vCenter Servers for the SRM use case?  It sounded feasible.  My biggest concerns were things like vCenter and SQL Express surviving the name and IP address change as part of the vCD customization.

The POC

Although I ran into some unrelated customization issues which seemed to have something to do with vCD, Windows Server 2008 R2, and VMXNET3 vNICs (error message: “could not find network adapters as specified by guest customization. Log file is at c:\windows\temp\customize-guest.log.” I’ll save that for a future blog post if I’m able to root cause the problem), the Proof of Concept test results thus far have been successful.  After vCD customization, I was able to add vSphere 5 hosts and continue with normal operations from there.

Initially, I did run into one minor issue: hosts would fall into a disconnected status approximately two minutes after being connected to the vCenter Server.  This turned out to be a Windows Firewall issue which was introduced during the customization process.  Also, there were some red areas under the vCenter Service Status which pointed to the old instance name (most fixes for that are documented well by Rick Vanover here, plus the vCenter Inventory Service cleanup at VMware KB 2009934).

The Conclusion

To The Cloud!  You don’t normally hear that from me on a regular basis but in this case it fits.  A lengthy and increasingly cumbersome task was made more efficient with vCloud Director and vSphere 5.  Using the Linked Clone feature yields both of its native benefits: Fast Provisioning and Space Efficiency.  I’ll continue to leverage vCD for similar and new use cases where I can.  Lastly, this solution can also be implemented with VMware Lab Manager or simply as a vSphere template.  Caveats being Lab Manager retires in a little over a year and a vSphere Template won’t be as space efficient as a Linked Clone.

SRM 5.0 Replication Bits and Bytes

October 3rd, 2011

VMware has pushed out several releases and features in the past several weeks.  It can be a lot to digest, particularly if you’ve been involved in the beta programs for these new products because there were some changes made when the bits made their GA debut. One of those new products is SRM 5.0.  I’ve been working a lot with this product lately and I thought it would be helpful to share some of the information I’ve collected along the way.

One of the new features in SRM 5.0 is vSphere Replication.  I’ve heard some people refer to it as Host Based Replication or HBR for short.  In terms of how it works, this is an accurate description and it was the feature name during the beta phase.  However, by the time SRM 5.0 went to GA, each of the replication components went through a name change as you’ll see below.  If you know me, you’re aware that I’m somewhat of a stickler on branding.  As such, I try to get it right as much as possible myself, and I’ll sometimes point out corrections to others in an effort to lessen confusion rather than perpetuate it.

Another product feature launched around the same time is the vSphere Storage Appliance or VSA for short.  In my brief experience with both products I’ve mentioned so far, I find it’s not uncommon for people to associate or confuse SRM replication with a dependency on the VSA.  This is not the case – they are quite independent.  In fact, one of the biggest selling points of SRM based replication is that it works with any VMware vSphere certified storage and protocol.  If you think about it for a minute, this now becomes a pretty powerful enabler for getting a DR site set up with the storage you have today.  It also allows you to get SRM in the door based on the same principles, with the ability to grow into scalable array based replication in an upcoming budget cycle.

With that out of the way, here’s a glimpse at the SRM 5.0 native replication components and terminology (both beta and GA).

Beta Name         GA Name                                  GA Acronym
HBR               vSphere Replication                      VR
HMS               vSphere Replication Management Server    vRMS
HBR server        vSphere Replication Server               vRS
ESXi HBR agent    vSphere Replication Agent                vR agent

Here is a look at how the SRM based replication pieces fit in the SRM 5.0 architecture.  Note the storage objects shown are VMFS but they could be both VMFS datastores as well as NFS datastores on either side:


Diagram courtesy VMware, Inc.

To review, the benefits of vSphere Replication are:

  1. No requirement for enterprise array based replication at both sites.
  2. Replication between heterogeneous storage, whatever that storage vendor or protocol might be at each site (so long as it’s supported on the HCL).
  3. Per VM replication. I didn’t mention this earlier but it’s another distinct advantage of VR over per datastore replication.
  4. It’s included in the cost of SRM licensing. No extra VMware or array based replication licenses are needed.

Do note that access to the VR feature is by way of a separate installable component of SRM 5.0.  If you haven’t already installed the component during the initial SRM installation, you can do so afterwards by running the SRM 5.0 setup routine again at each site.

I’ve talked about the advantages of VR.  Again, I think they are a big enabler for small to medium sized businesses and I applaud VMware for offering this component which is critical to the best possible RPO and RTO.  But what about the disadvantages compared to array based replication?  In no particular order:

  1. Cannot replicate templates.  The ‘why’ comes next.
  2. Cannot replicate powered off virtual machines.  The ‘why’ for this follows.
  3. Cannot replicate files which don’t change (powered off VMs, ISOs, etc.)  This is because replications are handled by the vRA component – a shim in vSphere’s storage stack deployed on each ESX(i) host.  By the way, Changed Block Tracking (CBT) and VMware snapshots are not used by the vRA.  The mechanism uses a bandwidth efficient technology similar to CBT but it’s worth pointing out it is not CBT.  Another item to note here is that VMs which are shut down won’t replicate writes during the shutdown process.  This is fundamentally because only VMs which are powered on and stay powered on are replicated by VR.  Current state of the VM would, however, be replicated once the VM is powered back on.
  4. Cannot replicate FT VMs.  Note that array based replication can be used to protect FT VMs but once recovered they are no longer FT enabled.
  5. Cannot replicate linked clone trees (Lab Manager, vCD, View, etc.)
  6. Array based replication will replicate a VMware based snapshot hierarchy to the destination site while leaving it intact.  VR can replicate VMs with snapshots but they will be consolidated at the destination site.  This is again based on the principle that only changes are replicated to the destination site.
  7. Cannot replicate vApp consistency groups.
  8. VR does not work with virtual disks opened in “multi-writer mode” which is how MSCS VMs are configured.
  9. VR can only be used with SRM.  It can’t be used as a data replication solution for your vSphere environment outside of SRM.
  10. Losing a vSphere host means that the vRA and the current replication state of a VM or VMs is also lost.  In the event of HA failover, a full-sync must be performed for these VMs once they are powered on at the new host (and vRA).
  11. The number of VMs which can be replicated with VR will likely be less than array based replication depending on the storage array you’re comparing to.  In the beta, VR supported 100 VMs.  At GA, SRM 5.0 supports up to 500 VMs with vSphere Replication. (Thanks Greg)
  12. In-band VR requires additional open TCP ports (a quick connectivity check follows this list):
    1. 31031 for initial replication
    2. 44046 for ongoing replication
  13. VR requires vSphere 5 hosts at both the protected and recovery sites while array based replication follows only general SRM 5.0 minimum requirements of vCenter 5.0 and hosts which can be 3.5, 4.x, and/or 5.0.
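Before enabling VR between sites, it may be worth confirming those TCP ports aren’t blocked in transit.  A minimal sketch using netcat from the protected site, where vrs.recovery.lab is a hypothetical vSphere Replication Server at the recovery site:

nc -z -w 5 vrs.recovery.lab 31031
nc -z -w 5 vrs.recovery.lab 44046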

The list of disadvantages appears long but don’t let that stop you from taking a serious look at SRM 5.0 and vSphere Replication.  I don’t think there are many, if any, showstoppers in that list for small to medium businesses.

I hope you find this useful.  I gathered the information from various sources, much of it from an SRM Beta FAQ which, to the best of my knowledge, is still fact today in the GA release.  If you find any errors or would like to offer corrections or additions, as always please feel free to use the Comments section below.

Rogue SRM 5.0 Shadow VM Icons

September 13th, 2011

One of the new features in VMware SRM 5.0 is Shadow VM Icons.  When VMs are protected at the primary site, these placeholder objects will automatically be created in VM inventory at the secondary site.  It may seem like a trivial topic for discussion but it is important to recognize that these placeholder objects represent datacenter capacity which will be needed and consumed on demand if and when the VMs are powered on during a planned migration or disaster recovery operation within SRM.  In previous versions of SRM, the placeholder VMs simply looked like powered off virtual machines.  In SRM 5.0, these placeholder VMs get a facelift to provide better clarity of their disposition.  You can see what these Shadow VM Icons look like in the image to the right.

Each SRM Server maintains its own unique SQL database instance in order to track the current state of the environment.  It does a pretty good job of this.  However, at some point you may run into an instance where once-protected SRM VMs are no longer protected (by choice or design), yet they maintain the new Shadow VM Icon look which can yield a false sense of protection.  If the VMs truly are not protected, they should have no relationship with SRM and thus should not be wearing the Shadow VM Icon.  I ran into this during an SRM upgrade.  I corrected the rogue icon by removing the VM from inventory and re-adding it to inventory.  This action is safe to quickly perform on running VMs.