Posts Tagged ‘3rd Party Apps’

Make-A-File – File Creation Utility

July 20th, 2011

Part of being successful in your role is having the right tool for the job.  If you work a lot with storage, storage performance, tiering, snapshots, or replication (i.e., some of the new storage-related features in vSphere 5), this tool might come in handy: Make-a-File.  A colleague introduced me to this Windows-based utility which creates a file at the size you specify, up to 18 exabytes.

Using the tool is simple: launch Make-a-File.exe.

Configurable Parameters:

  • Filename: Specify name and path for the file to be created.
  • Size: Specify a file size between 1 byte and 18 exabytes.
  • Random content: Fills the file with actual random data rather than all zeroes.  Analogous to creating a “thick” file.  For effective storage tests, enable this option.
  • Quick Create: Creates a thin-provisioned file, using the specified file size to mark the beginning and end geometry boundaries without actually filling the file with data.  Utilizes the SetFilePointer() function to set the end of the file (see the sketch after this list).
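
Make-a-File is a Windows utility, but the same two behaviors are easy to reproduce on a *nix box with standard tools if that is where you need the test files.  A rough sketch, assuming GNU dd/truncate are available; the file names are hypothetical:

# "Random content" equivalent -- actually writes 1 GB of random data (thick):
dd if=/dev/urandom of=thick-1g.bin bs=1M count=1024
# "Quick Create" equivalent -- sets the end-of-file marker without writing any data (thin):
truncate -s 1G thin-1g.bin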

Download Make-a-File_src.zip (23KB)

Make-A-File home page

Virtual Bridges Slashes VDI Storage Costs with Latest VDI Gen2 Solution

July 20th, 2011

Press Release:

Virtual Bridges Slashes VDI Storage Costs with Latest VDI Gen2 Solution

VERDE Adds New Features Including Cache I/O Storage Saver and Integrated Endpoint Management

 

AUSTIN, Texas (July 20, 2011) – Virtual Bridges, Inc. today announced enhancements to VERDE, the industry’s first “VDI Gen2” solution. Key additions include a new cache I/O storage saver that removes CAPEX hurdles long associated with VDI, and integrated endpoint management that delivers on the promise of desktop management infrastructure solutions.

Industry analysts have cited storage costs as one of the top 10 inhibitors for organizations looking to implement VDI. Unlike competitive solutions that focus on storage capacity (terabytes needed), VERDE StorageSaver™ uses cache I/O technology to reduce the number of Input/Output Operations Per Second (IOPS) required, the single most important metric in VDI desktop performance. Additionally, Copy-on-Write and Copy-on-Read features reduce external storage requirements by using local disks. On average, VERDE’s storage-related costs are one-third of other VDI solutions.

VERDE also now integrates PC Life Cycle Management (PCLM) policy and patch management into the virtual desktop. This includes integration with IBM’s PCLM solution, Tivoli Endpoint Manager built on BigFix technology, to ensure consistent policy across both physical and virtual desktops. VERDE is also now a management component within BigFix and other PCLM vendor frameworks including Novell zCM, Microsoft SCCM and more.  The result is smarter, faster endpoint management that also reduces risk and complexity.

Other VERDE highlights include:

  • Integrated Third-Party Application Management streamlines the deployment of applications inside the Gold Master with unified policy management and improves overall manageability; works with application virtualization solutions, including VMware ThinApp, Novell SPOON/ZenWorks, Cameyo and InstallFree.
  • Enhanced HA Clustering increases high availability of VERDE with automatic ClusterMaster™ (CM) failover in less than two minutes for any candidate without manual intervention; improves manageability by simplifying installation updates and eliminates the need for third-party tools; adds cluster-wide licensing and unattended cluster-wide install/upgrade, offering native CM failover.
  • Enhanced Desktop Use Case Coverage extends support beyond traditional productivity/power users who do not need to install their own images, to cover a wide range of user scenarios including:
    • Long Life Dynamic Desktops – Improves security of the virtual desktop for VERDE LEAF users, such as traveling sales executives, who sporadically connect to the corporate network but often use public networks in airports or coffee shops.
    • Static Desktops – Provides greater control for fully persistent users, such as developers or engineers, who need to manage their own applications.
    • Non-Persistent Desktops – Provides ease of use without the need for customization for those who do not persist user data, such as workers at call centers or kiosks.
    • Dynamic Desktops – Continues to deliver robust user experience for productivity and power users who do not need to install their own images, but expect to have persistence for personal settings and documents.

“This release of VERDE is a true collaboration with our customers, tackling their biggest challenges including storage, endpoint management and third-party application management,” said Jim Curtin, CEO of Virtual Bridges. “VDI Gen2 continues to deliver significant advancements to make VDI easier and more cost effective than ever.”

As the first VDI Gen2 offering, VERDE features core capabilities that include online, offline and branch VDI, a Gold Master provisioning model, a Distributed Connection brokering architecture, flexibility to run both Windows and Linux desktops, branch-level VDI at LAN speeds, the ability to span both on-premises and hosted deployment modes and desktop portability on a USB stick.

Virtual Bridges has been named a “Major Player” in desktop virtualization by IDC, a “Cool Vendor” in Personal Computing by Gartner, an MIT Sloan CIO Symposium Innovation Showcase finalist, and one of 15 desktop virtualization vendors to watch in CRN’s Virtualization 100.

For more on VERDE visit http://www.vbridges.com/products/.

 

About Virtual Bridges

Virtual Bridges VERDE is the industry’s most comprehensive desktop management and provisioning solution that leverages virtualization to deliver desktops either on-premises or in the cloud. The VERDE solution lets enterprises transform their desktop TCO by simplifying desktop management, improving security and compliance by centralizing the administration of desktop images and data, and increasing organizational agility by quickly providing desktop and application access to users on any client machine (PC, Macintosh, Linux, thin client, home computer or on a portable drive) at any time.

New Diskeeper White Paper: Optimization of VMware Systems

June 28th, 2011

Diskeeper Corporation reached out to me via email last week letting me know that they’ve released a new white paper on optimizing VMs.  I’m making the three-page document available for download via the following link:

Best Practice Protocols: Optimization of VMware Systems (416KB)

Force a Simple VMware vMA Password

June 27th, 2011

VMware ESXi is mainstream.  If you’ve ever deployed a VMware vMA appliance to manage ESXi (or heck, even ESX for that matter), you may have noticed the enforcement of a complex password policy for the vi-admin account.  For example, setting a password of “password” is denied because it is based on a dictionary word (in addition to other obvious reasons).

However, you can bend the complexity rules and force a simple password after the initial deployment using sudo.  You’ll still be warned about the violation of the complexity policy but by using sudo, the policy is allowed to be bypassed by a higher authority:

sudo passwd vi-admin

This tip isn’t specific to VMware or the vMA appliance; it is general *nix knowledge.  There is ample documentation available discussing the password complexity mechanism in various versions of *nix.  Another approach to bypassing the complexity requirement would be to relax the requirement itself, but this would impact other local accounts potentially in use on the vMA appliance which may still require complex passwords.  Using the sudo command is faster and leaves the default complexity mechanism in place.
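
If you would rather go the relax-the-policy route, that generally means editing the PAM configuration.  A rough sketch on a typical pam_cracklib-based build (the exact file, module, and options vary by distribution and vMA release, so treat this as illustrative only):

sudo vi /etc/pam.d/system-auth
# then loosen the pam_cracklib line, for example:
# password  requisite  pam_cracklib.so retry=3 minlen=6 dcredit=0 ucredit=0 lcredit=0 ocredit=0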

Xangati Packs More Power in Free VMware Management Tool

June 22nd, 2011

Press Release:

Xangati Packs More Power in Free VMware Management Tool

Expands Functionality of Xangati for ESX with Performance Health Engine for Any Given Host

Cupertino, CA – June 22, 2011 – Xangati, the recognized leader in infrastructure performance management, today announced that it has expanded the capabilities and power offered in its free VMware management tool, Xangati for ESX. Xangati for ESX now includes several features from its recently announced and patent-pending Performance Health Engine – a real-time health index that monitors the health of every object within the virtualized infrastructure and a key component of Xangati’s multi-host Xangati Virtual Infrastructure (VI) and Virtual Desktop Infrastructure (VDI) Dashboards. With the updated Xangati for ESX, virtualization managers now have an even clearer picture of their VM activity, as well as the ability to fully monitor a single ESX host – all at no cost.

“Xangati is continuously looking for ways to improve our infrastructure performance management solutions in order to provide the highest value to virtualization managers – and that objective is absolutely no different for our free Xangati for ESX tool,” said Alan Robin, CEO of Xangati. “The response to our Performance Health Engine – for both our VI and VDI dashboards – inspired us to incorporate some of its capabilities into our free tool, so that everyone can experience and benefit from real-time health analysis – in any stage of their virtualization initiative.”

“By incorporating its health index into the free Xangati for ESX, Xangati allows virtualization managers to create a baseline for the infrastructure,” said David Davis, vExpert and blogger. “When any unusual activity occurs on the infrastructure, the tool alerts you and identifies the problem area. This ability – plus Xangati’s trademark DVR recordings – provide for the most comprehensive troubleshooting available, differentiating Xangati from other virtual performance monitoring tools – all for free.”

New Capabilities Streamline Management and Ensure User Satisfaction

With its new enhancements, Xangati for ESX gives managers deeper insights into any potential problems within virtualized environments by immediately and visually alerting them to any anomalies. Xangati achieves this unique health alert system by comparing real-time data feeds with established performance profiles for up to 10 VMs running on an ESX host supporting virtualized servers or virtual desktops. Its memory-based architecture allows Xangati to compare this data and identify any performance shifts live and continuously – not through intermittent polling intervals – giving managers unparalleled insights for faster troubleshooting. These insights, in turn, provide confidence for the migration of mission-critical applications in the VI and ensure end user satisfaction – the biggest factor in determining the success of VDI initiatives.

Xangati for ESX still includes all of Xangati’s trademark features, including: continuous scroll-bar and drill-down user interface (UI) capabilities for dynamic and real-time navigation; visibility into more than 100 metrics on an ESX/ESXi host and its VMs’ activity; and automated DVR recordings (triggered by VMware alerts) to capture critical events for replay analysis for precision troubleshooting as opposed to sifting through unstructured log files. Xangati for ESX is also deployed in Open Virtualization Format (OVF) to facilitate a faster and easier installation process. Xangati is committed to continuing to incorporate capabilities that add value and help accelerate virtualization initiatives.

Available immediately, the updated Xangati for ESX works with VMware 3.5, 4.0 and 4.1 for ESX and ESXi. Xangati has also created an updated installation video and documentation for additional background about the new features in order to enable virtualization managers to begin using and benefiting from the free tool as quickly as possible. To access the installation video and download a copy of the free Xangati for ESX, go to http://xangati.com/xangati-for-esx-new-features/.

About Xangati

Xangati, the recognized leader in Infrastructure Performance Management (IPM), provides unparalleled performance management for the emerging and transformational data center architectures impacting IT today, including server virtualization, cloud computing and VDI. Its award-winning suite of IPM solutions accelerates cloud computing and virtualization initiatives by providing unprecedented visibility and real-time continuous insights into the entire infrastructure. Leveraging its powerful precision analytics, Xangati’s health performance index provides a new way to view and manage performance – in real-time – at a scale previously not possible.

Founded in 2006, Xangati, Inc. is a privately held company with corporate headquarters based in Cupertino, California. Xangati has been granted numerous technology patents for its unique and comprehensive approach to Infrastructure Performance Management. Xangati is a VMware Technology Alliance Partner and certified Citrix Ready Partner and supports both VMware View and Citrix XenDesktop, as well as other virtualization environments. For more information, visit the company website at http://www.xangati.com.

Disk.SchedNumReqOutstanding and Queue Depth

June 16th, 2011

There is a VMware storage whitepaper available titled Scalable Storage Performance.  It is an oldie but a goodie.  In fact, next to VMware’s Configuration Maximums document, it is one of my favorites and I’ve referenced it often.  I like it because it efficiently and specifically covers block storage LUN queue depth and SCSI reservations.  It was written pre-VAAI, but I feel the concepts are still quite relevant in the block storage world.

One of the interrelated components of queue depth on the VMware side is the advanced VMkernel parameter Disk.SchedNumReqOutstanding.  This setting determines the maximum number of active storage commands (IO) allowed at any given time at the VMkernel.  In essence, this is queue depth at the hypervisor layer.  Queue depth can be configured at various points in the path of an IO: at the VMkernel as just mentioned, at the HBA hardware layer, at the kernel module (driver) layer, and at the guest OS layer.
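
For reference, the current value can be viewed in the vSphere Client (host Configuration tab, Advanced Settings, Disk section) or from the ESX(i) command line.  A quick sketch using esxcfg-advcfg, where the first command displays the current value (32 by default) and the second sets it to an example value of 64:

esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding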

Getting back to Disk.SchedNumReqOutstanding, I’ve always lived by the definition I felt was most clearly stated in the Scalable Storage Performance whitepaper:  Disk.SchedNumReqOutstanding is the maximum number of active commands (IO) per LUN.  Clustered hosts don’t collaborate on this value, which implies this queue depth is per host.  In other words, each host has its own independent queue depth, again, per LUN.  How does Disk.SchedNumReqOutstanding impact multiple VMs living on the same LUN (again, on the same host)?  The whitepaper states each VM will evenly share the queue depth (assuming each VM has identical shares from a storage standpoint).

When virtual machines share a LUN, the total number of outstanding commands permitted from all virtual machines to that LUN is governed by the Disk.SchedNumReqOutstanding configuration parameter that can be set using VirtualCenter. If the total number of outstanding commands from all virtual machines exceeds this parameter, the excess commands are queued in the ESX kernel.

I was recently challenged by a statement agreeing with all of the above but with one critical exception:  Disk.SchedNumReqOutstanding provides an independent queue depth for each VM on the LUN.  In other words, if Disk.SchedNumReqOutstanding is left at its default value of 32, then VM1 has a queue depth of 32, VM2 has a queue depth of 32, and VM3 has its own independent queue depth of 32.  Stack those three VMs and we arrive at a sum total of 96 outstanding IOs on the LUN.  A few sources were provided to me to support this:

Fibre Channel SAN Configuration Guide:

You can adjust the maximum number of outstanding disk requests with the Disk.SchedNumReqOutstanding parameter in the vSphere Client. When two or more virtual machines are accessing the same LUN, this parameter controls the number of outstanding requests that each virtual machine can issue to the LUN.

VMware KB Article 1268 (Setting the Maximum Outstanding Disk Requests per Virtual Machine):

You can adjust the maximum number of outstanding disk requests with the Disk.SchedNumReqOutstanding parameter. When two or more virtual machines are accessing the same LUN (logical unit number), this parameter controls the number of outstanding requests each virtual machine can issue to the LUN.

The problem with the two statements above is that I feel they are poorly worded and, as a result, easily misinterpreted.  I understand what each statement is trying to say, but it implies something quite different depending on how a person reads it.  Each statement is correct in that Disk.SchedNumReqOutstanding will gate the amount of active IO possible per LUN and ultimately per VM.  However, the wording implies that the value assigned to Disk.SchedNumReqOutstanding applies individually to each VM, which is not the case.  The reason I’m pointing this out is the number of misinterpretations I’ve subsequently discovered via Google, which I gather are the result of reading one of the two sources above.

The scenario can be quickly proven in the lab.  Disk.SchedNumReqOutstanding is configured for the default value of 32 active IOs.  Using resxtop, I watch my three VMs crank out IO with IOMETER.  Each VM is configured in IOMETER to generate 32 active IOs.  If the claim in the challenge is true, I should see 96 active IOs being generated to the LUN from the combined activity of the three VMs.
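
For anyone who wants to reproduce the test, the resxtop side of it looks roughly like this from the vMA (the hostname below is a placeholder and resxtop will prompt for credentials):

resxtop --server esx01.lab.local

Once connected, press u for the disk device view; the columns to watch for the LUN are DQLEN (device queue depth), ACTV (active commands), QUED (queued commands), %USD, and LOAD.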

But that’s not what’s happening.  Instead, what I see is approximately 32 ACTV (active) IOs on the LUN, with another 67 IOs waiting in queue (by the way, ESXTOP statistic definitions can be found here).  In my opinion, the Scalable Storage Performance whitepaper most accurately defines the behavior of the Disk.SchedNumReqOutstanding value.

Now, going back to the possibility of Disk.SchedNumReqOutstanding stacking: LUN queue utilization could get out of hand rapidly with 10, 15, 20, or 25 VMs per LUN.  We’d quickly exceed the maximum supported value of Disk.SchedNumReqOutstanding, which is 256 (a ceiling shared by all of the HBAs I’m aware of).  HBA ports themselves typically support only a few thousand outstanding commands.  Stacking the queue depths for each VM could quickly saturate an HBA, meaning we’d get a lot less mileage out of those ports as well.

While we’re on the subject of queue depth, it’s also worth noting the %USD value is at 100% and LOAD is approximately 3.  LOAD is the ratio of active plus queued commands to the queue depth, so (32 + 67) / 32 ≈ 3, which corroborates the 3:1 ratio of total IO to queue depth.  Both figures paint the picture of a LUN that is oversubscribed from an IO standpoint.

In conclusion, I’d like to see VMware modify the wording in their documentation to provide a better understanding, leaving nothing open to interpretation.

Update 6/23/11:  Duncan Epping at Yellow Bricks responded with a great follow-up: Disk.SchedNumReqOutstanding the story.

Scripted Removal Of Non-present Hardware After A P2V

June 11th, 2011

After converting a physical machine to a virtual machine, it is considered a best practice to remove unneeded applications, software, services, and device drivers which were tied to the physical machine but are no longer applicable to the virtual machine.  Performing this task manually from time to time isn’t too bad, but at large scale a manual process becomes inefficient.  There are tools available which automate the removal of unneeded device drivers (sometimes referred to as ghost hardware).  A former colleague put together a scripted solution for Windows VMs which I’m sharing here.
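
For context, the manual equivalent of what a script like this automates is the classic “show non-present devices” technique, run from a command prompt inside the VM (sketch only):

set devmgr_show_nonpresent_devices=1
start devmgmt.msc

Then, in Device Manager, choose View > Show hidden devices and uninstall the greyed-out (non-present) entries.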

Copy the .zip file to the virtual machine’s local hard drive, extract it, and follow the instructions in the readme.txt file.  I have not thoroughly tested the tool.  No warranties – use at your own risk.  I would suggest using it on a test machine first to become comfortable with the process before using it on production machines or on a large scale.

Download: remnonpresent.zip (719KB)