Posts Tagged ‘Virtualization’

Make-A-File – File Creation Utility

July 20th, 2011

Part of being successful in your role is having the right tool for the job.  If you work a lot with storage, storage performance, tiering, snapshots, or replication (i.e. some of the new storage related features in vSphere 5), this tool might come in handy: Make-a-File.  A colleague introduced me to this Windows-based utility which creates a file at the size you specify, up to 18 ExaBytes.

Using the tool is simple: launch Make-a-File.exe.

Configurable Parameters:

  • Filename: Specify name and path for the file to be created.
  • Size: Specify a file size between 1 Byte and 18 ExaBytes.
  • Random content: Fills the file with actual random data rather than all zeroes.  Analogous to creating a “thick” file.  For effective storage tests, enable this option.
  • Quick Create: Creates a thin provisioned file using the specified file size to mark the beginning and end geometry boundaries. Doesn’t actually fill the file with data.  Utilizes the SetFilePointer() function to set the end of the file.
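The two creation modes can be mimicked in a few lines of Python (a rough sketch of the same ideas, not Make-a-File’s actual code; the file names and sizes here are my own):

```python
import os

def make_file(path, size, random_content=False):
    """Create a file of exactly `size` bytes.

    random_content=True fills the file with actual random data (the
    "thick" behavior of the Random content option); otherwise we seek
    to the last byte and write a single zero, marking the end of the
    file without filling it, the same idea as Quick Create's
    SetFilePointer() call.
    """
    with open(path, "wb") as f:
        if random_content:
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1024 * 1024)  # 1 MB at a time
                f.write(os.urandom(chunk))
                remaining -= chunk
        elif size > 0:
            f.seek(size - 1)  # move the file pointer to the last byte
            f.write(b"\0")    # writing one byte sets the end of file

make_file("thick.bin", 10 * 1024 * 1024, random_content=True)
make_file("thin.bin", 10 * 1024 * 1024)
```

The seek-and-write approach is why a Quick Create style of file creation finishes almost instantly regardless of the size requested: the filesystem records the new end-of-file boundary without writing data in between.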

Download Make-a-File_src.zip (23KB)

Make-A-File home page

He is serious, and don’t call him Scott

May 20th, 2011

Happy Friday!  Today’s treat is the announcement of a new tech blog by my friend in VMware virtualization, Microsoft SQL, and the occasional fine cigar, Todd Scalzott (@tscalzott).  I love the title of his blog: Don’t Call Me Scott.  Content focus will be Tech ramblings from a guy named Todd, too often called Scott.  I’m looking forward to what you have to share, Todd!

Gestalt IT Tech Field Day – Compellent

July 16th, 2010

Gestalt IT Tech Field Day 2 begins with Compellent, a storage vendor out of Eden Prairie, MN.  Compellent has been around for about eight years and, like other well known multiprotocol SAN vendors, offers spindles of FC, SATA, SAS, and SSD via FC block, iSCSI, NFS, and CIFS.

Compellent’s hardware approach is a modular one.  Many of the components, such as drives and interfaces (Ethernet, FC, etc.), are easily replaceable and hot swappable, eliminating the need to “rip and replace” the entire frame of hardware and providing the ability to upgrade components without taking down the array.

In April of 2010, Compellent introduced the new zNAS solution:

Compellent introduces the new zNAS solution, which consolidates file and block storage on a single, intelligent platform. The latest unified storage offering from Compellent integrates next-generation ZFS software, high-performance hardware and Fluid Data architecture to actively manage and move data in a virtual pool of storage, regardless of the size and type of block, file or drive. Enterprises can simplify management, intelligently scale capacity, improve performance for critical applications and reduce complexity and costs.

Fluid Data Storage is Compellent’s granular approach to data management:

  • Virtualization
  • Intelligence
  • Automation
  • Utilization

Volume Creation

Volume Recovery

Volume Management

Integration 

  • VMware
    • Leveraging many of the features mentioned above
    • HCL compatibility, although I don’t see ESXi in the list, which would be a major concern for VMware customers given that ESX is being phased out.  Compellent responded that they believe their arrays are compatible with ESXi and will look into updating their VMware support page if that is the case.  I originally noted here that VMware’s HCL shows Compellent storage is not currently certified for ESXi; significant correction to that statement: VMware’s storage HCL differs from its host hardware HCL in that the host hardware HCL lists explicit compatibility for both ESX and ESXi, whereas the storage HCL lists ESX compatibility, which implies equivalent ESXi compatibility.  Compellent arrays, as of this writing, are compatible with both ESX4 and ESXi4.
  • Microsoft
    • PowerShell (for automation and consistency of storage management)
    • Hyper-V

Compellent performed a live demo of their Replay (snapshot) feature with a LUN presented to a Windows host.  It was slick and worked as expected.  Compellent’s Windows-based storage management UI has a fresh, no-nonsense, 21st century feel to it which I can appreciate.

We closed the discussion by answering the question “Why Compellent?”  Top reasons:

  1. Efficiency
  2. Long term ROI, cost savings through the upgrade model
  3. Ease of use

Follow them on Twitter at @Compellent.

Thank you, Compellent, for the presentation, and I’m sure I’ll see you back in Minnesota!

Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

Gestalt IT Tech Field Day Seattle

July 15th, 2010

Gestalt IT was gracious enough to invite me back as a delegate for Tech Field Day Seattle which is happening… well… now, not to put too fine a point on it.  I’m really excited about this opportunity!  For the next two days, I’ll be at the Microsoft campus in Redmond, WA taking in vendor presentations and participating in peer discussions spanning a few different technology verticals. 

We kicked things off tonight with dinner, discussion, and door prizes at Cedarbrook Lodge in Seatac, WA.  There are a lot of new faces in this group of delegates.  I don’t know most of the guys but that makes for a great opportunity to meet new people and network.  In a word, Cedarbrook is gorgeous.  It has more of a resort feel to it than a hotel.  It’s too bad I won’t be spending more time here but the show must go on.

Tomorrow (Thursday), the other delegates and I will be meeting with Veeam, F5, and a stealth company which officially launches in our very presence tomorrow.  I’m familiar with most of Veeam’s offerings but as a virtualization guy, I’m hoping to see more about their SureBackup technology.  I’ve known of F5 for many years but just recently I’ve seen them push their way into the virtualization arena.  Just last week they expressed interest in participating in the Minneapolis VMUG.  I’m anxious to see what value they bring to the virtualized datacenter.  We cap off the day with a party at the Museum of Flight which should be really cool.

Moving into Friday, we’ll hear from Compellent on what they have been up to in the storage arena and how they are doing things differently than other storage vendors such as EMC, NetApp, Hitachi, HP, IBM, 3PAR, Dell, FalconStor, Pillar, etc.  We’ll also be spending some time with NEC.  I’m real curious as to what they are going to present.  Talk about a diverse portfolio of products (as well as professional services).  Whatever it is, I’ll be looking for virtualization relevance.  Not only that, but will we see a landscape that continues to cater to cloud agility?  Cloud has picked up a lot of momentum.  It’s real.  Adopt, adapt, integrate, or get run over by it.  There may be one more vendor on Friday… that remains to be seen at this point.  We end Friday with dinner in the evening and then some of us will start our journey back home.

I’m looking forward to a couple of great days.

Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

OVF? OVA? WTF?

July 2nd, 2010

If you’ve worked with recent versions of VMware virtual infrastructure, Converter, or Workstation, you may be familiar with the fact that these products have the native ability to work with virtual machines in the Open Virtualization Format, or OVF for short.  OVF is a specification governed by the DMTF (Distributed Management Task Force), which to me sounds a lot like the RFCs which provide standards for protocols and communication across compute platforms – basically SOPs for how content is delivered on the internet as we know it today.

So if there’s one standard, why is it that when I choose to create an OVF (Export OVF Template in the vSphere Client), I’m prompted to create either an OVF or an OVA?  If the OVF is an OVF, then what’s an OVA?

Personally, I’ve seen both formats, typically when deploying packaged appliances.  The answer is simple: both the OVF and the OVA formats roll up into the Specification defined by the DMTF.  The difference between the two is in the presentation and encapsulation.  The OVF is a construct of a few files, all of which are essential to its definition and deployment.  The OVA, on the other hand, is a single file with all of the necessary information encapsulated inside of it.  Think of the OVA as an archive file.  The single file format provides ease in portability.  From a size or bandwidth perspective, there is no advantage to one format over the other, as each tends to be the same size when all is said and done.

The DMTF explains the two formats on pages 12 through 13 in the PDF linked above:

An OVF package may be stored as a single file using the TAR format. The extension of that file shall be .ova (open virtual appliance or application).

An OVF package can be made available as a set of files, for example on a standard Web server.

Do keep in mind that whichever file type you choose to work with, if you plan on hosting them on a web server, MIME types will need to be set up for .ovf, .ova, or both, in order for a client to download them for deployment onto your hypervisor.
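Since an OVA is just a TAR archive of the OVF package files, you can roll one up (or crack one open) with standard tooling.  A minimal Python sketch follows; the appliance file names are hypothetical, and note that the spec expects the .ovf descriptor to be the first entry in the archive:

```python
import tarfile

def pack_ova(ova_path, ovf_descriptor, other_files):
    """Bundle an OVF package into a single .ova (TAR) archive.

    The .ovf descriptor is added first, ahead of the manifest and
    disk files, to match the ordering the OVF specification expects.
    """
    with tarfile.open(ova_path, "w") as tar:  # plain TAR, no compression
        tar.add(ovf_descriptor)
        for name in other_files:
            tar.add(name)

# Demo with hypothetical (empty) appliance files:
for name in ("appliance.ovf", "appliance.mf", "appliance-disk1.vmdk"):
    open(name, "wb").close()
pack_ova("appliance.ova", "appliance.ovf",
         ["appliance.mf", "appliance-disk1.vmdk"])
```

This also explains why the two formats end up roughly the same size: the OVA adds only TAR header overhead around the very same files.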

At 41 pages, the OVF Specification contains a surprising amount of detail.  There’s more to it than you might think, and for good reason:

The Open Virtualization Format (OVF) Specification describes an open, secure, portable, efficient and extensible format for the packaging and distribution of software to be run in virtual machines.

Open, meaning cross platform (bring your own hypervisor).  Combined with Secure and Portable attributes, OVF may be one of the key technologies for intracloud and intercloud mobility.  The format is a collaborative effort spawned from a variety of contributors:

Simon Crosby, XenSource
Ron Doyle, IBM
Mike Gering, IBM
Michael Gionfriddo, Sun Microsystems
Steffen Grarup, VMware (Co-Editor)
Steve Hand, Symantec
Mark Hapner, Sun Microsystems
Daniel Hiltgen, VMware
Michael Johanssen, IBM
Lawrence J. Lamers, VMware (Chair)
John Leung, Intel Corporation
Fumio Machida, NEC Corporation
Andreas Maier, IBM
Ewan Mellor, XenSource
John Parchem, Microsoft
Shishir Pardikar, XenSource
Stephen J. Schmidt, IBM
René W. Schmidt, VMware (Co-Editor)
Andrew Warfield, XenSource
Mark D. Weitzel, IBM
John Wilson, Dell

Take a look at the OVF Specification document as well as some of the other work going on at the DMTF.

Have a great and safe July 4th weekend, and congratulations to the Dutch on their win today in World Cup Soccer.  I for one will be glad when it’s all over with and our Twitter APIs can return to normal again.

P2V Milestone

May 15th, 2010

If you’re reading this, that’s good news because it means last night’s P2V completed successfully.  I took the last remaining non-virtualized physical infrastructure server in the lab and made it a virtual machine.  Resource and role wise, this was the largest physical lab server next to the ESX hosts themselves.

Resources:

  • HP Proliant DL380 G3
  • Dual Intel P4 2.8GHz processors
  • 6GB RAM
  • 1/2 TB local storage
  • Dual Gb NICs
  • Dual fibre channel HBAs

Roles:

  • Windows Server 2003 R2 Enterprise Edition SP2
  • File server
    • binaries
    • isos
    • my documents
    • thousands of family pictures
    • videos
  • Print server
  • IIS web server
    • WordPress blog
    • ASP.NET based family web site
    • other hosted sites
  • DHCP server
  • SQL 2005 server
    • vCenter
    • VUM
    • Citrix Presentation Server
  • MySQL server
    • WordPress blog
  • Backup Sever
  • SAN management

I’m shutting down this last remaining physical server as well as the tape library.  They’ll go in the pile of other physical assets which are already for sale, or they will be donated, as sales of 32-bit server hardware are slow right now.  This is a milestone because this server, named SKYWALKER (you may have heard me mention it from time to time), has been a physical staple in the lab for as long as the lab has existed (circa 1995).  Granted, it has gone through several physical hardware platform migrations, but its logical role is historic and its composition has always been physical.  To put it into perspective, at one point in time SKYWALKER was a Compaq Prosignia 300 server with a Pentium Pro processor and a single internal Barracuda 4.3GB SCSI drive.  Before I had the means to acquire server class hardware, it was built from hand-me-down whitebox parts from earlier gaming rigs.

The P2V (using VMware Converter) took a little over 5 hours for 500GB of storage.  So the only physical servers remaining in the lab are the ESX hosts themselves: two DL385 G2s and two DL385s which typically remain powered down, earmarked for special projects.  A successful P2V is a great start to a weekend if you ask me.  Now I’m off to my daughter’s T-ball game. :)

Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters

March 25th, 2010

This is one of those “I’m documenting it for my own purposes” articles.  Yes I read my own blog once in a while to find information on past topics.  Here I’m basically copying a VMware KB article but I’ll provide a brief introduction.

So you’re wondering if you should use VMware Paravirtual SCSI?  I’ve gotten this question a few times.  PVSCSI is one of those technologies where “should I implement it” is best answered with the infamous consulting reply “it depends”.  One person asked if it would be good to use as a default configuration for all VMs.  By and large, I feel that support complexity increases when using PVSCSI, and it should only be used as needed for VMs which need an additional bit of performance squeezed from the disk subsystem.  This is not a technology I would implement by default on all VMs.  Dissecting the practical benefits and ROI of implementing PVSCSI should be performed, but before that, your valuable time may be better spent finding out if your environment will support it to begin with.  Have a look at VMware KB Article 1010398, which is where the following information comes from, verbatim.

It’s important to identify the guest VMs which support PVSCSI:

Paravirtual SCSI adapters are supported on the following guest operating systems:

  • Windows Server 2008
  • Windows Server 2003
  • Red Hat Enterprise Linux (RHEL) 5

It’s important to further identify more ambiguous situations where PVSCSI may or may not fit:

Paravirtual SCSI adapters also have the following limitations:

  • Hot add or hot remove requires a bus rescan from within the guest.
  • Disks with snapshots might not experience performance gains when used on Paravirtual SCSI adapters or if memory on the ESX host is overcommitted.
  • If you upgrade from RHEL 5 to an unsupported kernel, you might not be able to access data on the virtual machine’s PVSCSI disks. You can run vmware-config-tools.pl with the kernel-version parameter to regain access.
  • Because the default type of newly hot-added SCSI adapter depends on the type of primary (boot) SCSI controller, hot-adding a PVSCSI adapter is not supported.
  • Booting a Linux guest from a disk attached to a PVSCSI adapter is not supported. A disk attached using PVSCSI can be used as a data drive, not a system or boot drive. Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in versions of ESX prior to ESX 4.0 Update 1

For more information on PVSCSI, including installation steps, see VMware KB Article 1010398.  One more important thing to note is that for some operating system types, to install PVSCSI, you need to create a virtual machine with the LSI controller, install VMware Tools, then change the drives to paravirtualized mode.