Posts Tagged ‘Hardware’

EMC Celerra Network Server Documentation

November 6th, 2010

EMC has updated their documentation library for the Celerra to version 6.0.  If you work with the Celerra or the UBER VSA, this is good reference documentation to have.  The updated Celerra documentation library on EMC’s Powerlink site is here: Celerra Network Server Documentation (User Edition) 6.0 A01.  The document library includes the following titles:

  • Celerra Network Server User Documents
    • Celerra CDMS Version 2.0 for NFS and CIFS
    • Celerra File Extension Filtering
    • Celerra Glossary
    • Celerra MirrorView/Synchronous Setup on CLARiiON Backends
    • Celerra Network Server Command Reference Manual
    • Celerra Network Server Error Messages Guide
    • Celerra Network Server Parameters Guide
    • Celerra Network Server System Operations
    • Celerra Security Configuration Guide
    • Celerra SMI-S Provider Programmer’s Guide
    • Configuring and Managing CIFS on Celerra
    • Configuring and Managing Celerra Network High Availability
    • Configuring and Managing Celerra Networking
    • Configuring Celerra Events and Notifications
    • Configuring Celerra Naming Services
    • Configuring Celerra Time Services
    • Configuring Celerra User Mapping
    • Configuring iSCSI Targets on Celerra
    • Configuring NDMP Backups on Celerra
    • Configuring NDMP Backups to Disk on Celerra
    • Configuring NFS on Celerra
    • Configuring Standbys on Celerra
    • Configuring Virtual Data Movers for Celerra
    • Controlling Access to Celerra System Objects
    • Getting Started with Celerra Startup Assistant
    • Installing Celerra iSCSI Host Components
    • Installing Celerra Management Applications
    • Managing Celerra for a Multiprotocol Environment
    • Managing Celerra Statistics
    • Managing Celerra Volumes and File Systems Manually
    • Managing Celerra Volumes and File Systems with Automatic Volume Management
    • Problem Resolution Roadmap for Celerra
    • Using Celerra AntiVirus Agent
    • Using Celerra Data Deduplication
    • Using Celerra Event Enabler
    • Using Celerra Event Publishing Agent
    • Using Celerra FileMover
    • Using Celerra Replicator (V2)
    • Using EMC Utilities for the CIFS Environment
    • Using File-Level Retention on Celerra
    • Using FTP on Celerra
    • Using International Character Sets with Celerra
    • Using MirrorView Synchronous with Celerra for Disaster Recovery
    • Using MPFS on Celerra
    • Using Multi-Protocol Directories with Celerra
    • Using NTMigrate with Celerra
    • Using ntxmap for Celerra CIFS User Mapping
    • Using Quotas on Celerra
    • Using SnapSure on Celerra
    • Using SNMPv3 on Celerra
    • Using SRDF/A with Celerra
    • Using SRDF/S with Celerra for Disaster Recovery
    • Using TFTP on Celerra Network Server
    • Using the Celerra nas_stig Utility
    • Using the Celerra server_archive Utility
    • Using TimeFinder/FS, NearCopy, and FarCopy with Celerra
    • Using Windows Administrative Tools with Celerra
    • Using Wizards to Configure Celerra
  • NS-120
    • Celerra NS-120 System (Single Blade) Installation Guide
    • Celerra NS-120 System (Dual Blade) Installation Guide
  • NS-480
    • Celerra NS-480 System (Dual Blade) Installation Guide
    • Celerra NS-480 System (Four Blade) Installation Guide
  • NS20
    • Celerra NS20 Read Me First
    • Setting Up the EMC Celerra NS20 System
    • Celerra NS21 Cabling Guide
    • Celerra NS21FC Cabling Guide
    • Celerra NS22 Cabling Guide
    • Celerra NS22FC Cabling Guide
    • Celerra NS20 System (Single Blade) Installation Guide
    • Celerra NS20 System (Single Blade with FC Option Enabled) Installation Guide
    • Celerra NS20 System (Dual Blade) Installation Guide
    • Celerra NS20 System (Dual Blade with FC Option Enabled) Installation Guide
  • NX4
    • Celerra NX4 System Single Blade Installation Guide
    • Celerra NX4 System Dual Blade Installation Guide
  • Regulatory Documents
    • C-RoHS HS/TS Substance Concentration Chart Technical Note

If you’re looking for more Celerra documentation, check out the Celerra Network Server General Reference page.

Hardware Status and Maintenance Mode

October 20th, 2010

I’m unable to view hardware health status data while a host is in maintenance mode in my vSphere 4.0 Update 1 environment.

[Screenshot: Hardware Status tab unavailable while the host is in maintenance mode]

A failed memory module was replaced on a host but I’m skeptical about taking it out of maintenance mode until I am sure it is healthy.  There is enough load on this cluster such that removing the host from maintenance mode will result in DRS moving VM workloads onto it within five minutes.  For obvious reasons, I don’t want VMs running on an unhealthy host.

So… I need to disable DRS at the cluster level, take the host out of maintenance mode, verify the hardware health on the Hardware Status tab, then re-enable DRS.  It’s a roundabout process, particularly in a production environment which requires a Change Request (CR) with associated approvals and lead time to toggle the DRS configuration.
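For what it’s worth, the whole round trip can be scripted.  Below is a minimal pyVmomi sketch of the sequence (disable DRS, exit maintenance mode, read the health sensors, re-enable DRS).  The vCenter address, credentials, and cluster/host names are placeholders, and pyVmomi is just my stand-in tooling here rather than anything prescribed by VMware, so treat it as an illustration of the steps and not a supported procedure.

```python
# A rough sketch of the workaround using pyVmomi.  The vCenter address,
# credentials, and cluster/host names below are placeholders; adjust for
# your environment and add proper error handling before trusting it.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim


def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.Destroy()


def set_drs(cluster, enabled):
    """Toggle DRS on the cluster and wait for the reconfigure task to finish."""
    spec = vim.cluster.ConfigSpecEx(drsConfig=vim.cluster.DrsConfigInfo(enabled=enabled))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))


si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
cluster = find_by_name(content, vim.ClusterComputeResource, "ProdCluster01")
host = find_by_name(content, vim.HostSystem, "esx01.example.com")

set_drs(cluster, False)                                   # 1. disable DRS
WaitForTask(host.ExitMaintenanceMode_Task(timeout=300))   # 2. exit maintenance mode

# 3. Dump the hardware health sensors (the data behind the Hardware Status tab).
for sensor in host.runtime.healthSystemRuntime.systemHealthInfo.numericSensorInfo:
    print(sensor.name, sensor.healthState.key)

set_drs(cluster, True)                                    # 4. re-enable DRS
Disconnect(si)
```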

Taking a look at KB 1011284, VMware acknowledges the steps above and considers the following a resolution to the problem:

Resolution

By design, the host monitoring agents (IPMI) are not supported while the ESX host is in maintenance mode. You must exit maintenance mode to view the information on the Hardware Status tab. To take the ESX host out of maintenance mode:

1. Right-click the ESX host within the vSphere Client.

2. Click Exit Maintenance Mode.

Fortunately, VMware improved on this design in vSphere 4.1, where I have the ability to view hardware health while a host is in maintenance mode.

ESXi 4.x Installable HP Customized ISO Image DNA

October 12th, 2010

Those of you who are deploying ESXi in your environment probably know by now there are a few different flavors of the installable version you can deploy from:

  • ESXi 4.x Installable (the non-hardware-vendor-specific “vanilla” ESXi bits)
  • ESXi 4.x Installable Customized ISO Image (hardware-vendor-specific bits)
    • ESXi 4.x Installable HP Customized ISO Image
    • ESXi 4.x with IBM Customization
    • ESXi 4.x Installable Dell Customized ISO Image

Each of the major hardware manufacturers does things a little differently with respect to what they bake into ESXi and how.  There doesn’t seem to be much of a standard which the vendors are following.  The resulting .ISO file naming convention varies between vendors and even between builds from a specific vendor.  The lack of standards here can make a library of ESXi releases for a sea of datacenter hardware difficult to keep track of.  It seems a bit careless if you ask me, but there are bigger fish to fry.

This short post focuses specifically on the HP flavor of ESXi.  What’s the difference between ESXi 4.x Installable and the ESXi 4.x Installable HP Customized ISO Image?  The answer is the HP ESXi Offline Bundle.  Essentially, if you install ESXi 4.x Installable and then install the HP ESXi Offline Bundle, what you end up with is identical to installing the ESXi 4.x Installable HP Customized ISO Image.

In mathematical terms…

ESXi 4.x Installable + HP ESXi Offline Bundle = ESXi 4.x Installable HP Customized ISO Image

Where are these HP ESXi Offline Bundles?  You can grab them from HP’s web site.  Thus far, HP has been producing an updated version for each release of vSphere.  For reader convenience, I’ve linked a few of the most recent and relevant versions below:

In addition to the above, both ESX 4.1 and ESXi 4.1 on HP systems require an add-on NMI Sourcing Driver which is discussed here and can be downloaded here.  Failure to install this driver might result in silent data corruption.  Isn’t that special.

Unisphere Client V1.0.0.12 Missing Federation

October 8th, 2010

A few weeks ago, the EMC Celerra NS-120 was upgraded to DART 6 and FLARE 30, in that order.  Before I get on with this post, let me just say that Unisphere is the bomb and offers at least a few opportunities for complimentary writing to give it the praise it truly deserves.  My hat is off to EMC; they answered the call (or was it the screams?) for unified management of unified storage.

What was my opinion of the old sauce? 

  • Navisphere for CLARiiON block storage management was OK, although it had a few bugs which forced a resort to NaviCLI once in a while.  Other than that, it looked old and was in need of a face/efficiency lift.  I’ve managed a few enterprise arrays from other vendors which have this same feel.  The biggest problem there is that there is no end in sight to the lackluster management and performance-gathering tools.  Some vendors seem content with what they’ve always had, which leads me to a few conclusions:
    • They don’t use their own software
    • The expectation is to use the CLI only
    • Hardware vendors can have outstanding hardware components but that doesn’t make them software developers
    • EMC has bumped it up a notch, at least with Unisphere – I can’t speak to Symmetrix management as I have no experience there
  • Celerra Manager for management of the Data Movers/iSCSI/NFS/CIFS was bug free, but very slow at times, particularly at first login.
  • Seasoned CLARiiON and Celerra TCs (as well as NetApp pros) might laugh at my tendency to rely on GUI tools, but my storage management tasks are so few and far between that relearning the CLI for a seldom-performed task isn’t time well spent unless the task is going to be repeated often enough.

I’ve had some legacy Celerra software CDs sitting next to me in my den for several months (Navisphere, Celerra Network Server, etc.) and I will have no problem banishing them to the basement, probably not to be touched again until the next time the basement is cleaned out.  So look for some positive Unisphere posts from me in the future as I get the time.

Getting back on topic…  Earlier today I finished taking a look at Nicholas Weaver’s SRM video.  Later, I was in the lab playing around with the EMC Celerra UBER VSA 3.2 (it’s the latest craze, you really must check it out).  I noticed a Unisphere feature Nicholas highlighted in his video which I don’t have on the Celerra NS-120’s build of Unisphere – the ability to federate storage array management in Unisphere via a single pane of glass.

The Uber VSA can snap multiple storage arrays into the Unisphere dashboard by way of an Add button:

[Screenshot: the Uber VSA’s Unisphere dashboard with the Add button]

The Add button is missing in the Celerra NS-120’s build of Unisphere:

[Screenshot: the Celerra NS-120’s Unisphere dashboard missing the Add button]

The DART versions match at 6.0.36-4; the outstanding difference appears to be the Client Revision.  What’s worth pointing out is that the Add feature exists in the older client revision found in the Uber VSA, but is missing in the newer client revision found on the Celerra NS-120 which was upgraded a few weeks ago.

[Screenshot: client revision comparison between the Uber VSA and the Celerra NS-120]

I’m not sure if federation of multiple arrays was purposely removed by design or if it was an oversight, but it would be nice to get it back.  I should also point out that although federation across multiple arrays appears to be missing, it still exists within a single unified storage array, consolidating management of the CLARiiON block side and the Celerra iSCSI/NFS/CIFS side.

Update 3/4/11:  The Celerra NS-120 is now running DART 6.0.40-8, FLARE 04.30.000.5.511,7.30.10 (4.1), and Unisphere V1.0.0.14.  The Add feature to tie in multiple EMC storage frames into a single view is still missing.

Free Book – vSphere on NetApp Best Practices

August 2nd, 2010

Hello gang!  For anyone who doesn’t specifically follow the NetApp blogs, this is just a quick heads up to let you know that NetApp has updated its popular NetApp and VMware vSphere Storage Best Practices book and is offering 1,000 free copies of the new Version 2.0 edition.

The free copies are available while supplies last so get registered for yours soon!

Gestalt IT Tech Field Day – NEC

July 16th, 2010

It’s the last presentation of the day and the last presentation overall for Gestalt IT Tech Field Day Seattle.  We’ve made a short journey from the Microsoft store in Redmond, WA to NEC in Bellevue.  Anyone who knows the NEC brand is aware of their diverse portfolio of products and perhaps their services.  Today’s discussion, however, will focus on Storage Solutions.

First a bit of background information on NEC as a corporation:

  • Founded in 1899
  • 142,000 employees
  • 50,000 patents worldwide

Storage.  NEC opened with some of the storage challenges many of us face today.  Enter HYDRAstor, a two-tier grid architecture comprising the following key building blocks:

  • Accelerator nodes – Deliver linear performance scalability for backup and archive.
  • Storage nodes – Deliver non-disruptive capacity scalability from terabytes to petabytes.
  • Standard configurations are delivered with a ratio of 1 Accelerator node for every 2 Storage nodes (a quick arithmetic check follows this list), e.g.:
    • HS8-2004R = 2AN + 4SN = 24TB-48TB Raw
    • HS8-2010R = 5AN + 10SN = 120TB Raw
    • HS8-2020R = 10AN+20SN = 240TB
    • HS8-2110R = 55AN+110SN = 1.3PB Raw
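Doing the math on the list above, the raw capacities work out to roughly 12TB per Storage node (with the entry-level HS8-2004R spanning 24TB-48TB depending on drive size).  Here’s a quick back-of-the-napkin check; the 12TB-per-node figure is my own assumption inferred from the published capacities, not a number from an NEC spec sheet:

```python
# Back-of-the-napkin check of the published HYDRAstor configurations.
# TB_PER_SN is an assumption inferred from the capacities listed above
# (the larger drive option), not a figure from an NEC spec sheet.
TB_PER_SN = 12

configs = {
    # model: (Accelerator nodes, Storage nodes)
    "HS8-2004R": (2, 4),
    "HS8-2010R": (5, 10),
    "HS8-2020R": (10, 20),
    "HS8-2110R": (55, 110),
}

for model, (an, sn) in configs.items():
    print(f"{model}: {an} AN + {sn} SN, AN:SN ratio 1:{sn // an}, "
          f"~{sn * TB_PER_SN} TB raw")

# The HS8-2110R works out to ~1,320 TB, i.e. the advertised 1.3PB raw.
```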

HYDRAstor delivers the following industry standard benefits:

  • Scalability – Non disruptive independent linear scaling of capacity and performance; concurrent multiple generations of compute and storage technology.
  • Self evolving – Automated load balancing and incorporation of new technology reduces application downtime and data outages.
  • Cost efficiency – Reduce storage consumption by 95% or more with superior data deduplication. Ever “green” evolution of energy savings features.
  • Resiliency – Greater protection than RAID with less overhead.
  • Manageability – No data migration, zero data provisioning, self-managing storage; single platform for multiple data types, formats and quality of service needs.

A few other key selling points about HYDRAstor:

  • Global Data Deduplication of backup and archive data is achieved during ingest by combining DataRedux with grid storage architecture.  Dedupe of 20% to 50% across all datasets.
  • Distributed Resilient Data (DRD) technology drives data protection beyond what RAID protection offers with less overhead.  At its native configuration, user data is protected against up to three simultaneous disk or node failures.  This equates to 150% greater resiliency than RAID6 and 300% greater resiliency than RAID5 (three tolerated failures versus two and one, respectively) with less storage overhead and no performance degradation during rebuild and leveling processes.
  • Turnkey delivery.  According to the brochure, HYDRAstor can be installed and performing backup or archive in less than 45 minutes.  I’m not sure what the point of this claim is other than it will most likely be purchased in a pre-racked, cabled, and hopefully configured state.  When I think about deploying enterprise storage, it’s not something I contemplate performing end to end over my lunch hour.

I know some of the other delegates were really excited about HYDRAstor and its enabling technologies.  Sorry NEC, I wasn’t feeling it.  HYDRAstor’s approach seems to consume more rack space than the competition, more cabling, and, based on today’s lab walkthrough, more cooling.

[Photo from the NEC lab walkthrough]

Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

Gestalt IT Tech Field Day – Compellent

July 16th, 2010

Gestalt IT Tech Field Day 2 begins with Compellent, a storage vendor out of Eden Prairie, MN.  Compellent has been around for about eight years and, like other well known multiprotocol SAN vendors, offers spindles of FC, SATA, SAS, and SSD via FC block, iSCSI, NFS, and CIFS.

Compellent’s hardware approach is a modular one.  Many of the components, such as drives and interfaces (Ethernet, FC, etc.), are easily replaceable and hot-swappable, eliminating the need to “rip and replace” the entire frame of hardware and providing the ability to upgrade components without taking down the array.

In April of 2010, Compellent introduced the new zNAS solution:

Compellent introduces the new zNAS solution, which consolidates file and block storage on a single, intelligent platform. The latest unified storage offering from Compellent integrates next-generation ZFS software, high-performance hardware and Fluid Data architecture to actively manage and move data in a virtual pool of storage, regardless of the size and type of block, file or drive. Enterprises can simplify management, intelligently scale capacity, improve performance for critical applications and reduce complexity and costs.

Fluid Data Storage is Compellent’s granular approach to data management:

  • Virtualization
  • Intelligence
  • Automation
  • Utilization

Volume Creation

Volume Recovery

Volume Management

Integration 

  • VMware
    • Leveraging many of the features mentioned above
    • HCL compatibility, although I don’t see ESXi in the list, which would be a major concern for VMware customers given that ESX is being phased out.  Compellent responded that they believe their arrays are compatible with ESXi and will look into updating their VMware support page if that is the case.  VMware’s HCL also shows Compellent storage is not currently certified for ESXi.  Significant correction to the earlier statement: VMware’s HCL for storage differs from its HCL for host hardware in that the host hardware HCL lists explicit compatibility for both ESX and ESXi, whereas the storage HCL explicitly lists ESX compatibility, which implies equivalent ESXi compatibility.  Compellent arrays, as of this writing, are compatible with both ESX4 and ESXi4.
  • Microsoft
    • PowerShell (for automation and consistency of storage management)
    • Hyper-V

Compellent performed a live demo of their Replay (Snapshot) feature with a LUN presented to a Windows host.  It was slick and worked as expected.  Compellent’s Windows-based storage management UI has a fresh, no-nonsense, 21st century feel to it which I can appreciate.

We closed the discussion by answering the question “Why Compellent?”  Top Reasons:

  1. Efficiency
  2. Long term ROI, cost savings through the upgrade model
  3. Ease of use

Follow them on Twitter at @Compellent.

Thank you Compellent for the presentation and I’m sure I’ll see you back in Minnesota!

Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.