Hitachi’s VSP vs. VMAX

Today’s announcement of Hitachi’s VSP brings another round to the competition between EMC and Hitachi/HDS in the enterprise. The VSP, which is GA and orderable today, takes the rivalry to a whole new level.

I was on SiliconANGLE’s live TV feed earlier today discussing the merits of the two architectures with David Floyer and Dave Vellante from Wikibon. In essence, there seems to be a religious war going on between the two.

Examining VMAX, it’s obviously built around a concept of standalone nodes which all have cache, frontend, backend and processing components built in. Scaling the VMAX, aside from storage and perhaps cache, involves adding more VMAX nodes to the system. VMAX nodes talk to one another via an external switching fabric (RapidIO currently). The hardware, sophisticated packaging, IO connection technology and other internals aside, looks very much like a 2U server one could purchase from any number of vendors.

On the other hand, Hitachi’s VSP is a purpose-built storage engine (or storage computer, as Hu Yoshida says). While the architecture is not a radical revision of USP-V, it’s a major upleveling of all component technology, from the 5th generation cross bar switch, to the new ASIC-driven front-end and back-end directors, the shared control L2 cache memory and the use of quad core Intel Xeon processors. Much of this hardware is unique and sophisticated, and the whole looks very much like a blade system built for the storage controller community.

The VSP and VMAX comparison is sort of like an open source vs. closed source discussion. VMAX plays the role of open source champion, largely depending on commodity hardware and sophisticated packaging, but with minimal ASIC technology. As evidence of the commodity approach, VPLEX, EMC’s storage virtualization engine, reportedly runs on VMAX hardware. Commodity hardware lets EMC ride the technology curve as it advances for other applications.

Hitachi VSP plays the role of closed source champion. Its functionality is locked inside a proprietary hardware architecture, ASICs and interfaces. The functionality it provides is tightly coupled with its internal architecture, and Hitachi probably believes that by doing so they can provide better performance and more tightly integrated functionality to the enterprise.

Perhaps this doesn’t do justice to either development team. There is plenty of unique proprietary hardware and sophisticated packaging in VMAX, but they have taken the approach of separate but equal nodes. Whereas Hitachi has distributed this functionality out to various components like front-end directors (FEDs), back-end directors (BEDs), cache adaptors (CAs) and virtual storage directors (VSDs), each of which can scale independently, i.e., adding FEDs or CAs doesn’t require more BEDs. Ditto for VSDs. Each can be scaled separately up to the maximum that can fit inside a controller chassis, and then, if needed, you can add another entire controller chassis.

One has an internal switching infrastructure (the VSP cross bar switch) and the other uses an external switching infrastructure (the VMAX RapidIO). The promise of external switching, like that of commodity hardware, is that you can share the R&D funding to enhance the technology with its other users. But the disadvantage is that, architecturally, you may have more latency when propagating an IO to other nodes for handling.

With VSP’s cross bar switch, you may still need to move IO activity between VSDs, but this can be done much faster, and any VSD can access any CA, BED or FED resource required to perform the IO, so the need to move IO is reduced considerably. The result is a global pool of resources that any IO can take advantage of.
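
To put the switching tradeoff in rough quantitative terms, here’s a tiny Python model. Every latency number in it is invented purely for illustration (neither vendor publishes such figures), but it shows how per-hop fabric latency compounds when IOs must be forwarded between nodes:

    # Toy model: average IO service time under internal vs. external switching.
    # ALL numbers here are made up for illustration -- not vendor measurements.
    BASE_US = 200.0          # hypothetical IO service time when handled locally
    CROSSBAR_HOP_US = 1.0    # hypothetical internal cross bar hop latency
    FABRIC_HOP_US = 10.0     # hypothetical external fabric (RapidIO-style) hop latency

    def avg_io_us(hop_us, forward_fraction, hops=2):
        """Average IO time when 'forward_fraction' of IOs need 'hops' extra hops."""
        return BASE_US + forward_fraction * hops * hop_us

    # If half the IOs land on a node that must forward them over the fabric:
    print(avg_io_us(FABRIC_HOP_US, 0.5))    # external fabric    -> 210.0 us
    # A cross bar gives any director access to any CA/BED/FED, so fewer
    # forwards are needed and each one is cheaper:
    print(avg_io_us(CROSSBAR_HOP_US, 0.1))  # internal cross bar -> 200.2 us

The point isn’t the specific numbers; it’s that both the per-hop cost and the forwarding frequency matter, and the VSP design attacks both.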

In the end, blade systems like VSP or separate server systems like VMAX can all work their magic. Both systems have their place today and in the foreseeable future. Where blade servers shine is in dense packaging, power and cooling efficiency, and bringing a lot of horsepower to a small package. On the other hand, server systems are simple to deploy and connect together, with minimal limitations on the number of servers that can be brought together.

In a small space, blade systems can probably bring more compute (storage IO) power to bear within the same volume than multiple server systems, but the hardware is much more proprietary and costs lots of R&D $s to keep at the leading edge.

Typed this out after the show; hopefully I characterized the two products properly. If I am missing anything, please let me know.

[Edited for readability, grammar and numerous misspellings – last time I do this on an iPhone. Thanks to Jay Livens (@SEPATONJay) and others for catching my errors.]

To iPad or not to iPad – part 2

iPad with BlueTooth keyboard

(Long post warning – 1,200+ words)

We discussed using the iPad in a prior post and, although it was uncertain up to the last minute, I ended up taking the iPad to a conference early this month. My uncertainty was all related to getting our monthly newsletter out.

The newsletter is mainly a text file, but it links to a number of Storage Intelligence (StorInt™) report PDFs which reside on my website. Creating and editing these documents is done using Microsoft Word. Oftentimes the edits to these documents involve tracked changes, which aren’t handled very well by the iPad’s Pages app (they’re all accepted).

In addition, these .DOC files are converted to .PDFs and uploaded to the website. While Pages handles importing .DOC files and publishing PDF files from them, I am still unclear how to upload a Pages PDF file to a website. There are many FTP apps for the iPad/iPhone, but none seems able to upload a PDF file out of the Pages app.

All this was going to require the use of a laptop, but I finally got all the file edits in and, before I left, was able to send out the newsletter.

Twitter troubles

While at the conference I noticed that there really isn’t a proper Twitter client for the iPad. Most desktop/laptop Twitter clients allow one to see the Twitter stream while composing a tweet. But the free Twitter/TweetDeck/Twitterrific apps on the iPad all seem to obscure the Twitter stream(s) when one enters a new tweet – probably assuming one is using the soft keypad, which would obscure the stream anyway. Nonetheless, such behavior makes responding to Twitter queries more difficult than necessary.

Docs debacle

As always, loading up my current working set (client information, office docs, PDFs, etc.) was cumbersome. I have taken to using a special email address, used only for this purpose, and creating one email per client, which works all right.

Working on a project with the iPad Pages app worked OK, but:

  • The font/special-character changes between .DOC and Pages files seem awkward. For example, I was using the large bullet in Pages and when I converted the file to a .DOC file, the bullet became HUGE.
  • Also, Pages defaults to a different font than Microsoft Word does.
  • Watermark images didn’t seem to be as transparent when converting between .DOC and Pages formats.

Mostly these were nuisances that I had to deal with when importing a file from iPad to desktop or vice versa.

However, working on one project I realized I needed some metrics I normally keep in a spreadsheet on my desktop/laptop. I ended up calling the home office and walking my associate through accessing the information and telling me what I needed to know. I also asked them to send the spreadsheet to me so that I would have it for future reference.

BlueTooth blessings/bunglings

At the conference I was blessed with a table to sit at during the keynotes (passing myself off as a blogger), which made using the BlueTooth (BT) keypad and iPad much easier. I also used the combination on the airplane on the way home and found it much more flexible than a laptop. Although it’s unclear whether this would work as well sitting on my lap in normal conference seating.

Also, I really wish there were some additional indicator light(s) on the BT keypad. It only has one green LED, which makes for rather limited communications. I tried to connect it to the iPad on the plane ride out but it failed. I thought perhaps the batteries had run down and needed to be replaced. When I got to my destination I tried again, after looking up what the BT keypad’s green LED signals, and it worked just fine. FYI:

  • A flashing green LED means the BT keypad is pairing with a target device.
  • To turn the BT keypad on, push and hold the side button until the green LED starts to blink.
  • To turn the BT keypad off, push and hold the side button until the green LED comes on and eventually goes off.

For some reason this was difficult to find online, but it was probably in the printed doc that came with the keyboard (filed away and never seen again). More lights might help, like green for on, yellow for discoverable, red for (going) off. Or maybe I just need to use it more often. I may have tried to pair it with my iPhone, which didn’t help (can’t be sure; it’s also unclear how to clear its prior pairing).

Nevertheless, it might make sense to carry some extra batteries and/or a battery charger for just these types of problems. There were quite a few people who commented on the BT keyboard/iPad combination. They seemed unaware that it could be used with the iPad.

Spellcheck saga

The other problem I had was with the iPad’s spell checker. It turns out there are two levels of spell checking on the iPad, and both are active within Pages. One can be disabled at Pages Tools=>Check Spelling and the other is under iPad settings at General=>Keyboard=>Auto-Correction. I was able to quickly find the Pages version, but it took some effort to uncover the Keyboard one.

Nonetheless, while pounding in conference notes, I often employ vendor acronyms. Oftentimes the spell checker/auto-corrector would transform these acronyms into something completely different. Of course my typing is not perfect, so my other issue is that I mistype words, which after auto-correction had little relation to what I was trying to type.

I realize this is an attribute of soft keypad corrections, probably coming from the iPhone, where people often mistype due to the size of the keys. However, when using the iPad, and especially when using the BT keypad, it would be nice if auto-correction were turned off by default.

Other iPad incredulity

I was surprised to see some analysts with both an iPad and a laptop (and probably an iPhone/Blackberry). Personally, I can’t see why anyone would want both, other than for more screen space. But I was a bit jealous when I had to change apps to tweet something or check email/websites while inputting notes in real time.

Also, I was afraid that depending on hotel/conference WiFi would place me at a disadvantage to other analysts/bloggers. Ultimately, I found that for my use of the internet during conferences (mostly Twitter and email), WiFi was adequate, and I always had my iPhone if it didn’t work.

After 2hrs+ of keynotes and another 2hrs+ of presentations, I was running low on iPad power. So I started to power the iPad off between notes and tweets. Funny thing: all I had to do to power on the screen was start typing on the BT keypad – cool. As I recall, it occasionally missed the first keystroke or so but worked fine after that. Following lunch about an hour later, I pulled out my power cord extension, plugged it into the table outlet, and kept it on for the rest of the day. Thankfully, I remembered to bring the extension cord (that came with the laptop charger).

Well, that’s about it. I have another short conference next week and will probably try to bring the iPad again, but that pesky monthly newsletter is due out again…

Cirtas surfaces

Cirtas system (from http://www.Cirtas.com)

Yesterday, Cirtas came out of stealth mode and into the limelight with their new Bluejet cloud storage controller hardware system. Cirtas joins a number of other products offering cloud storage to the enterprise by supplying a more standard interface, which we have discussed before (see Cloud storage gateways surface).

With Cirtas, the interface to the backend cloud storage is supplied as iSCSI, similar to StorSimple‘s product we reviewed previously (see More cloud storage gateways …). However, StorSimple is focused only on Microsoft environments and select applications, namely SharePoint, Exchange and Microsoft file services. Cirtas seems aimed at the more general purpose application environment that uses iSCSI storage protocols. The only other iSCSI cloud storage gateway providers appear to be TwinStrata and Panzura, but the information on Panzura’s website is sketchy.

In addition, Cirtas, StorSimple (and Panzura) provide hardware appliances, whereas most of the other cloud storage gateways (Nasuni, Gladinet, TwinStrata) come only as software packages. Gladinet, though, appears to be targeted at the home office environment.

Cirtas’s Bluejet controller includes onboard RAM cache, SSD flash drives and SAS drives (5TB total), which are used to provide higher performing cloud storage access. Bluejet also supports space-efficient snapshots, data encryption, thin provisioning, data deduplication, and data compression. The Cirtas team comes out of the WAN optimization space, so they have incorporated some of these data saving technologies into their product to reduce bandwidth requirements and cloud storage demand.

Cirtas currently supports Amazon S3 and Iron Mountain cloud storage, but more are on the way. They also recently completed their Series A round of funding, which included NEA and Amazon.

Cirtas says they can match local storage performance but have no benchmarks to prove this out. With iSCSI there aren’t many benchmark options, but one could use iSCSI to support Microsoft Exchange and submit something to the Exchange Solution Reviewed Program (ESRP), which might show off this capability.

Nonetheless, cloud storage can be considerably cheaper than primary storage (on a $/GB basis), and no doubt even with the ~$70K Cirtas Bluejet cloud storage controller, Cirtas retains a significant cost advantage. With the appliance purchase, you get a basic storage key which allows you to store up to 20TB of data on (through) the appliance; if you have more data to store, additional storage keys can be purchased separately. This 20TB license does not include the cloud storage costs for storing data on the cloud nor the bandwidth costs to upload and/or access it.
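
As a back-of-the-envelope check on the $/GB claim, here’s a quick sketch. The ~$70K appliance price and the 20TB license come from the announcement, but the cloud and primary storage prices and the 3-year window are my own assumptions for illustration (and, as noted above, bandwidth charges are ignored):

    # Rough $/GB comparison over 3 years -- 'assumed' figures are illustrative only.
    APPLIANCE_COST = 70_000.0    # ~$70K Cirtas Bluejet (from announcement)
    LICENSED_TB = 20.0           # base storage key covers up to 20TB
    CLOUD_GB_MONTH = 0.14        # assumed cloud storage $/GB/month
    PRIMARY_GB = 15.0            # assumed enterprise primary storage $/GB
    MONTHS = 36                  # assumed comparison window

    gb = LICENSED_TB * 1024
    cirtas = APPLIANCE_COST + gb * CLOUD_GB_MONTH * MONTHS
    print(f"Cirtas+cloud: ${cirtas / gb:.2f}/GB vs. primary: ${PRIMARY_GB:.2f}/GB")
    # -> Cirtas+cloud: $8.46/GB vs. primary: $15.00/GB

With those assumptions the appliance pays for itself well within the license capacity; obviously, plug in your own prices.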

Seems like interest in cloud storage gateways/controllers is heating up. With the addition of Cirtas, I count at least four that target the enterprise space, and when Panzura releases a product that will add another.

Anything I missed?

SPC-1/E IOPS per watt – chart of the month

SPC-1/E IOPS per watt as of 27Aug2010

Not a lot of Storage Performance Council (SPC) benchmark submissions this past quarter: just a new SPC-1/E from HP StorageWorks for their 6400 EVA with SSDs and a new SPC-1 run for Oracle’s Sun StorageTek 6780. Recall that SPC-1/E executes all the same tests as SPC-1 but adds power monitoring equipment to measure power consumption at a number of performance levels.

With this chart we take another look at storage energy consumption (see my previous discussion on SSD vs. drive energy use). As shown above, we graph the IOPS/watt for three different usage environments: Nominal, Medium, and High, as defined by SPC. These are contrived storage usage workloads intended to measure the variability in power consumed by a subsystem. SPC defines the workloads as follows:

  • Nominal usage is 16 hours of idle time and 8 hours of moderate activity
  • Medium usage is 6 hours of idle time, 14 hours of moderate activity, and 4 hours of heavy activity
  • High usage is 0 hours of idle time, 6 hours of moderate activity and 18 hours of heavy activity

As for activity levels, SPC defines moderate activity as 50% of the subsystem’s maximum SPC-1 reported performance and heavy activity as 80% of its maximum performance.

With that behind us, now on to the chart. The HP 6400 EVA had eight 73GB SSDs for storage, while the two Xiotech submissions had 146GB/15Krpm and 600GB/15Krpm drives with no flash. As expected, the HP SSD subsystem delivered considerably more IOPS/watt at the high usage workload – ~2X the Xiotech with 600GB drives and ~2.3X the Xiotech with 146GB drives. The multipliers were slightly less for medium usage but still substantial nonetheless.

SSD nominal usage power consumption

However, the nominal usage results bear some explanation. Here both Xiotech subsystems beat out the HP EVA SSD subsystem, with the 600GB drive Xiotech box supporting ~1.3X the IOPS/watt of the HP SSD system. How can this be? SSD idle power consumption is the culprit.

The HP EVA SSD subsystem consumed ~463.1W at idle, while the Xiotech 600GB subsystem consumed only ~23.5W and the Xiotech 146GB drive subsystem ~23.4W. I would guess that the drives, and perhaps the Xiotech subsystem, have considerable power saving algorithms that shed power when idle. For whatever reason, the SSDs and HP EVA don’t seem to have anything like this. So nominal usage, with its 16 hours of idle time, penalizes the HP EVA SSD system, resulting in the poor IOPS/watt for nominal usage shown above.
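
To see just how much idle power dominates the nominal metric, here’s a sketch of one plausible way to weight the duty cycle (SPC’s exact composite formula may differ). The idle watts come from the reports; the moderate-activity watts and IOPS are placeholder assumptions for illustration only:

    # Nominal usage = 16 hrs idle + 8 hrs moderate activity (50% of max IOPS) per day.
    # Idle watts below come from the SPC-1/E reports; moderate watts/IOPS are
    # placeholder assumptions, not report figures.
    def nominal_iops_per_watt(idle_w, moderate_w, moderate_iops):
        avg_watts = (16 * idle_w + 8 * moderate_w) / 24   # duty-cycle weighted power
        avg_iops = (8 * moderate_iops) / 24               # idle hours do no IO
        return avg_iops / avg_watts

    # HP EVA SSD: 463.1W idle; assume 600W / 10,000 IOPS at moderate activity
    print(nominal_iops_per_watt(463.1, 600.0, 10_000))  # ~6.6 IOPS/W
    # Xiotech 600GB: 23.5W idle; assume 500W / 6,000 IOPS at moderate activity
    print(nominal_iops_per_watt(23.5, 500.0, 6_000))    # ~11.0 IOPS/W

With idle power near zero, the slower disk subsystem comes out ahead once 16 of every 24 hours are spent idling – exactly what the chart shows.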

Ray’s reading: SSDs are not meant to be idled a lot, and disk drives, especially the ones Xiotech is using, have very sophisticated power management that maybe SSD vendors and/or HP should take a look at adopting.

The full SPC performance report will go up on SCI’s website next month in our dispatches directory.  However, if you are interested in receiving this sooner, just subscribe by email to our free newsletter and we will send you the current issue with download instructions for this and other reports.

As always, we welcome any suggestions on how to improve our analysis of SPC performance information so please comment here or drop us a line.

Database appliances!?

The Sun Oracle Database Machine by Oracle OpenWorld San Francisco 2009 (cc) (from Flickr)

Was talking with Oracle the other day, discussing their Exadata database system. They have achieved a lot of success with this product. All of which got me to wondering whether database-specific storage ever makes sense. I suppose the ultimate arbiter of “making sense” is commercial viability, and Oracle and others have certainly proven that, but from a technologist’s perspective I still wonder.

In my view, the Exadata system combines database servers and storage servers in one rack (with extensions to other racks). They use an InfiniBand fabric between the database and storage servers and a proprietary storage access protocol between the two.

With their proprietary protocol they can provide hints to the storage servers as to what’s coming next and how to manage the database data, which makes the Exadata system a screamer of a database machine. Such hints can speed up database query processing, store database structures more efficiently, and overall speed up Oracle database activity. Given all that, it makes sense to a lot of customers.
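
Just to make the idea concrete, here’s a hypothetical sketch of what a hint-carrying request might look like (Python). Oracle’s actual protocol is proprietary and undocumented, so every field name here is an assumption of mine, not their wire format:

    # Hypothetical sketch of a hint-carrying storage request -- NOT Oracle's
    # actual (proprietary) protocol, just an illustration of what a database
    # could tell its storage servers. All field names are made up.
    from dataclasses import dataclass, field

    @dataclass
    class StorageRequest:
        table: str                    # database object the IO belongs to
        first_block: int              # start of requested block range
        block_count: int              # length of requested block range
        predicate: str = ""           # hint: filter storage may apply itself
        columns: list = field(default_factory=list)  # hint: project only these
        prefetch: int = 0             # hint: blocks likely to be read next

    # A query like SELECT amount FROM orders WHERE region = 'WEST' might become:
    req = StorageRequest("orders", 0, 4096, "region = 'WEST'", ["amount"], 8192)
    # A hint-aware storage server can scan, filter, and project locally and
    # return just the matching column data instead of raw blocks.

The win is that filtering happens next to the disks, so only qualifying data crosses the interconnect, which is precisely where a generic block protocol can’t help.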

Now, there are other systems that compete with Exadata, like Teradata and Netezza (am I missing anyone?), which also combine onboard database servers and storage servers. I don’t know much about these products, but they all seem targeted at data warehousing and analytics applications similar to Exadata, though perhaps more specialized.

  • As far as I can tell, Teradata has been around for years, since they were spun out of NCR (or AT&T), and has enjoyed tremendous success. The last annual report I can find for them shows ’09 revenue around $772M with net income of $254M.
  • Netezza started in 2000 and seems to be doing OK in the database appliance market given their youth. Their last annual report, for ’10, showed revenue of ~$191M and net income of $4.2M. Perhaps not doing as well as Teradata, but certainly commercially viable.

The only reason database appliances or machines exist is to speed up database processing. If they can do that, then they seem able to build a marketplace for themselves.

Database to storage interface standards

The key question from a storage analyst’s perspective is: shouldn’t there be some standards committee, like SNIA or others, that works to define a standard protocol between database servers and storage, one that could be adopted by other storage vendors? I understand the advantage that proprietary interfaces can supply to an enlightened vendor’s equipment, but there are more database vendors out there than just Oracle, Teradata and Netezza, and there are (at least for the moment) many more storage vendors as well.

A decade or so ago, when I was with another storage company, we created a proprietary interface for backup activity. It sold OK, but in the end it didn’t sell enough to be worthwhile for either the backup or the storage company to continue the approach. At the time we were looking to support another proprietary interface for sorting but couldn’t seem to justify it.

Proprietary interfaces tend to lock customers in, and most customers will only accept lock-in if there is a significant advantage to the functionality. But customer lock-in can lull vendors into not investing R&D funding in the latest technology, and over time this effect will cause the vendor to lose any advantage they previously enjoyed.

It seems to me that the more successful companies (with the possible exception of Apple) tend to focus on opening up their interfaces rather than closing them down.  By doing so they introduce more competition which serves their customers better, in the long run.

I am not saying that if Oracle were to standardize/publicize their database server to storage server interface there would be a lot of storage vendors going after that market. But the high revenues in this market, as evidenced by Teradata and Netezza, would certainly interest a few select storage vendors. Now not all of Teradata’s or Netezza’s revenues derive from pure storage sales, but I would wager a significant part does.

Nevertheless, a standard database storage protocol could readily be defined by existing database vendors in conjunction with SNIA.  Once defined, I believe some storage vendors would adopt this protocol along with every other storage protocol (iSCSI, FCoE, FC, FCIP, CIFS, NFS, etc.). Once that occurs, customers across the board would benefit from the increased competition and break away from the current customer lock-in with today’s database appliances.

Any significant success in the market from early storage vendor adopters of this protocol would certainly interest other vendors, inducing a cycle of increased adoption, higher competition, and better functionality. In the end, database customers worldwide would benefit from the increased price performance available in the open market. And that makes a lot more sense to me than the database appliances of today.

As to why Apple has excelled within a closed system environment, that will need to wait for a future post.

Data storage features for virtual desktop infrastructure (VDI) deployments

The Planet Data Center by The Planet (cc) (from Flickr)

Was talking with someone yesterday about one of my favorite topics: data storage for virtual desktop infrastructure (VDI) deployments. In my mind there are a few advanced storage features that help considerably with VDI implementations:

  • Deduplication – almost every one of your virtual desktops will share 75-90% of its O/S disk data with every other virtual desktop. Having sub-file/sub-block deduplication can be a godsend for all this replicated data and reduce O/S storage requirements considerably (see the sketch after this list).
  • 0 storage snapshots/clones – another solution to the duplication of O/S data is to use some sort of space-conserving snapshot. For example, one creates a master (gold) disk image and makes 100s if not 1000s of snapshots of it, each taking almost no additional space.
  • Highly available/highly reliable storage – when you have a lone desktop dependent on DAS for its O/S, a device failure doesn’t impact a lot of users. However, when you have 100s to 1000s of users dependent on DAS device(s) for their O/S software, any DAS failure could impact all of them at the same time. As such, one needs to move off DAS and invest in highly reliable and available external storage of some kind to sustain reasonable uptime for your user community.
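
As a sketch of why deduplication (and, equivalently, gold-image clones) pays off so dramatically for VDI, here’s a toy block-level dedupe in Python; the desktop and block counts are illustrative assumptions, not measurements:

    # Toy sub-block deduplication: store each unique block once and
    # reference-count it. Desktop/block counts are illustrative assumptions.
    import hashlib
    from collections import Counter

    block_store = {}       # sha256 digest -> block data (stored once)
    refcounts = Counter()  # number of desktop images referencing each block

    def write_image(blocks):
        for blk in blocks:
            digest = hashlib.sha256(blk).hexdigest()
            block_store.setdefault(digest, blk)  # store only if new
            refcounts[digest] += 1

    # 500 desktops cloned from the same 1000-block (4KB blocks) gold image:
    gold = [bytes([i % 251]) * 4096 for i in range(1000)]
    for _ in range(500):
        write_image(gold)

    raw_gb = 500 * 1000 * 4096 / 1e9
    stored_gb = len(block_store) * 4096 / 1e9
    print(f"raw: {raw_gb:.2f}GB, stored: {stored_gb:.3f}GB")  # raw: 2.05GB, stored: 0.001GB

The same shared-block property is what helps with the boot storms discussed below: if the cache is dedupe-aware, 500 desktops booting the same gold image warm the cache once rather than 500 times.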

Those seem to me to be the most important attributes for VDI storage, but there are a couple more features/facilities that can also help:

  • NAS systems with NFS – VDI deployments will generate lots of VMDKs for all the user desktop C: drives. Although this can be managed with block level storage as separate LUNs or multi-VMDK LUNs, who wants to configure 100 to 1000 LUNs? NFS files can perform just as well and are much easier to create on the fly, and thus, for VDI, it’s hard to beat NFS storage.
  • Boot storm enhancements – Another problem with VDI is that everyone gets to work at 8am Monday and proceeds to boot up their (virtual) machines, which drives an awful lot of IO to their virtual C: drives. Deduplication and 0 storage snapshots can help manage the boot storm as long as these characteristics are retained throughout system cache, i.e., deduplication exists in cache as well as on backend disk. But there are other approaches to the problem as well, available from various vendors to better manage boot storms.
  • Anti-virus scan enhancements – Similar to boot storms, A-V scans also typically happen around the same time for many desktop users and can be just as bad for virtual C: drive performance. Again, deduplication or 0 storage snapshots can help (with the above caveats), but some vendor storage can offload these activities from the desktop altogether. Also, last week’s VMworld release of VMware’s vShield Endpoint (see VMworld 2010 review) supports some A-V scan enhancements. Any of these approaches should be able to help.

Regular “dumb” block storage will always work but it will require a lot more raw storage, performance will suffer just when everybody gets back to work, and the administrative burden will be much higher.

I may seem biased, but enterprise-class reliability & availability with some of the advanced storage features described above can help make your deployment of VDI that much better for you and all your knowledge workers.

Anything I missed?

VMworld 2010 review

The start of VMworld 2010’s first keynote session

Got back from VMworld last week and had a great time. Met a number of new and old friends and talked a lot about the new VMware technology coming online. Some highlights from the keynote sessions I attended:

vCloud Director

Previously known as Redwood, vCloud Director is VMware’s support for cloud services, tying them into their data center services. vCloud Director supports the definition of virtual data centers with varying SLA characteristics. It is expected that virtual data centers would each support a different service level, something like “Gold”, “Silver” and “Bronze”. Virtual data centers thus represent a class of VM service, aggregating all VMware data center resources into massive resource pools which can now be better managed and allocated to the VMs that need them.

For example, with vCloud Director, one only needs to select a virtual data center to specify the SLAs for a VM. New VMs will be allocated to the virtual data center that provides the requested service. This takes DRS, HA and FT to a whole new level.

Even more, it now allows vCloud Data Center Service partners to enter the picture and provide a virtual data center class of service to the customer. In this way, a customer’s onsite data center could supply Gold and Silver virtual data center services while Bronze services could be provided by a service partner.
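
Conceptually, VM placement by service class might look something like the sketch below (Python). This is not vCloud Director’s actual API; all the names, tiers and capacity figures are invented for illustration:

    # Conceptual sketch of class-of-service VM placement -- not vCloud
    # Director's actual API; names, tiers and capacities are illustrative.
    VIRTUAL_DCS = {
        "Gold":   {"site": "onsite",  "free_ghz": 400,  "free_gb": 2048},
        "Silver": {"site": "onsite",  "free_ghz": 900,  "free_gb": 8192},
        "Bronze": {"site": "partner", "free_ghz": 5000, "free_gb": 65536},
    }

    def place_vm(tier, need_ghz, need_gb):
        vdc = VIRTUAL_DCS[tier]
        if vdc["free_ghz"] < need_ghz or vdc["free_gb"] < need_gb:
            raise RuntimeError(f"{tier} virtual data center is out of capacity")
        vdc["free_ghz"] -= need_ghz
        vdc["free_gb"] -= need_gb
        return vdc["site"]

    # Requesting a Bronze VM lands it at the service partner's site:
    print(place_vm("Bronze", need_ghz=2, need_gb=8))   # -> "partner"

The point is that the requester names a class of service, not a host, cluster or datastore; the virtual data center absorbs all that detail.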

vShield

With VM cloud capabilities coming online, the need for VM security is becoming much more pressing. To address these concerns, VMware rolled out their vShield services, which come in two flavors today: vShield Endpoint and vShield Edge.

  • Endpoint – offloads anti-virus scans from running in the VM and interfaces with standard anti-virus vendors to run the scan at the higher (ESX) levels.
  • Edge – provides VPN and firewall services surrounding the virtual data center and interfaces with Cisco, Intel-McAfee, Symantec, and RSA to ensure tight integration with these data center security providers.

The combination of vShield and vCloud Director allows approved vCloud Data Center Service providers to supply end-to-end security surrounding VMs and virtual data centers. There are currently five approved vShield/vCloud Data Center Services partners – Terremark, Verizon, Singtel, Colt, and Bluelock – with more coming online shortly. Using vShield services, VMs could have secured access to onsite data center services even though they are executing offsite in the cloud.

VMware View

A new version of VMware’s VDI interface was released, which now includes an offline mode for those users who occasionally reside outside normal network access and need a standalone desktop environment. With the latest VMware View offline mode, one can check out (download) a desktop virtual machine to a laptop and then run all of one’s desktop applications without network access.

vStorage API for Array Integration (VAAI)

VAAI supports advanced storage capabilities such as cloning, snapshots and thin provisioning, and improves the efficiency of VM I/O. These changes should make thin provisioning much more efficient to use and should enable VMware to take advantage of storage hardware services such as snapshots and clones to offload VMware software services.

vSphere Essentials

Essentials is an SMB-targeted VMware solution, licensable for ~$18 per VM on an 8-core server, lowering the entry cost for VMware to very reasonable levels. The SMB data center’s number one problem is lack of resources, and this should enable more SMB shops to adopt VMware services at an entry level and grow up with VMware solutions in their environment.

VMforce

VMforce allows applications developed under SpringSource, the enterprise Java application development framework of the future, to run in the cloud via Salesforce.com’s cloud infrastructure. VMware is also working with Google and other cloud computing providers to provide similar services on their cloud infrastructures.

Other News

In addition to these feature/functionality announcements, VMware discussed their two most recent acquisitions: Integrien and TriCipher.

  • Integrien – is both a visualization and a resource analytics application. It lets administrators see at a glance how their VMware environment is operating via a dashboard and then allows one to drill down to see what is wrong with any items flagged by red or yellow lights. Integrien integrates with vCenter and other services to provide the analytics needed to determine resource status and the details needed to resolve any flagged situation.
  • TriCipher – is a security service that will ultimately provide a single sign-on for all VMware services. As discussed above, security is becoming ever more important in VMware environments, and separate sign-ons for all VMware services would be cumbersome at best. With TriCipher, one need only sign on once to have access to any and all VMware services in a securely authenticated fashion.

VMWorld Lowlights

Most of these are nits and not worth dwelling on, but the non-high-level sponsors/exhibitors all seemed to complain about the lack of conference rooms, and they were not allowed in the press & analyst rooms. Finding seating to talk with these vendors was difficult at best around the conference sessions, on the exhibit floor, or in the restaurants/cafés surrounding Moscone Conference Center. Although once you got offsite, facilities were much more accommodating.

I would have to say another lowlight was all the late-night parties that occurred – not that I didn’t partake in my fair share of partying. There were rumors of one incident where a conference goer was running around a hotel hall with only undergarments on, blowing kisses to any female within sight. Some people shouldn’t be allowed to leave home.

The only other real negative in a pretty flawless show was the lines of people waiting to get into the technical sessions. They were pretty orderly, but I have not seen anything like this amount of interest in technical presentations before. Perhaps I have just been going to the wrong conferences. In any event, I suspect VMworld will need to change venues soon, as their technical sessions seem to be outgrowing their session rooms, although the exhibit floor could have used a few more exhibitors. Too bad – I loved San Francisco, and Moscone Center was so easy to get to…

----

But all in all, a great conference: learned lots of new stuff, talked with many old friends, and met many new ones. I look forward to next year.

Anything I missed?

Crowdsourcing business analyst 1 on 1 scheduling with executives

Paris vs Pranksky - Lot 101 by Pranksky (cc) (from Flickr)

Just got back from a conference with business analyst meetings, and while there I talked with a number of analyst relations people in the audience about what it takes to pull these meetings together. I was astounded by the effort that goes into setting up 1 on 1s for all the analysts. In some cases, person-weeks of effort go into this scheduling nightmare.

I have a better answer: just crowdsource the process and let the analysts do all the work by auctioning off your executive meeting slots. For example, give every analyst showing up a certain amount of analyst bucks (A$) and let them bid their A$s for whichever executive(s) they want. The winning bidders get to meet with the executives, and the losers can take their money to the next auction.

I see this working as follows:

  • A company creates an auction website accessible to analysts registered for the meeting, with the executive names, bios, and pertinent areas of expertise/influence listed for everyone able to talk with analysts.  One may want to add the number of meeting slots and a minimum acceptable bid (if warranted).
  • The auction should have a duration or time window within which all bids would need to be in, and at the end of which a “striking price” could be determined for each executive time slot.
  • While the auction is open, analysts would apportion their A$s to whichever executive(s) they wished to talk to, letting the analysts decide (do the work to determine) whom they want to meet with.

Whether this could be done on eBay, some conference call/webex or other facility would need to be investigated.  If eBay were used, the proceeds from the auction could go to charity.

One problem with this approach is that sometimes executives have to change their schedules at the last minute. Thus, there would need to be some list of alternates for executives so that if the primary executive bowed out, an alternate could take their place.

Another potential issue is how to apportion A$s to analyst firms or analysts. Personally, I like a per-attending-analyst apportionment (one man/woman, one vote sort of thing). This way, larger firms with multiple analysts attending would have more A$s to use. Although as a single-person analyst firm I may be disadvantaged by this allotment, there is the possibility that I could syndicate with other firms to build up our collective A$s, if necessary.

A follow-on question is whether to use the English, Dutch, or sealed-bid auction method (see Wikipedia on auctions). Personally, I think either the English or Dutch auction form would work fine and would provide adequate visibility. However, the sealed-bid approach could make the process simpler for the company hosting the auction, as it wouldn’t require much support other than some address to email the bids for the various executives.
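
The sealed-bid form is simple enough to sketch in a few lines of Python; the analyst names, bids and slot counts below are invented for illustration:

    # Sealed-bid allocation of one executive's meeting slots -- illustrative only.
    def allocate(slots, bids):
        """bids: {analyst: A$ bid}. Returns (winners, striking_price).
        Losers keep their A$ for the next auction; ties broken arbitrarily."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winners = [analyst for analyst, _ in ranked[:slots]]
        # Striking price = lowest winning bid for this executive's slots.
        striking_price = ranked[len(winners) - 1][1] if winners else 0
        return winners, striking_price

    bids = {"Analyst A": 120, "Analyst B": 95, "Analyst C": 60, "Analyst D": 40}
    print(allocate(2, bids))  # -> (['Analyst A', 'Analyst B'], 95)

Run one of these per executive and the whole scheduling nightmare reduces to collecting bids and emailing winners.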

Of course, this all assumes that the purpose of the analyst 1 on 1 meetings is to have executives meet with analysts. But alternatively, another purpose may be to have executives influence particular analysts.  In this case, the auction could be reversed and have the executive team bid on meetings with the analysts.

On the other hand, a combination of the two approaches could be done by supplying both the analysts and executives some A$s and have each submit bids for the other. The total combined A$ value from the executives and the analyst (firms) would decide how their time slots are allocated.  This may not be optimal from either the analyst or the executive perspective, but it would cross-optimize both sides for meeting slots.

Well, there you have it: crowdsourcing/auctioning meeting slots, my solution to the problem of scheduling 1 on 1s for company analyst meetings.

Auctioning off Lot-102, 30 minute meeting with Silverton Consulting/Ray Lucchesi, do I hear any bids?