Collaboration as a function of proximity vs. heterogeneity, MIT research

Read an article the other week in MIT News on how proximity boosts collaboration on the MIT campus. Using MIT patents and papers published between 2004 and 2014, researchers determined how collaboration varied with proximity, or physical distance.

What they found was that distance matters. The closer you are to a person, the more likely you are to collaborate with him or her (on papers and patents at least).

Paper results

In looking at the PLOS research paper (An exploration of collaborative scientific production at MIT …), one can see that the relative frequency of collaboration decays as distance increases (Graph A shows frequency of collaboration vs. proximity for papers and Graph B shows a similar relationship for patents).
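
The decay relationship can be illustrated with a quick curve fit. This is a toy sketch with made-up numbers (the paper's actual data and fitted functional form may differ), assuming a simple exponential distance-decay model:

```python
import math

# Hypothetical data: distance (meters) between collaborators' offices and the
# relative frequency of collaboration at that distance. The actual numbers in
# the PLOS paper differ; these are illustrative only.
distances = [10.0, 50.0, 100.0, 200.0, 400.0, 800.0]
rel_freq = [1.00, 0.62, 0.40, 0.18, 0.07, 0.02]

# Fit freq ~ a * exp(-b * d) by linear least squares on log(freq):
# log(freq) = log(a) - b * d
xs, ys = distances, [math.log(f) for f in rel_freq]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
b = -slope                        # decay rate per meter (positive)
a = math.exp(my - slope * mx)     # intercept back-transformed

# Predicted relative frequency of collaboration at 300m under this toy model.
pred_300 = a * math.exp(-b * 300)
```

With data of this shape, the fitted decay rate b comes out positive, so predicted collaboration frequency falls off smoothly as distance grows, which is the qualitative pattern Graphs A and B show.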


Other paper results

The two sets of charts below show the buildings where research (papers and patents) was generated. Building heterogeneity, crowdedness (lab space/researcher) and number of papers and patents per building is displayed using the color of the building.

The number of papers and patents per building is self-evident.

The heterogeneity of a building is a function of the number of different departments that use the building. The crowdedness of a building is an indication of how much lab space per faculty member the building has. More crowded buildings are lighter in color and less crowded buildings are darker.
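
As a concrete illustration of these two metrics, here is a minimal sketch. The department lists, lab-space figures and researcher counts below are invented for illustration, and the paper may normalize these quantities differently:

```python
# Invented example data; real MIT building figures differ.
buildings = {
    "32": {"departments": {"EECS", "CSAIL", "Linguistics", "Philosophy"},
           "lab_space_sqft": 120_000, "researchers": 400},
    "68": {"departments": {"Biology"},
           "lab_space_sqft": 90_000, "researchers": 150},
}

def heterogeneity(b):
    # More distinct departments sharing a building -> higher heterogeneity.
    return len(b["departments"])

def crowdedness(b):
    # Inverse of lab space per researcher: less space per person -> more crowded.
    return b["researchers"] / b["lab_space_sqft"]
```

Under these made-up numbers, Building 32 scores higher on both heterogeneity (4 departments vs. 1) and crowdedness (less lab space per researcher) than Building 68.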

I would like to point out Building 32. It has high heterogeneity, moderate crowdedness and high paper production, but relatively low patent production. Building 68, on the other hand, has low heterogeneity and low crowdedness, yet also shows high paper production and relatively low patent production. So similar results have been obtained from buildings with different crowdedness and different heterogeneity.

The paper specifically cites Buildings 3 & 32 as the most diverse on campus and as “hubs on campus” for research activity. The paper states that these buildings were outliers in research production on a per person basis.

And yet there’s no global correlation between heterogeneity (or crowdedness, for that matter) and paper/patent research production. I view crowdedness as a proxy for researcher proximity: the more crowded a building is, the closer its researchers should be. Such buildings should theoretically be hotbeds of collaboration. But they don’t seem to produce any more papers than non-crowded buildings.

Also, heterogeneity is often cited as a generator of research. Steven Johnson’s Where Good Ideas Come From frequently mentions that good research often derives from collaboration outside your area of specialty. And yet, high heterogeneity buildings don’t seem to have high research production, at least for patents.

So I am perplexed and unsatisfied with the research. Yes, proximity leads to more collaboration, but it doesn’t necessarily lead to more papers or patents. The paper shows other information on the number of papers and patents by discipline, which may be confounding the results in this regard.

Telecommuting and productivity

So what does this tell us about the plight of telecommuters in today’s business and R&D environments? While the paper has shown that collaboration goes down as a function of distance, it doesn’t show that an increase in collaboration leads to more research or productivity.

This last chart from the paper shows how collaboration on papers is trending down while collaboration on patents is trending up. For both papers and patents, inter-departmental collaboration is more important than inter-building collaboration. Indeed, the sidebars seem to show that MIT faculty participation in papers and patents is flat over the whole time period even though the number of authors (for papers) and inventors (for patents) is going up.

So I, as a one-person company, can be considered an extreme telecommuter for any organization I work with. I am often concerned that my lack of proximity to others adversely limits my productivity. Thankfully, the research is inconclusive at best on this and, if anything, tells me that proximity is not a significant factor in research productivity.

And yet, many companies (Yahoo, IBM, and others) have recently instituted policies restricting telecommuting because, they believe, it reduces productivity. This research does not show that.

So, IBM and Yahoo: I think what you are doing to concentrate your employee population and reduce or outright eliminate telecommuting is wrong.

Picture credit(s): All charts and figures are from the PLOS paper. 


Zipline delivers blood 7X24 using fixed wing drones in Rwanda

Read an article the other day in MIT Tech Review (Zipline’s ambitious medical drone delivery in Africa) about a startup in Silicon Valley, Zipline, that has started delivering blood by drones to remote medical centers in Rwanda.

We’ve talked about drones before (see my Drones as a leapfrog technology post) and how they could be another way for 3rd world countries to leapfrog into the 21st century. Similar to cell phones, drones could be used to advance infrastructure without having to replicate the same paths as 1st world countries, such as building roads/highways, trains and other transport infrastructure.

The country

Rwanda is a very hilly but small (10.2K sq mi/26.3K sq km) and populous (pop. 11.3M) country in east-central Africa, just a few degrees south of the Equator. Rwanda’s economy is based on subsistence agriculture with a growing eco-tourism segment.

Nonetheless, with all its hills and poverty, roads in Rwanda are not the best. In the past, delivering blood supplies to remote health centers could often take hours or more. But with the new Zipline drone delivery service, technicians can order blood products with an app on a smartphone and have them delivered via parachute to their center within 20 minutes.

Drone delivery operations

In the nest, a center for drone operations, there is a tent housing the blood supplies and logistics for the drone force. Beside the tent are steel runway/catapults that launch the drones, and on the other side of the tent are brown inflatable pillows used to land them.

The drones take a pre-planned path to the remote health centers and drop their cargo via parachute to within a five meter diameter circle.

Operators fly the drones using an iPad, and each drone has an internal navigation system. Drones fly a pre-planned flight path augmented with real-time kinematic satellite navigation. Drone travel is integrated within Rwanda’s controlled air space. Routes are pre-mapped using detailed ground surveys.
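
To get a feel for the flight times involved, here is a back-of-the-envelope sketch. The coordinates and the 100 km/h cruise speed are assumptions for illustration; Zipline’s actual routing uses pre-mapped corridors and RTK navigation, not a simple great-circle calculation:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical coordinates: the Muhanga nest and a remote health center.
nest = (-2.085, 29.755)
clinic = (-2.000, 29.600)

dist = haversine_km(*nest, *clinic)   # straight-line distance, km
cruise_kmh = 100                      # assumed fixed-wing cruise speed
eta_min = dist / cruise_kmh * 60      # minutes of flight time
```

Even with generous assumptions, a run of a couple dozen kilometers comes out well under the 20 minute delivery window quoted above, versus hours by road over hilly terrain.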

Drone delivery works

Zipline drone blood deliveries have been taking place since late 2016. Deliveries started M-F, during daylight only. But by April, they were delivering 7 days a week, day and night.

Zipline currently only operates in Rwanda and only delivers blood but they have plans to extend deliveries to other medical products and to expand beyond Rwanda.

On their website they state that before Zipline, delivering blood to one health center could take four hours by truck; it can now be done in 17 minutes. Their Muhanga drone center serves 21 medical centers throughout western Rwanda.

Photo Credits:

Axellio, next gen, IO intensive server for RT analytics by X-IO Technologies

We were at X-IO Technologies last week for SFD13 in Colorado Springs talking with the team and they showed us their new IO and storage intensive server, the Axellio. They want to sell Axellio to customers that need extreme IOPS, very high bandwidth, and large storage requirements. Videos of X-IO’s sessions at SFD13 are available here.

The hardware

Axellio comes in a 2U appliance with two server nodes. Each server node supports 2 sockets of Intel E5-26xx v4 CPUs (4 sockets total), supporting from 16 to 88 cores. Each server node can be configured with up to 1TB of DRAM and also supports NVDIMMs.

There are two key differentiators to Axellio:

  1. The FabricExpress™, a PCIe-based interconnect which allows both server nodes to access dual-ported, 2.5″ NVMe SSDs; and
  2. Dense drive trays: Axellio supports up to 72 (6 trays with 12 drives each) 2.5″ NVMe SSDs, offering up to 460TB of raw NVMe flash using 6.4TB NVMe SSDs. Higher capacity NVMe SSDs, available soon, will increase Axellio capacity to 1PB of raw NVMe flash.
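
The capacity figures above are easy to sanity-check with a little arithmetic:

```python
# Raw capacity check for the numbers quoted above.
trays, drives_per_tray = 6, 12
drives = trays * drives_per_tray       # 72 NVMe SSDs total
raw_tb = drives * 6.4                  # with 6.4TB SSDs -> ~460.8TB raw

# Hitting 1PB raw in the same 72 slots implies roughly 14TB-class SSDs
# (an inference from the quoted numbers, not an X-IO spec).
needed_tb_per_drive = 1000 / drives    # ~13.9TB per drive
```

So the 460TB figure is just 72 drives at 6.4TB each, and the 1PB claim lines up with the higher capacity NVMe SSDs they say are coming.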

They also probably spent a lot of time on packaging, cooling and power in order to make Axellio a reliable solution for edge computing. We asked if it was NEBS compliant and they told us not yet, but they are working on it.

Axellio can also be configured to replace 2 drive trays with 2 processor offload modules, such as 2X Intel Phi CPU extensions for parallel compute, 2X Nvidia K2 GPU modules for high end video or VDI processing, or 2X Nvidia P100 Tesla modules for machine learning processing. Anything that fits into Axellio’s power, cooling and PCIe bus lane limitations would probably work here.

At the frontend of the appliance, 1×16 PCIe lanes per server are retained for networking and can support off the shelf NICs/HCAs/HBAs with HHHL or FHHL cards for Ethernet, Infiniband or FC access to the Axellio. This provides up to 2x100GbE per server node of network access.

Performance of Axellio

With Axellio using all NVMe SSDs, we expect high IO performance. Further, they are measuring IO performance internal to the CPUs on the Axellio server nodes. X-IO says the Axellio can hit >12 million IO/sec at 35µsec latencies with 72 NVMe SSDs.

Lab testing detailed in the chart above shows IO rates for an Axellio appliance with 48 NVMe SSDs. With that configuration the Axellio can do 7.8M 4KB random write IOPS at 90µsec average response times and 8.6M 4KB random read IOPS at 164µsec latencies. Don’t know why reads would take longer than writes in Axellio, but they are doing 10% more of them.

Furthermore, the difference between read and write IOPS rates isn’t close to what we have seen with other AFAs. Typically, maximum write IOPS are much lower than read IOPS. Why Axellio’s read and write IOPS rates are so close to one another (~10%) is a significant mystery.

As for IO bandwidth, Axellio supports up to 60GB/sec sustained, and in the 48 drive lab testing it generated 30.5GB/sec for random 4KB writes and 33.7GB/sec for random 4KB reads. Again, much closer together than what we have seen from other AFAs.
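
The quoted IOPS and bandwidth numbers are roughly consistent with each other, since small-block bandwidth is just IOPS times block size. A quick check (assuming the vendor’s “4KB” means 4,000 bytes; with 4,096-byte blocks the results shift by about 2%):

```python
# Cross-check: bandwidth ~ IOPS x block size for the 48-drive results.
write_iops, read_iops = 7.8e6, 8.6e6
block_bytes = 4 * 1000                        # assumed decimal "4KB"

write_gbs = write_iops * block_bytes / 1e9    # ~31.2 GB/s vs quoted 30.5
read_gbs = read_iops * block_bytes / 1e9      # ~34.4 GB/s vs quoted 33.7
```

Both computed figures land within a few percent of the quoted 30.5 and 33.7 GB/sec, so the IOPS and bandwidth claims hang together.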

Also noteworthy, given PCIe’s bi-directional capabilities, X-IO said that there’s no reason that the system couldn’t be doing a mixed IO workload of both random reads and writes at similar rates. Although, they didn’t present any test data to substantiate that claim.

Markets for Axellio

They really didn’t talk about the software for Axellio. We would guess this is up to the customer/vertical that uses it.

Aside from the obvious use case as X-IO’s next generation ISE storage appliance, Axellio could easily be used as an edge processor for a massive fabric of IoT devices, an analytics processor for large RT streaming data, deep packet capture and analysis processing for cyber security/intelligence gathering, etc. X-IO seems to be focusing their current efforts on attacking these verticals and others with similar processing requirements.

X-IO Technologies’ sessions at SFD13

Other sessions at X-IO include: Richard Lary, CTO, X-IO Technologies, gave a very interesting presentation on a mathematically optimized way to do data dedupe (caution: some math involved); Bill Miller, CEO, X-IO Technologies, presented on edge computing’s new requirements; and Gavin McLaughlin, Strategy & Communications, talked about X-IO’s history and new approach to take the company into more profitable business.

Again, all the videos are available online (see link above). We were very impressed with Richard’s dedupe session and haven’t heard as much about bloom filters since Andy Warfield, CTO and Co-founder, Coho Data, talked at SFD8.

For more information, other SFD13 blogger posts on X-IO’s sessions:

Full Disclosure

X-IO paid for our presence at their sessions and they provided each blogger a shirt, lunch and a USB stick with their presentations on it.


When 64 nodes are not enough

Why would VMware, with years of ESX development behind them, want to develop a whole new virtualization system for Docker and other container frameworks? Especially since they already have compatible Docker support in their current product line.

The main reason I can think of is that a 64 node cluster may be limiting to some container services, and the likelihood of VMware ESX/vSphere supporting 1000s of nodes in a single cluster seems pretty low. So given that more and more cloud services are being deployed across 1000s of nodes using container frameworks, VMware had to do something or say goodbye to a potentially lucrative use case for virtualization.

Yes, over time VMware may indeed extend vSphere clusters to 128 or even 256 nodes, but by then the world will have moved beyond VMware for these services, and where will VMware be then? Left behind.

Photon to the rescue

With the new Photon system, VMware has an answer for anyone that needs 1,000 to 10,000 server cluster environments. Now these customers can easily deploy their services on the VMware Photon Platform, which was developed from ESX but doesn’t have any of ESX’s cluster limitations.

Thus, the need for Photon was now. Customers can easily deploy container frameworks that span 1000s of nodes. Of course it won’t be as easy to manage as a 64 node vSphere cluster, but it will be easily automated, easier to deploy and easier to scale when necessary, especially beyond 64 nodes.

The claim is that the new Photon will be able to support multiple container frameworks without modification.

So what’s stopping you from taking on the Amazons, Googles, and Apples of the world’s data centers?

  • Maybe storage, but then there’s ScaleIO and the other software defined storage solutions that are there to support local DAS clusters spanning almost incredible numbers of nodes.
  • Maybe networking. I am not sure just where NSX is in the scheme of things; maybe it’s capable of handling 1000s of nodes and maybe not, but networking could be a clear limitation on how many nodes can be deployed in this sort of environment.

Where does this leave vSphere? Probably a continuation of the current trajectory: making it easier and more efficient to run VMware clusters and, over time, extending any current limitations. So for the moment there are two development streams based off of ESX, each being enhanced for its own market.

How much of ESX survived is an open question, but it’s likely that Photon will never see the familiar VMware services and operations that are readily available to vSphere clusters.


Photo Credit(s): A first look into Dockerfile system

Peak code, absurd

Read a post the other day that said we would soon reach Peak Code (see ROUGH TYPE’s Peak Code? post). In his post, Nick Carr discussed an NBER paper (see Robots Are Us: Some Economics of Human Replacement, purchase required). The paper implied we will shortly reach peak code because of the proliferation of software reuse and durability, which will lead to less need for software engineers/coders.

Peak code refers to a maximum amount of code produced in a year that will be reached at some point; afterwards, code production will decline.

Software durability, hogwash

Let’s dispense with the foolish first: durability. Having been a software engineer, and having managed/supervised massive (>1MLoC) engineering developments over my 30 years in the industry, I can tell you code is anything but durable. Fragile yes, durable no.

Code fixes beget other bugs, often more substantial than the original. System performance is under constant stress, lest the competition take your market share. Enhancements are a never ending software curse.

Furthermore, hardware changes constantly, as components go obsolete, new processors come online, IO changes, etc. One might think new hardware would be easy to accommodate. But you would be sadly mistaken.

New processors typically come with added enhancements beyond speed or memory size that need to be coded for. New IO busses often require significant code “improvements” to use effectively. New hardware today is moving to more cores, which makes software optimization even more difficult.

On all the projects I was on, code counts never decreased. This was mostly due to enhancements, bug fixes, hardware changes and performance improvements.

Software’s essential difference is that it is unbounded by any physical reality. Yes it has to fit in memory, yes it must execute instructions, yes it performs IO with physical devices/memory/busses. But these are just transient limitations, not physical boundaries. They all go away or change after the next generation hardware comes out, every 18 months or so.

So software grows to accommodate any change, any fix, any enhancement that can be dreamed up by man, woman or beast. Software is inherently not durable, and it is subject to so many changes that it most often tends toward increased fragility, not durability.

Software reuse, maybe

I am on less firm footing here. Code reuse is wonderful for functionality that has been done before. If adequate documentation exists, if interfaces are understandable, if you don’t mind including all the tag-along software needed to reuse the code, then reuse is certainly viable.

But reusing a software component often requires integration work, adding or modifying code to work with the module. Yes, there may be less code to generate and, potentially, validate/test. But you still have to use the new function somewhere.

And Linux, OpenStack, Hadoop, et al, are readily reusable for organizations that need OS, cloud services or big data. But these things don’t operate in a vacuum. Somebody needs to code a Linux application that views, adds, changes or deletes data somewhere.  Somebody needs to write that cloud service offering which runs under OpenStack that services and moves data across the network. Somebody needs to code up MapReduce, MapR or Spark modules to take unstructured data and do something with it.

Yes there are open source applications, cloud services, and MapReduce packages for standardized activities. But these are the easy, already done parts and seldom suffice in and of themselves for what needs to be done next. Often, even using these as is requires some modifications to run on your data, your applications, and in your environment.

So, does software reuse diminish requirements for new coding? Yes. Does software reuse eliminate the need for new code? Definitely not.

Coding Automation, yes

Coding automation could diminish the need for new software engineers/coders. However, this would be equivalent to human level artificial intelligence and would eliminate the need for coders/software engineers if and when it becomes available. But if anything, this would lead to a proliferation of ever more (automated) code, not less. So it’s not peak code as much as peak coders. Hopefully, I won’t see this transpire anytime soon.

So as far as I’m concerned, peak code is never going to happen, and when peak coders does happen, if ever, we will have worse problems to contend with (see my post on Existential Threats).


Photo Credit(s): PDX Creative Coders by Bill Automata 

Apple SIM and more flexible data plans

(c) 2014, Apple (from their website)

The new US and UK iPad Air 2 and iPad Mini 3’s now come with a new, programmable SIM card (see the Wikipedia SIM article for more info) for their cellular data services. This is a first in the industry and signals a new movement toward more flexible cellular data plans.

Currently, the iPad Air 2 Apple SIM card supports AT&T, Sprint and T-Mobile in the US (what, no Verizon?) and EE in the UK. With this new flexibility one can switch iPad data carriers anytime, seemingly right on the device, without having to get up from your chair at all. You no longer need to go into a cellular vendor’s store, get a new SIM card and insert it into your iPad Air 2.

It seems not many cellular carriers have signed up for the new programmable SIM cards. But with the new Apple SIM’s ability to switch data carriers in an instant, can the other data carriers hold out for long?

What’s a little unclear to me is how the new Apple SIM doesn’t show support for Verizon but the iPad Air 2 literature does show support for Verizon data services. After talking with Apple iPad sales, it turns out there is an actual SIM card slot in the new iPads that holds the new Apple SIM card; if you want to use Verizon, you would need to get a SIM card from them, swap out the Apple SIM card, and insert the Verizon SIM card into the iPad Air 2.

Having never bought a cellular option for my iPads, this is all a little new to me. But it seems that when you purchase a new iPad Air 2 wifi + cellular, the list pricing is without any data plan. So you are free to go to whatever compatible carrier you want right out of the box. With the new Apple SIM, the compatible US carriers are AT&T, T-Mobile and Sprint. If you want a Verizon data plan, you have to buy a Verizon iPad.

For AT&T, it appears that you can use their DataConnect cellular data service for tablets on a month by month basis. I assume the same is true for T-Mobile, who makes a point of not having any service contract, even for phones. Not so sure about Sprint, but if AT&T offers it, can Sprint be far behind?

I have had a few chats with the cellular service providers and I would say they are not all up to speed on the new Apple SIM capabilities but hopefully they will get there over time.

Now if Apple could somehow do the same for cable data plans or cable TV providers, they really could change the world – Apple TV anyone?


The Mac 30 years on

I have to admit it: I have been a Mac and Apple bigot since 1984. I saw the commercial for the Mac and just had to have one. I saw the Lisa, a Mac precursor, at a conference in town and was very impressed.

At the time, we were using these green or orange screens at work connected to IBM mainframes running TSO or VM/CMS and we thought we were leading edge.

And then the Mac comes out with proportional fonts, graphics terminal screen, dot matrix printing that could print anything you could possibly draw, a mouse and a 3.5″ floppy.

Somehow my wife became convinced and bought our family’s first Mac for her accounting office. You could buy a spreadsheet and a WYSIWYG word processor and run them both in 128KB. She ended up buying Mac accounting software, and that’s what she used to run her office.

She upgraded over the years and got the 512K Mac, but eventually, when she partnered with two other accountants, she changed to a Windows machine. And that’s when the Mac came home.

I used the Mac, spreadsheets and word processing for most of my home stuff and did some programming on it for odd jobs, but mostly it was just used for home office work. We upgraded over the years, eventually getting a PowerMac which had a base station with a separate CRT above it, but somehow this never felt like a Mac.

Then in 2002 we got the new 15″ iMac. This came as a half-basketball base with a metal arm emerging from the top of it, with a color LCD screen attached. I loved this Mac. We still have it, but nobody’s using it anymore. I used it to edit my first family films using an early version of iMovie. It took hours to upload the video and hours more to edit it. But in the end, you had a movie on the iMac or on CD which you could watch with your family. You can’t imagine how empowered I felt.

Sometime later I left corporate America for the life of an industry analyst/consultant. I still used the 15″ iMac for the first year after I was out, but ended up purchasing an aluminum PowerBook Mac laptop with my first check. This was faster than the 15″ iMac and had about the same size screen. At the time, I thought I would spend a lot of time on the road.

But as it turns out, I didn’t spend that much time out of the office, so when I generated enough revenue to start feeling more successful, I bought an iMac G5. The kids were using this until last year, when I broke it. This had a bigger screen, was definitely a step up in power and storage, and had a SuperDrive which allowed me to burn DVD-Rs of our family movies. When I wasn’t working, I was editing family movies in half an hour or less (after import) and converting them to DVDs. Somewhere during this time GarageBand came out, and I tried to record and edit a podcast; this took hours to complete and export as a podcast.

I moved from the PowerBook laptop to a MacBook laptop. I don’t spend a lot of time out of the office, but when I do I need a laptop to work on. A couple of years back I bought a MacBook Air and have been in love with it ever since. I just love the way it feels: light to the touch, and it doesn’t take up a lot of space. I bought a special laptop backpack for the old MacBook, but it’s way overkill for the Air. Yes, it’s not that powerful, has less storage and has the smaller screen (11″), but in a way it’s more than enough to live with on long vacations or out of the office.

Sometime along the way I updated my desktop to the aluminum iMac. It had a bigger screen, more storage and was much faster. Now movie editing was a snap. I used this workhorse for four years before finally getting my latest generation iMac with the biggest screen available and faster than I could ever need (he says now). Today, I edit GarageBand podcasts in a little over 30 minutes and it’s not that hard to do anymore.

Although these days Windows has as much graphic ability as the Mac, what really made a difference for me and my family is the ease of use, multimedia support and the iLife software (iMovie, iDVD, iPhoto, iWeb, & GarageBand) over the years, and yes, even iTunes. Apple’s Mac OS software has evolved over the years but still seems to be the easiest desktop to use, bar none.

Let’s hope the Mac keeps going for another 30 years.

Photo Credits:  Original 128k Mac Manual by ColeCamp

my original Macintosh by Blake Patterson

Brand new iMac, February 16, 2002 by Dennis Brekke

MacBook Air by

iMac Late 2012 by Cellura Technology

HDS Influencer Summit wrap up

[Sorry for the length, it was a long day] There was an awful lot of information supplied today. The morning sessions were all open, but most of the afternoon was under NDA.

Jack Domme, HDS CEO, started the morning off talking about the growth in HDS market share: another 20% y/y growth in revenue for HDS. They seem to be hitting the right markets with the right products. They have found a lot of success in emerging markets in Latin America, Africa and Asia. As part of this thrust into emerging markets, HDS is opening a manufacturing facility in Brazil and a sales/solution center in Colombia.

Jack spent time outlining the infrastructure cloud to content cloud to information cloud transition that they believe is coming in the IT environment of the future. In addition, there has been even greater alignment within Hitachi Ltd and consolidation of engineering teams to tackle new converged infrastructure needs.

Randy DeMont, EVP and GM, Global Sales, Services and Support, got up next and talked about their success with the channel. About 50% of their revenue now comes from indirect sources. They are focusing some of their efforts on attracting global system integrators that are key purveyors to Global 500 companies and their business transformation efforts.

Randy talked at length about some of their recent service offerings, including managed storage services. As customers begin to trust HDS with their storage, they start considering moving their whole data center to HDS. Randy said this was a $1B opportunity for HDS and the only thing holding them back is finding the right people with the skills necessary to provide this service.

Randy also mentioned that over the last 3-4 years HDS has gained 200-300 new clients a quarter, which is introducing a lot of new customers to HDS technology.

Brian Householder, EVP, WW Marketing, Business Development and Partners, got up next and talked about how HDS has been delivering on their strategic vision for the last decade or so. With HUS VM, HDS has moved storage virtualization down market, into a rack mounted 5U storage subsystem.

Brian mentioned that 70% of their customers are now storage virtualized (meaning they have external storage managed by VSP, HUS VM or prior versions). This is phenomenal, seeing as how only a couple of years back this number was closer to 25%. Later, at lunch, I probed as to what HDS thought was the reason for this rapid adoption, but the only explanation was the standard S-curve adoption rate for new technologies.

Brian talked about some big data applications where HDS and Hitachi Ltd business units collaborate to provide business solutions. He mentioned the London Summer Olympics sensor analytics, medical imaging analytics, and heavy construction equipment analytics. Another example he mentioned was financial analysis firms using satellite images of retail parking lots to predict retail revenue growth or loss. HDS’s big data strategy seems to be vertically focused, building on the strengths in Hitachi Ltd’s portfolio of technologies. This was the subject of a post-lunch discussion among John Webster of Evaluator Group, myself and Brian.

Brian talked about their storage economics professional services engagements. HDS has done over 1200 storage economics engagements, has written books on the topic, and has iPad apps to support it. In addition, Brian mentioned that in a recent The Info Pro survey, HDS was rated number 1 in value for storage products.

Brian talked some about HDS strategic planning frameworks, one of which was an approach to identifying investments to maximize share of IT spend across various market segments. Since 2003, when HDS was an 80% hardware revenue company, to today, when they are over 50% software and services revenue, they seem to have broadened their portfolio extensively.

John Mansfield, EVP Global Solutions Strategy and Development, and Sean Moser, VP Software Platforms Product Management, spoke next and talked about HCP and HNAS integration over time. It was just 13 months ago that HDS acquired BlueArc, and today they have integrated BlueArc technology into HUS VM and HUS storage systems (it was already the guts of HNAS).

They also talked about the success HDS is having with HCP, their content platform. One bank they are working with plans to have 80% of their data in an HCP object store.

In addition, there was a lot of discussion of UCP Pro and UCP Select, HDS’s converged server, storage and networking systems for VMware environments. With UCP Pro, the whole package is ordered as a single SKU. In contrast, with UCP Select, partners can order the different components and put them together themselves. HDS had a demo of their UCP Pro orchestration software under VMware vSphere 5.1 vCenter that allowed VMware admins to completely provision, manage and monitor servers, storage and networking for their converged infrastructure.

They also talked about their new Hitachi Accelerated Flash storage, which is an implementation of a flash JBOD using MLC NAND but with extensive Hitachi/HDS intellectual property. Together with VSP microcode changes, the new flash JBOD provides great performance (1 million IOPS) in a standard rack. The technology was developed by Hitachi specifically for HDS storage systems.

Mike Walkey, SVP Global Partners and Alliances, got up next and talked about their vertically oriented channel strategy. HDS is looking for channel partners that can expand their reach to new markets, provide services along with the equipment, and make a difference in these markets. They have been spending more time and money on vertical shows such as VMworld, SAPPHIRE, etc. rather than horizontal storage shows (such as SNW). Mike mentioned key high level partnerships with Microsoft, VMware, Oracle and SAP as helping to drive solutions into these markets.

Hicham Abhessamad, SVP, Global Services, got up next and talked about the level of excellence available from HDS services. He indicated that professional services grew by 34% y/y while managed services grew 114% y/y. He related a McKinsey study showing that IT budget priorities will change over the next couple of years, away from pure infrastructure toward more analytics and collaboration. Hicham talked about a couple of large installations of HDS storage and what they are doing with it.

There were a few one-on-one sessions with HDS executives and a couple of other speakers later in the day, mainly on NDA topics. That’s about all I took notes on; I was losing steam toward the end of the day.