Disk rulz, at least for now

Last week WDC announced their next generation technology for hard drives, MAMR or Microwave Assisted Magnetic Recording. This is in contrast to HAMR, Heat (laser) Assisted Magnetic Recording. Both techniques add energy so that data can be written as smaller bits on a track.

Disk density drivers

Current hard drive technology uses PMR or Perpendicular Magnetic Recording with or without SMR (Shingled Magnetic Recording) and TDMR (Two Dimensional Magnetic Recording), both of which we have discussed before in prior posts.

The problem with PMR-SMR-TDMR is that the maximum achievable disk density is starting to flatline, approaching the "writeability limit" of the head-media combination.

That is, even with TDMR, SMR and PMR heads, the highest density that can be achieved is ~1.1Tb/sq.in. The writeability limit for the current PMR head-media technology is ~1.4Tb/sq.in. As a result, most disk density increases over the past few years have been accomplished by adding platters and heads to hard drives.

MAMR and HAMR both seem able to get disk drives to >4.0Tb/sq.in. densities by adding energy to the magnetic recording process, which allows the drive to record more data in the same (grain) area.

There are two factors that drive disk drive density (Tb/sq.in.): bits per inch (BPI) and tracks per inch (TPI). Both SMR and TDMR are techniques to add more TPI.

I believe MAMR and HAMR increase BPI beyond what's available today by writing data on smaller magnetic grain sizes (pitch in the chart), and thus packing more bits into the same area. At 7nm grain sizes or below PMR becomes unstable, but HAMR and MAMR can record on grain sizes of ~4.5nm, which would equate to >4.5Tb/sq.in.
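
To make the density numbers a bit more concrete, here's a back-of-the-envelope sketch of the arithmetic (my own illustrative figures, not WDC's; the BPI/TPI values are hypothetical numbers that just happen to land near today's density):

```python
# Back-of-the-envelope disk areal density arithmetic.
# The specific BPI/TPI values below are illustrative guesses, not WDC's figures.

def areal_density_tb_per_sqin(bpi: float, tpi: float) -> float:
    """Areal density (Tb/sq.in.) = bits per inch (BPI) x tracks per inch (TPI)."""
    return (bpi * tpi) / 1e12

# A hypothetical BPI/TPI combination that lands near today's ~1.1 Tb/sq.in. PMR density:
print(f"{areal_density_tb_per_sqin(bpi=2.2e6, tpi=0.5e6):.2f} Tb/sq.in.")  # -> 1.10

# If bit area shrinks roughly with the square of grain pitch, going from ~7nm to ~4.5nm
# grains buys about (7/4.5)^2 = ~2.4x more bits in the same area.  (The chart in the post
# shows >4.5 Tb/sq.in. at 4.5nm, so the full gain presumably also involves fewer grains per bit.)
scale = (7.0 / 4.5) ** 2
print(f"~{scale:.1f}x more bits per square inch")  # -> ~2.4x
```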

HAMR hurdles

It turns out that HAMR, because it uses heat to add energy, raises the media to much higher temperatures than is normal for a disk drive, something like 400C-700C. Normal operating temperature for disk drives is ~50C. HAMR heat levels will play havoc with drive reliability. The view from WDC is that HAMR has 100X worse reliability than MAMR.

In order to generate that much heat, HAMR needs a laser to expose the area to be written. Of course the laser has to be in the head to be effective. Having to add a laser and optics will increase the cost of the head, increase the steps to manufacture the head, and require new suppliers/sourcing organizations to supply the componentry.

HAMR also requires a different media substrate. It's unclear why, but HAMR seems to require a glass substrate, with the magnetic media (many layers) deposited on top of it. This requires a new media manufacturing line and probably new suppliers, and getting glass to meet disk drive specifications (flatness/bumpiness, rotational integrity, vibrational integrity) will take time.

There are probably more than a half dozen other issues with having laser light inside a hard disk drive, but suffice it to say that HAMR was going to be a very difficult transition to get right while continuing to provide today's drive reliability levels.

MAMR merits

MAMR uses microwaves to add energy to the spot being recorded. The microwaves are generated by a Spin Torque Oscillator (STO), which is a solid state device compatible with CMOS fabrication techniques. This means that the MAMR head assembly (PMR & STO) can be fabricated on current head lines and within current head mechanisms.

MAMR doesn't add heat to the recording area; it uses microwaves to add energy. As such, there's no temperature change in MAMR recording, which means the reliability of MAMR disk drives should be about the same as today's disk drives.

MAMR uses today's aluminum substrates. So current media manufacturing lines and suppliers can be used, and media specifications shouldn't have to change much (?) to support MAMR.

MAMR has just about the same maximum recording density as HAMR, so there's no real benefit to going to HAMR if MAMR works as expected.

WDC’s technology timeline

WDC says they will have sample MAMR drives out next year and production drives out in 2019. They also predict an enterprise 40TB MAMR drive by 2025. They have high confidence in this schedule because of MAMR's compatibility with current drive media and head manufacturing processes.

WDC discussed their IP position on HAMR and MAMR. They have 400+ issued HAMR patents with another 100+ pending and 75 issued MAMR patents with 46 more pending. Quantity doesn’t necessarily equate to quality, but their current IP position on both MAMR and HAMR looks solid.

WDC believes that by 2020, ~90% of enterprise data will be stored on hard drives. However, this is predicated on maintaining a continuing 10X cost differential between disk drives and (QLC 3D) flash.

What comes after MAMR is the subject of much speculation. I've written about one alternative that uses liquid nitrogen temperatures with molecular magnets, which I called CAMR (Cold Assisted Magnetic Recording), but it's way too early to tell.

And we have yet to hear from the other big disk drive leader, Seagate. It will be interesting to hear whether they follow WDC’s lead to MAMR, stick with HAMR, or go off in a different direction.

Comments?

 

Photo Credit(s): WDC presentation

A tale of two storage companies – NetApp and Vantara (HDS-Insight Grp-Pentaho)

It was the worst of times. The industry changes had been gathering for almost a decade and by this time were starting to hurt.

The cloud was taking over all new business and some of the old. Flash was making high performance easy to deliver and reducing storage requirements commensurately. Software defined storage was displacing low end and midrange storage, which was fine for margins but injurious to revenues.

Both companies had user events in Las Vegas over the last month: NetApp Insight 2017 last week and the Hitachi NEXT 2017 conference two weeks ago.

As both companies respond to industry trends, they provide an interesting comparison of two companies in transition.

Company role

  • NetApp's underlying theme is to change the world with data, and they want to change themselves so they can help companies do this.
  • Vantara's philosophy is that data and processing are ultimately moving into the Internet of Things (IoT), and they want to be wherever the data takes them.

Hitachi Vantara is a brand new company that combines Hitachi Data Systems, Hitachi Insight Group and Pentaho (an analytics acquisition) into one organization to go after the IoT market. Pentaho will continue as a separate brand/subsidiary, but HDS and Insight Group cease to exist as separate companies/subsidiaries and are now inside Vantara.

NetApp sees transitions occurring in the way IT conducts business but ultimately, a continuing and ongoing role for IT. NetApp’s ultimate role is as a data service provider to IT.

Customer problem

  • Vantara believes the main customer issue is the need to digitize the business. Because competition is emerging everywhere, the only way for a company to succeed against this interminable onslaught is to digitize everything. That is, digitize your manufacturing/service production, sales, marketing, maintenance, and any and all customer touch points across your whole value chain, and do it as rapidly as possible. If you don't, your competition will.
  • NetApp sees customers today have three potential concerns: 1) how to modernize current infrastructure; 2) how to take advantage of (hybrid) cloud; and 3) how to build out the next generation data center. Modernization is needed to free capital and expense from traditional IT for use in Hybrid cloud and next generation data centers. Most organizations have all three going on concurrently.

Vantara sees the threat of startups, regional operators and more advanced digitized competitors as existential for today’s companies. The only way to keep your business alive under these onslaughts is to optimize your value delivery. And to do that, you have to digitize every step in that path.

NetApp views the threat to IT as coming from LoB/shadow IT applications born and grown in the cloud, or from other groups creating next gen applications using capabilities outside of IT.

Product direction

  • NetApp is looking mostly towards the cloud. At their conference they announced a new Azure NFS service powered by NetApp. They already had two cloud offerings, Cloud ONTAP and NPS: software defined storage running in the cloud and a co-lo hardware offering directly attached to public cloud (Azure & AWS), respectively.
  • Vantara is looking towards IoT. At their conference they announced Lumada 2.0, an Industrial IoT (IIoT) product framework using plenty of Hitachi software functionality and intended to bring data and analytics under one software umbrella.

NetApp is following a path laid down years past when they devised the data fabric. Now, they are integrating and implementing data fabric across their whole product line. With the ultimate goal that wherever your data goes, the data fabric will be there to help you with it.

Vantara is broadening their focus from IT products and solutions to IoT. It's not so much an abandonment of present day IT as looking forward to the day when present day IT is just one cog in an ever expanding, completely integrated digital entity, which the new organization becomes.

They both had other announcements: NetApp announced ONTAP 9.3, Active IQ (AI applied to predictive service) and FlexPod SF ([H]CI with SolidFire storage), while Vantara announced a new IoT turnkey appliance running Lumada and a smart data center (IoT) solution.

Who’s right?

They both are.

Digitization is the future, the sooner organizations realize and embrace this, the better for their long term health. Digitization will happen with or without organizations and when it does, it will result in a significant re-ordering of today’s competitive landscape. IoT is one component of organizational digitization, specifically outside of IT data centers, but using IT resources.

In the meantime, IT must become more effective and efficient. This means it has to modernize to free up resources to support (hybrid) cloud applications and supply the infrastructure needed for next gen applications.

One could argue that Vantara is positioning themselves for the long term and NetApp is positioning themselves for the short term. But that denies the possibility that IT will have a role in digitization. In the end both are correct and both can succeed if they deliver on their promise.

Comments?

 

Collaboration as a function of proximity vs. heterogeneity, MIT research

Read an article the other week in MIT news on how Proximity boosts collaboration on MIT campus. Using MIT patents and papers published between 2004-2014, researchers determined how collaboration varied based on proximity or physical distance.

What they found was that distance matters. The closer you are to a person, the more likely you are to collaborate with him or her (on papers and patents at least).

Paper results

In looking at the PLOS research paper (An exploration of collaborative scientific production at MIT …), one can see that the relative frequency of collaboration decays as distance increases (Graph A shows frequency of collaboration vs. proximity for papers and Graph B shows a similar relationship for patents).

 

Other paper results

The two sets of charts below show the buildings where research (papers and patents) was generated. Building heterogeneity, crowdedness (lab space/researcher) and the number of papers and patents per building are displayed using the color of the building.

The number of papers and patents per building is self evident.

The heterogeneity of a building is a function of the number of different departments that use the building. The crowdedness of a building is an indication of how much lab space per faculty member a building has. So the more crowded buildings are lighter in color and less crowded buildings are darker in color.

I would like to point out Building 32. It seems to have high heterogeneity, moderate crowdedness and high paper production, but relatively low patent production. In contrast, Building 68 has low heterogeneity and low crowdedness, yet also shows a high production of papers and a relatively low production of patents. So similar results have been obtained from buildings with different crowdedness and different heterogeneity.

The paper specifically cites buildings 3 & 32 as being most diverse on campus and as “hubs on campus” for research activity.  The paper states that these buildings were outliers in research production on a per person basis.

And yet there's no global correlation between heterogeneity (or crowdedness, for that matter) and (paper/patent) research production. I view crowdedness as a substitute for researcher proximity. That is, the more crowded a building is, the closer its researchers should be. Such buildings should theoretically be hotbeds of collaboration. But it doesn't seem like they have any more papers than non-crowded buildings.

Also, heterogeneity is often cited as a generator of research. Steven Johnson's Where Good Ideas Come From frequently mentions that good research often derives from collaboration outside your area of specialty. And yet, high heterogeneity buildings don't seem to have a high production of research, at least for patents.

So I am perplexed and unsatisfied with the research. Yes, proximity leads to more collaboration, but it doesn't necessarily lead to more papers or patents. The paper shows other information on the number of papers and patents by discipline which may be confounding the results in this regard.

Telecommuting and productivity

So what does this tell us about the plight of telecommuters in today's business and R&D environments? While the paper has shown that collaboration goes down as a function of distance, it doesn't show that an increase in collaboration leads to more research or productivity.

This last chart from the paper shows how collaboration on papers is trending down and on patents is trending up. For both papers and patents, inter-departmental collaboration is more important than inter-building collaboration. Indeed, the sidebars seem to show that the MIT faculty participation in papers and patents is flat over the whole time period even though the number of authors (for papers) and inventors (for patents) is going up.

So I, as a one person company, can be considered an extreme telecommuter for any organization I work with. I am often concerned that my lack of proximity to others adversely limits my productivity. Thankfully, the research is inconclusive at best on this and, if anything, tells me that this is not a significant factor in research productivity.

And yet, many companies (Yahoo, IBM, and others) have recently instituted policies restricting telecommuting because, they believe, it reduces productivity. This research does not show that.

So, IBM and Yahoo, I think what you are doing to concentrate your employee population and reduce or outright eliminate telecommuting is wrong.

Picture credit(s): All charts and figures are from the PLOS paper. 

 

Hardware vs. software innovation – round 4

We, the industry and I, have had a long running debate on whether hardware innovation still makes sense anymore (see my Hardware vs. software innovation – rounds 1, 2, & 3 posts).

The news within the last week or so is that Dell-EMC cancelled their multi-million dollar DSSD project, which was a new, hardware innovation intensive, Tier 0 flash storage solution offering 10 million IO/sec at 100µsec response times to a rack of servers.

DSSD required specialized hardware and software in the client or host server, specialized cabling between the client and the DSSD storage device and specialized hardware and flash storage in the storage device.

What ultimately did DSSD in, was the emergence of NVMe protocols, NVMe SSDs and RoCE (RDMA over Converged Ethernet) NICs.

Last week's post on Excelero (see my 4.5M IO/sec@227µsec … post) was just one example of what can be done with such "commodity" hardware. We just finished a GreyBeardsOnStorage podcast (GreyBeards podcast with Zivan Ori, CEO & Co-founder, E8 storage) with E8 Storage, which is yet another approach to using NVMe-RoCE "commodity" hardware to provide amazing performance.

Both Excelero and E8 Storage offer over 4 million IO/sec with ~120 to ~230µsec response times to multiple racks of servers. All this with off the shelf, commodity hardware and lots of software magic.

Lessons for future hardware innovation

What can be learned from the DSSD to NVMe(SSDs & protocol)-RoCE technological transition for future hardware innovation:

  1. Closely track all commodity hardware innovations, especially ones that offer similar functionality and/or performance to what you are doing with your hardware.
  2. Intensely focus any specialized hardware innovation to a small subset of functionality that gives you the most bang, most benefits at minimum cost and avoid unnecessary changes to other hardware.
  3. Speed up the hardware design-validation-prototype-production cycle as much as possible to get your solution to market faster, and try to outrun and stay ahead of commodity hardware innovation for as long as possible.
  4. When (and not if) commodity hardware innovation emerges that provides similar functionality/performance, abandon your hardware approach as quickly as possible and adopt commodity hardware.

Of all the above, I believe the main problem is hardware innovation cycle times. Yes, hardware innovation costs too much (not discussed above) but I believe that these costs are a concern only if the product doesn’t succeed in the market.

When a storage (or any systems) company can start up and, in 18-24 months, produce a competitive product with only software development and aggressive hardware sourcing/validation/testing, having specialized hardware innovation that takes 18 months to start and another 1-2 years to get to GA is way too long.

What’s the solution?

I think FPGAs have to be a part of any solution to making hardware innovation faster. With FPGAs, hardware innovation can occur in days to weeks rather than months to years. Yes, ASICs cost much less, but cycle time is THE problem from my perspective.

I'd like to think that ASIC development cycle times for design, validation, prototype and production could also be reduced. But I don't see how. Maybe AI can help reduce the time for design-validation. But independent fabs can only speed up the prototype and production phases for new ASICs so much.

ASIC failures also happen on a regular basis. There’s got to be a way to more quickly fix ASIC and other hardware errors. Yes some hardware fixes can be done in software but occasionally the fix requires hardware changes. A quicker hardware fix approach should help.

Finally, there must be an expectation that commodity hardware will catch up eventually, especially if the market is large enough. So an eventual changeover to commodity hardware should be baked in, from the start.

~~~~

In the end, project failures like this happen. Hardware innovation needs to learn from them and move on. I commend Dell-EMC for making the hard decision to kill the project.

There will be a next time for specialized hardware innovation and it will be better. There are just too many problems that remain in the storage (and systems) industry and a select few of these can only be solved with specialized hardware.

Comments?

Picture credit(s): Gravestones by Sherry Nelson; Motherboard 1 by Gareth Palidwor; Copy of a DSSD slide photo taken from EMC presentation by Author (c) Dell-EMC

4.5M IO/sec@227µsec 4KB Read on 100GBE with 24 NVMe cards #SFD12

At Storage Field Day 12 (SFD12) this week we talked with Excelero, a startup out of Israel. They offer software defined block storage for Linux.

Excelero depends on NVMe SSDs in servers (hyper converged or as a storage system), 100GBE and RDMA NICs. (At the time I wrote this post, videos from the presentation were not available, but the TFD team assures me they will be up on their website soon).

I know, yet another software defined storage startup.

Well, yesterday they demoed a single storage system that generated 2.5M IO/sec of random 4KB writes or 4.5M IO/sec of random 4KB reads. I didn't record the random write average response time, but it was less than 350µsec, and the random read average response time was 227µsec. They only ran these 30 second tests a couple of times, but the IO performance was staggering.

But they used lots of hardware, right?

No. The target storage system used during their demo consisted of:

  • 1-Supermicro 2028U-TN24RT+, a 2U dual socket server with up to 24 NVMe 2.5″ drive slots;
  • 2-Mellanox ConnectX-5 2 x 100Gbs Ethernet (R[DMA]) NICs; and
  • 24-Intel 2.5″ 400GB NVMe SSDs.

They also had a Dell Z9100-ON switch supporting 32 x 100Gbs QSFP28 ports, and I think they were using 4 hosts, but all of this was not part of the storage target system.

I don't recall the processor used on the target, but it was a relatively low end, cheap ($300 or so) dual core, standard Intel CPU. I think they said the total target hardware cost $13K or so.

I priced out an equivalent system. 24 400GB 2.5″ NVMe Intel 750 SSDs would cost around $7.8K (Newegg); the 2 Mellanox ConnectX-5 cards, $4K (Neutron USA); and the Supermicro plus an Intel CPU, around $1.5K. So the total system is close to the ~$13K quoted.
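
For what it's worth, here's that pricing arithmetic as a quick script; the per-unit numbers are just the post's rough street-price totals divided out, not Excelero's figures:

```python
# Rough bill-of-materials check for the demo target system, using the street prices above.
parts = {
    "24 x Intel 750 400GB NVMe SSDs": 24 * 325,       # ~$7.8K total (~$325 each, Newegg)
    "2 x Mellanox ConnectX-5 100GbE NICs": 2 * 2000,  # ~$4K total (Neutron USA)
    "Supermicro 2028U-TN24RT+ + Intel CPU": 1500,     # ~$1.5K
}
total = sum(parts.values())
for name, cost in parts.items():
    print(f"{name:40s} ${cost:,}")
print(f"{'Total':40s} ${total:,}")    # -> ~$13,300, close to the ~$13K quoted
```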

But it burned out the target CPU, didn’t it?

During the 4.5M IO/sec random read benchmark, the storage target CPU was at 0.3% busy, and the highest consuming process on the target CPU was the Linux "top" command being used to display process status.

Excelero claims that the storage target system consumes absolutely no CPU processing to service a 4K read or write IO request. All of the IO processing is done by hardware (the R(DMA)-NICs, the NVMe drives and the PCIe bus), which bypasses the storage target CPU altogether.

We didn't look at host CPU utilization, but driving 4.5M IO/sec would take a high level of CPU power even if their client software does most of this via RDMA messaging magic.

How is this possible?

Their client software running in the Linux host is roughly equivalent to an iSCSI initiator, but talks a special RDMA protocol (Excelero's patent pending RDDA protocol) that adds an IO request to the NVMe device submission queue and then rings the doorbell on the target system device; the SSD then takes the request off the queue and executes it. In addition to the submission queue IO request, they preprogram the PCIe MSI interrupt request message to somehow program (?) the target system R-NIC to send the read data/write status back to the client host.

So there's really no target CPU processing for any NVMe message handling or interrupt processing; it's all done by the client software and is handled between the NVMe drive and the target and client R-NICs.

The result is that the data is sent back to the requesting host automatically: from the drive to the target R-NIC over the target's PCIe bus, then from the target system to the client system via RDMA across 100GBE and the R-NICs, and then from the client R-NIC to the client IO memory data buffer over the client's PCIe bus.

Writes are a bit simpler, as the 4KB of write data can be encapsulated into the submission queue command for the write operation that's sent to the NVMe device, and the write IO status is a relatively small amount of data that needs to be sent back to the client.
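
To help keep the flow straight, here's a toy, in-memory model of the read path as I understand it. This is my own sketch for illustration only, not Excelero's RDDA code and not real NVMe or RDMA programming; all class and field names are invented. The one point it tries to show is that nothing in the IO path touches a target CPU.

```python
# Toy model of the client-driven IO flow described above (NOT real NVMe or RDMA code).
# Names and structures here are my own invention, for illustration only.

class NVMeSSD:
    """Models an NVMe drive with a submission queue and a doorbell."""
    def __init__(self, blocks):
        self.blocks = blocks            # block address -> 4KB of data
        self.submission_queue = []

    def ring_doorbell(self, rnic):
        # The drive pulls the request off its queue and hands the data to the R-NIC;
        # no target CPU involvement is modeled anywhere in this path.
        cmd = self.submission_queue.pop(0)
        rnic.rdma_send(cmd["client"], self.blocks[cmd["lba"]])

class RNIC:
    """Models the target's RDMA NIC, preprogrammed to return data to the client."""
    def rdma_send(self, client, data):
        client.completion = data        # data lands directly in the client's IO buffer

class Client:
    """Models the client initiator: builds the queue entry and rings the doorbell remotely."""
    def __init__(self, ssd, target_rnic):
        self.ssd, self.rnic, self.completion = ssd, target_rnic, None

    def read_4k(self, lba):
        self.ssd.submission_queue.append({"lba": lba, "client": self})  # RDMA write of the SQ entry
        self.ssd.ring_doorbell(self.rnic)                               # RDMA write to the doorbell
        return self.completion

target_cpu_cycles_used = 0              # nothing in the path above ever increments this
ssd = NVMeSSD(blocks={0: b"x" * 4096})
print(len(Client(ssd, RNIC()).read_4k(0)), target_cpu_cycles_used)     # -> 4096 0
```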

NVMe optimized for 4KB IO

Of course the NVMe protocol is set up to transfer up to 4KB of data with a (write command) submission queue element. And the PCIe MSI interrupt return message can be programmed to (I think) write a command in the R-NIC to cause the data transfer back for a read command directly into the client’s memory using RDMA with no CPU activity whatsoever in either operation. As long as your IO request is less than 4KB, this all works fine.

There is some minor CPU processing on the target to configure a LUN and set up the client to target connection. They essentially only support replicated RAID 10 protection across the NVMe SSDs.

They also showed another demo which used the same drive both across the 100Gbs Ethernet network and in local mode, or direct, as local NVMe storage. The response times shown for local and remote were within 5µsec of each other. This means that the overhead for going over the Ethernet link rather than going local costs you an additional 5µsec of response time.

Disaggregated vs. aggregated configuration

In addition to their standalone (disaggregated) storage target solution, they also showed an (aggregated) Linux based, hyper converged client-target configuration with a smaller number of NVMe drives. This could be used in configurations where VMs run and both the client and target Excelero software are running on the same hardware.

Simply amazing

The product has no advanced data services: no high availability, snapshots, erasure coding, dedupe, compression, replication, thin provisioning, etc. But if I can clone a LUN at, let's say, 2.5M IO/sec, I can get by with no snapshotting. And with hardware that's this cheap, I'm not sure I care about thin provisioning, dedupe and compression. Remote site replication is never going to happen at these speeds. OK, HA is an important consideration, but I think they can make that happen, and they do support RAID 10 (data mirroring), so mirroring is there to cover an NVMe device failure.

But if you want 4.5M 4K random reads or 2.5M 4K random writes on <$15K of hardware and happen to be running Linux, I think they have a solution for you. They showed some volume provisioning software but I was too overwhelmed trying to make sense of their performance to notice.

Yes, it really screams for 4KB IO. But that covers a lot of IO activity these days. And if you can do millions of them a second, splitting bigger IOs into 4K chunks should not be a problem.

As far as I could tell, they are selling Excelero software as a standalone product and offering it to OEMs. They already have a few customers using Excelero's standalone software and will be announcing OEMs soon.

I really want one for my Mac office environment, although what I'd do with millions of IO/sec is another question.

Comments?

Mixed progress on self-driving cars

Read an article the other day on the progress in self-driving cars in NewsAtlas (DMV reports self-driving cars are learning — fast). More details are available from their source (CA [California] DMV [Dept. of Motor Vehicles] report).

The article reported on what are called disengagement events that occurred on CA roads. These are instances where a driver has to take over from the self-driving automation to deal with a potential miscue, mistake, or accident.

Waymo (Google) way out ahead

It appears as if Waymo, Google's self-driving car spin out, is way ahead of the pack. It reported only 124 disengages for 636K mi (~1M km), or ~1 disengage every ~5.1K mi (~8K km). This is a ~4.3X better rate than last year's, which was 1 disengage for every ~1.2K mi (~1.9K km).

Competition far behind

Below I list some comparative statistics (from the DMV/CA report, noted above), sorted from best to worst:

  • BMW: 1 disengage for 638 mi (1,027 km);
  • Ford: 3 disengages for 590 mi (~950 km) or 1 disengage every ~197 mi (~317 km);
  • Nissan: 23 disengages for 3.3K mi (3.5K km) or 1 disengage every ~151 mi (~243 km);
  • Cruise (GM) Automation: 181 disengagements for ~9.8K mi (~15.8K km) or 1 disengage every ~54 mi (~87 km); and
  • Delphi: 149 disengages for ~3.1K mi (~5.0K km) or 1 disengage every ~21 mi (~34 km).

There was no information on previous years' activities, so there's no data on how competitors had improved over the last year.

Please note: the report only applies to travel on California (CA) roads. Other competitors are operating in other countries and other states (AZ, PA, & TX to name just a few). However, these rankings may hold up fairly well when combined with other state/country data. Thousands of kilometers should be adequate to assess self-driving car disengagement rates.
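
Here's the simple arithmetic behind the rates above as a quick script, using the disengage counts and (rounded) mileages quoted in this post, so the per-disengage outputs come out only approximately equal to the figures in the list:

```python
# Miles per disengagement, recomputed from the rounded figures quoted in this post.
KM_PER_MILE = 1.609

reports = {            # company: (disengagements, miles driven on CA roads)
    "Waymo":  (124, 636_000),
    "BMW":    (1, 638),
    "Ford":   (3, 590),
    "Nissan": (23, 3_300),
    "Cruise": (181, 9_800),
    "Delphi": (149, 3_100),
}

waymo_rate = reports["Waymo"][1] / reports["Waymo"][0]
for company, (disengages, miles) in reports.items():
    rate = miles / disengages
    print(f"{company:7s} 1 disengage per {rate:7,.0f} mi ({rate * KM_PER_MILE:8,.0f} km);"
          f" Waymo's rate is {waymo_rate / rate:5.1f}x better")
```

Running this reproduces the roughly ~8X (BMW) to ~240X (Delphi) gap between Waymo and the rest that comes up again further down.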

Waymo moving up the (supply chain) stack

In addition, according to a Recode article (The Google car was supposed to disrupt the car industry), Waymo is moving from being a (self-driving automation) software supplier to being a hardware and software supplier to the car industry.

Apparently, Google has figured out how to reduce their sensor (hardware) costs by a factor of 10X, bringing the sensor package down from $75K to $7.5K, (most probably due to a cheaper way to produce Lidar sensors – my guess).

So now Waymo is doing about ~65 to ~1,000X more (CA road) miles than any competitor, has a much (~8 to ~243X) better disengage rate, and is moving to become a major auto supplier in both hardware and software.

It’s going to be an interesting century.

If the 20th century was defined by the emergence of the automobile, the 21st will probably be defined by dominance of autonomous operations.

Comments?

Photo credits: Substance E′TS; and Waymo on the road

 

Storage systems on Agile

Was talking with Qumulo's CEO Peter Godman earlier this week for another GreyBeards On Storage Podcast (not available yet). One thing he said which was hard for me to comprehend was that they were putting out a new storage software release every 2 weeks.

Their customers are updating their storage system software every 2 weeks.

In my past life as a storage systems development director, we would normally have to wait months if not quarters before customers updated their systems to the latest release. As a result, we strived to put out an update at most, once a quarter with a major release every year to 18 months or so.

To me, releasing code to the field every two weeks sounds impossible, or at best very risky. Then I look at my iPhone. I get updates from Twitter, Facebook, LinkedIn and others every other week. And Software-as-a-Service (SaaS) solutions often update their systems frequently, if not every other week. Should storage software be any different?

It turns out Peter and his development team at Qumulo have adopted SaaS engineering methodology, which I believe uses Agile development.

Agile development

As I understand it, Agile development has a few key tenets (see the Wikipedia article for more information):

  • Individuals and interaction – leading to co-located teams, with heavy use of pair programming, and developer generated automated testing, rather than dispersed teams with developers and QA separate but (occasionally) equal.
  • Working software – using working software as a means of validating requirements, showing off changes and modifying code rather than developing reams of documentation.
  • (Continuous) Customer collaboration – using direct engagement with customers over time to understand changes (using working software) rather than one time contracts for specifications of functionality.
  • Responding to change – changing direction in real time using immediate customer feedback rather than waiting months or a year or more to change development direction to address customer concerns.

In addition to all the above, Agile development typically uses Scrum for product planning. An Agile Scrum (see picture above & Wikipedia article) is a weekly (maybe daily) planning meeting, held standing up and discussing what changes go into the code next.

This is all fine for application development which involves a few dozen person years of effort but storage software development typically takes multiple person centuries of development & QA effort. In my past life, our storage system regression testing typically took 24 hours or more and proper QA validation took six months or more of elapsed time with ~ 5 person years or so of effort, not to mention beta testing the system at a few, carefully selected customer sites for 6 weeks or more. How can you compress this all into a few weeks?

Software development on Agile

With Agile, you probably aren’t beta testing a new release for 6 weeks anywhere, anymore. While you may beta test a new storage system for a period of time you can’t afford the time to do this on subsequent release updates anymore.

Next, there is no QA. It’s just a developer/engineer and their partner. Together they own code change and its corresponding test suite. When one adds functionality to the system, it’s up to the team to add new tests to validate it. Test automation helps streamline the process.

Finally, there's continuous integration into the release code in flight. It used to be that a developer would package up a change, then validate it themselves (any way they wanted), then regression test it integrated with the current build, and then, if it was deemed important enough, it would be incorporated into the next (daily) build of the software. If it wasn't important, it could wait on the shelf (degenerating over time due to lack of integration) until it came up for inclusion. In contrast, I believe Agile software builds happen hourly or even more often (in real time perhaps), changes are integrated as soon as they pass automated testing, and are never put on the shelf. Larger changes may still be delayed until a critical mass is available, but if it's properly designed, even a major change can be implemented incrementally. Once in the build, automated testing ensures that any new change doesn't impact working functionality.
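
As a sketch of what "integrated as soon as it passes automated testing" can look like in practice, here's a minimal, generic gating loop. The branch names and the pytest test command are hypothetical placeholders standing in for whatever a storage vendor's real suite would be; this is not Qumulo's pipeline.

```python
# Minimal continuous-integration gate: merge a change only if the automated tests pass.
# Branch names, remote name and test command are hypothetical placeholders.
import subprocess
import sys

def run(*cmd) -> bool:
    """Run a command, returning True if it exited successfully."""
    return subprocess.run(cmd).returncode == 0

def integrate(change_branch: str, mainline: str = "main") -> bool:
    if not run("git", "checkout", mainline):
        return False
    if not run("git", "merge", "--no-ff", change_branch):   # bring the change in
        run("git", "merge", "--abort")
        return False
    if not run("pytest", "-q"):                              # automated tests gate the merge
        run("git", "reset", "--hard", "HEAD~1")              # back the change out again
        return False
    return run("git", "push", "origin", mainline)            # change is now in the build

if __name__ == "__main__":
    sys.exit(0 if integrate(sys.argv[1]) else 1)
```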

Due to the length of our update cycle, we often had 2 or more releases being validated at any one time. It's unclear to me whether Agile allows for multiple releases in flight, as that just adds to the complexity and any change may have to be tailored for each release it goes into.

Storage on Agile

Vendors are probably not doing this with hardware that’s also undergoing significant change. Trying to do both would seem suicidal.

Hardware modifications often introduce timing alterations that can expose code bugs that had never been seen before. Hardware changes also take a longer time to instantiate (build into electronics). This can be worked around by using hardware simulators but timing is often not the same as the real hardware and it can take 10X to 100X more real-time to execute simple operations. Nonetheless, new hardware typically takes weeks to months to debug and this can be especially hard if the software is changing as well.

Similar to hardware concerns, OS or host storage protocol changes (say from NFSv3 to NFSv4) would take a lot more testing/debugging to get right.

So it helps if the hardware doesn't change, the OS doesn't change and the host IO protocol doesn't change when you're using Agile to develop storage software.

The other thing that we ran into is that over time, regression testing just kept growing and took longer and longer to complete. We made it a point of adding regression tests to validate any data loss fix we ever had in the field. Some of these required manual intervention (such as hardware bugs that need to be manually injected). This is less of a problem with a new storage system and limited field experience, but over time fixes accumulate and from a customer perspective, tests validating them are hard to get rid of.

Hardware on Agile

Although a lot of hardware these days is implemented as ASICs, it can also be implemented via Field Programmable Gate Arrays (FPGAs). Some FPGAs can be configured at runtime (see Wikipedia article on FPGAs), that is in the field, almost on demand.

FPGA programming is done using a hardware description language, an electronic logic coding scheme. It looks very much like software development of hardware logic. Why can't this be incrementally implemented, continuously integrated, automatically validated and released to the field every two weeks?

As discussed above, the major concern is new hardware introducing timing changes which expose hard to find (software and hardware) bugs.

And incremental development of original hardware seems akin to changing a building's foundation while you're adding more stories. One needs a critical mass of hardware to get to a base level of functionality to run storage functionality. This is less of a problem when one's adding or modifying functionality for currently running hardware.

~~~~

I suppose Qumulo's use of Agile shouldn't be much of a surprise. They're a startup, with limited resources, and need to play catchup with a lot of functionality to implement. It's risky from my perspective, but you have to take calculated risks if you're going to win the storage game.

You have to give Qumulo credit for developing their storage using Agile and being gutsy enough to take it directly to the field. Let’s hope it continues to work for them.

Photo Credits: "Scrum Framework" by Source (WP:NFCC#4). Licensed under Fair use via Wikipedia.

The Mac 30 years on

I have to admit it. I have been a Mac and Apple bigot since 1984. I saw the commercial for the Mac and just had to have one. I saw the Lisa, a Mac precursor at a conference in town and was very impressed.

At the time, we were using these green or orange screens at work connected to IBM mainframes running TSO or VM/CMS and we thought we were leading edge.

And then the Mac comes out with proportional fonts, graphics terminal screen, dot matrix printing that could print anything you could possibly draw, a mouse and a 3.5″ floppy.

Somehow my wife became convinced and bought our family's first Mac for her accounting office. You could buy spreadsheet and WYSIWYG word processor software and run it all in 128KB. She ended up buying Mac accounting software, and that's what she used to run her office.

She upgraded over the years and got the 512K Mac, but eventually, when she partnered with two other accountants, she changed to Windows machines. And that's when the Mac came home.

I used the Mac, spreadsheets and word processing for most of my home stuff and did some programming on it for odd jobs but mostly it just was used for home office stuff. We upgraded this over the years, eventually getting a PowerMac which had a base station with a separate CRT above it, but somehow this never felt like a Mac.

Then in 2002 we got the 15″ new iMac. This came as a half basketball base with a metal arm emerging out of the top of it, with a color LCD screen attached. I loved this Mac. We still have it but nobody’s using it anymore. I used it to edit my first family films using an early version of iMovie. It took hours to upload the video and hours more to edit it. But in the end, you had a movie on the iMac or on CD which you could watch with your family. You can’t imagine how empowered I felt.

Sometime later I left corporate America for the life of an industry analyst/consultant. I still used the 15″ iMac for the first year after I was out, but ended up purchasing an aluminum PowerBook Mac laptop with my first check. This was faster than the 15″ iMac and had about the same size screen. At the time, I thought I would spend a lot of time on the road.

But as it turns out, I didn't spend that much time out of the office, so when I generated enough revenue to start feeling more successful, I bought an iMac G5. The kids were using this until last year when I broke it. It had a bigger screen, was definitely a step up in power and storage, and had a SuperDrive which allowed me to burn DVD-Rs for our family movies. When I wasn't working I was editing family movies in half an hour or less (after import) and converting them to DVDs. Somewhere during this time, GarageBand came out and I tried to record and edit a podcast; this took hours to complete and to export as a podcast.

I moved from the PowerBook laptop to a MacBook laptop. I don't spend a lot of time out of the office, but when I do I need a laptop to work on. A couple of years back I bought a MacBook Air and have been in love with it ever since. I just love the way it feels, light to the touch, and it doesn't take up a lot of space. I bought a special laptop backpack for the old MacBook, but it's way overkill for the Air. Yes, it's not that powerful, has less storage and has the smaller screen (11″), but in a way it's more than enough to live with on long vacations or out of the office.

Sometime along the way I updated my desktop to the aluminum iMac. It had a bigger screen, more storage and was much faster. Now movie editing was a snap. I used this workhorse for four years before finally getting my latest generation iMac, with the biggest screen available and faster than I could ever need (he says now). Today, I edit GarageBand podcasts in a little over 30 minutes and it's not that hard to do anymore.

Although these days Windows has as much graphics ability as the Mac, what really made a difference for me and my family is the ease of use, the multimedia support and the iLife software (iMovie, iDVD, iPhoto, iWeb, & GarageBand) over the years, and yes, even iTunes. Apple's Mac OS software has evolved over the years but still seems to be the easiest desktop to use, bar none.

Let’s hope the Mac keeps going for another 30 years.

Photo Credits:  Original 128k Mac Manual by ColeCamp

my original Macintosh by Blake Patterson

Brand new iMac, February 16, 2002 by Dennis Brekke

MacBook Air by nuzine.eu

iMac Late 2012 by Cellura Technology