Toy whirligig to blood centrifuge

Read an article (Stanford research: Inspired by a whirligig toy, … hand-powered blood centrifuge) the other day about a group of researchers who took an idea from a kid's whirligig, which spins as you pull on its strings, and used it to build a blood centrifuge (the Paperfuge) that can be used anywhere in the world.

This was all inspired when the lead researcher saw an electronic blood centrifuge being used as a door stop in a remote clinic due to lack of electricity.

So they started looking at various pre-electricity toys that rotate quickly to see if they could come up with an alternative.

The Paperfuge

The surprising thing is that they clocked a toy whirligig at over 10K RPM, something no one had measured before. The team refined the device using experimentation, computer simulation and mathematical analysis of its various aspects, such as string elasticity, in order to improve its speed and reliability. Finally, they were able to get their device to spin at 125K RPM.

They mounted a capillary tube, where the blood sample is placed, onto a paper disk; pulling and pushing on the device then centrifuges the blood into its various components.

Blood centrifuges for anywhere

A blood centrifuge separates blood components into layers based on the density of blood elements. Red blood cells are the heaviest, so they end up at the bottom of the tube; blood plasma is the lightest, so it ends up at the top; and parasites, like the ones that cause malaria, settle in the middle. Blood centrifuges help in diagnosing disease.

Any device spinning at 125K RPM is more than adequate to centrifuge blood. As such, the Paperfuge competes with electric blood centrifuges that cost $1,000-$5,000 and, of course, take electricity to run.

The Paperfuge is currently in field testing but at $0.20 each, it would be a boon to many clinics and remote medical personnel both on and off the world.

Now about that gyroscope…

Comments?

Photo Credit(s): Childrens Books and Toys;  Video from Stanford website

Dreaming of SCM but living with NVDIMMs…

Last month's GreyBeards on Storage podcast was with Rob Peglar, CTO and Sr. VP of Symbolic IO. Most of the discussion was on their new storage product, but what also caught my interest is that they are developing their storage system using NVDIMM technologies.

In the past I would have called NVDIMMs NonVolatile RAM but with the latest incarnation it’s all packaged up in a single DIMM and has both NAND and DRAM on board. It looks a lot like 3D XPoint but without the wait.

The first time I saw similar technology was at SFD5 with Diablo Technologies and SanDisk, a Western Digital company (videos here and here). At that time they were calling it the UltraDIMM and memory channel storage (MCS). UltraDIMMs had an onboard SSD and DRAM, and they provided a sort of virtual memory (paged) access to the substantial (SSD) storage behind the DRAM page(s). I wrote two blog posts about UltraDIMM and MCS (MCS, UltraDIMM and memory IO, the new path ahead part 1 and part 2).


NVDIMM defined

NVDIMMs are currently available from Micron, Crucial, Netlist, Viking, and probably others. With today's NVDIMMs there is no large SSD (unlike UltraDIMMs, the flash is just for backup), and the complete storage capacity is available from the DRAM in the NVDIMM. At power restore, the NVDIMM acts a bit like virtual memory, paging data in from the flash until all the data is back in DRAM.

NVDIMM hardware includes control logic, DRAM, NAND and SuperCAPs/batteries together in one DIMM. DRAM is used for normal memory traffic, but in the case of a power outage, the data from DRAM is offloaded onto the NAND in the NVDIMM, using the SuperCAP/battery to hold up the DRAM just long enough to transfer its contents to flash.

The problem with good old DRAM is that it is volatile, which means when power is gone so is your data. With NVDIMMs (3D XPoint and other new non-volatile storage class memories also share this characteristic), when power goes away your data is still available and persists across power outages.

For example, Micron offers an 8GB, JEDEC DDR4-compliant, 288-pin NVDIMM that has 8GB of DRAM and 16GB of SLC flash in a single DIMM. Depending on the part, it has 14.9-16.2GB/s of bandwidth at 1866-2400 MT/s (million memory transfers/second). Roughly translating bandwidth to IOPS: at ~17GB/sec and an 8KB block size, the device should be able to do ~2.1 MIO/s (million IO operations per second [never thought I would need an acronym for that]).
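
As a quick sanity check of that back-of-the-envelope translation (a sketch only, using the rounded bandwidth figure above):

```python
# rough translation of NVDIMM bandwidth into 8KB IO operations per second
bandwidth = 17e9            # ~17 GB/sec, the rounded figure used above
block_size = 8 * 1024       # 8KB blocks

iops = bandwidth / block_size
print(f"~{iops / 1e6:.1f} MIO/s")   # ~2.1 MIO/s
```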

Another thing that makes NVDIMMs unique in the storage world is that they are byte addressable.

Hardware – check, Software?

SNIA has an NVM Programming (NVMP) Technical Working Group (TWG), which has been working to help adoption of the new technology. In addition to the NVMP TWG, there's pmem.io, SanDisk's NVMFS (2013 FMS paper, formerly known as DirectFS) and Intel's pmfs (persistent memory file system) GitHub repository. I couldn't find any GitHub repository for NVMFS, but both pmem.io and pmfs are well along the development path for Linux.

The TWG identified a three-pronged approach to NVDIMM adoption: crawl, walk, run (see the pmem.io blog post for more info).

  • The Crawl approach uses standard block and file system drivers on Linux to talk to an NVDIMM driver. This approach has the benefit of being well tested, well known and widely available (except for the NVDIMM driver). The downside is that you have a full block IO or file IO stack in front of a device that can potentially do 2.1 MIO/s; that stack adds a lot of overhead, reducing the potential significantly.
  • The Walk approach uses a persistent memory file system (pmfs?) to directly access the NVDIMM storage using memory-mapped IO. The advantage here is that there's absolutely no kernel code active during an NVDIMM data access (see the sketch after this list). But building a file system or block store on top of this may require some application-level code.
  • The Run approach wasn't described well in the blog post, but it seems like SanDisk's NVMFS approach, which uses both standard NVMe SSDs and non-volatile memory to build a hybrid (NVDIMM-SSD) file system.
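
To make the walk approach a bit more concrete, here's a minimal sketch of byte-addressable, memory-mapped access to an NVDIMM exposed through a DAX-capable file system; the /mnt/pmem0 mount point and file name are hypothetical:

```python
# sketch: byte-addressable access to persistent memory via mmap; assumes an NVDIMM
# exposed through a DAX-capable file system mounted at /mnt/pmem0 (hypothetical path)
import mmap
import os

path = "/mnt/pmem0/record.bin"
size = 4096

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, size)                 # size the persistent region

buf = mmap.mmap(fd, size)              # map the NVDIMM-backed pages into our address space
buf[0:5] = b"hello"                    # ordinary byte-level stores, no read()/write() syscalls
buf.flush()                            # msync: push the stores toward the persistence domain
buf.close()
os.close(fd)
```

Once the mapping is set up, the loads and stores go straight to the NVDIMM-backed pages; the file system is only involved in naming and allocating the region.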

Symbolic IO as another run approach?

Symbolic IO's computationally defined storage is intended to make use of NVDIMM technology, and the Store [update 12/16/16] appliance version has SSD storage as well, in a hybrid NVDIMM-SSD, run-like solution. The appliance runs a full version of Linux, SymCE, which doesn't use a file system or the PMEM library to access the data; it's just byte-addressable storage with a PMEM file system embedded within [update 12/16/16]. This means that applications can use standard Linux file APIs to (directly) reference NVDIMM and the backend SSD storage.

It’s computationally defined because they use compute power to symbolically transform the data reducing data footprint in NVDIMM and subsequently in the SSD backing tier. Checkout the podcast to learn more

I came away from the podcast thinking that NVDIMMs are more prevalent than I thought. So, that’s what prompted this post.

Comments?

Photo Credit(s): UltraDIMM photo taken by Ray at SFD5, Architecture picture from pmem.io blog post


Microsoft ESRP database access latencies – chart of the month

[Chart: Microsoft Exchange ESRP database access latencies, 5,000 mailboxes and over category]
The above chart was included in last month's SCI e-Newsletter and depicts recent Microsoft Exchange (2013) Solution Reviewed Program (ESRP) results for database access latencies. Storage systems new to this 5,000 mailbox and over category include Oracle, Pure and Tegile. As these new systems are all-flash arrays, we are starting to see significant reductions in database access latencies; the #4 system (Nimble Storage) was a hybrid (disk and flash) array.

As you recall, ESRP reports on three database access latencies: database read, database write and log write. All three are shown above, but we sort and rank this list based on database read latency alone.

Hard to see above, but reading the ESRP reports, one finds that the top 3 systems had 1.04, 1.06 and 1.07 millisecond average database read latencies. So the separation between the top 3 is less than 40 microseconds.

The same three systems' database write latencies were 1.75, 1.62 and 3.07 milliseconds, respectively. So if we were ranking the above on write response times, Pure would have come in #1.

Their log write latencies were 0.67, 0.41 and 0.82 milliseconds, and once again, if we were ranking based on log response times, Pure would be #1.
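
To make the re-ranking argument concrete, here's a small sketch using the latencies quoted above. Only Pure is identified by name (as the read-rank #2 system, which the write and log numbers imply); the other two systems are labeled generically:

```python
# top-3 ESRP latencies (msec) in database-read rank order, from the text above;
# vendor names other than Pure are deliberately left generic
top3 = {
    "read-rank #1":        {"db_read": 1.04, "db_write": 1.75, "log_write": 0.67},
    "Pure (read-rank #2)": {"db_read": 1.06, "db_write": 1.62, "log_write": 0.41},
    "read-rank #3":        {"db_read": 1.07, "db_write": 3.07, "log_write": 0.82},
}

for metric in ("db_read", "db_write", "log_write"):
    order = sorted(top3, key=lambda sys: top3[sys][metric])
    print(metric, "->", order)   # Pure moves to #1 when ranked on either write metric
```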

It’s unclear whether Exchange customers would want to deploy AFAs for their database and log files but these three ESRP reports and Nimble’s show that there should be no problem with the performance of AFAs in these environments.

What about data reduction?

Unclear to me is how much of a role data reduction technologies played in the AFA and hybrid solution ESRP performance. Data reduction advantages would most likely show up in database IOPS counts more so than response times, but if present, they may still reduce access latencies, as there would potentially be less data to transfer between the backend of the storage system and its cache.

ESRP reports do not officially report on a vendor’s data reduction effectiveness, so we are left with whatever the vendor decides to say.

In that respect, Pure FlashArray//m20 indicated in their ESRP report that their “data reduction is significantly higher” than what they see normally (4:1) because Jetstress (ESRP benchmark program) generates lots of duplicated data.

I couldn’t find anything that Tegile T3800 or Nimble Storage said similar to the above, indicating how well their data reduction technologies worked in Jetstress as compared to normal. However, they did make a reference to their compression effectiveness in database size but I have found this number to be somewhat less effective as it historically showed the amount of over provisioning used by disk-only systems and for AFA’s and hybrid – storage, it’s unclear how much is data reduction effectiveness vs. over provisioning.

For example, Pure, Tegile and Nimble also reported a "database capacity utilization" of 4.2%, 60% and 74.8%, respectively. And Nimble did report that, over their entire customer base, Exchange data has on average a 61.2% capacity savings.

So you tell me: what was the effective data reduction for Pure's, Tegile's and Nimble's respective Jetstress runs? From my perspective, Pure's report of 4.2% looks about right (it says the actual database data fit in 4.2% of SSD storage, for a ~23.8:1 reduction effectiveness on Jetstress/ESRP data). I find it harder to believe what Tegile and Nimble have indicated, as their numbers would imply only a 1.7:1 and 1.33:1 reduction effectiveness for Jetstress/ESRP data.
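
The implied reduction ratios above are just the inverse of the reported capacity utilization figures; a quick sketch of that arithmetic:

```python
# converting reported "database capacity utilization" into an implied reduction ratio
reported = {"Pure": 0.042, "Tegile": 0.60, "Nimble": 0.748}   # from the ESRP reports

for vendor, utilization in reported.items():
    print(f"{vendor}: ~{1 / utilization:.1f}:1 implied reduction")
# Pure: ~23.8:1, Tegile: ~1.7:1, Nimble: ~1.3:1
```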

The Oracle FS1-2 doesn't seem to have any data reduction capabilities and reported 100% of storage capacity used by the Exchange database.

So that's it: Jetstress uses "significantly reducible" data for some AFA systems. But in the field, the advantage of data reduction techniques is much smaller.

I think it’s time that ESRP stopped using significantly reducible data in their Jetstress program and tried to more closely mimic real world data.

Want more?

The October 2016 and our other ESRP reports have much more information on Microsoft Exchange performance. Moreover, there's a lot more performance information, covering email and other (OLTP and throughput intensive) block storage workloads, in our SAN Storage Buying Guide, available for purchase on our website. More information on file and block protocol/interface performance is also included in SCI's SAN-NAS Buying Guide, also available from our website. And if you're interested in file system performance, please consider purchasing our NAS Buying Guide, also available on our website.

~~~~

The complete ESRP performance report went out in SCI’s October 2016 Storage Intelligence e-newsletter.  A copy of the report will be posted on our SCI dispatches (posts) page over the next quarter or so (if all goes well).  However, you can get the latest storage performance analysis now and subscribe to future free SCI Storage Intelligence e-newsletters, by just using the signup form in the sidebar or you can subscribe here.

As always, we welcome any suggestions or comments on how to improve our ESRP  performance reports or any of our other storage performance analyses.


Engineers invent an Acoustic Prism

Read an article in Scientific American online (Engineers Debut the Acoustic Prism article, EPFL [École Polytechnique Fédérale de Lausanne] press release, YouTube video) that discussed a newly invented device, the Acoustic Prism. The prism was invented by Hervé Lissek and his team at EPFL.

An acoustic prism acts on sound waves much the way an optical prism acts on light waves, separating the composite frequencies of an incoming signal into isolated frequencies in the outgoing signal.

The Acoustic prism

Apparently the acoustic prism is made up of metallic cavities (cells) separated by membranes that delay sound waves of a certain frequency as they travel down a channel, in order to isolate the frequencies of the incoming sound.

I'm not quite sure what the purpose of the membranes is, other than to somehow normalize the incoming sound waves so that they seem to hit all the cavities at the same time. And what constitutes the prismatic effect in the above is not clear to me either.

The acoustic prism on display in the YouTube video looks like a long, rectangular metallic tube with ten holes along one side, a microphone at one end and the cavities in the middle. The sound enters one end (to the right in the photo below) and escapes through the holes in the tube, high-frequency sounds closer to the source and low-frequency sounds farther away from it. The membranes delay the sound propagating down the tube based on frequency, which allows each frequency to depart the tube at a different point.

So the composite sound is separated out into its constituent parts and dispersed out of the tube in individual sound waves, not unlike an optical prism. Why ten holes and not twenty or thirty is one question, and it would seem that the membranes would need to be engineered separately for each frequency you want to isolate.

Study of optical prisms changed the world

It seems to me that the study of light emerging from optical prisms, discussed by Newton in his Opticks in 1704, led the way to the Enlightenment and ultimately to a redefinition of light as we know it today.

As I have both hearing and eyesight difficulties, it has always confounded me that a simple lens-like device can correct for just about any and all eye imperfections and allow me to read anything I need to. But nothing similar is available for improving hearing (ear) defects.

We need an Acoustic Lens

If there were some sort of sound lens, it could correct incoming sound frequencies automatically to overcome any ear defects. An optical lens works just as well in noisy or non-noisy (light) environments without a problem. I suppose that's because it modifies all incoming light wavelengths the same way.

I believe electronic comb filters/digital waveguides should have been able to do this, given proper processing power, but they seem at best a modest improvement. There just is nothing for hearing comparable to what eyeglasses/bifocals can do for eyesight and light waves.

Maybe what's needed is some sort of acoustic lens able to shift a number of incoming frequencies into other frequencies. If there is such a thing as an acoustic prism that refracts sound waves, then there should certainly be a way to combine these refracting surfaces to shift acoustic frequencies into something more useful to a damaged ear.

Acoustic lens practicalities

I'm not sure what's at the other end of the acoustic prism's rectangular tube, but you are supposed to be able to speak into one end of it and hear the constituent parts of the sound emerge from the face with the ten holes.

It’s a bit much to be wearing something like this around an ear today but it’s just a start. And yes, I realize that a prism is not a lens but they both work via refraction. If one can isolate frequencies, one should be able to (electronically or mechanically) convert one to another, and then (electronically or mechanically) combine the ones that matter into some sort of output sound stream.

So we would need to miniaturize it considerably. Also, it would be more helpful if it were somehow circular or spiral, so it could be worn over an ear not unlike headphones or ear muffs. If necessary, the electronics to process the incoming sound, modify its frequencies to what's needed and output them (through some sort of speakers) could be embedded in the headphones. And there you have it.

It would be very nice if someday, it came in a rechargeable Bluetooth earpiece form factor but that could be generation 3 or 4 or …

Anyway, barring some sort of genetic engineering solution that produces a brand new ear cochlea, either in situ or via transplantation, there is nothing other than modest electronic means (today's hearing aids) available today to solve hearing problems. But the Acoustic Prism is just a start, and its applications seem endless.

I look forward to some day in the future when I can wear EarGlasses to hear better…

Comments?

Photo credits: Opticks by Sir Isaac Newton Knt., from Google Books; screenshots from the YouTube video discussing the device

Docker presents at Cloud Field Day 1 (CFD1)

Earlier this summer, Docker presented at Cloud Field Day 1 (CFD1) on some of their current technology and upcoming enhancements. (See the videos here.)

As you probably recall, Docker is an implementation of Linux containers, which is a way of packaging applications into micro-services that can be built, shipped and run across on-prem, private and public cloud infrastructure.

Docker containers and Docker Engine

Docker containers combine a base OS image plus whatever other binaries are needed to run a micro-service into a package that runs on top of a Docker Engine. Containers can then be run as a single instance or as multiple instances on a Docker Engine.

Containers are not VMs; they have a fundamentally different architecture. For instance:

  • A VM includes a full OS plus application software; it often takes several minutes to boot up, and there is a hypervisor underneath it that emulates hardware and other critical services needed to run the VM. But there is no shared, standard OS underneath the VM layer.
  • A Docker container relies on shared OS resources, which allows for a lighter weight application package and much faster instantiation/boot up. There is no hypervisor, a container can run under Linux, Windows or Mac OSs, and containers provide full-stack portability.

In the Docker Hub (a repository for Docker containers) one can find a WordPress container that contains the whole LAMP + WordPress stack in a single container. To run WordPress you would also need a MySQL or compatible database, and there's a MySQL container that can be used. You could easily run both the WordPress/LAMP container and the MySQL container in the same Docker Engine, connect the two together and connect the LAMP+WordPress container to the Internet to fire up a WordPress blog site.
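
As a rough sketch of what that looks like, here's the same setup driven through the docker-py SDK rather than the CLI. The image names follow the official Docker Hub images, but treat the exact environment variable names and the link alias as assumptions to verify against the image documentation:

```python
# sketch: run a MySQL container and a WordPress container on one Docker Engine and
# wire them together; requires the docker-py SDK (pip install docker)
import docker

client = docker.from_env()

# database backend for WordPress
db = client.containers.run(
    "mysql:5.7",
    name="wp-db",
    environment={"MYSQL_ROOT_PASSWORD": "example", "MYSQL_DATABASE": "wordpress"},
    detach=True,
)

# the LAMP + WordPress container, linked to the database and published to the host
wp = client.containers.run(
    "wordpress",
    name="wp-blog",
    links={"wp-db": "mysql"},   # legacy link; a user-defined network also works
    environment={"WORDPRESS_DB_HOST": "mysql", "WORDPRESS_DB_PASSWORD": "example"},
    ports={"80/tcp": 8080},     # blog reachable at http://localhost:8080
    detach=True,
)
```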

Docker compared VMs to houses and containers to apartments. Docker Engines can run as a VM or on bare metal hardware.

Running Docker containers on desktop, servers and in the cloud

If you want to experiment with Docker, you can download Docker for Mac or Docker for Windows, which can be used to install and run a native Docker Engine on your desktop.

Windows Server also supports native Docker containers. In VMware, one can run Docker containers under vSphere Integrated Containers, which supplies Docker API endpoints as standard ESX VMs, or under Project Photon, which is a streamlined, non-ESX hypervisor that also supplies Docker API endpoints.

You can run Docker containers in AWS and Azure as well, integrated with each public cloud's compute, network and storage services.

Docker Swarm

So you have your Docker Engine running, with multiple containers sharing resources to create an application, but you're out of compute, storage or networking power on that engine and need to bring on another server or two. What do you do? With Docker 1.12, you can now use Docker Swarm, which supports multiple Docker Engines.

With Docker Swarm, you have management nodes and worker nodes. Management nodes provide HA services for Docker containers running across multiple worker nodes. Worker nodes run Docker Engines with multiple containers.

A Docker Swarm orchestrates the operation of multiple Docker Engines running Docker Services.

A Docker Service is a Docker container running across multiple worker nodes (engines) in a Docker Swarm. Docker Services can be run globally (across every worker node) or replicated (some number of container instances run across one or more worker nodes). You specify which you want on the Docker Service command, and Swarm will ensure that the selected specification is implemented across its worker nodes.
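
Here's a minimal sketch of specifying replicated vs. global services through the docker-py SDK; the image names are placeholders, and the same choice can be made with --replicas or --mode global on the docker service create command line:

```python
# sketch: create a replicated and a global Docker Service on a Swarm manager node,
# using the docker-py SDK; image names here are placeholders
import docker
from docker.types import ServiceMode

client = docker.from_env()   # must be talking to a Swarm manager node

# replicated: Swarm keeps exactly 3 container instances running across the workers
web = client.services.create(
    "nginx:alpine",
    name="web",
    mode=ServiceMode("replicated", replicas=3),
)

# global: one container instance on every worker node in the Swarm
agent = client.services.create(
    "example/monitoring-agent",          # placeholder image
    name="agent",
    mode=ServiceMode("global"),
)
```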

If a worker node goes down, Swarm will detect it and restart the failed container instances on other worker nodes in the Swarm. Beware: if your container relied on persistent storage, that storage must also be available to all Swarm worker nodes.

Swarm provides a Routing Mesh. When you fire up a container service you can identify a swarm-wide ingress port for a container. Every worker node will listen in on that port to provide a container-aware routing service to route app requests across the Swarm to wherever the containers are currently running.

You can have multiple Swarm management nodes which share the management of the Swarm. Swarm management nodes are either leaders or followers and use a Raft consensus model. If the leader node goes down, another management node will take on its leadership role and start managing the Swarm.

There are many other technologies underneath Docker Swarm that are worth a look but suffice it to say it provides a load-balancing, HA service for container execution across multiple engines.

Docker Datacenter

What could possibly be missing? We have Docker Engines that can run multiple containers and Docker Swarms that can run multiple Docker Engines and containers in an HA manner. But we really need something that supports multiple Docker Swarms, and throws in a private, secure container repository and enterprise support options while it's at it.

Earlier this year Docker introduced Docker Datacenter, a priced service offering which does just that. It provides Containers-as-a-Service (CaaS) across multiple Docker Swarms, with commercial support options and a Docker Trusted Registry, and integrates it all with enterprise services like LDAP/AD to provide audit logs and other monitoring capabilities for container service execution.

Using Docker Datacenter, developers can have multiple development Swarms of their own to support engineering activities, shipping and storing their container images in a secure, private repository, while operations can have multiple Swarms that all run the same Docker container apps in an HA manner.

From an app developer standpoint, it all looks like container instances are running in the same Docker Engine environment across all those implementations. Operations sees a centralized management console (plane) that provides a way to monitor and manage multiple Docker Swarms running everywhere.

Well, that's about it for the update on Docker. There wasn't much at the sessions on how containers access persistent storage, but there's a Flocker service that offers plugin support for EMC, NetApp and other enterprise SAN storage for container apps. And there seem to be other options out there as well.

You can read/hear more about Docker from these other CFD1 participants:

Comments

Full disclosure: Docker gave us a very nice/very long scarf, and two t-shirts decorated with Docker logo and tagline and a number of stickers and pins.

QoM1610: Will NVMe over Fabric GA in enterprise AFA by Oct’2017

NVMe over fabric (NVMeoF) was a hot topic at Flash Memory Summit last August. Facebook and others were showing off their JBOFs (see my Facebook moving to JBOF post), but there were plenty of other NVMeoF offerings at the show.

NVMeoF hardware availability

When Brocade announced their Gen6 switches, they made a point of saying that both their Gen5 and Gen6 switches currently support NVMeoF protocols. In addition to Brocade's support, in Dec 2015 QLogic announced NVMeoF support for select HBAs. Also, as of July 2016, Emulex announced support for NVMeoF in their HBAs.

From an Ethernet perspective, QLogic has an NVMe Direct NIC which supports NVMe protocol offload for iSCSI. But even without NVMe Direct, 40GbE & 100GbE Ethernet with iWARP, RoCE v1/v2 or iSER (iSCSI RDMA) could readily support NVMeoF. The nice thing about NVMeoF on Ethernet is that the same fabric not only supports iSCSI & FCoE, but CIFS/SMB and NFS as well.

InfiniBand and Omni-Path Architecture already support native RDMA, so they should already support NVMeoF.

So the hardware/firmware is already available for any enterprise AFA customer that wants NVMeoF for their data center storage.

NVMeoF Software

Intel claims that ~90% of the software driver functionality of NVMe is the same for NVMeoF. The primary differences between the two seem to be the NVMeoF discovery and queueing mechanisms.

There are two fabric methods that can be used to implement NVMeoF data and command transfers: capsule mode, where NVMe commands and data are encapsulated in normal fabric packets, or fabric-dependent mode, where drivers make use of native fabric memory transfer mechanisms (RDMA, …) to transfer commands and data.

A (Linux) host driver for NVMeoF is currently available from Seagate. And as a result, support for NVMeoF in Linux is currently under development and not far from release in the next kernel (I think). (Mellanox has a tutorial on how to compile a Linux kernel with NVMeoF driver support.)

With Linux coming out, Microsoft Windows and VMware can’t be far behind. However, I could find nothing online, aside from base NVMe support, for either platform.

NVMeoF target support is another matter but with NICs/HBAs & switch hardware/firmware and drivers presently available, proprietary storage system target drivers are just a matter of time.

Boot support is a major concern. I could find no information on BIOS support for booting off of an NVMeoF AFA. Arguably, one may not need boot support for NVMeoF AFAs, as they are probably not a viable target for storing app code or OS software.

From what I could tell, normal fabric multi-pathing support should work fine with NVMeoF. This should allow for HA NVMeoF storage, a critical requirement for enterprise AFA storage systems these days.

NVMeoF advantages/disadvantages

Chelsio and others have shown that NVMeoF adds only ~8μsec of overhead beyond native NVMe SSD access, which, if true, would warrant implementation on all NVMe AFAs. This may or may not impact max IOPS, depending on the scalability of NVMeoF.

For instance, servers (PCIe bus hardware) typically limit the number of private NVMe SSDs to 255 or less. With NVMeoF, one could potentially have 1000s of shared NVMe SSDs accessible to a single server. At that scale, a single server attached to a scale-out NVMeoF AFA (cluster) could see ~4X the IOPS it could achieve using private NVMe storage.

Base-level NVMe SSD support and protocol stacks are starting to be available from most flash vendors and for operating systems such as Linux, FreeBSD, VMware, Windows, and Solaris. If Intel's claim of 90% common software between NVMe and NVMeoF drivers is true, then it should be a relatively easy development project to provide host NVMeoF drivers.

The need for special Ethernet hardware that supports RDMA may delay some storage vendors from implementing NVMeoF AFAs quickly. The lack of BIOS boot support may be a minor irritant in comparison.

NVMeoF forecast

AFA storage systems, as far as I can tell, are all about selling high IOPS and very-low latency IOs. It would seem that NVMeoF would offer early adopter AFA storage vendors a significant performance advantage over slower paced competition.

In previous QoM/QoW posts we have established that there are about 13 new enterprise storage systems that come out each year. Probably 80% of these will be AFA, given the current market environment.

Of the ~10.4 AFA systems coming out over the next year, ~20% pride themselves on being the lowest latency solutions in the market and thus command high margins. One would think these systems would be the first to adopt NVMeoF. But most of them have their own proprietary flash modules, do not use standard (NVMe) SSDs, and use their own proprietary interfaces to their proprietary flash storage. This will delay any implementation for them until they can convert their flash storage to NVMe, which may take some time.

On the other hand, most (70%) of the other AFA systems, which currently use SAS/SATA SSDs, could boost their IOPS counts and drastically reduce their IO response times by implementing NVMe SSDs and NVMeoF. But converting SAS/SATA backends to NVMe will take time and effort.

But there are a select few (~10%) AFA systems that already use NVMe SSDs, and these few would seem to have a fast track toward implementing NVMeoF. The fact that NVMeoF is supported over all fabrics and all storage interface protocols makes it even easier.

Moreover, NVMeoF has been under discussion since the summer of 2015, which tells me that astute AFA vendors have already had 18+ months to develop it. With NVMeoF host drivers & hardware available since Dec. 2015, the hardware and software exist to test and validate against.

I believe that NVMeoF will be GA’d within the next 12 months by at least one enterprise AFA system. So my QoM1610 forecast for NVMeoF is YES, with a 0.83 probability.

Comments?


Hitachi and the coming IoT gold rush

Earlier this week I attended Hitachi Summit 2016, along with a number of other analysts and Hitachi executives, where Hitachi discussed their current and ongoing focus on the IoT (Internet of Things) business.

We have discussed IoT before (see QoM1608: The coming IoT tsunami or not, Extremely low power transistors … new IoT applications). Analysts and companies predict ~200B IoT devices by 2020 (my QoM prediction was 72.1B, at 0.7 probability). But in any case, there's a lot of IoT activity going to come online very shortly. Hitachi is already active in IoT and, if anything, wants it to grow significantly.

Hitachi’s current IoT business

Hitachi is uniquely positioned to take on the IoT business over the coming decades, having a number of current businesses in industrial processes, transportation, energy production, water management, etc. Over time, all these industries and more are becoming much more data driven and smarter as IoT rolls out.

Some metrics indicating the scale of Hitachi’s current IoT business, include:

  • Hitachi is #79 in the Fortune Global 500;
  • Hitachi generated $5.4B (FY15) in IoT revenue;
  • Hitachi's IoT R&D investment is $2.3B (over 3 years);
  • Hitachi has 15K customers worldwide and 1400+ partners; and
  • Hitachi spends ~$3B on R&D annually and has 119K patents.

Hitachi has been in the OT (Operational [industrial] Technology) business for over a century now. Hitachi has also had a very successful and ongoing IT business (Hitachi Data Systems) for decades now. Their main competitors in this IoT business are GE and Siemens, but neither has the extensive history in IT that Hitachi has had. Both are working hard to catch up.

Hitachi Rail-as-a-Service

For one example of what Hitachi is doing in IoT, they have recently won a 27.5-year Rail-as-a-Service contract to upgrade, ticket, maintain and manage all new trains for UK Rail. This entails upgrading all train rolling stock and providing upgraded rail signaling, traffic management systems, depot and station equipment, and ticketing services for all of UK Rail.

The success and profitability of this Hitachi service offering hinges on their ability to provide more cost-efficient rail transport. A key capability they plan to deliver is predictive maintenance.

Today, in the UK and most other major rail systems, train high availability is often supplied by using spare rolling stock that's pre-positioned and available to call into service when needed. With Hitachi's new predictive maintenance capabilities, the plan is to reduce, if not totally eliminate, the need for spare rolling stock inventory and keep the new trains running 7X24.

Hitachi said their new trains capture 48K data items and generate over ~25GB/train/day. All this data will be fed into their new Hitachi Insight Group Lumada platform, which includes Pentaho, HSDP (Hitachi Streaming Data Platform) and their Content Analytics, to analyze train data and determine how best to keep the trains running. Behind all this analytical power will no doubt be the HDS HCP object store, used to keep track of all the train sensor data and other information, Hitachi UCP servers to process it all, and other Hitachi software and hardware to glue it all together.

The new trains and services will be rolled out over time, but there's a pretty impressive timetable. For instance, Hitachi will add 120 new high-speed trains to UK Rail by 2018. About the only thing that Hitachi is not directly responsible for in this Rail-as-a-Service offering is the communications network for the trains.

Hitachi's other IoT offerings

Hitachi is actively seeking other customers for their Rail-as-a-Service IoT offering. But it doesn't stop there; they would like to offer smart-water-as-a-service, smart-city-as-a-service, digital-energy-as-a-service, etc.

There’s almost nothing that Hitachi currently supplies as industrial products that they wouldn’t consider offering in an X-as-a-service solution. With HDS Lumada Analytics, HCP and HDS storage systems, Hitachi UCP converged infrastructure, Hitachi industrial products, and Hitachi consulting services, together they are primed to take over the IoT-industrial products/services market.

Welcome to the new Hitachi IoT world.

Comments?

SPC-1 IOPS performance per GB-NAND – chart of the month

[Bar chart depicting IOPS/GB-NAND; #1 is the DataCore Parallel Server with ~266 IOPS/GB-NAND. (c) 2016 Silverton Consulting, All Rights Reserved]

The above is an updated chart from last month's SCI newsletter StorInt™ SPC Performance Report, depicting the top 10 SPC-1 submissions' IOPS™ per GB-NAND. We have been searching for a while now for a way to depict storage system effectiveness when using SSDs or other flash storage. We have used IOPS/SSD in the past, but IOPS/GB-NAND looks better.

Calculating IOPS/GB-NAND

SPC-1 does not report this metric, but it can be calculated by dividing IOPS by NAND storage capacity. One can find the NAND storage capacity by looking over the SPC-1 full disclosure report (FDR) and totaling up the NAND storage in all the SSDs and flash devices in the configuration. This is total NAND capacity, not Total ASU (used storage) Capacity. GB-NAND reflects just what's indicated for SSD/flash device capacity in the configuration section. This is not necessarily the device's physical NAND capacity when over-provisioned, but at least it's available in the FDR.

DataCore Parallel Server IOPS/GB-NAND explained

The DataCore Parallel Server generated over 5M IOPS (IOs/second) under an SPC-1 (OLTP-like) workload. With its 54 480GB SSDs, totaling ~25.9TB of NAND capacity, that gives them just under 200 IOPS/GB-NAND. The chart in the original report was incorrect: there we used 36 480GB SSDs, or ~17.3TB of NAND, to compute IOPS/GB-NAND, which gave them just under 300 IOPS/GB-NAND. (The full report has since been corrected and is available for re-download by subscribers to our newsletter.)
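
Here's the arithmetic behind that correction; 5M IOPS is the rounded figure used above, the exact value is in the SPC-1 FDR:

```python
# IOPS/GB-NAND for the DataCore Parallel Server result, corrected vs. original chart
iops = 5.0e6                                   # "over 5M IOPS", rounded

corrected_nand_gb = 54 * 480                   # 54 x 480GB SSDs ≈ 25.9TB of NAND
original_nand_gb = 36 * 480                    # 36 x 480GB SSDs ≈ 17.3TB (incorrect)

print(f"corrected: ~{iops / corrected_nand_gb:.0f} IOPS/GB-NAND")   # ~193, just under 200
print(f"original:  ~{iops / original_nand_gb:.0f} IOPS/GB-NAND")    # ~289, just under 300
```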

The 480GB (Samsung SM863 MZ-7KM480E) SSDs were all SATA attached. Samsung lists these SSDs as V-NAND, MLC drives, rated at 97K random reads and 26K random writes. At over 5M IOPS, the configuration should be running close to 100% of the SSDs' rated performance. However, DataCore's Parallel Server included 2 controllers with a total of 3TB of DRAM cache, which were then SAS connected to 4 Dell MD1220 storage arrays, each with 512GB of DRAM cache, so the total configuration had about 5TB of DRAM in it, most of which would have been used as an IO cache.

The SPC-1 submission only used 11.8TB (Total ASU capacity) of storage. All that DRAM cache helps to explain how they attained 5M IOPS. Having a multi-tiered cache like the DataCore-MD1220 configuration doesn't ensure that all the cache is effectively used, but even without cache tiering logic there might not be much overlap between the MD1220 and Parallel Server caches. It would be more interesting to see how busy the SSDs were during this SPC-1 run.

How random the SPC-1 workload is remains subject to much speculation in the industry. Suffice it to say it's not 100% random, but then, what is? Non-random OLTP workloads would tend to favor larger caches.

SPC is coming out with a new version of their benchmark with supplementary information which may shed more light on device busyness.

All SPC-1 benchmark submissions are available at storageperformance.org.

Want more?

The August 2016 and our other SPC Performance reports have much more information on SPC-1 and SPC-2 performance. Moreover, there's a lot more performance information, covering email and other (OLTP and throughput intensive) block storage workloads, in our SAN Storage Buying Guide, available for purchase on our website. More information on file and block protocol/interface performance is included in SCI's SAN-NAS Buying Guide, also available from our website.

~~~~

The complete SPC performance report went out in SCI’s August 2016 Storage Intelligence e-newsletter.  A copy of the report will be posted on our SCI dispatches (posts) page over the next quarter or so (if all goes well).  However, you can get the latest storage performance analysis now and subscribe to future free SCI Storage Intelligence e-newsletters, by just using the signup form in the sidebar or you can subscribe here.