110: GreyBeards talk FMS2020 wrap up with Jim Handy, General Director of Objective Analysis

This month it’s back to storage with our annual wrap-up of the Flash Memory Summit conference, featuring Jim Handy, General Director of Objective Analysis. Jim’s been on our show 5 times before and is a well-known expert on NAND and SSDs (as well as DRAM and memory systems). Jim also blogs at TheSSDGuy.com and TheMemoryGuy.com in case you want to learn more.

FMS went virtual this year and covered many interesting topics, including how computational storage is making headway in the cloud, how 3D QLC is hitting the enterprise with PLC on the way, and, for a first at FMS, a talk on DNA storage (for more information, see our podcast with CatalogDNA). Jim’s always interesting to talk with and helps us understand where the NAND/SSD industry is headed. Listen to the podcast to learn more.

Jim mentioned that the major NAND vendors are all increasing the number of layers for their 3D NAND, and it continues to scale well. Most vendors are currently shipping ~100 layer NAND, with Micron doing more than that. And vendor roadmaps are looking at the possibility of 200 layers or more. Jim doesn’t think anyone knows how high it can go.

Another advantage of 3D NAND is that it can use bigger bit cells, which improves endurance. From Jim’s perspective, more electrons per cell means a better, more resilient bit cell.

Many vendors in the nascent persistent memory industry had been hoping that NAND would stop scaling at some point, letting them pick up the slack. But NAND manufacturers moved to 3D and scaling hasn’t stopped at all. This has relegated most persistent memory vendors to a small niche market, with the exception of Intel (and Micron).

Jim said that Intel is losing money on Optane every year, ~$5B so far. But Intel knows that chip profitability is tied to economies of scale; volume matters. With enough volume, Optane will become cheap enough to manufacture that Intel will make buckets of money from it.

Interestingly, Jim said that DRAM scaling is slowing down. That means there may be an even bigger market for something close to DRAM access speeds, but with increased density and lower cost. Optane seems to fit that description very well.

Jim also mentioned that computational storage is starting to see some traction with public cloud vendors. Computational storage adds general-purpose compute power inside an SSD, which can be used to perform storage-intensive functions out at the SSD rather than transferring data to the CPU for processing. This makes sense where a lot of data would otherwise need to be transferred back and forth to an SSD and where compute cycles are just as cheap out on the SSD as in the server. For example, computational storage can make a lot of sense for data compression, search, and video transcoding. (See our podcast with NGD Systems for more information.)
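To make the transfer savings concrete, here’s a minimal back-of-the-envelope sketch comparing the bytes that cross the host interface when a search runs on the host versus on a computational SSD. The dataset size and hit ratio are made-up numbers for illustration, not figures from the podcast.

```python
# Hypothetical back-of-the-envelope: bytes moved over the host interface when
# searching a dataset on the host vs. filtering it on a computational SSD.
# Numbers are illustrative only.

DATASET = 4 * 2**40          # 4 TiB of log data stored on the SSD (assumed)
HIT_RATIO = 0.001            # fraction of records that match the search (assumed)

host_side_search = DATASET                  # every byte crosses PCIe/NVMe to the CPU
on_drive_search = int(DATASET * HIT_RATIO)  # only matching records are returned

print(f"host-side search moves {host_side_search / 2**30:,.0f} GiB")
print(f"on-drive search moves  {on_drive_search / 2**30:,.1f} GiB")
```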

In contrast, Open-Channel SSDs are dumb SSDs, i.e., SSDs without a flash translation layer or the other smarts needed to make NAND work as persistent storage in the enterprise. There’s a small group of system providers that want to perform all this functionality at a global scale (across multiple SSDs) rather than at the local, SSD drive level.

Another topic that hit its stride at FMS2020 was Zoned Namespaces (ZNS). ZNS partitions an SSD into separately addressable segments to allow higher-performing, sequential (write) access within those zones. As SSD capacity has increased, IO activity has skyrocketed, leading to an “IO blender” effect. Within an IO blender, it’s impossible to tell which IO is following a sequential pattern and which is not. ZNS is intended to solve that problem.

With ZNS SSDs, IOs doing sequential access can have their own zone, so the SSD knows that IO is sequential and can act accordingly. It turns out that sequential writes to NAND can perform much, much faster than random writes.
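Here’s a toy sketch of the zone idea: each zone keeps a write pointer and only accepts writes that land exactly on it, so the drive always sees the zone filling sequentially. This models the concept only; it is not the actual NVMe ZNS command set.

```python
# Toy model of a ZNS zone: writes must land at the zone's write pointer,
# so the drive always sees the zone being filled sequentially.

class Zone:
    def __init__(self, start_lba: int, size_blocks: int):
        self.start = start_lba
        self.size = size_blocks
        self.write_pointer = start_lba          # next LBA that may be written

    def append(self, lba: int, nblocks: int) -> None:
        if lba != self.write_pointer:
            raise ValueError("non-sequential write rejected by zone")
        if self.write_pointer + nblocks > self.start + self.size:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += nblocks           # zone fills strictly in order

    def reset(self) -> None:
        self.write_pointer = self.start         # whole zone reclaimed at once

zone = Zone(start_lba=0, size_blocks=65536)
zone.append(0, 256)        # OK: lands on the write pointer
zone.append(256, 256)      # OK: still sequential
# zone.append(10_000, 256) # would raise: random writes are not allowed in a zone
```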

The zoned approach was originally developed for SMR (shingled magnetic recording) disks, because writes there overwrite portions of adjacent tracks (like roof shingles, tracks on SMR disks overlap). We had heard about ZNS at FMS2019 but thought it was just a better way to share access to a single SSD by carving it up into logical (mini-)volumes. Jim said that’s also a benefit, but the major advantage is being able to recognize sequential IO and write to the SSD more effectively.

We talked some on the economics of NAND flash, disk, and tape as storage media. Jim and I see this continuing a trend that’s been going on for years, where NAND storage costs ~10X more per GB than disk, and disk costs ~10X more per GB than tape. All three technologies continue their relentless pursuit of increasing capacity, but it’s almost like train tracks, with all three $/GB curves following one another into the future.
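To see what that “train tracks” spacing implies, here’s a tiny illustrative calculation. The baseline tape price and the common yearly decline rate are assumptions picked purely for the example.

```python
# Illustrative $/GB "train tracks": assume a tape baseline, disk ~10X tape,
# NAND flash ~10X disk, all declining at a similar yearly rate.
# The starting price and decline rate are assumptions for illustration only.

tape_per_gb = 0.004      # assumed baseline $/GB for tape
ratio = 10               # rough spacing between tiers discussed above
decline = 0.25           # assumed common yearly $/GB decline

for year in (0, 5):
    t = tape_per_gb * (1 - decline) ** year
    print(f"year {year}: tape ${t:.4f}/GB, disk ${t*ratio:.3f}/GB, flash ${t*ratio*ratio:.2f}/GB")
```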

On the other hand, high-RPM disk seems to have died, replaced by SSDs. Disk manufacturers have seen unit declines, but the number of GB they ship continues to increase. Contrary to what a number of AFA system providers claim, disk is not dead and is unlikely to die anytime soon.

Finally, we discussed DNA storage and its coming entry into the storage market. It’s all a question of the price of the drive and media technology, the size of the mechanism (drive?), and read and write access times. At the moment all of these are improving but are not yet competitive with tape. But given DNA technology trends, there doesn’t appear to be any physical barrier to it becoming yet another storage technology in the enterprise, most likely at a ~10X $/GB cost advantage over tape…

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com

0101: Greybeards talk with Howard Marks, Technologist Extraordinary & Plenipotentiary at VAST

As most of you know, Howard Marks (@deepstoragenet), Technologist Extraordinary & Plenipotentiary at VAST Data, used to be a Greybeards co-host and is still on our roster as a co-host emeritus. When I started to schedule this podcast, it was going to be our 100th episode and we wanted to invite Howard and the rest of the co-hosts on the call to discuss the podcast itself. But alas, the 100th Greybeards podcast came and went before we could get it done. So we decided to refocus this episode back on VAST Data.

We talked with Howard last year about VAST and some of this podcast covers the same ground (see last year’s podcast with Howard on VAST Data) but I highlighted below different aspects of their product that we also discussed.

For starters, VAST just finalized a recent round of funding which, if I recall correctly, valued them at over $1B USD, making them yet another data storage unicorn.

VAST is a scale out, disaggregated, unstructured data platform that takes advantage of the economics of QLC SSD (from Intel) combined with the speed of 3D XPoint storage class memory (Optane SSD, also from Intel) to support customer data. Intel is an investor in VAST.

VAST uses multiple front-end (controller) servers, with one or more HA NVMe drive modules connected via a dual InfiniBand or 100Gbps Ethernet RDMA cluster interconnect. Each HA NVMe drive module has two adapter cards (IO modules), one per connection, that take IO and data requests and transfer them across a PCIe bus to the QLC and Optane SSDs. They also have a Mellanox (another investor) switch on their back end, with (round-robin) DNS routing to connect hosts to their storage (front-end) servers.

Each backend HA NVMe drive module has 12 1.5TB Optane U.2 SSDs and 44 15.4TB QLC SSDs, for a total of 56 drives. Customer data is first written to Optane and then destaged to QLC SSD.

QLC has the advantage of storing 4 bits per cell (for a lower $/GB), but its endurance, or drive writes per day (DWPD), is significantly worse than TLC’s. So VAST has had to work to increase QLC endurance in their system.

Natively, QLC offers ~0.2 DWPD when doing random 4K writes. However, if your system does 128KB sequential writes, it offers 4.0 DWPD. VAST destages data from Optane SSDs to QLC in 1MB chunks, which both optimizes endurance and reduces garbage-collection write amplification within the drive.
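A quick bit of arithmetic shows why staying on the sequential curve matters so much. The drive capacity matches the 15.4TB QLC drives mentioned above; the 5-year service life is an assumption for illustration.

```python
# Rough endurance arithmetic using the figures quoted above
# (~0.2 DWPD for 4K random writes vs ~4 DWPD for large sequential writes).
# Drive capacity and service life are assumptions for illustration.

capacity_tb = 15.36
years = 5
days = years * 365

random_dwpd = 0.2
sequential_dwpd = 4.0

random_lifetime_writes = capacity_tb * random_dwpd * days         # TB written
sequential_lifetime_writes = capacity_tb * sequential_dwpd * days

print(f"4K random writes:        ~{random_lifetime_writes:,.0f} TB over {years} years")
print(f"large sequential writes: ~{sequential_lifetime_writes:,.0f} TB over {years} years")
# Destaging from Optane in 1MB chunks keeps the QLC on the sequential curve.
```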

Howard mentioned their front-end servers are stateless, i.e., they maintain no state information about in-flight IO. Any IO state information is kept by their system in Optane SSDs. Each server maintains a work-log-like structure on Optane that describes what it is doing in support of host IO and other activities. That way, if one front-end server goes down, another can read its log and take over its activity.
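Conceptually, the takeover works something like the sketch below: each front-end server journals in-flight work to shared persistent storage, and a surviving server replays whatever a failed peer left behind. The structure and field names here are hypothetical, not VAST’s actual log format.

```python
# Minimal sketch of the stateless-frontend idea: each server journals the work
# it has in flight to shared persistent storage, so a peer can replay the
# journal and finish the work if that server dies. Names are hypothetical.

shared_logs = {}   # stands in for per-server logs kept on mirrored Optane SSDs

def record(server: str, op_id: int, description: dict) -> None:
    shared_logs.setdefault(server, {})[op_id] = description

def complete(server: str, op_id: int) -> None:
    shared_logs[server].pop(op_id, None)

def take_over(failed: str, survivor: str) -> None:
    # the survivor replays whatever the failed server left unfinished
    for op_id, desc in shared_logs.pop(failed, {}).items():
        print(f"{survivor} replaying op {op_id}: {desc}")

record("server-A", 1, {"op": "write", "object": "/foo", "bytes": 1048576})
take_over("server-A", "server-B")
```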

Metadata is also maintained only on Optane SSDs. Howard called their metadata structure a V-tree (a B-tree variant). VAST mirrors all metadata and customer data to two Optane SSDs, so if one Optane SSD goes down, its pair can be used to continue operations.

In last year’s podcast we talked at length about VAST’s data protection and data reduction capabilities, so we won’t discuss them any further here.

However, one thing worth noting is that VAST has a very large RAID (erasure code protection) stripe. Data is written to the QLC SSDs in a VAST designed, locally decodable erasure coding format.

One problem with large stripes is rebuild time. VAST’s locally decodable parity codes help with this but the other thing that helps is distributing rebuild IO activity to all front end servers in the system.

The other problem with large stripe sizes is garbage collection. VAST segregates customer data by “temporariness” based on their best guess, so that all data in one stripe should have similar lifetimes. When it’s time for garbage collection, having all temporary data in one stripe allows VAST to jettison the whole stripe (or most of it) rather than having to collect and rewrite the remaining data to a new stripe.
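A minimal sketch of that idea: group incoming writes into per-“temperature” stripes so that data in a stripe tends to expire together. The lifetime classes and thresholds below are invented for illustration; VAST’s actual placement logic is surely more sophisticated.

```python
# Sketch of grouping writes by expected lifetime so an entire stripe tends to
# expire together, making garbage collection a cheap whole-stripe drop.
# Lifetime classes and thresholds are invented for illustration.

from collections import defaultdict

open_stripes = defaultdict(list)   # one open stripe per "temperature" class

def classify(expected_lifetime_hours: float) -> str:
    if expected_lifetime_hours < 24:
        return "temporary"
    if expected_lifetime_hours < 24 * 30:
        return "warm"
    return "cold"

def write(data_id: str, expected_lifetime_hours: float) -> None:
    open_stripes[classify(expected_lifetime_hours)].append(data_id)

write("scratch-1", 2)       # lands in the "temporary" stripe
write("backup-7", 24 * 90)  # lands in the "cold" stripe
print(dict(open_stripes))
# When the "temporary" stripe ages out, most of it can be jettisoned in one go
# instead of being copied forward to a new stripe.
```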

VAST came out supporting NFSv3 and S3 object storage protocols. Their next release adds support for SMB 2.2, data-at-rest encryption, and snapshotting to an external S3 store. As you may recall, SMB is a stateful protocol. In VAST’s home-grown SMB implementation, front-end servers can take over SMB transactions from failed servers without having to fail the whole transaction and start over again.

VAST uses a fail-in-place maintenance policy. That is, failed SSDs are not normally replaced in customer deployments; rather, blocks, pages, or whole SSDs are marked as failed, and the spare capacity available in the drive enclosure is used to hold any rebuilt data.

VAST offers a 10 year maintenance option where the customer keeps the same storage for 10 full years. That way customers don’t have to migrate data from one system to another until their 10 years are up.

The podcast runs a little under 44 minutes. Howard and I can talk forever. He is always a pleasure to talk with as well as extremely knowledgeable about (VAST) storage and other industry solutions.  The co-hosts and I had a great time talking with him again. Listen to the podcast to learn more.


Howard Marks, Technologist Extraordinary and Plenipotentiary, VAST Data, Inc.

Howard Marks brings over forty years of experience as a technology architect for hire and industry observer to his role as VAST Data’s Technologist Extraordinary and Plenipotentiary. In this role, Howard demystifies VAST’s technologies for customers and customer requirements for VAST’s engineers.

Before joining VAST, Howard ran DeepStorage, an industry test lab and analyst firm. An award-winning speaker, he has appeared at events on three continents, including Comdex, Interop, and VMworld.

Howard is the author of several books (all gratefully out of print) and hundreds of articles since Bill Machrone taught him journalism at PC Magazine in the 1980s.

Listeners may also remember that Howard was a founding co-Host of the Greybeards-on-Storage Podcast.

096: GreyBeards YE2019 IT Industry Trends podcast

In this, our year-end industry wrap-up episode, the GreyBeards discuss trends and technologies impacting the IT industry in 2019 and what’s ahead for 2020. This year we have Matt and Keith on the podcast along with Ray. Just like last year, we start off with NVMeoF.

NVMeoF unleashed

This year just about every major storage vendor announced new systems that either add support for NVMeoF or already offer NVMeoF on their storage systems. Most offer FC-based NVMeoF, but a few offer NVMeoF/Ethernet; fewer still offer both.

All of the NVMeoF/Ethernet offerings seem to use RoCE or iWARP. It’s unclear whether one is used more often than the other, so for now both continue to be used in the market. Some storage vendors offer NVMe(oF) only as an internal fabric to access back-end storage while still using iSCSI or FC/SCSI for host access. This works better than SAS but won’t provide all the performance you can get from end-to-end NVMeoF.

NVMeoF is all about increasing IOPS and reducing response times, and about getting ready for SCM SSDs. In the meantime, the SSD industry has introduced some very attractive NVMe (NAND) SSDs that, in an NVMeoF storage system, can increase IOPS and reduce latencies.

We talked last year about NVMeoF standards finally stabilizing and this year the rollout across enterprise storage systems is testament to that.

SCM hits the enterprise

Most of us attended an Intel Data Center Event earlier this past year, where Optane DC PM was introduced. Optane DC PM is the memory version of Intel’s Optane SCM (3D XPoint) technology. Intel offers two distinct modes of accessing Optane DC PM as memory: 1) App Direct mode, where data in Optane DC PM persists across power cycles but applications must use a special API; and 2) Memory mode, where Optane DC PM is cleared on a power cycle (see our RayOnStorage post Need memory, Intel’s Optane DC PM…).
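As a rough illustration of the App Direct idea, persistent memory is typically exposed as a DAX filesystem that an application can memory-map and then treat as byte-addressable persistent storage. The mount point below is hypothetical, and real applications would more likely use a library such as PMDK than raw mmap; this is just a sketch of the concept.

```python
# Minimal sketch of App Direct-style access: persistent memory exposed as a
# DAX filesystem, with an application mapping a file and treating it as
# byte-addressable persistent memory. The path is a hypothetical example.

import mmap, os

path = "/mnt/pmem0/appdirect.dat"          # assumed DAX-mounted pmem namespace
size = 64 * 1024 * 1024                    # 64 MiB region

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, size)
pm = mmap.mmap(fd, size)                   # loads/stores hit pmem directly on DAX

pm[0:16] = b"survives reboots"             # ordinary memory writes persist
pm.flush()                                 # make sure the data is durable
pm.close(); os.close(fd)
```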

Vendors seem to be using Optane, in both its memory and SCM (SSD) forms, differently. Pure is using Optane SSDs plugged into their FlashArray as a sort of read cache for customer IO. They suggest that for well-behaved applications this can reduce IO response times considerably.

Dell EMC introduced SCM as a storage tier and are using their automated storage tiering to move the hottest data to SCM. Oracle’s latest Exadata appliance uses Optane DC PM as both a read and write caching layer.

It won’t be long before every enterprise vendor offers SCM drives in their storage systems, with a few offering Optane DC PM as an in-memory caching technology.

Of course, the big news for Optane DC PM is its use in in-memory databases, specifically SAP HANA. HANA can take advantage of the (up to 6TB of) memory to handle larger databases. Keith mentioned that even Microsoft SQL Server can take advantage of the additional memory to provide faster responses to queries.

Keith also mentioned that there are some systems out there that can be configured to share Optane memory (or storage). When SAP or other databases use this solution they are able to amortize the cost of the technology over more use cases.

Of course, Optane DC PM is only available with the latest generation of Intel processors. None of us have heard anything from AMD (or Micron) about providing a second source for Optane DC PM support (or for the memory technology itself). Presumably most customers would want a second source for Optane DC PM processor support (as well as for the technology).

Cloud enterprise storage hits mainstream

The other thing we saw more of this year is enterprise vendors offering versions of storage in public cloud environments. NetApp was an early proponent of doing this.

At Pure we saw their new Cloud Block Store, which is a re-architected version of FlashArray//X storage using AWS hardware and networking services. We were very impressed with what they have accomplished, and it was the subject of more than one late-night discussion. Listen to the Keith & Ray show at Pure//Accelerate2019 podcast to learn more.

Matt mentioned Nimble’s cloud volumes storage, which is cloud adjacent. Most enterprise vendors offer something similar today. They differentiate on how easy it is to configure and use, and on where (which regions) it’s available.

NetApp has arguably been at this the longest and has the deepest offerings: from cloud-adjacent file and block storage, to native enterprise file services for all the public cloud environments, to a suite of dedicated data services that surround all of their storage technology operating in public clouds and on premises.

While Dell EMC may have missed the turn to the cloud, they are quickly trying to catch up. Keith mentioned Faction, a Dell partner that offers cloud storage services using VMware with VMC. With Faction and vSAN, customers have access to software-defined storage that uses cloud hardware to support data services.

What’s driving data growth

There seems to be no end for the need for storage to store data. The GreyBeards point to three trends driving data growth today.

  1. IoT seems to have no bounds. A recent RayOnStorage post, Internet of Tires, discussed how tire companies are tying their tires to the internet. And that’s just the start; pretty soon every artifact, every device, every manufactured item will have a number of sensors attached, all of which will create massive amounts of data.
  2. AI/ML/DL has an insatiable appetite for data. IoT is being used largely to optimize products and services. But it’s DL, with a large dollop of data, that is behind much of that optimization.
  3. SaaS is a relatively new application approach that’s being rolled out to more arenas, and as it’s online and user oriented, it seems to generate lots of data.

Containers storage debate

We closed the podcast with a heavy debate on whether container applications need storage. Keith was adamant that containers by their very nature are stateless, and that Kubernetes’ ability to stop and start container applications at will almost requires stateless operations.

Ray was a bit more theoretical on the topic and believed that most container applications today take advantage of some sort of database or other services to store state and that state is just another word for storage.

Keith mentioned encoding as a typical container app. Encoding containers can be fired up and taken down at will without hurting anything but throughput. Yes, but those encoder container apps must access some database or other state information to find out what work is left to do, and as they complete their work they update this data as well as storing their newly encoded segments. This all involves the use of state information.

In the end, I think we were talking about the same thing but using different terminology. Keith believes that persistent state information is needed and Ray says that this is just another word for (containers) storage. Matt said we probably need Nigel (@NigelPoulton) on the podcast to straighten us both out.

The podcast ran a bit long and could have run longer. Keith and Matt bring a systems-level perspective to what’s happening in the storage market, but they come at it from different sides, while Ray tends to frame everything from a storage perspective. Diverse perspectives lead to a fuller and more interesting discussion. Listen to the podcast to learn more.



Ray Lucchesi ( @RayLucchesi) is the host of GreyBeardsOnStorage and is President/Founder of Silverton Consulting, and a prominent blogger at RayOnStorage.com.

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor and blogs at Virtualized Geek.

Matt Leib (@MBLeib), one of our co-hosts, has been blogging in the storage space for over 10 years, with work experience on both the engineering and presales/product marketing sides. His blog is at Virtually Tied to My Desktop.


89: Keith & Ray show at Pure//Accelerate 2019

There were plenty of announcements at Pure//Accelerate in Austin this past week and we were given a preview of them at a StorageFieldDay Exclusive (SFDx), the day before the announcement.

First up is Pure’s DirectMemory. They have added Optane SSDs to FlashArray//X to be used as a read cache for customer data. As you may know, Pure already has an NVRAM write cache. With DirectMemory, customers can have 3TB or 6TB of Optane storage in a FlashArray//X70 or //X90. It almost looks plug and play: you take out one or two flash modules, plug in Optane SSD(s), and off it goes. DirectMemory went GA at the show.

Pure also announced FlashArray//C at Accelerate. This is a new capacity-optimized storage solution. They have redesigned their flash module to support higher-capacity flash (targeted for QLC, though it will originally ship with TLC). FlashArray//C supplies ~5PB of effective (~1.4PB raw) capacity in 9U. Although FlashArray//C offers cheaper storage on a $/GB basis, it is also much slower (response-time latency on the order of 2-4msec) than FlashArray//X storage. Pure, like other vendors we have talked with, is trying to drive disk technology out of the enterprise. We had some interesting discussions with Pure (and others) on this topic at the reception. Just remember, tape is still alive and well in the enterprise AND the cloud, 52 years after being pronounced dead.

Pure had announced Cloud Block Store (CBS) previously, but it is now GA through partners or on the AWS Marketplace. Give them kudos for taking a different approach to Pure storage in the cloud. With CBS, they have effectively re-architected and re-implemented Pure FlashArray using AWS EC2, EBS (io1) and S3 storage, ending up with highly available (iSCSI) block software-defined storage. It will be interesting to see how well it’s adopted. The picture is of me explaining the CBS architecture to @DVellante.

For Pure’s FlashBlade storage, they have doubled the number of blades in a cluster (or namespace), from 75 to 150 FlashBlades. Each FlashBlade contains storage and compute (almost computational storage), so one should see an increase in bandwidth with the added blades. No one at Pure would go on record with specific performance numbers because it’s still undergoing testing.

Finally, FlashArray//X will offer full NFS and SMB file support, coming from a recent acquisition (Compuverde). They plan to differentiate between file on FlashArray//X and FlashBlade by positioning FlashArray//X file for customers with mostly block storage requirements who also need a small amount of file storage, and FlashBlade for everyone else that needs file.

The podcast is ~23 minutes. Keith is a long time friend and co-host of our GreyBeards On Storage podcast. He’s always got an interesting perspective on how new technology can benefit the data center today. Listen to the podcast to learn more.


Keith Townsend, The CTO Advisor

Keith Townsend (@CTOAdvisor) is an IT thought leader who has written articles for many industry publications, interviewed many industry heavyweights, worked with Silicon Valley startups, and engineered cloud infrastructure for large government organizations. Keith is the co-founder of The CTO Advisor, blogs at Virtualized Geek, and can be found on LinkedIn.

86: Greybeards talk FMS19 wrap up and flash trends with Jim Handy, General Director, Objective Analysis

This is our annual Flash Memory Summit podcast with Jim Handy, General Director, Objective Analysis. It’s the 5th time we have had Jim on our show. Jim is also an avid blogger writing about memory and SSD at TheMemoryGuy and TheSSDGuy, respectively.

NAND market trends

Jim started off our discussion with the significant price drop in the NAND market over the last two years. He said that prices ($/GB) dropped 60% last year and are projected to drop about 30% this year.
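Compounding those two numbers gives a feel for how steep the drop is:

```python
# Compounding the quoted declines: a 60% drop followed by a projected 30% drop
# leaves prices at 0.4 * 0.7 = 28% of where they started two years earlier.

start = 1.0                       # normalized $/GB two years ago
after_last_year = start * (1 - 0.60)
after_this_year = after_last_year * (1 - 0.30)
print(f"{after_this_year:.2f} of the original price (~{(1 - after_this_year) * 100:.0f}% lower)")
```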

The problem is overproduction, and as vendors are prohibited from selling below cost, prices tend to flatten out at production cost. NAND pricing will remain there until supplies start tightening again, which Jim doesn’t see happening until 2021.

He says that although these NAND price drops don’t end up reducing SSD prices, they do allow us to buy more SSD storage for the same price. Back earlier this century NAND cost maybe $10K/GB; now it’s around $0.05/GB.

Jim also mentioned that Chinese NAND fabs should start coming online in 2021 too. They have been spending lots of money trying to get their own NAND manufacturing running. Jim said the reason they want to do this is that China spends more on chips than it does on oil.

Computational storage, a bright spot

At the show, computational storage (for more, hear our GBoS podcast with Scott Shadley of NGD Systems) was hot again this year. Jim took a shot at defining computational storage and talked about the proliferation of ARM cores in SSDs. Keith mentioned that Moore’s Law is making the incremental cost of adding more cores close to zero.

Jim said Samsung already has 6 ARM cores in their SSDs, while most other vendors use 3 cores. I met with NetInt at the show, who are focused on computational storage for video transcoding. Keith doesn’t think this is a good fit because transcoding takes a lot of computation. But as it’s easily distributable (out to a gaggle of SSDs) and data intensive, it might work OK. Jim also mentioned that while adding cores may be cheap, adding memory (DRAM) is not.

According to Jim, hyperscalers are starting to buy computational storage technology. He’s not sure if they are just trying it out or have some real work running on the technology.

SCM news

We talked about Toshiba’s new XL Flash and SSDs. Jim said this is essentially SLC NAND (expensive $/GB but high endurance) with increased parallelism and reduced-latency data paths. Samsung’s Z-NAND is similar. Toshiba claims XL Flash SSDs are another storage class memory (SCM, see our 3DX blog post). Toshiba is pricing XL Flash SSDs at about 10X the $/GB price of 3D TLC NAND, or roughly the same as Optane SSDs.

We next turned to Optane DC PM, which Intel is selling at a loss but which, as it works only with Cascade Lake CPUs, helps increase CPU adoption. So Intel can absorb Optane DC PM losses by selling more (highly profitable) Cascade Lake systems.

Keith mentioned that SAP HANA now works with Cascade Lake and Optane DC PM. This is driving up demand for both the new DC PM and the new CPUs. Keith said that with the larger in-memory databases DC PM enables, HANA is able to do more work, further increasing Cascade Lake-Optane DC PM-SAP HANA adoption.

Micron also manufactures 3DX. Jim said they are in an enviable position: they supply the chips (at cost) to Intel, so they know chip volumes and can see what Intel is charging for the technology. So, if at some point it looks like the technology has runway to become profitable, they can easily enter as a second source for it.

Other NAND news

How high can 3D TLC NAND go? Jim said most 3D NAND sold on the market is 64 layers high, but suppliers are already shipping more layers than that. All NAND suppliers, bar one, have said their next-generation 3D TLC NAND will be over 100 layers. Some years back one vendor said the technology could go up to 500 layers. This year Samsung said they see the technology going to 800 layers.

We’ve heard of SLC, MLC, TLC and QLC, but at the show there was talk of PLC, or penta-level-cell (5 bits per cell) NAND technology. If they can make the technology work, PLC should reduce manufacturing costs ($/GB) by another ~10%.
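For reference, here’s the simple arithmetic behind the cell-level naming and why the cost benefit shrinks with each step:

```python
# Bits per cell vs. voltage levels the NAND must distinguish, and the raw
# capacity gain of each step. This is just the arithmetic behind SLC..PLC.

cells = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}
for name, bits in cells.items():
    print(f"{name}: {bits} bits/cell, {2**bits} voltage levels to distinguish")

# QLC -> PLC adds 1 bit on top of 4, i.e. 25% more raw bits per cell, which is
# why the expected cost benefit is closer to ~10% once overheads are included.
```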

We discussed a lot more that was highlighted at the show, including PCIe fabric/composable infrastructure, zoned (NVMe) namespaces (redux SMR disks), and the ongoing success of the show. We also had a brief discussion of when, if ever, NAND costs ($/GB) will be less than disk.

The podcast is a little under 40 minutes. Jim is an old friend who is extremely knowledgeable about NAND and DRAM technology as well as semiconductor markets in general. Jim’s always been a kick to talk with. Listen to the podcast to learn more.

Jim Handy, General Director, Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon.

A frequent presenter at trade shows, Mr. Handy is known for his technical depth, accurate forecasts, widespread industry presence and volume of publication.

He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media. 

He posts blogs at www.TheMemoryGuy.com, and www.TheSSDguy.com