106: GreyBeards talk Intel's new HPC file system with Kelsey Prantis, Senior Software Eng. Manager, Intel

We talked with Intel at Storage Field Day 20 (SFD20) about a month ago. At that virtual event, Intel's focus was on their Optane PMEM (persistent memory) technology. Kelsey Prantis (@kelseyprantis), Senior Software Engineering Manager, Intel, was on the show and gave an introduction to Intel's DAOS (Distributed Asynchronous Object Storage, DAOS.io), a new HPC (high performance computing, supercomputers) file system they developed from scratch to use leading edge Intel technologies, Optane PMEM being one of them.

Kelsey has worked on Lustre and other HPC file systems for a long time now and came into the company with the acquisition of Whamcloud. Currently, she manages the development team working on DAOS. DAOS is a new HPC object storage file system which is completely open source (available on GitHub).

DAOS was designed from the start to take advantage of NVMe SSDs and Optane PMEM. With PMEM, current servers can support up to 20TB of memory. Besides the large memory sizes, Optane PMEM also offers non-volatile memory and byte addressability (just like DRAM). These two characteristics open up new functionality that allows DAOS to move beyond the legacy, block-oriented storage architectures that have been the only storage solution for HPC (and the enterprise) for decades now.

What’s different about DAOS

DAOS uses PMEM for all metadata and for storing small files. HPC IO has always been heavily bandwidth oriented (IO using large blocks), but lately newer applications have emerged, such as AI/ML/DL, data analytics and others, that use smaller files/blocks. Indeed, most new HPC clusters and supercomputers are deploying almost as many GPUs as CPUs in their configurations to support AI activities.

The problem is that these newer applications typically consume much smaller files. Matt mentioned one HPC client he worked with was processing small batches of seismic data, to predict, in real time, earthquakes that were happening around the world.

By using PMEM for metadata and small files, DAOS can be much more responsive to file requests (open, close, delete, status) as well as provide higher performing IO for small files. All this leads to a much better performing system for the new HPC workloads as well as great sustainable performance for the more traditional large file workloads.

DAOS storage

DAOS provides a cluster storage system that can be configured with as few as one node (no data protection), more typically a minimum of three nodes (with data protection), and up to 512 nodes (lab tested). Data protection in DAOS is currently based on mirroring, and data can have anywhere from zero mirrors up to as many mirrors as there are nodes in the cluster.

DAOS system nodes are homogeneous. That is, they all come with the same amount of PMEM and NVMe SSDs. Note, DAOS doesn't support disk drives. Kelsey mentioned DAOS node hardware can be tailored to suit any particular application environment, but they typically require an average of 6% of overall DAOS system capacity in PMEM for metadata and small file activity.

DAOS currently supports its own API, POSIX, HDF5, MPI-IO and Apache Spark storage protocols. Kelsey mentioned that standard POSIX uses a pessimistic conflict resolution mode, which leads to performance bottlenecks during parallel access. In contrast, DAOS's version of POSIX uses optimistic conflict resolution, which means DAOS starts writes assuming there's no conflict, and if one occurs it handles the conflict in real time. Of course, with all the metadata byte addressable and in PMEM, this doesn't take up a lot of (IO) time.
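To make that difference concrete, here is a minimal sketch (generic Python, not DAOS's actual API; the class and method names are hypothetical) contrasting the optimistic approach with lock-up-front pessimism: writers proceed without a lock and only check for a conflict, via a version number, at commit time.

```python
import threading

class OptimisticObject:
    """Toy optimistic conflict resolution: writers don't lock up front; they
    submit a write tagged with the version they read, and the store rejects
    it only if someone else committed in the meantime."""

    def __init__(self, value=b""):
        self.value = value
        self.version = 0
        self._commit_lock = threading.Lock()  # held only for the tiny commit step

    def read(self):
        return self.value, self.version

    def write(self, new_value, read_version):
        # No lock is held while the caller prepares new_value (the slow part).
        with self._commit_lock:
            if read_version != self.version:
                return False          # conflict detected; caller retries
            self.value = new_value
            self.version += 1
            return True

obj = OptimisticObject(b"old")
val, ver = obj.read()
assert obj.write(b"new", ver)         # succeeds: nobody wrote in between
assert not obj.write(b"newer", ver)   # fails: stale version, caller would retry
```

With metadata in byte-addressable PMem, that version check and commit is a handful of memory operations rather than an IO, which is why the optimistic path stays cheap even under contention.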

As mentioned earlier, DAOS data protection uses mirror replicas. However, unlike most other major file systems, DAOS mirroring can be done at the object level. DAOS internally is an object store. Data organization on DAOS starts at the pool level, underneath that are data containers, and under those are objects. Any object in DAOS can have its own mirroring configuration. DAOS is working towards supporting erasure coding as another form of data protection in a future release.
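As a rough mental model of that pool, container, object hierarchy with per-object protection (the names and fields below are illustrative, not the DAOS schema):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ObjectEntry:
    data: bytes
    replicas: int = 3          # per-object mirroring level (0 = unprotected)

@dataclass
class Container:
    objects: Dict[str, ObjectEntry] = field(default_factory=dict)

@dataclass
class Pool:
    containers: Dict[str, Container] = field(default_factory=dict)

# One pool, one container, two objects with different protection levels.
pool = Pool(containers={"proj-A": Container()})
pool.containers["proj-A"].objects["checkpoint.0001"] = ObjectEntry(b"...", replicas=2)
pool.containers["proj-A"].objects["scratch.tmp"] = ObjectEntry(b"...", replicas=0)
```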

DAOS performance

There’s a new storage benchmark that was developed specifically for HPC, called the IO500. The IO500 benchmark simulates a number of different HPC workloads, measures performance for each of them, and computes an (aggregate) performance score to rank HPC storage systems.

IO500 ranks system performance using two lists: one for any size of configuration, which typically range from 50 to 1000s of nodes, and another that limits configurations to 10 nodes. The first ranking can sometimes be gamed by throwing more hardware into a cluster. The 10 node rankings are much harder to game this way and, from our perspective, show a fairer comparison of system performance.

As presented (virtually) at ISC 2020, DAOS took the top spot on the IO500 any-size configuration list and performed more than 2X better than the next best solution. And on the IO500 10 node list, Intel's DAOS configuration, the Texas Advanced Computing Center (TACC) DAOS configuration, and the Argonne National Labs DAOS configuration took the top 3 spots and had 3X better performance than the next best, non-DAOS storage system.

Argonne National Labs has already stated that they will be using DAOS in their new HPC system to be deployed in the near future. Early specifications for storage at the new Argonne system required support for 230PB of data and 25TB/sec of bandwidth.

The podcast ran ~43 minutes. Kelsey was great to talk with and very knowledgeable about HPC systems and HPC IO in particular. Matt has worked at Argonne in the past, so understood these systems better than I. Sadly, we lost Matt's end of the conversation about halfway into the recording. Both Matt and I thought that DAOS represents the birth of a new generation of HPC storage. Listen to the podcast to learn more.



Kelsey Prantis, Senior Software Engineering Manager, Intel

 Kelsey Prantis heads the Extreme Storage Architecture and Development division at Intel Corporation. She leads the development of Distributed Asynchronous Object Storage (DAOS), an open-source, low-latency and high IOPS object store designed from the ground up for massively distributed Non-Volatile Memory (NVM).

She joined Intel in 2012 with the acquisition of Whamcloud, where she led the development of the Intel Manager for Lustre* product.

Prior to Whamcloud, she was a software developer at personal genomics and biotechnology company 23andMe.

Prantis holds a Bachelor's degree in Computer Science from Rochester Institute of Technology.

103: GreyBeards talk scale-out file and cloud data with Molly Presley & Ben Gitenstein, Qumulo

Sponsored by:

Ray has known Molly Presley (@Molly_J_Presley), Head of Global Product Marketing at Qumulo, for just about a decade now, and we both just met Ben Gitenstein (@Qumulo_Product), VP of Products & Solutions, Qumulo, on this podcast. Both Molly and Ben were very knowledgeable about the problems customers have with massive data troves.

Molly has been on our podcast before (with another company, see: GreyBeards talk HPC storage with Molly Rector, CMO & EVP, DDN). And we have talked with Qumulo before as well (see: GreyBeards talk data-aware, scale-out file systems with Peter Godman, Co-founder & CEO, Qumulo).

Qumulo has a long history of dealing with customers' data center application access to data, usually large data repositories with billions of small or large files accumulated over time. But recently Qumulo has taken on similar problems in the cloud as well.

Qumulo’s secret has always been to allow researchers to run their applications wherever their data resides. This has led Qumulo’s software defined storage to offer multiple protocol access as well as a completely native, AWS and GCP cloud version of their solution.

That way customers can run Qumulo in their data center or in the cloud and have the same great access to data. Molly mentioned one customer that creates and gathers data using SMB protocol on prem and then, after replication, processes it in the cloud.

Qumulo Shift

Ben mentioned that many competitive storage systems are business model focused. That is, they are all about keeping customer data within their solutions so they can charge for capacity. Although Qumulo also charges for capacity, with the new Qumulo Shift service, customers can easily move data off Qumulo and into native cloud storage. Using Shift, customers can free up Qumulo storage space (and cost) for any data that only needs to be accessed as objects.

With Shift, customers can replicate or move on-prem or in-cloud Qumulo file data to AWS S3 objects. Once in S3, customers can access it with AWS native applications or other applications that make use of AWS S3 data, or make that data accessible around the world.

Qumulo customers can select directories to Shift to an AWS S3 bucket. The Qumulo directory name will be mapped to an S3 bucket name and each file in that directory will be copied to an S3 object in that bucket with the same file name.
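The mapping itself is essentially one object per file, keyed by the file's name. Here is a rough sketch of that idea using boto3 (an illustration of the concept only, not Qumulo's Shift implementation; the directory and bucket names are made up):

```python
import os
import boto3

def shift_directory_to_s3(directory, bucket):
    """Copy every file in `directory` to an S3 object of the same name."""
    s3 = boto3.client("s3")
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            s3.upload_file(path, bucket, name)   # object key == file name

# e.g. shift_directory_to_s3("/qumulo/seismic-results", "seismic-results-bucket")
```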

At the moment, Qumulo Shift only supports AWS S3. Over time, Qumulo plans to offer support for other public cloud storage targets for Shift.

Shift is based on Qumulo replication services. Qumulo has a number of patents on replication technology that provides for sophisticated monitoring, control and high performance for moving vast amounts of data.

How customers use Shift

One large customer uses Qumulo cloud file services to process seismic data but then makes the results of that analysis available to other clients as S3 objects.

Customers can also take advantage of AWS and other applications that support objects only. For example, AWS SageMaker Machine Learning (ML) processes S3 object data. Qumulo customers could gather training data as files and Shift it to S3 objects for ML training.

Moreover, customers can use Shift to create AWS S3 object backups, archives and DR repositories of Qumulo file data. Ben mentioned DevOps could also use Qumulo Shift via APIs to move file data to S3 objects as part of new application deployment.

Finally, using Shift to copy or move file data to AWS S3 makes it ideal for collaboration by researchers, analysts and just about any other entity that needs access to data.

The podcast ran ~26 minutes. Molly has always been easy to talk with and Ben turned out also to be easy to talk with and knew an awful lot about the product and how customers can use it. Keith and I enjoyed our time with Molly and Ben discussing Qumulo and their new Shift service. Listen to the podcast to learn more.


Ben Gitenstein, VP of Products and Solutions, Qumulo

Ben Gitenstein runs Product at Qumulo. He and his team of product managers and data scientists have conducted nearly 1,000 interviews with storage users and analyzed millions of data points to understand customer needs and the direction of the storage market.

Prior to working at Qumulo, Ben spent five years at Microsoft, where he split his time between Corporate Strategy and Product Planning.

Molly Presley, Head of Global Product Marketing, Qumulo

Molly Presley joined Qumulo in 2018 and leads worldwide product marketing. Molly brings over 15 years of file system and archive technology leadership experience to the role.

Prior to Qumulo, Molly held executive product and marketing leadership roles at Quantum, DataDirect Networks (DDN) and Spectra Logic.

Presley also created the term “Active Archive”, founded the Active Archive Alliance and has served on the Board of the Storage Networking Industry Association (SNIA).

102: GreyBeards talk big memory data with Charles Fan, CEO & Co-founder, MemVerge

It's been a couple of months since we last talked with a startup, so the GreyBeards thought it was time. We reached out to Charles Fan (@CharlesFan14), CEO and Co-Founder of MemVerge, to find out about their big memory solution or, as Charles likes to call it, "software defined (big) memory". Although neither Matt nor I had ever talked with Charles before, he's been just about everywhere in the storage industry throughout his career.

If you have been following my RayOnStorage blog you will have seen a post (Need memory, Intel's Optane DC PM to the rescue) last year on Intel's new persistent memory solution using 3D XPoint, called Optane DC PM (data center, persistent memory). At the announcement, Intel made available a couple of ways customers could use Optane DC PM (PMem).

Optane DC PM primer

Native Optane DC PM access modes include:

  • A Memory Mode, which has PMem emulating a large volatile memory space and uses a defined ratio of DRAM to PMem as a cache in front of the Optane DC PM memory behind it.
  • An Application Direct (AppDirect) Mode, which supports two sub-modes: a storage device mode that uses PMem to emulate a persistent, 4KB block storage device; and a byte addressable, persistent memory address space mode that uses PMem to emulate a large, non-volatile memory space (sketched below). AppDirect memory content persists across boots, power failures and other system crashes.
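For a feel of what byte-addressable AppDirect access looks like from an application, here is a minimal sketch assuming a PMem-backed filesystem mounted DAX at /mnt/pmem (the path and file name are hypothetical); production applications would typically use PMDK rather than raw mmap:

```python
import mmap
import os

PMEM_FILE = "/mnt/pmem/appdata"     # hypothetical file on a DAX-mounted PMem filesystem
SIZE = 4096

# Create/size the backing file, then map it: loads and stores now hit PMem
# directly, with no block IO in the data path.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

buf[0:5] = b"hello"                 # byte-addressable store
buf.flush()                         # flush so the update is durable
buf.close()
os.close(fd)
```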

Native PMem modes are selected in the BIOS and are deployed at boot time. Optane DC PM on a server can be split up across any of these modes. And currently with Optane DC PM (Gen 1), a single server can have up to 6TB of DC PM, which will go up to 8TB with Optane DC PM Gen 2 coming out later this year.

MemVerge Memory Machine

MemVerge has written a "software defined memory" service called the Memory Machine, which sits above the Intel Optane DC PM in server(s) and provides application access AND data services for PMem.

Charles likens their Memory Machine to what VMware did for CPU cores, i.e., they provide memory virtualization. This, Charles believes, will bring on the age of Big Memory applications. He feels that PMem, with Memory Machine on top of it, will eliminate the need for high performance, tier 0 storage. Tier 0 storage is a ~$10B market today, which he sees shifting from networked storage to PMem solutions.

Memory Machine Data Services

One of the data services that the Memory Machine offers is a PMem snapshot service. PMem thick or thin snapshots can be taken any (infinite) number of times (for thick snapshots, storage space availability may limit their number) and can be taken up to once per minute. PMem thin snapshots take little time to accomplish and are very PMem space efficient, but thick snapshots are a PMem to PMem copy of data, which takes longer to accomplish and doubles the memory used by the original PMem being snapshotted.
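To illustrate the thick vs. thin distinction in general terms (a toy sketch, not MemVerge's implementation): a thick snapshot duplicates all the data immediately, while a thin snapshot records almost nothing up front and only copies a page when it is later overwritten.

```python
class SnapshotStore:
    """Toy snapshot store: thin snapshots are copy-on-write, thick are full copies."""

    def __init__(self, pages):
        self.pages = dict(pages)        # page_id -> bytes (the live copy)
        self.snapshots = []             # each snapshot: page_id -> preserved bytes

    def take_thin_snapshot(self):
        self.snapshots.append({})       # nearly instant: nothing is copied yet
        return len(self.snapshots) - 1

    def take_thick_snapshot(self):
        self.snapshots.append(dict(self.pages))   # full copy: doubles space for this data
        return len(self.snapshots) - 1

    def write_page(self, page_id, data):
        # Copy-on-write: preserve the old page in any snapshot that lacks it.
        if page_id in self.pages:
            for snap in self.snapshots:
                snap.setdefault(page_id, self.pages[page_id])
        self.pages[page_id] = data
```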

One significant use case for PMem snapshots is checkpointing for crash recovery. Charles mentioned many securities and financial analysis firms use KDB as a streaming database service to monitor/analyze market activity and provide automated trading and other market services. These firms are always trying to gain an advantage through speed and reduced latency and, as a result, have moved their time sensitive processing to in memory data structures/databases.

However, because checkpointing for crash recovery takes time, they usually checkpoint in memory databases only once a day (after market close) and maintain a log of database transactions on SSD. If there's a system crash, they reload the last checkpoint and re-play all the transaction logs since that checkpoint to bring their in memory database back to the point of the crash. Due to the number of transactions these firms do, this sort of crash recovery can take hours.

With Memory Machine, these customers can take in memory checkpoints every minute and, in the event of a crash, only have to re-play a minute's worth of transaction logs, which can be done in almost no time, to get back up.
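Here is a stripped-down sketch of that checkpoint-plus-log-replay recovery pattern (generic Python, not KDB or Memory Machine code): recovery cost is proportional to the number of log entries since the last checkpoint, which is why checkpointing every minute instead of once a day shrinks recovery from hours to moments.

```python
import copy

class InMemoryDB:
    def __init__(self):
        self.state = {}          # the in-memory database
        self.log = []            # transactions since the last checkpoint (on SSD in practice)
        self.checkpoint = {}     # last durable checkpoint (a PMem snapshot in this scenario)

    def apply(self, key, value):
        self.state[key] = value
        self.log.append((key, value))

    def take_checkpoint(self):
        self.checkpoint = copy.deepcopy(self.state)
        self.log.clear()         # the log only needs to cover changes after the checkpoint

    def recover(self):
        # Recovery time ~ len(self.log): daily checkpoints mean a huge log,
        # per-minute checkpoints mean a tiny one.
        self.state = copy.deepcopy(self.checkpoint)
        for key, value in self.log:
            self.state[key] = value
```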

Other environments do similar checkpoint crash recoveries, all of which could also take advantage of PMem snapshots to take more frequent checkpoints. Charles mentioned rendering farms on the podcast, but long scientific simulations (HPC) and others also use checkpoints for crash recovery.

Another data (or application) service offered by Memory Machine is application cloning. Most in memory applications are single threaded, meaning they can only take advantage of a single CPU core (thread). In order to speed up processing, customers must shard (split up) or copy their database and application onto other servers/CPUs/cores to provide more processing power. Memory Machine can use its thick or thin snapshots to clone applications in seconds.

Charles also mentioned that Memory Machine offers PMem dynamic reconfiguration. That is, instead of having to make BIOS changes and re-boot server(s) to re-allocate PMem across different applications, Memory Machine is allocated 100% of the PMem at boot time but then, on demand, any time it's operating, operators using MemVerge's GUI/CLI can carve PMem up into any number of application memory spaces. That is, as application demand for in memory data changes, operators can use the Memory Machine to re-allocate PMem to keep up.

Memory Machine also supports PMem clustering or scaling across servers. With the current 6TB (and soon 8TB) per server PMem limit, some customer applications still run out of memory. Memory Machine is able to cluster or aggregate PMem across up to 32 servers to support a single, larger PMem address space of 192TB (Gen 1) or 256TB (Gen 2) DC PM. The Memory Machine uses an RDMA (RoCE Ethernet or InfiniBand) cluster interconnect, which adds ~1 microsecond of overhead to access PMem in another server. This comes with automatic PMem data tiering across DRAM, local (on the server) PMem and remote (across the cluster interconnect) PMem.

Charles mentioned another data service provided by Memory Machine is (Synch or Asynch) replication. One use case for replication is to create a Pub-Sub service for market data.

Charles believes that in memory databases and data processing workloads are just starting to become popular. Besides KDB and rendering, other data processing workloads, such as AI training/inferencing, Redis applications, and other database systems, are able to take advantage of large, in memory data structures to speed up their data processing.

MemVerge’s EAP (early access program) opened up recently (5/19/2020). Charles suggested anyone using large, in memory data processing, take a look at what the Memory Machine can do and contact them to sign up.

The podcast runs ~45 minutes. Charles was very articulate as well as knowledgeable about the technology and its applications. He was great to talk tech with. Matt and I had a fun time talking Optane DC PM and Memory Machine functionality/applications with him. Listen to the podcast to learn more.

Charles Fan, CEO & Co-founder, MemVerge

Charles Fan is co-founder and CEO of MemVerge. Prior to MemVerge, Charles was a SVP/GM at VMware, founding the storage business unit that developed the Virtual SAN product.

Charles also worked at EMC and was the founder of the EMC China R&D Center. Charles joined EMC via the acquisition of Rainfinity, where he was a co-founder and CTO.

Charles received his Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and his B.E. in Electrical Engineering from the Cooper Union.

094: GreyBeards talk shedding light on data with Scott Baker, Dir. Content & Data Intelligence at Hitachi Vantara

Sponsored By:

At Hitachi NEXT 2019 Conference, last month, there was a lot of talk about new data services from Hitachi. Keith and I thought it would be a good time to sit down and talk with Scott Baker (@Kraken-Scuba), Director of Content and Data Intelligence, at Hitachi Vantara about what’s going on with data operations these days and how customers are shedding more light on their data.

Information supply chain

Something Scott said in his opening remarks caught my attention when he mentioned customer information supply chains. The information supply chain is similar to a manufacturing supply chain, but it's all about data. Just like manufacturing supply chains, where parts and services come from anywhere and are used to create products/services for customers, information supply chains are about the data used in an organization's operations. Information supply chain data is A) sourced from many places (or applications); B) added to by supply chain processing (or other applications); and C) ultimately used by the organization to supply a product/service to customers.

But after the product/service is supplied the similarity between manufacturing and information supply chains breaks down. With the information supply chain, data is effectively indestructible, is infinitely re-useable and can live forever. Who throws data away anymore?

The problem most organizations have with information supply chains is once the product/service is supplied, data is often put away never to be seen again or as Scott puts it, goes dark.

This is where Hitachi Content Intelligence (HCI) comes in. HCI is designed to take (unstructured or structured) data and analyze it (using natural language and other processing tools) to surround it with information and other metadata, so that it becomes more visible and useful to the organization for the life of its existence.

Customers can also use HCI to extract and blend data streams together, automating the creation of an information rich, data repository. The data repository can readily be searched to re-discover or uncover attributes about the data not visible before.

Scott also mentioned the Hitachi Pentaho Platform, which can be used to make real time decisions from structured data. Pentaho information can also be fed into HCI to provide more intelligence for your structured data.

But HCI can also be used to analyze other database data as well. For instance, database blob and text elements can be fed to and analyzed by HCI. HCI analysis can include natural language processing and other functionality to tag the data by adding key:value information, all of which can be supplied back to the database or Pentaho to add further value to structured data.

Customers can also use HCI to read and transform database tables into XML files. XML files can be stored in object stores as objects or in file systems. XML data can easily be textually indexed and searched by various tools to better understand the structured data information.
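The table-to-XML transform is conceptually straightforward; here is a minimal sketch of the idea in generic Python (not HCI itself; the table, column and element names are made up):

```python
import xml.etree.ElementTree as ET

def rows_to_xml(table_name, columns, rows):
    """Turn a flat table (a list of row tuples) into an XML document string."""
    root = ET.Element(table_name)
    for row in rows:
        record = ET.SubElement(root, "record")
        for col, val in zip(columns, row):
            ET.SubElement(record, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

xml_doc = rows_to_xml("customers",
                      ["id", "name", "region"],
                      [(1, "Acme Corp", "EMEA"), (2, "Globex", "APAC")])
# xml_doc can now be stored as an object or file, indexed, and full-text searched.
```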

We also talked about Hadoop data that can be offloaded to Hitachi Content Platform (HCP) object storage with a stub left behind. Once data is in HCP, HCI can be triggered to index and add more metadata, which can then later be used to decide when to move data back to Hadoop for further analysis.

Finally, Keith mentioned that he just got back from KubeCon and there was an increasing cry for data being used with containerized applications. Scott mentioned HCP for Cloud Scale, the newest member of the HCP object store family, focused on scale out capabilities to provide highly consistent, object storage performance for customers that need it. Customers running containerized workloads use scale-out capabilities to respond to user demand and now they have on premises object storage that can scale with them, as needs change.

The podcast ran ~24 minutes. Scott was very knowledgeable about data workflows, pipelines and the need for better discovery tools. We had a great time discussing information supply chains and how Hitachi can help customers optimize their data pipelines. Listen to the podcast to learn more.


Scott Baker, Director of Content and Data Intelligence at Hitachi Vantara

Scott Baker is, and has been, an active member of the information technology, data analytics, data management, and data protection disciplines for longer than he is willing to admit.

In his present role at Hitachi, Scott is the Senior Director of the Content and Data Intelligence organization focused on Hitachi’s Digital Transformation, Data Management, Data Governance, Data Mobility, Data Protection and Data Analytics solutions which includes Hitachi Content Platform (HCP), HCP Anywhere, HCP Gateway, Hitachi Content Intelligence, and Hitachi Data Protection Solutions.

Scott is a VMware Certified Professional, recognized as a subject matter expert, industry speaker, and author. Scott has been a panelist on topics related to storage, cloud, information governance, data security, infrastructure standardization, and social media topics. His educational background includes an MBA, Master’s & Bachelor’s in Computer Science.

When he’s not working, Scott is an avid scuba diver, underwater photographer, and PADI Scuba Instructor. He has a passion for public speaking, whiteboarding, teaching, and traveling the world.

93: GreyBeards talk HPC storage with Larry Jones, Dir. Storage Prod. Mngmt. and Mark Wiertalla, Dir. Storage Prod. Mkt., at Cray, an HPE Enterprise Company

Supercomputing Conference 2019 (SC19) is coming to Denver next week and, in anticipation of that show, we thought it would be good to talk with an HPC storage group. We contacted HPE and, given their recent acquisition of Cray, they offered up Larry and Mark to talk about their new ClusterStor E1000 storage system.

There are a number of components that go into Cray supercomputers and besides the ClusterStor, Larry and Mark mentioned their new SlingShot cluster interconnect which is Ethernet based with significant enhancements to congestion handling. But the call focused on ClusterStor.

What is ClusterStor

ClusterStor is a Lustre file system hardware appliance. Lustre has always been popular with the HPC crowd as it offered high bandwidth file services. But Lustre often took a team of (PhD) scientists to configure, deploy and run properly because of all the parameters that had to be set up for optimum performance.

Cray’s ClusterStor was designed to make configuring, deploying and running Lustre a lot simpler with a GUI and system defaults that provided an optimal running environment. But if customers still want access to all Lustre features and functionality, all the Lustre parameters can still be tweaked to personalize it.

What sort of appliance

The ClusterStor team has created a Lustre storage appliance using two systems, a 2U-24 NVMe SSD system and a 4U-106 disk drive system. Both systems use PCIe Gen 4 buses, which offer 2X the bandwidth of Gen 3, and NVMe Gen 4 SSDs. Each ClusterStor E1000 appliance comes with 2 servers for HA and the storage behind it.

Larry said the 2U NVMe Gen 4 appliance offers 80GB/sec of read and 60GB/sec of write data bandwidth. And a full rack of these could support ~2.5TB/sec of data bandwidth. One TB/sec seems like an awful lot to the GreyBeards; 2.5TB/sec is out of this world.

We asked if it supported InfiniBand interconnects. Yes, they said, it supports the latest generation of InfiniBand, but it also offers Cray's own (SlingShot) Ethernet interconnect, unusual for HPC environments. And as in any Lustre parallel file system, servers accessing storage use Lustre client software.

ClusterStor Data Services

But on the backend, where normally one would see only LDISKFS for backend storage, ClusterStor also offers ZFS. Larry and Mark said that LDISKFS is faster but ZFS offers more functionality like snapshots and data compression.

Many of the Top 100 & Top 500 supercomputing environments are starting to deploy ML DL (machine learning-deep learning) workloads along with their normal HPC activities. But whereas HPC work has historically depended on bandwidth to read, write and move large files around, ML DL deals with small files and needs high IOPS. ClusterStor was designed to satisfy both high bandwidth and high IOPS workloads.

In previous HPC Lustre flash solutions, customers had to deal with the complexity of where to place data, such as on flash or on disk. But with the new ClusterStor E1000, the system can do all this for you. That is, it will move data from disk to flash when it sees an advantage to doing so and move it back again when that advantage is gone. But, just as with the Lustre configuration parameters above, customers can still pre-stage data to flash.
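For a general sense of what such access-driven placement looks like (an illustrative sketch only, not ClusterStor's actual algorithm; the thresholds are made up), a policy might promote frequently accessed files to flash and demote them back to disk once they go cold:

```python
import time

class TieringPolicy:
    """Toy policy: promote hot files to flash, demote cold ones back to disk."""

    def __init__(self, hot_after=3, cold_after_secs=3600):
        self.access_counts = {}      # file path -> recent access count
        self.last_access = {}        # file path -> last access timestamp
        self.on_flash = set()
        self.hot_after = hot_after
        self.cold_after_secs = cold_after_secs

    def record_access(self, path):
        self.access_counts[path] = self.access_counts.get(path, 0) + 1
        self.last_access[path] = time.time()
        if self.access_counts[path] >= self.hot_after:
            self.on_flash.add(path)              # promote: copy disk -> flash

    def demote_cold(self):
        now = time.time()
        for path in list(self.on_flash):
            if now - self.last_access.get(path, 0) > self.cold_after_secs:
                self.on_flash.discard(path)      # demote: move flash -> disk
                self.access_counts[path] = 0
```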

The other challenge for HPC environments is extreme size. Cray and others are starting to see requirements for exascale (exabyte, 10**18 bytes) storage systems. In fact, Cray has a couple of ClusterStor E1000 configurations of 400PB or more already. As these systems age, they may indeed grow to exceed an exabyte.

With an exabyte of data, systems need to support billions of files/inodes and better metadata services and indexing. ClusterStor offers optimized inode indexing and search to enable HPC users to quickly find the data they are looking for. Further, ClusterStor offers data at rest encryption and supports virtual file systems for multi-tenancy.

With a ZFS backend, ClusterStor can supply data compression and snapshots. Cray has tested ZFS compression on HPC scientific (mostly already application-compressed) data and still sees ~30% reduction in storage footprint. At an exabyte of storage, 30% can be a significant cost reduction.

The podcast ran long, ~46 minutes. Larry and Mark had a good knowledge of the HPC storage space and were easy to talk with. Matt's an old ZFS hand, so wanted to talk even more about ZFS. I had a good time discussing ClusterStor and Lustre features/functionality and how HPC workloads are changing. Listen to the podcast to learn more. [The podcast was recorded on November 6th, not the 5th as mentioned in the lead in, Ed.]


Larry Jones, Director Storage Product Management

Larry Jones is a director of storage product management for Cray, a Hewlett Packard Enterprise company.

Jones previously held senior product management roles at Seagate, DDN and Panasas.

Mark Wiertalla, Director Storage Product Marketing

Mark Wiertalla is a product marketing director for Cray, a Hewlett Packard Enterprise company.

Prior to Cray, Wiertalla held product manager roles at EMC and SGI.