Category Archives: Software defined storage

51: GreyBeards talk hyper convergence with Lee Caswell, VP Product, Storage & Availability BU, VMware

Sponsored by:

VMware

In this episode we talk with Lee Caswell (@LeeCaswell), Vice President of Product, Storage and Availability Business Unit, VMware. This is the second time Lee’s been on our show; the previous time was back in April of last year, when he was with his prior employer. Lee’s been at VMware for a little over a year now and has helped lead some significant changes in their HCI offering, vSAN.

VMware vSAN/HCI business

Many customers struggle to modernize their data centers, with funding being the primary issue. This is very similar to what happened in the early 2000s as customers started virtualizing servers and consolidating storage. But today there’s a new option: server-based/software defined storage like VMware’s vSAN, which can be deployed for little expense and grown incrementally as needed. VMware’s vSAN customer base is currently growing at a 150% CAGR, and VMware is adding over 100 new vSAN customers a week.

Many companies say they offer HCI, but few have adopted the software-only business model this entails. The transition from a hardware-software, appliance-based business model to a software-only business model is difficult and means a move from a high revenue-lower margin business to a lower revenue-higher margin business. VMware, from its very beginnings, has built a sustainable software-only business model that extends to vSAN today.

The software business model means that VMware can partner easily with a wide variety of server OEM partners to supply vSAN ReadyNodes that are pre-certified and jointly supported in the field. There are currently 14 server partners for vSAN ReadyNodes. In addition, VMware has co-designed the VxRail HCI Appliance with Dell EMC, which adds integrated life-cycle management as well as Dell EMC data protection software licenses.

As a result, customers can adopt vSAN as a build or a buy option for on-prem use and can also leverage vSAN in the cloud from a variety of cloud providers, including AWS very soon. It’s the software-only business model that sets the stage for this common data management across the hybrid cloud.

VMware vSAN software defined storage (SDS)

The advent of Intel Xeon processors and plentiful, relatively cheap SSD storage has made vSAN an easy storage solution for most virtualized data centers today. SSDs removed any performance concerns that customers had with hybrid HCI configurations. And with Intel’s latest Xeon Scalable processors, there’s more than enough power to handle both application compute and storage compute workloads.

From Lee’s perspective, there’s still a place for traditional SAN storage, but he sees it more for cold storage that is scaled independently from servers, or for bare metal/non-virtualized storage environments. Everyone else running virtualized data centers really needs to give vSAN a look.

Storage vendors shifting sales

It used to be that major storage vendor sales teams would lead with hardware appliance storage solutions and then move to HCI when pushed. The problem is that a typical SAN storage sale takes 9 months to complete and then yields only limited additional sales over the following 3 years.

To address this, some vendors have taken the approach where they lead with HCI and only move to legacy storage when it’s a better fit. With VMware vSAN, it’s a quicker sales cycle than legacy storage because HCI costs less up front and there’s no need to buy the final storage configuration with the first purchase. VMware vSAN HCI can grow as the customer applications needs dictate, generating additional incremental sales over time.

VMware vSAN in AWS

Recently, VMware announced VMware Cloud on AWS. What this means is that you can have vSAN storage operating in the AWS cloud just like you would on-prem. In this case, workloads could migrate from cloud to on-prem and back again with almost no changes. How the data gets from on-prem to cloud is another question.

Also, the pricing model for VMware Cloud on AWS moves to a consumption-based model, where you pay for just what you use on a monthly basis. This way VMware Cloud on AWS with vSAN is billed monthly, consistent with other AWS offerings.

VMware vs. Microsoft on cloud

There’s a subtle difference in how Microsoft and VMware are adopting cloud. VMware came from an infrastructure platform and is now implementing their infrastructure on cloud. Microsoft started as a development platform and is taking their cloud development platform/stack and bringing it to on-prem.

It’s really two different philosophies in action. We now see VMware doing more for the development community with vSphere Integrated Containers (VIC), Docker containers, Kubernetes, and Pivotal Cloud Foundry. Meanwhile, Microsoft is looking to implement the Azure Stack for on-prem environments, focusing more on infrastructure. In the end, enterprises will have terrific choices as the software defined data center frees up customer dollars and management time.

The podcast runs ~25 minutes. Lee is a very knowledgeable individual and, although he doesn’t qualify as a Greybeard (just yet), he has been in and around the data center and flash storage environments throughout most of his career. From his diverse history, Lee has developed a very business-like perspective on data center and storage technologies, and it’s always a pleasure talking with him. Listen to the podcast to learn more.

Lee Caswell, V.P. of Product, Storage & Availability Business Unit, VMware

Lee Caswell leads the VMware storage marketing team driving vSAN products, partnerships, and integrations. Lee joined VMware in 2016 and has extensive experience in executive leadership within the storage, flash and virtualization markets.

Prior to VMware, Lee was vice president of Marketing at NetApp and vice president of Solution Marketing at Fusion-io (now SanDisk). Lee was a founding member of Pivot3, a company widely considered to be the originator of hyper-converged systems, where he served as the CEO and CMO. Earlier in his career, Lee held marketing leadership positions at Adaptec and SEEQ Technology, a pioneer in non-volatile memory. He started his career at General Electric in Corporate Consulting.

Lee holds a bachelor of arts degree in economics from Carleton College and a master of business administration degree from Dartmouth College. Lee is a New York native and has lived in northern California for many years. He and his wife live in Palo Alto and have two children. In his spare time Lee enjoys cycling, playing guitar, and hiking the local hills.

48: Greybeards talk object storage with Enrico Signoretti, Head of Product Strategy, OpenIO

In this episode we talk with Enrico Signoretti, Head of Product Strategy for OpenIO, a software defined, object storage startup out of Europe. Enrico is an old friend, having been a delegate at many Storage Field Day (SFD) events that both Howard and I attended, and we wanted to hear what he’s been up to nowadays.

OpenIO open source SDS

It turns out that OpenIO is an open source object storage project that’s been around since 2008 and has recently (2015) been re-launched as a new storage startup. The open source, community version is still available, and OpenIO has download links on their website so you can try it out. There’s even one for the Raspberry Pi (Raspbian 8, I believe).

As everyone should recall, object storage is meant for multi-PB data storage environments. Objects are assigned an ID and are stored in containers or buckets. Object storage has a flat namespace, unlike file systems, which have a multi-tiered hierarchy.

Currently, OpenIO is in a number of customer sites running 15-20PB storage environments. OpenIO supports an AWS S3-compatible protocol and the OpenStack Swift object storage API.
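Since OpenIO exposes an AWS S3-compatible API, a generic S3 client can talk to it. Below is a minimal sketch using Python’s boto3; the endpoint URL, credentials, and bucket name are placeholders made up for illustration, not OpenIO defaults.

```python
# Minimal sketch: storing and fetching an object through an S3-compatible
# gateway such as OpenIO's. Endpoint, credentials, and bucket are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://openio-gw.example.com:6007",   # hypothetical gateway address
    aws_access_key_id="demo-access-key",
    aws_secret_access_key="demo-secret-key",
)

# Buckets provide the flat namespace; objects are addressed by key alone,
# with no multi-tiered directory hierarchy underneath.
s3.create_bucket(Bucket="media-archive")
s3.put_object(Bucket="media-archive", Key="video-0001.mp4", Body=b"...object bytes...")

obj = s3.get_object(Bucket="media-archive", Key="video-0001.mp4")
print(obj["ContentLength"], "bytes retrieved")
```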

OpenIO is based on open source, but customer service and usability features are built into the product they license to end customers on a usable-capacity basis. The minimum license is for 100TB, and licensing can go into the multi-PB range. There doesn’t appear to be any charge for enhancements, additional features, or additional cluster nodes.

The original code was developed for a big email service provider and supported a massive user community, so it was originally designed for small objects, fast access, and many cluster nodes. Nowadays, it can support very large objects as well.

OpenIO functionality

Each disk device in the OpenIO cluster is a dedicated service. By setting it up this way, load balancing across the cluster can be done at the disk level. Load balancing in OpenIO is also a dynamic operation. That is, every time an object is created, every node’s current capacity is consulted to determine the node with the least used capacity, which is then chosen to hold that object. This way there’s no static allocation of object IDs to nodes.
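To make the dynamic placement idea concrete, here’s a toy sketch of least-used-capacity placement. The node names, sizes, and exact selection rule are illustrative only; OpenIO’s real implementation is certainly more involved.

```python
# Toy sketch of capacity-driven placement: on each object create, consult every
# disk service's current usage and pick the least-used one. Illustrative only.
from dataclasses import dataclass

@dataclass
class DiskService:
    name: str
    capacity_gb: float
    used_gb: float

def place_object(services: list[DiskService], object_size_gb: float) -> DiskService:
    """Choose the disk service with the least used capacity and account for the new object."""
    target = min(services, key=lambda s: s.used_gb)
    target.used_gb += object_size_gb
    return target

cluster = [
    DiskService("node1-disk3", capacity_gb=4000, used_gb=2500),
    DiskService("node2-disk1", capacity_gb=4000, used_gb=1200),
    DiskService("node3-disk7", capacity_gb=4000, used_gb=3900),
]
chosen = place_object(cluster, object_size_gb=1.5)
print("object placed on", chosen.name)   # node2-disk1 in this example
```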

Data protection in OpenIO supports erasure coding as well as mirroring (replication). This can be set by policy and can vary depending on object size. For example, if an object is, say, under 100MB it can be replicated 3 times, but if it’s over 100MB it uses erasure coding.
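A size-based protection policy like the one Enrico described might look something like the sketch below. The 100MB threshold comes from the example above; the 3 copies and the erasure-coding layout (6 data + 3 parity chunks) are assumptions for illustration, not OpenIO defaults.

```python
# Illustrative size-based data protection policy: replicate small objects,
# erasure-code large ones. Threshold from the example above; EC layout assumed.
REPLICATION_THRESHOLD_BYTES = 100 * 1024 * 1024   # 100MB

def protection_for(object_size_bytes: int) -> dict:
    if object_size_bytes < REPLICATION_THRESHOLD_BYTES:
        return {"scheme": "replication", "copies": 3}
    return {"scheme": "erasure_coding", "data_chunks": 6, "parity_chunks": 3}

print(protection_for(10 * 1024 * 1024))    # small object -> 3-way replication
print(protection_for(500 * 1024 * 1024))   # large object -> erasure coding
```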

OpenIO supports hybrid tiering today. This means that an object can move from OpenIO residency to public cloud (AWS S3 or Backblaze B2) residency over time, if the customer wishes. In a future release they will support replication to public cloud as well as tiering. Many larger customers don’t use tiering because of the expense; Enrico says S3 is cheap as long as you don’t access the data.

OpenIO provides compression of objects, although many object storage customers already compress and encrypt their data and so may not use it. For those customers who don’t, compression can often double the amount of effective storage.

Metadata is just another service in the OpenIO cluster. This means it can be assigned to a number of nodes, or all nodes, on a configuration basis. OpenIO keeps its metadata on SSDs (replicated for data protection) rather than in memory. This allows OpenIO to have a lightweight footprint. They call their solution “serverless”, but what I take from that is that it doesn’t use a lot of server resources to run.

OpenIO offers a number of adjunct services besides pure object storage such as video transcoding or streaming that can be invoked automatically on objects.

They also offer stretched clusters, where an OpenIO cluster spans multiple locations. Objects can have dispersal-like erasure coding for multi-site environments, so that if one site goes down you still have access to the data. But Enrico said you have to have a minimum of 3 sites for this.

Enrico mentioned one media & entertainment customer that stores only one version of a video in object storage but, when the video is requested in another format, automatically transcodes it in real time. The newly transcoded version is kept in a CDN for future availability, until it ages out.

There seems to be a lot of policy and procedural flexibility available with OpenIO, but that may just be an artifact of running on Linux.

They currently support Red Hat, Ubuntu, and CentOS. They also have a Docker container in beta test for persistent objects, which is expected to ship later this year.

OpenIO hardware requirements

OpenIO has minimal hardware requirements for cluster nodes. The only thing I saw on their website was the need for at least 2GB of RAM on each node.  And metadata services seem to require SSDs on multiple nodes.

As discussed above, OpenIO has a uniquely lightweight footprint (which is why it can run on a Raspberry Pi) and only seems to need about 500MB of DRAM and 1 core to run effectively.

OpenIO supports heterogeneous nodes. That is, nodes can have different numbers and types of disks/SSDs, different processor and memory configurations, and different OSs. We talked about the possibility of a node or some disks going down and the cluster operating without them for a month, at the end of which admins could go through and fix or replace them as needed. Enrico also mentioned it was very easy to add and decommission nodes.

OpenIO supports a nano-node, which is just an (ARM) CPU, RAM, and a disk drive, sort of like Seagate Kinetic and other vendors’ Open Ethernet drive solutions. These drives have a lightweight processor with a small amount of memory, running Linux and accessing an attached disk drive.

Also, OpenIO nodes can offer different services. Some cluster nodes can offer metadata and object storage services and others only object storage services. This seems configurable on a per-server basis. There’s probably some minimum number of metadata and object services required in a cluster; Enrico mentioned three nodes as a minimum cluster.

The podcast runs ~42 minutes. Enrico is a very knowledgeable industry expert and a great friend from multiple SFD/TFD events, and Howard and I had fun talking with him again. Listen to the podcast to learn more.

Enrico Signoretti, Head of Product Strategy at OpenIO.

In his role as head of product strategy, Enrico is responsible for the planning, design, and execution of OpenIO’s product strategy. With the support of his team, he develops product roadmaps from the planning stages through development to ensure their market fit.

Enrico promotes OpenIO products and represents the company and its products at several industry events, conferences, and association meetings across different geographies. He actively participates in the company’s sales effort with key accounts, as well as by exploring opportunities for developing new partnerships and innovative channel activities.

Prior to joining OpenIO, Enrico worked as an independent IT analyst, blogger and advisor for six years, serving clients among primary storage vendors, startups and end users in Europe and the US.

Enrico is constantly keeping an eye on how the market evolves and continuously looking for new ideas and innovative solutions.

Enrico is also a great sailor and an unsuccessful fisherman.

47: Greybeards talk Storage as a Service with Lazarus Vekiarides, CTO & Co-Founder ClearSky Data

Sponsored By:

In this episode, we talk with ClearSky Data’s Lazarus Vekiarides, CTO and Co-founder, whom we have talked with before (see our podcast from October 2015). ClearSky Data provides a storage-as-a-service offering that uses an on-premises appliance plus point of presence (PoP) storage in the local metro area to hold customer data, and offloads this data to cloud storage. In addition to the on-premises storage-as-a-service, they offer access to customer data from an in-cloud virtual appliance. ClearSky provides the whole storage service, including gigabit metro Ethernet connections from the customer to the PoP, for a simple capacity-based charge every month.

How does it work

Their Edge (on-premises) appliance supports 24 SSDs, and a site can scale up to 4 appliances. Soon a single appliance will be able to hold up to 32TB of data. It’s intended to hold a data center’s entire working set for one week of activity, so essentially it’s a big caching appliance for the local data center.

For ClearSky Data, the sole source of truth for customer data lies in the PoP. The PoP is connected to metro-wide fibre that is available in a number of large metropolitan areas. Laz says they have measured sub-500µsec round-trip response times between their PoP equipment and the Edge appliance. The PoP provides the backing store for the Edge appliance. Data written to the Edge appliance(s) is written through to PoP storage. This data and its metadata (<1% of LUN size) are flushed to cloud storage, which holds the data indefinitely.
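To visualize the data path, here’s a minimal write-through sketch of Edge, PoP, and cloud as described above. The class and method names are mine for illustration; this is not ClearSky’s actual software or API.

```python
# Minimal sketch of the described write path: the Edge appliance caches the
# working set, writes through to the PoP (the source of truth), and the PoP
# flushes data plus metadata to cloud storage. Names are illustrative only.

class CloudStore:
    def __init__(self):
        self.objects = {}

    def flush(self, block_id, data, metadata):
        self.objects[block_id] = (data, metadata)   # cloud holds data indefinitely


class PoP:
    """Metro point of presence: authoritative copy, offloads to the cloud."""
    def __init__(self, cloud):
        self.blocks = {}
        self.cloud = cloud

    def write(self, block_id, data):
        self.blocks[block_id] = data
        # in the real service the cloud flush is asynchronous and batched
        self.cloud.flush(block_id, data, metadata={"len": len(data)})


class EdgeAppliance:
    """On-prem flash cache holding roughly a week's working set."""
    def __init__(self, pop):
        self.cache = {}
        self.pop = pop

    def write(self, block_id, data):
        self.cache[block_id] = data       # keep a hot copy locally
        self.pop.write(block_id, data)    # write through to the PoP

    def read(self, block_id):
        if block_id in self.cache:        # hit: local flash latency
            return self.cache[block_id]
        return self.pop.blocks[block_id]  # miss: fetch over the sub-500µsec metro link


edge = EdgeAppliance(PoP(CloudStore()))
edge.write("lun7:block42", b"payload")
print(edge.read("lun7:block42"))
```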

DR through the PoP

If customers have multiple data centers within the same metro area (100km), then they can have a single “logical” array that accesses the same data, say for a clustered file system spanning the two data centers. The PoP takes care of copying the metadata to the secondary Edge device and invalidates any data sitting in the secondary device that is no longer valid. In this way customers can have a Recovery Point Objective (RPO) of 0 seconds. That is, any data written to the primary data center is automatically available to the secondary data center, as long as the PoP survives.

But even if you wanted to fail over to a different metro area, the PoP data is offloaded to the cloud continuously, so while you wouldn’t attain an RPO of 0 seconds, it could be awfully short, on the order of a couple of seconds.

Recent enhancements

ClearSky Data has recently enhanced their storage-as-a-service to provide policy management over snapshots. That is, you can establish policies for how often to take LUN snapshots and how long to retain them in the cloud.
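A snapshot policy of this sort boils down to a frequency plus a cloud retention period. The sketch below is hypothetical, just to show the shape of such a policy; the field names and values are not ClearSky’s actual schema.

```python
# Hypothetical snapshot policy: how often to snapshot a LUN and how long to
# keep each snapshot in the cloud. Field names and values are made up.
from datetime import timedelta

policy = {
    "lun": "prod-db-lun01",
    "frequency": timedelta(hours=4),        # snapshot every 4 hours
    "cloud_retention": timedelta(days=30),  # keep each snapshot for 30 days
}

def snapshots_at_steady_state(p: dict) -> int:
    """Rough count of snapshots retained in the cloud once the policy reaches steady state."""
    return int(p["cloud_retention"] / p["frequency"])

print(snapshots_at_steady_state(policy))    # 180 snapshots
```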

Also, ClearSky Data has added VMware functionality via plugins that allow their storage to know which VMs are writing data, or are being backed up, to their appliance. This information is included in the metadata written for a LUN, which is offloaded to the cloud. Someday soon, when you can run vSphere on bare metal in a public cloud service, you will be able to run the Cloud Edge (the cloud software version of their Edge appliance), restore the data from your data center directly in the cloud, and have an iSCSI LUN available to EC2 instances running VMware, providing complete cloud DR for a data center.

We talked a bit about our favorite topic, NVMe storage. Laz sees potential for it to help their Edge appliances, but at the moment the fault-tolerance/high availability isn’t there. And as they are primary storage for data centers, HA is a critical capability.

Pricing and availability

Their product is priced as a service at $0.nn/GB/month, and if you do a 36-month cost analysis they feel they would come out cheaper than hybrid storage. They currently have PoPs in Boston, New York, Northern Virginia, Dallas, and California. Laz says they have targeted 15 major metropolitan areas across the USA for service. What, nothing in Europe or Asia? We would imagine this is merely a question of the number of customers, the amount of data, and metro infrastructure.

The podcast runs ~24 minutes. Laz has been in the storage industry across a number of companies, including a few startups. He is very knowledgeable about storage, cloud, and metro networking, a good friend, and always a pleasure to talk with. Listen to the podcast to learn more.

Lazarus Vekiarides, CTO & Co-Founder ClearSky Data

For over 20 years Laz Vekiarides has served in key technical and leadership roles delivering breakthrough technologies to market. Most recently, he served as the Executive Director of Software Engineering for Dell’s EqualLogic Storage Engineering group, where he led the development of numerous storage innovations and established the EqualLogic product line as a leader in host OS and hypervisor integration.

Laz joined Dell from EqualLogic, which was acquired in early 2008, where he was a member of the core leadership team – playing a key role in the company’s early success as a Senior Engineering Manager and Architect for the PS Series SAN arrays and host tools. Prior to EqualLogic, Laz held senior engineering and management positions at several companies including 3COM and Banyan Systems.

An occasional blogger, Laz frequently speaks at industry conferences, particularly in the areas of virtualization and data storage. He holds several storage technology patents, as well as a BSEE from Northeastern University, and an MSCS from the Worcester Polytechnic Institute.

46: Greybeards discuss Dell EMC World 2017 happenings on vBrownBag

In this episode, Howard and I discuss Dell EMC World 2017, which we both attended this past month. Alastair Cooke (@DemitasseNZ) asked us to do a talk at the show for the vBrownBag group (YouTube video here), and the GreyBeards asked for a copy of the audio for this podcast.

Sorry about the background noise, but we recorded live at the show, with a huge teleprompter in the background that was re-broadcasting keynotes/interviews from the show.

At the show

Howard was at Dell EMC World 2017 on a media pass and I was at the show on an industry analyst pass. There were parts of the show that he saw that I didn’t, and vice versa, but all the keynotes and major industry outreach were available to both of us.

As always, the Dell EMC team put on a great show, and kudos have to go to their AR and PR teams for having both of us there and creating a great event. There was lots of news at the show, and both of us were impressed by how well Dell EMC has come together in such a short time.

In addition, there were a number of Dell partners at the show. Howard met Datadobi on the show floor; they have a file migration tool that walks a filesystem tree, migrates files, and reports on the files it can’t migrate. And we both saw Datrium (whom we talked with last year).

Servers and other news

We both liked Dell’s new 14th generation servers, but Howard objected to the lack of technical specs on them. Apparently, Intel won’t let specs be published until they announce their new CPU chipsets, sometime later this year. On the other hand, a few server specs were discussed. For example, I was impressed that the new servers will support many more NVMe cards. Howard liked the new server support for NV-DIMMs, mainly for the potential latency reduction they could provide for software defined storage.

That led us into a tangential discussion about whether there is a place for non-software-defined storage anymore. Howard mentioned the downside of HCI/software defined storage when it comes to upgrading server (DIMM, PCIe card) hardware.

However, appliance hardware seems to be getting easier to upgrade. The new Unity AFA storage can be upgraded non-disruptively from the low-end to the high-end appliance by just swapping out controller hardware canisters.

Howard was also interested in Dell EMC’s new CloudFlex purchasing model for HCI solutions. This supplies an almost cloud-like purchasing option for customers: with a one-year commitment, you pay as you go (no money down, just monthly payments) rather than making an up-front capital purchase. After the year’s commitment expires, you can send the hardware back to Dell EMC and stop paying.

We talked about Tier 0 storage. EMC DSSD was an early attempt to provide Tier 0, but it came with lots of special-purpose hardware. When commodity hardware and software emerged last year with NVMe SSD speed, DSSD was no longer viable at the premium pricing needed for all that hardware, and it was shut down. Howard and I discussed how special-purpose hardware has to be much faster (10-100X) than commodity hardware solutions to succeed, and that gap has to be maintained over time.

The other big storage news was the new VMAX 950F AFA and its performance numbers. Dell EMC said the new VMAX could do 6.7M IOPS of RRH (random read hits) and had a 350µsec response time. Howard noted that Dell EMC didn’t say at what IO load they achieved the 350µsec response time. I told him it almost didn’t matter; even if it was a single IO at that response time, it was significant.

The podcast runs about 40 minutes. It’s just Howard and me talking about what we saw/heard at the show, plus the occasional tangential topic. Listen to the podcast to learn more.


Howard Marks, DeepStorage

Howard Marks is the Founder and Chief Scientist of DeepStorage, a prominent blogger at Deep Storage Blog, and can be found on Twitter as @DeepStorageNet.

Ray Lucchesi, Silverton Consulting

Ray Lucchesi is the President and Founder of Silverton Consulting, a prominent blogger at RayOnStorage Blog, and can be found on Twitter as @RayLucchesi.

43: GreyBeards talk Tier 0 again with Yaniv Romem CTO/Founder & Josh Goldenhar VP Products of Excelero

In this episode, we talk with another next-gen, Tier 0 storage provider. This time our guests are Yaniv Romem, CTO/Founder, and Josh Goldenhar (@eeschwa), VP Products, from Excelero, another new storage startup out of Israel. Both Howard and I talked with Excelero at SFD12 (videos here) earlier last month in San Jose. I was very impressed with their raw performance and wrote a popular RayOnStorage blog post on their system (see my 4M IO/sec@227µsec 4KB Read… post) from our discussions during SFD12.

As we have discussed previously, Tier 0, next generation flash arrays provide very high performing storage at very low latencies, with modest to non-existent advanced storage services. They are intended to replace in-server, direct-access SSD storage with a more shared, scalable storage solution.

In our last podcast, we talked with E8 Storage, which sells a hardware Tier 0 appliance. As a different alternative, Excelero is a software defined Tier 0 solution intended to be used on any commodity or off-the-shelf server hardware with high-end networking and (low- to high-end) NVMe SSDs.

Indeed, what impressed me most with their 4M IO/sec was that the target storage system had almost 0% CPU utilization. (Read the post to learn how they did this.) Excelero mentioned that they were able to generate high (11M random 4KB) IO/sec on an Intel Core i7, desktop-class CPU. Their one need in a storage server is plenty of PCIe lanes. They don’t even need dual-socket storage servers; single-socket CPUs work just fine as long as the PCIe lanes are there.

Excelero software

Their intent is to bring Tier 0 capabilities out to all big storage environments. By providing a software-only solution, it can easily be OEMed by clustered file system vendors or HPC system vendors to deliver the amazing IO performance their clients need.

That’s also one of the reasons they went with high-end Ethernet networking rather than just InfiniBand, which would have limited their market to mostly HPC environments. Excelero’s client software uses RoCE/RDMA hardware to perform IO operations with the storage server.

The other thing that little to no target storage server CPU utilization per IO operation gives them is the ability to scale up to 1000s of hosts or storage servers without hitting any storage system bottlenecks. Another concern eliminated by minimal target server CPU utilization is the noisy neighbor problem; there’s no target CPU processing to be shared. Yet another advantage with Excelero is that bandwidth is only limited by storage server PCIe lanes and networking. A final advantage of their approach is that they can support any of the current and upcoming storage class memory devices supporting NVMe (e.g., Intel Optane SSDs).
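To put “bandwidth is only limited by PCIe lanes and networking” in rough perspective, here’s a back-of-the-envelope calculation. The PCIe 3.0 per-lane figure is nominal, and the drive and NIC counts are assumptions for illustration, not numbers from the podcast.

```python
# Back-of-the-envelope: aggregate PCIe/SSD bandwidth vs. network bandwidth in a
# hypothetical storage server. Drive and NIC counts are assumed for illustration.
PCIE3_GBYTES_PER_LANE = 0.985     # ~8 GT/s with 128b/130b encoding
LANES_PER_NVME_SSD = 4            # typical x4 NVMe device
NVME_SSDS_PER_SERVER = 24         # assumed drive count
NIC_PORTS = 2                     # assumed dual-port 100GbE NIC
GBYTES_PER_100GBE_PORT = 12.5     # 100Gb/s is ~12.5 GB/s

ssd_side = PCIE3_GBYTES_PER_LANE * LANES_PER_NVME_SSD * NVME_SSDS_PER_SERVER
net_side = NIC_PORTS * GBYTES_PER_100GBE_PORT

print(f"PCIe/SSD side: ~{ssd_side:.0f} GB/s theoretical")   # ~95 GB/s
print(f"Network side:  ~{net_side:.0f} GB/s theoretical")   # ~25 GB/s
print("likely bottleneck:", "network" if net_side < ssd_side else "PCIe lanes")
```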

The storage services they offer include RAID 0, 1, and 10, and a client-side logical volume manager that supports multi-pathing. Logical volumes can span up to 128 storage servers but can be accessed by almost any number of hosts. And there doesn’t appear to be a specific limit on the number of logical volumes you can have.

They support two different protocols across the 40GbE/100GbE networks: standard NVMe over Fabrics, or RDDA (Excelero’s patented, proprietary Remote Direct Disk Array access). RDDA is what mainly provides the almost non-existent target storage server CPU utilization, but even with standard NVMe over Fabrics they keep target CPU utilization low. One proviso: with NVMe over Fabrics, they do add shared-volume functionality to support RAID device locking and additional fault tolerance capabilities.

On Excelero’s roadmap are thin provisioning, snapshots, compression, and deduplication. However, they did mention that adding advanced storage functionality like this will impede performance. Currently, their distributed volume locking and configuration metadata is not normally accessed during an IO, but when you add thin provisioning, snapshots, and data reduction, this metadata needs to become more sophisticated and will necessitate some amount of access during and after an IO operation.

Excelero’s client software runs as a Linux kernel-mode client, and they don’t currently support VMware or Hyper-V. But they do support KVM as a hypervisor and would be willing to support the others if APIs were published or made available.

They also have an internal OpenStack Cinder driver, but it’s not part of the main OpenStack release yet. They’re waiting for snapshots to be available before they push this into the main code base. Ditto for Docker Engine support, which is more of a beta capability today.

Excelero customer experience

One customer (NASA Ames/Moffett Field) deployed a single 2TB NVMe SSD in each of 128 hosts (128 x 2TB = 256TB) and had a single 256TB logical volume shared and accessed by all 128 hosts.

Another customer configured Excelero behind a clustered file system and was able to generate 30M randomized IO/sec at 200µsec latencies and, more importantly, 140GB/sec of bandwidth. It turns out high bandwidth is important to many big data applications that have to roll lots of data into their analytics clusters, process it, output results, and then do it all over again. Bandwidth limitations can impact the success of these types of applications.

By being software-only, they can be used in a standalone storage server or as a hyper-converged solution where applications and storage are co-resident on the same server. As noted above, they currently support Linux OSs for their storage and client software, and they support any x86 Intel processor, any RDMA-capable NIC, and any NVMe SSD.

Excelero GTM

Excelero is focused on the top 200 customers, which include hyper-scale providers like Facebook, Google, Microsoft, and others. But hyper-scale customers have huge software teams and really only a single or a few very large/complex applications, so they can create/optimize Tier 0 storage for themselves.

It’s really the customers just below the hyper-scale class, who have similar needs for high IO/sec at low latency or high IO bandwidth (or both) but have 100s to 1000s of applications and can’t afford to optimize them all for Tier 0 flash. If Excelero solves sharing Tier 0 flash storage in a more general way, say as a block storage device, they solve it for any application. And if the customer insists, they could put a clustered file system, or even object storage (who would want this?), on top of this shared Tier 0 flash storage system.

These customers may currently be using NVMe SSDs within their servers as DAS devices. But with Excelero, these resources can be shared across the data center. They think of themselves as a top-of-rack NVMe storage system.

On their website they have listed a few of their current customers, and they’re pretty large and impressive.

NVMe competition

Aside from E8 Storage, there are a few other competitors in Tier 0 storage. One recently announced a move to an NVMe flash storage solution, and another killed their shipping solution. We talked about what all this means to Excelero and their market at the end of the podcast. Suffice it to say, they’re not worried.

The podcast runs ~50 minutes. Josh and Yaniv are very knowledgeable about Tier 0 and storage market dynamics, and they were a delight to talk with. Listen to the podcast to learn more.


Yaniv Romem CTO and Founder, Excelero

Yaniv Romem has been a technology evangelist at disruptive startups for the better part of 20 years. His passions are in the domains of high-performance distributed computing, storage, databases, and networking. Yaniv has been a founder at several startups in these domains, such as Excelero, Xeround, and Picatel, and has served mostly in CTO and VP Engineering roles.


Josh Goldenhar, Vice President Products, Excelero

Josh has been responsible for product strategy and vision at leading storage companies for over two decades. His experience puts him in a unique position to understand the needs of Excelero’s customers.
Prior to joining Excelero, Josh was responsible for product strategy and management at EMC (XtremIO) and DataDirect Networks. Previous to that, his experience and passion were in large-scale systems architecture and administration with companies such as Cisco Systems. He’s been a technology leader in Linux, Unix, and other OSs for over 20 years. Josh holds a Bachelor’s degree in Psychology/Cognitive Science from the University of California, San Diego.