84: GreyBeards talk ultra-secure NAS with Eric Bednash, CEO & Co-founder, RackTop Systems

We were at a recent vendor conference where Steve Foskett (@SFoskett) introduced us to Eric Bednash (@ericbednash), CEO & Co-Founder, RackTop Systems. They have taken ZFS and made it run as an ultra-secure NAS system. Matt Leib, my co-host for this episode, has on-the-job experience with ZFS and was a great co-host for this episode.

It turns out that Eric and his CTO (perhaps other RackTop employees) have extensive experience with intelligence and other government agencies that depend on data security. These agencies deal with cyber security threats an order of magnitude larger than what corporations see.

All that time in intelligence gave Eric a unique perspective on what it takes to build secure, bulletproof NAS systems. Nine years or so ago, he and his CTO took OpenZFS (and OpenSolaris) and used it as the foundation for their new highly available and ultra-secure NAS system.

Most storage systems protect data solely through user access authorization: if a user is authorized to read/write data, they have unrestricted access to it. Perhaps if an organization is paranoid, they might also use data-at-rest encryption. But RackTop takes all this to a whole other level.

Data security to the Nth degree

RackTop offers dual encryption for data at rest. Most organizations would say single encryption’s enough: the data’s already encrypted, so how would another level of encryption make it more secure?

It all depends on how one secures the keys (and, just my thoughts here, maybe on how easily quantum computing could decrypt singly encrypted data). So RackTop Systems uses self-encrypting drives (1st level of encryption) as well as software encryption (2nd level of encryption). Each level has its own unique keys, which RackTop can maintain either in their own system or in a KMIP service provided by the data center.
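To make the layering concrete, here’s a minimal sketch (in Python, using the cryptography package) of dual encryption with independent keys. This is only an illustration, not RackTop’s implementation; the keys are generated locally here, whereas RackTop would hold them in its own system or a KMIP service.

```python
# Minimal sketch of dual-layer encryption with independent keys.
# Not RackTop's implementation -- just an illustration of why two
# layers help: compromising one key still leaves ciphertext behind.
from cryptography.fernet import Fernet

drive_key = Fernet.generate_key()     # stands in for the SED's key (layer 1)
software_key = Fernet.generate_key()  # stands in for the software key (layer 2)

def write_block(plaintext: bytes) -> bytes:
    inner = Fernet(software_key).encrypt(plaintext)  # software encryption
    return Fernet(drive_key).encrypt(inner)          # drive-level encryption

def read_block(stored: bytes) -> bytes:
    inner = Fernet(drive_key).decrypt(stored)
    return Fernet(software_key).decrypt(inner)

assert read_block(write_block(b"secret data")) == b"secret data"
```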

They also supply user profiling. User data access can be profiled with a dataset heat map and other statistical/logging information. When users go outside their usual access profiles, it may signal a security breach. At the moment, when this happens, RackTop notifies security administrators, but Eric mentioned that a future release will have the option to automatically shut that user down.
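Here’s a toy sketch of the profiling idea: build a per-user baseline of access activity and flag access that falls far outside it. The hourly buckets and 3-sigma threshold are assumptions for illustration, not RackTop’s actual analytics.

```python
# Toy user-access profiler: flag users whose current access rate
# deviates far from their historical baseline. The 3-sigma rule and
# hourly buckets are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(hourly_reads_history: list[int], reads_this_hour: int) -> bool:
    baseline = mean(hourly_reads_history)
    spread = stdev(hourly_reads_history)
    return abs(reads_this_hour - baseline) > 3 * max(spread, 1.0)

history = [120, 95, 143, 110, 130, 101, 125]  # typical reads/hour for a user
print(is_anomalous(history, 118))    # False: within the user's normal profile
print(is_anomalous(history, 5200))   # True: possible breach or exfiltration
```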

And with all the focus on GDPR and similar regulations coming to a state near you, having user access profiles and access logs can easily satisfy any regulatory auditing requirements.

Eric said that any effective security has to be multi-layered. With RackTop, the multi-layer approach goes way beyond data-at-rest encryption and user access authentication. RackTop’s appliance hardware is sourced from secure supply chains and manufactured inside secured facilities. They have also modified OpenSolaris, hardening the OS against cyber threats.

RackTop even supports cloud tiering with an internally developed secure data mover. Their data mover can securely migrate data (retaining metadata on their system) to any S3-compatible object storage.
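As a rough sketch of what tiering to S3-compatible object storage looks like from a client’s perspective (not RackTop’s actual data mover), here’s a boto3 upload with server-side encryption requested; the endpoint, bucket, paths, and credentials are all placeholders.

```python
# Sketch of tiering a file to any S3-compatible object store over TLS.
# Endpoint, bucket and key names are placeholders; this is not RackTop's
# data mover, just the client-side shape of such a move.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # any S3-compatible endpoint
)

s3.upload_file(
    Filename="/pool/share/coldfile.dat",
    Bucket="tiered-data",
    Key="share/coldfile.dat",
    ExtraArgs={"ServerSideEncryption": "AES256"},  # encrypt at the target too
)
```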

As proof of the security available from a RackTop NAS system, an unnamed US government agency had a “red-team” attack their storage. Although Eric shared only a few details on what the red-team attempted, he did say the RackTop NAS survived the assault without a security breach.

He also mentioned that they are trying to create a Zero Trust storage environment. Zero Trust implies constant verification and authentication: rather than relying on one-time login credentials, users would re-authenticate every time they access data. Eric didn’t say when, if ever, they’d reach this level of security, but it’s a clear indication of the direction for their products.

ZFS based NAS system

A RackTop NAS supplies a ZFS-based file system. As such, it inherits all the features and advanced functionality of OpenZFS, but within a more secure, hardened and highly available storage system.

ZFS has historically had issues with usability and its multiplicity of tuning knobs. RackTop has worked hard to make ZFS easier to operate and removed much of the manual tuning required to make it perform well.

The podcast is a long one and runs ~44 minutes. We spent most of our time talking about security and less on the storage functionality of RackTop NAS. The security of RackTop systems takes some getting used to, but the need exists today and not many storage systems are implementing security quite to their level. Much of what RackTop does to improve data security blew Matt and me away. Eric is a very smart security expert in addition to being a storage vendor CEO. Listen to the podcast to learn more.

Eric Bednash, CEO & Co-founder, RackTop Systems

Eric Bednash is the co-founder and CEO of RackTop Systems, the pioneer of CyberConverged™ data security, a new market that fuses data storage with advanced security and compliance into a single platform.

A serial entrepreneur and innovator, Bednash has more than 20 years of experience in solving the most complex and challenging data problems through designing products and solutions for the U.S. Intelligence Community and commercial enterprises.

Bednash co-founded RackTop in 2010 with partner and current CTO Jonathan Halstuch. Prior to co-founding RackTop, he served as co-founder and CTO of a mid-sized consulting firm, focused on developing mission data systems within the Department of Defense and U.S. intelligence communities.

Bednash started his professional career in data center systems at Time-Warner, and spent the better part of the dot-com boom in the Washington, D.C. area connecting businesses to the internet. His career path began while still in high school, when Bednash contracted with small businesses and individuals to write software and build computers.

Bednash attended Rochester Institute of Technology and Penn State University, and completed both undergrad and graduate coursework in Business and Technology Management at Stevenson University. A Forbes Technology Council member, he regularly hosts thought leadership & technology video blogs, and is a technology writer and speaker. He is a multi-instrument musician, recreational athlete and a die-hard Pittsburgh Steelers fan. He currently resides in Fulton, Md. with his wife Laura and two children.

83: GreyBeards talk NVMeoF/TCP with Muli Ben-Yehuda, Co-founder & CTO and Kam Eshghi, VP Strategy & Bus. Dev., Lightbits Labs

This is the first time we’ve talked with Muli Ben-Yehuda (@Muliby), Co-founder & CTO, and Kam Eshghi (@KamEshghi), VP of Strategy & Business Development, Lightbits Labs. Keith and I first saw them at Dell Tech World 2019 in Vegas, as they are a Dell Ventures funded organization. The company has 70 (mostly engineering) employees and is based in Israel, with offices in NY and the Valley as well as elsewhere around the world. Kam was previously with (Dell) EMC DSSD and Muli spent years as a Master Inventor with IBM Research.

[This was Keith Townsend’s (@CTOAdvisor & The CTO Advisor), first time as a GreyBeard co-host and we had a great time with him on the show.]

I would have to say it was a far-ranging discussion, but it focused on their software-defined NVMeoF/TCP storage. As you may recall, we talked with Solarflare Communications last year, who were also working on NVMeoF/TCP, only in their case it was an accelerator board. After the recording, Muli said the hardware accelerator Lightbits has is their own design.

Why NVMeoF/TCP?

Most NVMeoF today that uses Ethernet requires RoCE or iWARP compatible NICs and switches. Lightbits Labs has long been active in the NVMeoF/RoCE-iWARP marketplace. Early on, they noticed that enterprise and cloud service providers were reluctant to adopt NVMeoF technology because of the need to change out all their networking equipment to use it. This is what brought about their focus on NVMeoF/TCP.

The advantage of NVMeoF/TCP is that it can be run on any Ethernet NIC and switch available today. From Muli’s perspective, NVMeoF/TCP is going to become the next SAN of choice for the data center. They were active, early on, in the standards committee to push for NVMeoF/TCP adoption.

How does it work?

Their software-defined solution runs LightOS® storage software, a Linux-based package, on off-the-shelf server hardware with persistent storage (Optane DC PM/SSDs, NV DIMMs, V-NAND, etc.). They use persistent memory as a fast write buffer and a place where they can “mold” the written data into something that can be better written to the backend NVMe SSDs.
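Here’s a minimal sketch of that staging pattern: acknowledge small writes once they land in a fast persistent buffer, then coalesce them into large sequential segments for the backend SSDs. The segment size and flush policy are assumptions, not LightOS internals.

```python
# Sketch of the write-staging idea: absorb small random writes in a fast
# persistent buffer, then "mold" them into large sequential segments for
# the backend SSDs. Sizes and the flush policy are assumptions.
SEGMENT_SIZE = 1 << 20  # flush in 1 MiB sequential segments

class WriteStager:
    def __init__(self, backend):
        self.buffer = []          # stands in for persistent memory
        self.buffered_bytes = 0
        self.backend = backend    # callable taking one large segment

    def write(self, data: bytes) -> None:
        # Acknowledge immediately once data lands in the fast buffer.
        self.buffer.append(data)
        self.buffered_bytes += len(data)
        if self.buffered_bytes >= SEGMENT_SIZE:
            self.flush()

    def flush(self) -> None:
        # Coalesce many small writes into one sequential backend write.
        self.backend(b"".join(self.buffer))
        self.buffer.clear()
        self.buffered_bytes = 0
```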

One surprise about the Lightbits solution is that it offers a decent set of data services. These include erasure coding, thin provisioning, wire-speed inline compression, QoS and wide striping. It seems like any of these can be disabled if a customer wants, but they add very little overhead. I think Muli mentioned one Lightbits customer with already-encrypted data that disabled compression.

Lightbits also offers a global FTL (flash translation layer), which means they control SSD addressing and map data to physical/raw NAND locations at the storage-system level. If done well, a global FTL can help improve flash endurance and may offer better write performance (through increased parallelism).
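A toy illustration of what an FTL does: maintain a logical-to-physical map and steer writes toward the least-worn locations. A production global FTL like the one Lightbits describes spans every SSD in the system and also handles garbage collection and striping; this sketch just shows the mapping idea.

```python
# Toy flash translation layer: map logical block addresses to physical
# (drive, block) locations, steering new writes to the least-worn drive.
# Real global FTLs also handle garbage collection, striping, etc.

class GlobalFTL:
    def __init__(self, num_drives: int):
        self.mapping = {}                      # LBA -> (drive, phys_block)
        self.wear = [0] * num_drives           # write counts per drive
        self.next_block = [0] * num_drives     # next free block per drive

    def write(self, lba: int) -> tuple[int, int]:
        drive = self.wear.index(min(self.wear))  # pick least-worn drive
        loc = (drive, self.next_block[drive])
        self.next_block[drive] += 1
        self.wear[drive] += 1
        self.mapping[lba] = loc                  # remap; old block is now stale
        return loc

    def read(self, lba: int) -> tuple[int, int]:
        return self.mapping[lba]

ftl = GlobalFTL(num_drives=4)
ftl.write(lba=42)
print(ftl.read(lba=42))   # e.g. (0, 0)
```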

Lightbits’ claim to inline, wire-speed data compression is premised on the use of current CPUs with high (>=28) core counts in the storage server. If the storage server has older CPUs (<28 cores), they suggest you install their LightField™ hardware accelerator add-in card. LightField offers a number of hardware-based performance accelerations in addition to compression speedups.

LightOS requires no host (client) software. Muli’s a long-time Linux kernel contributor and indicated that the only thing LightOS needs is a current Linux kernel (5.0 or later), which has the NVMeoF/TCP driver software (and persistent memory support). Lightbits believes that it’s only a matter of time until other OSs also implement NVMeoF/TCP drivers.
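As a sketch of how little the host side needs, here’s roughly what attaching a Linux host to an NVMe/TCP target looks like using only the in-kernel driver and the standard nvme-cli utility, scripted in Python; the target address and NQN below are placeholders, not a real Lightbits target.

```python
# Minimal sketch: connecting a Linux host to an NVMe/TCP target.
# Assumes nvme-cli is installed and the kernel (5.0+) has the nvme_tcp
# module; the target address and NQN below are placeholders.
import subprocess

TARGET_IP = "192.168.1.100"                     # hypothetical target address
TARGET_NQN = "nqn.2019-05.io.example:subsys1"   # hypothetical subsystem NQN

# Load the in-kernel NVMe/TCP initiator driver.
subprocess.run(["modprobe", "nvme_tcp"], check=True)

# Discover subsystems exported by the target (4420 is the default port).
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", "4420"],
    check=True,
)

# Connect; the namespace then appears as an ordinary /dev/nvmeXnY device.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_IP, "-s", "4420",
     "-n", TARGET_NQN],
    check=True,
)
```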

Lightbits business considerations

Long term, Lightbits sees a need for compute-storage disaggregation in hyperscaler and enterprise cloud environments. Early on, it was relatively easy to replicate servers with DAS storage, but as NVMe SSDs came out, the expense of doing this throughout a >>1000 server environment became exorbitant. If only these customers had an easy way to disaggregate their storage from compute and still enjoy all the performance advantages of DAS NVMe SSDs. With LightOS, they can do just that.

Lightbits can be sold today through Dell, as a partner solution, which means that Dell can integrate, test and validate their servers with the LightField accelerator card and deliver that package to your data center. I believe you still need to purchase and install the LightOS software yourself.

Lightbits charges for LightOS software on a per-storage-node basis, but the charge varies based on the maximum number of NVMe SSD slots available in a server. There is no capacity charge. They also offer worldwide service and support for LightOS software and LightField hardware.

It’s all about performance

From a performance perspective, one Fortune 500 hyperscaler benchmarked the Lightbits storage solution against a DAS NVMe server and found it added about 30 µsec to the IO latency as compared to DAS NVMe SSDs. From their perspective, the added data services, better endurance, and disaggregated compute-storage environment provided by LightOS more than made up for the additional overhead.

Finally, I asked whether multiple LightOS storage servers could be clustered together. Muli, after stating some legal disclaimers, said they were working on the next generation of LightOS, and that it will support clustered storage servers, local data replication, as well as distributed (across storage servers) erasure coding.

The podcast is a long one and runs ~47 minutes. There was a lot to talk about and Kam and Muli seem to know it all. It was interesting to hear the history of their pivot to TCP. They seem to have the right technology to address the market. Listen to the podcast to learn more.

Muli Ben-Yehuda, Co-founder and CTO, Lightbits Labs

Muli Ben-Yehuda is the CTO and Co-Founder of Lightbits Labs, where he leads technological developments.

Prior to founding Lightbits, he was chief scientist at Stratoscale and a researcher and Master Inventor at IBM Research.

He holds an M.Sc. in Computer Science (summa cum laude) from the Technion — Israel Institute of Technology and a B.A. (cum laude) from the Open University of Israel.

He is a long time Linux kernel contributor and his code and ideas are most likely included in an operating system or hypervisor running near you. He is also one of the authors of the NVMe/TCP standard and technology. 

Kam Eshghi, VP Strategy & Business Development, Lightbits Labs

Kam joined Lightbits Labs from Dell EMC and has over 20 years of experience in strategic marketing and business development with startups and public companies.

Most recently as VP of strategic alliances at startup DSSD, Kam led business development with technology partners and developed DSSD’s partnership with EMC, leading to EMC’s acquisition of DSSD.

Previously as Sr. Director of Marketing & Business Development at IDT, Kam built their NVMe Controller business from scratch. Previous to that, Kam worked in data center storage, compute and networking markets at HP, Intel, and Crosslayer Networks. 

Kam is a U.C. Berkeley and MIT graduate with a BS and MS in Electrical Engineering and Computer Science and an MBA.

82: GreyBeards talk composable infrastructure with Sumit Puri, CEO & Co-founder, Liqid Inc.

This is the first time we’ve had Sumit Puri, CEO & Co-founder of Liqid, on the show, but both Greg and I have talked with Liqid in the past. Given that we talked with another composable infrastructure company (see our DriveScale podcast), we thought it would be nice to hear from their competition.

We started with a brief discussion of the differences between Liqid and DriveScale. Sumit mentioned that DriveScale was mainly focused on storage and not as much on the other components of composable infrastructure.

[This was Greg Schulz’s (@storageIO & StorageIO.com), first time as a GreyBeard co-host and we had some technical problems with his feed, sorry about that.]

Multi-fabric composable infrastructure

At Dell Tech World (DTW) 2019 last week, Liqid announced a new, multi-fabric composability solution. Originally, Liqid composable infrastructure only supported PCIe switching, but with their new announcement, they also now support Ethernet and InfiniBand infrastructure composability. In their multi-fabric solution, they offer JBoG(PU)s which can attach to Ethernet/InfiniBand, as well as other compute accelerators such as FPGAs or AI-specific compute engines.

For non-PCIe switch fabrics, Liqid adds an “HBA-like” board in the server side that converts PCIe protocols to Ethernet or InfiniBand and has another HBA-like board sitting in the JBoG.

As such, if you were a Media & Entertainment (M&E) shop, you could be doing 4K real-time editing during the day, where GPUs were each assigned to separate servers running editing apps, and at night move all those GPUs to a central server where they could be used for rendering or transcoding. All with the same GPU-server hardware, using Liqid to re-assign those GPUs back and forth between day and night shifts.

Even before the multi-fabric option, Liqid supported composing NVMe SSDs and servers. So while a 1U server’s packaging may only support 4 SSDs, with Liqid you could assign 24, 48 or whatever number made the most sense to that 1U server for a specialized IO-intensive activity. When that activity/app was done, you could then allocate those NVMe SSDs to other servers to support other apps.

Why compose infrastructure

The promise of composability is no more isolated/siloed/dedicated hardware in your environment. Resources like SSDs, GPUs, FPGAs and even whole servers can be torn apart and put back together without sending out a service technician and waiting for hours while they power down your system and move hardware around. I asked Sumit how long it took to re-configure (compose) hardware into a new configuration and he said it was a matter of 20 seconds.

Sumit was at an NVIDIA show recently and said that Liqid could non-disruptively swap out GPUs. For this you would just isolate the GPU from any server and then go over to the JBoG and take the GPU out of the cabinet.

How does it work

Sumit mentioned that they have support for Optane SSDs to be used as DRAM memory (not Optane DC PM) using IMDT (Intel Memory Drive Technology). In this way you can extend your DRAM up to 6TB for a server. And with Liqid it could be concentrated on one server one minute and then spread across dozens the next.

I asked Sumit about the overhead of the fabrics that can be used with Liqid. He said that the PCIe switching may add on the order of 100 nanoseconds and the Ethernet/InfiniBand networks on the order of 10-15 microseconds, or roughly 2 orders of magnitude difference in overhead between the two fabrics.

Sumit made a point of saying that Liqid is a software company. Liqid software runs on switch hardware (currently Mellanox Ethernet/InfiniBand switches) or their PCIe switches.

But given their solution can require HBAs, JBoGs and potentially PCIe switches, there’s at least some hardware involved. For Ethernet and InfiniBand, their software runs in the Mellanox switch gear. Liqid control software has a CLI, GUI and supports an API.
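Since Liqid didn’t detail their API on the show, here’s a purely hypothetical sketch of what composing a machine over a REST API could look like; the host, endpoint path, and payload fields are invented for illustration and are not Liqid’s actual interface.

```python
# Hypothetical composability API call -- the endpoint and payload are
# invented for illustration; consult Liqid's docs for the real API.
import requests

compose_request = {
    "name": "render-node-01",
    "cpu_server": "server-17",
    "gpus": 8,          # pull 8 GPUs from the JBoG pool
    "nvme_ssds": 24,    # and 24 NVMe SSDs from the flash pool
}

resp = requests.post(
    "https://composer.example.local/api/machines",  # placeholder host
    json=compose_request,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # description of the composed logical machine
```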

Liqid supports any style of GPU (NVIDIA, AMD or ?). And as far as they were concerned, anything that could be plugged into a PCIe bus was fair game to be disaggregated and become composable.

Solutions using Liqid

Their solution is available from a number of vendors. And at last week’s DTW 2019, Liqid announced a new OEM partnership with Dell EMC. So now you can purchase composable infrastructure directly from Dell. Liqid’s route to market is through their partner ecosystem and Dell EMC is only the latest.

Sumit mentioned a number of packaged solutions, and one that sticks in my mind was an AI appliance pod solution (sold by Dell) that uses Liqid to compose a training data ingestion environment at one time, a data cleaning/engineering environment at another, an AI deep learning/model training environment at yet another, and then a scalable inferencing engine after that. Something that can conceivably do it all: an almost all-in-one AI appliance.

Sumit said that these types of solutions would be delivered in 1/4, 1/2, or full racks and with multi-fabric could span racks of data center infrastructure. The customer ultimately gets to configure these systems with whatever hardware they want to deploy, JBoGs, JBoFs, JBoFPGAs, JBoAIengines, etc.

The podcast runs ~42 minutes. Sumit was very knowledgeable about data center infrastructure and how composability could solve many of today’s problems. Some composability use cases he mentioned could apply to just about any data center. Ray and Sumit had a good conversation about the technology. Both Greg and I felt Liqid’s technology represented the next step in data center infrastructure evolution. Listen to the podcast to learn more.

Sumit Puri, CEO & Co-founder, Liqid, Inc.

Sumit Puri is CEO and Co-founder at Liqid. An industry veteran with over 20 years of experience, Sumit has been focused on defining the technology roadmaps for key industry leaders including Avago, SandForce, LSI, and Toshiba.

Sumit has a long history with bringing successful products to market with numerous teams and large-scale organizations.

80: Greybeards talk composable infrastructure with Tom Lyon, Co-Founder/Chief Scientist and Brian Pawlowski, CTO, DriveScale

We haven’t talked with Tom Lyon (@aka_pugs) or Brian Pawlowski before on our show but both Howard and I know Brian from his prior employers. Tom and Brian work for DriveScale, a composable infrastructure software supplier.

There’s been a lot of press lately on NVMeoF, and the GreyBeards thought it would be a good time to hear about another way to supply DAS-like performance and functionality. Tom and Brian have been around long enough to qualify as greybeards in their own right.

The GreyBeards had heard of composable infrastructure before, but that was based on PCIe switching hardware and limited to a rack or less of hardware. DriveScale is working with large enterprises and their data centers full of hardware.

Composable infrastructure has many definitions but the one DriveScale probably prefers is that it manages resource pools of servers and storage, that can be combined, per request, to create any mix of servers and DAS storage needed by an application running in a data center. DriveScale is targeting organizations that have from 1K to 10K servers with from 10K to 100K disk drives/SSDs.

Composable infrastructure for large enterprises

DriveScale provides large data centers the flexibility to better support workloads and applications that change over time. That is, these customers may, at one moment, be doing big data analytics on PBs of data using Hadoop, and the next, using MongoDB or other advanced solutions to further process the data generated by Hadoop.

In these environments, having standard servers with embedded DAS infrastructure may be overkill and will cost too much. For example, because one has no way to reconfigure (1000) servers’ storage for each application that comes along without exerting lots of person-power, enterprises typically over-provision storage for those servers, which leads to higher expense.

But if one had some software that could configure 1 logical server or 10,000 logical servers, with the computational resources, DAS disks/SSDs, or NVMe SSDs needed to support a specific application, then enterprises could reduce their server and storage expense while at the same time providing applications with all the necessary hardware resources.

When that application completes, all those hardware resources could be returned back to their respective pools and used to support the next application to be run. It’s probably not that useful when an enterprise only runs one application at a time, but when you have 3 or more running at any instant, then composable infrastructure can reduce hardware expenses considerably.

DriveScale composable infrastructure

DriveScale is a software solution that manages three types of resources: servers, disk drives, and SSDs over high speed Ethernet networking. SAS disk drives and SAS SSDs are managed in an EBoD/EBoF (Ethernet (iSCSI to SAS) bridge box) and NVMe SSDs are managed using JBoFs and NVMeoF/RoCE.

DriveScale uses standard (RDMA enabled) Ethernet networking to compose servers and storage to provide DAS like/NVMe like levels of response times.

DriveScale’s composer orchestrator self-discovers all the hardware resources in a data center that it can manage. It uses an API to compose logical servers from the server, disk and SSD resources under its control throughout the data center.
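To illustrate the resource-pool idea (not DriveScale’s actual composer), here’s a toy allocator that builds a logical server from discovered pools of servers and drives, then returns the resources when the application finishes:

```python
# Toy resource-pool composer: carve a logical server out of free pools,
# then return the resources when done. Not DriveScale's implementation.

class Composer:
    def __init__(self, servers: set[str], drives: set[str]):
        self.free_servers = set(servers)   # discovered compute pool
        self.free_drives = set(drives)     # discovered disk/SSD pool

    def compose(self, num_drives: int) -> dict:
        server = self.free_servers.pop()
        drives = {self.free_drives.pop() for _ in range(num_drives)}
        return {"server": server, "drives": drives}

    def decompose(self, logical: dict) -> None:
        self.free_servers.add(logical["server"])
        self.free_drives |= logical["drives"]

pool = Composer({"s1", "s2"}, {f"d{i}" for i in range(48)})
node = pool.compose(num_drives=24)   # logical server with 24 drives
# ... run the application ...
pool.decompose(node)                 # resources go back to the pools
```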

Using Ethernet switching any storage resource (SAS disk, SAS SSD or NVMe SSD) can be connected to any server operating in the data center and be used to run any application.

There’s a lot more to DriveScale software. They don’t sell hardware, but have a number of system integrators (like Dell) that sell their own hardware and supply DriveScale software to run a data center.

The podcast runs ~44 minutes. The GreyBeards could have talked with Tom and Brian for hours, and Brian’s very funny. They were extremely knowledgeable and have been around the IT industry almost since the beginning of time. They certainly changed the definition of composable infrastructure for both of us, which is hard to do. Listen to the podcast to learn more.

Tom Lyon, Co-Founder and Chief Scientist

Tom Lyon is a computing systems architect, a serial entrepreneur and a kernel hacker.

Prior to founding DriveScale, Tom was founder and Chief Scientist of Nuova Systems, a start-up that led a new architectural approach to systems and networking. Nuova was acquired in 2008 by Cisco, whose highly successful UCS servers and Nexus switches are based on Nuova’s technology.

He was also founder and CTO of two other technology companies. Netillion, Inc. was an early promoter of memory-over-network technology. At Ipsilon Networks, Tom invented IP Switching. Ipsilon was acquired by Nokia and provided the IP routing technology for many mobile network backbones.

As employee #8 at Sun Microsystems, Tom was there from the beginning, where he contributed to the UNIX kernel, created the SunLink product family, and was one of the NFS and SPARC architects. He started his Silicon Valley career at Amdahl Corp., where he was a software architect responsible for creating Amdahl’s UNIX for mainframes technology.

Brian Pawlowski, CTO

Brian Pawlowski is a distinguished technologist, with more than 35 years of experience in building technologies and leading teams in high-growth environments at global technology companies such as Sun Microsystems, NetApp and Pure Storage.

Before joining DriveScale as CTO, Brian served as vice president and chief architect at Pure Storage, where he focused on improving the user experience for the all-flash storage platform provider’s rapidly growing customer base. He also was CTO at storage pioneer NetApp, which he joined as employee #18.

Brian began his career as a software engineer for a number of well-known technology companies. Early in his days as a technologist, he worked at Sun, where he drove the technical analysis and discussion on alternate file system technologies. Brian has also served on the board of trustees for the Anita Borg Institute for Women and Technology and as a member of the board at the Linux Foundation.

Brian studied computer science at Arizona State University, physics at the University of Texas at Austin, as well as physics at MIT.

79: GreyBeards talk AI deep learning infrastructure with Frederic Van Haren, CTO & Founder, HighFens, Inc.

We’ve talked with Frederic before (see: Episode #33 on HPC storage) but since then, he has worked for an analyst firm and now he’s back on his own again, at HighFens. Given all the interest of late in AI, machine learning and deep learning, we thought it would be a great time to catch up and have him shed some light on deep learning and what it needs for IT infrastructure.

Frederic has worked on HPC / Big Data / AI / IoT solutions in the speech recognition industry, providing speech recognition services for some of the largest organizations in the world. As I understand it, the last speech recognition AI application he worked on implemented deep learning.

A brief history of AI

Frederic walked the Greybeards through the history of AI from the dawn of computing (1950s) until the recent emergence of deep learning (2010).

He explained that, early on, one could implement a chess-playing program using hand-coded rules based on a chess expert’s playing technique. Later, when machine learning came out, one could use statistical analysis on multiple games and limited rule creation to teach an AI machine learning system how to play chess. With deep learning (DL), all you have to do now is feed a DL model all the games you have and it learns how to play chess well all by itself. No rule making needed.
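As a concrete (and very simplified) picture of that last step, here’s a toy sketch of training a network on recorded games: no hand-coded rules, just (position, move) examples. The board and move encodings, sizes, and the random stand-in data are placeholders, not a real chess engine.

```python
# Toy sketch of the deep learning approach: fit a model to
# (position, move) pairs from past games instead of writing rules.
import torch
import torch.nn as nn

BOARD_FEATURES = 64 * 12   # one plane per piece type per square (one-hot)
NUM_MOVES = 4096           # simplistic from-square x to-square encoding

model = nn.Sequential(
    nn.Linear(BOARD_FEATURES, 512), nn.ReLU(),
    nn.Linear(512, NUM_MOVES),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a corpus of recorded games: random positions and moves.
positions = torch.randn(1024, BOARD_FEATURES)
moves = torch.randint(0, NUM_MOVES, (1024,))

for epoch in range(10):     # training loop: no rules, just data
    opt.zero_grad()
    loss = loss_fn(model(positions), moves)
    loss.backward()
    opt.step()
```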

AI DL training and deployment infrastructure

Frederic described some of the infrastructure and data needs for various phases of an industrial scale, AI DL workflow.

Training deep learning models takes data, and the more, the better. Gathering/saving large amounts of data used for DL training is a massive write workload, and at the end of that process, hopefully you have PBs of data to work with.

Selecting DL training data from all those PBs, involves a lot of mixed read and write IO. In the end, one has selected and extracted the data to use to train your DL models.

During DL training, IO needs are all about heavy data read throughput. But there’s more: in the latter half of the talk, Frederic talked about the need to keep expensive GPU cores busy, which requires sophisticated caching or Tier 0 storage supporting low-latency IO.
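A common way to attack that GPU-starvation problem, sketched below in PyTorch, is to overlap storage reads with compute via parallel prefetching workers; the dataset here is synthetic and the knob settings are illustrative, not recommendations from the episode.

```python
# Sketch of keeping GPUs fed during training: overlap storage reads with
# compute using parallel prefetching workers. Synthetic data stands in
# for samples that would really be read from (cached) Tier 0 storage.
import torch
from torch.utils.data import DataLoader, Dataset

class TrainingSet(Dataset):
    def __init__(self, n=100000):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        # A real pipeline would read a sample from fast storage here.
        return torch.randn(3, 224, 224), i % 1000

loader = DataLoader(
    TrainingSet(),
    batch_size=256,
    num_workers=8,        # parallel readers hide storage latency
    pin_memory=True,      # page-locked buffers speed host-to-GPU copies
    prefetch_factor=4,    # each worker keeps 4 batches in flight
)

for images, labels in loader:
    pass  # GPU compute would consume batches here without stalling on IO
```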

Ray’s been doing a lot of blogging and other work on AI machine and deep learning (e.g., see Learning machine learning – parts 1, 2, & 3) so it was great to hear from Frederic, a real practitioner of the art. Frederic (with some of Ray’s help) explained the deep learning training process. But it wasn’t detailed enough for Howard, so per Howard’s request, we went deeper into how it really works.

Once you have a DL model trained and working within specifications (e.g., prediction accuracy), Frederic said deploying DL models into production involves creating two separate clusters. One devoted to deep learning model inferencing, which takes in data from the world and performs inferencing (prediction, classification, interpretations, etc.) and the other uses that information for model adaption to fine tune DL models for specific instances.

Adaption and inferencing were both read and write IO workloads, and the performance of this IO was dependent on a specific model’s use.

Model adaption would personalize model predictions for each and every person, car, genotype, etc. This would be done periodically (based on SLAs, e.g. every 4 hrs). After that, a new, adapted model could be introduced into production, adapted for that specific person/car/genotype.

If the adaption applied more generally, that data and its human-machine validated/vetted prediction, classification, interpretation, etc. would be added back into the DL model training set to be used the next time a full model training pass was to be done. Frederic said AI DL model training is never done.

Sometime later, all this DL training, production and adaption data needs to be archived for long term access.

We then discussed the recent offerings from NVIDIA and major storage vendors that package up a solution for AI deep learning. It seems we are seeing another iteration of Converged Infrastructure, only this time for AI DL.

Finally, over the course of Ray’s AI DL education, he had come to the belief that AI deep learning could be applied by anyone. Frederic corrected Ray stating that AI deep learning should be applied by anyone.

The podcast runs ~44 minutes. Frederic’s been an old friend of Howard’s and Ray’s since before the last podcast. He’s one of the few persons in the world the GreyBeards know with real-world experience deploying AI DL at industrial scale. Frederic’s easy to talk with and very knowledgeable about the intersection of AI DL and IT infrastructure. Howard and I had fun talking with him again on this episode. Listen to the podcast to learn more.

Frederic Van Haren

Frederic Van Haren is the Chief Technology Officer @ HighFens. He has over 20 years of experience in high tech and is known for his insights in HPC, Big Data and AI from his hands-on experience leading research and development teams. He has provided technical leadership and strategic direction in the Telecom and Speech markets.

He spent more than a decade at Nuance Communications building large HPC and AI environments from the ground up and is frequently invited to speak at events to provide his vision on the HPC, AI, and storage markets. Frederic has also served as the president of a variety of technology user groups promoting the use of innovative technology.

As an engineer, he enjoys working directly with engineering teams from technology vendors and on challenging customer projects.

Frederic lives in Massachusetts, USA but grew up in the northern part of Belgium where he received his Masters in Electrical Engineering, Electronics and Automation.