CTERA, Cloud NAS on steroids

We attended SFD22 last week and one of the presenters was CTERA (for more information, please see the SFD22 videos of their session), discussing their enterprise-class cloud NAS solution.

We’ve heard a lot about cloud NAS systems lately (see/listen to our GreyBeards on Storage podcast with LucidLink from last month). Cloud NAS systems provide a NAS (SMB, NFS, and S3 object storage) front end that uses cloud or on-prem object storage to hold customer data, which is accessed through (virtual or hardware) caching appliances.

These differ from file synch and share in that Cloud NAS systems

  • Don’t copy most or all customer data to user devices; the only data that resides locally is metadata and the user’s or site’s working set (of files).
  • Do cache working set data locally to provide faster access
  • Do provide NFS, SMB and S3 access along with user drive, mobile app, API and web based access to customer data.
  • Do provide multiple options to host user data in multiple clouds or on prem
  • Do allow for some levels of collaboration on the same files

Although admittedly, the boundary lines between synch and share and Cloud NAS are starting to blur.

CTERA is a software defined solution. But they also offer a whole gaggle of hardware options for edge filers, ranging from a smartphone-sized, 1TB flash cache for the home office user to a multi-RU media edge server with 128TB of hybrid disk-SSD storage for 8K video editing.

They have HC100 edge filers, X-Series HCI edge servers, branch-in-a-box, edge and media edge filers. These latter systems have specialized support for MacOS and Adobe suite systems. For their HCI edge systems they support Nutanix, SimpliVity, HyperFlex and VxRail systems.

CTERA edge filers/servers can be clustered together to provide higher performance and HA. This way customers can scale out their filers to supply whatever level of IO performance they need. And CTERA allows customers to segregate (file workloads/directories) to be serviced by specific edge filer devices to minimize noisy neighbor performance problems.

CTERA supports a number of ways to access cloud NAS data:

  • Through (virtual or real) edge filers which present NFS, SMB or S3 access protocols
  • Through the use of CTERA Drive on MacOS or Windows desktop/laptop devices
  • Through a mobile device app for IOS or Android
  • Through their web portal
  • Through their API

CTERA uses an HA, dual-redundant Portal service, a cloud (or on-prem) service that provides the CTERA metadata database, edge filer/server management and other services, such as web access, cloud drive endpoints, mobile apps, API access, etc.

CTERA uses S3- or Azure-compatible object storage as its back-end, source-of-truth repository to hold customer file data. CTERA currently supports 36 on-prem and in-cloud object storage services. Customers can have their data in multiple object storage repositories. Customer files are mapped one-to-one to objects.

CTERA offers global dedupe, virus scanning, policy-based scheduled snapshots and end-to-end encryption of customer data. Encryption keys can be held in the Portals or in a KMIP service that’s connected to the Portals.

CTERA has impressive data security support. As mentioned above, they provide end-to-end data encryption, but they also support dark sites and zero-trust authentication, and they are DISA (Defense Information Systems Agency) certified.

Customer data can also be pinned to edge filers. Moreover, specific customer (directory/sub-directory) data can be hosted on specific buckets so that data can:

  • Stay within specified geographies,
  • Support multi-cloud services to eliminate vendor lock-in

CTERA file locking is what I would call hybrid. They offer strict consistency for file locking within sites but eventual consistency for file locking across sites. There are performance tradeoffs for strict consistency, so by using a hybrid approach, they offer most of what the world needs from file locking without incurring the performance overhead of strict consistency across sites. For another way to support hybrid file locking consistency, check out LucidLink’s approach (see the GreyBeards podcast with LucidLink above).
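To make the tradeoff concrete, here’s a minimal sketch of what hybrid locking semantics can look like: lock acquisition is atomic within a site, while lock state is only propagated asynchronously to other sites. This is purely illustrative, not CTERA’s implementation, and all names are made up.

```python
import threading

class HybridLockService:
    """Illustrative sketch: strict locking within a site,
    eventual (asynchronous) lock propagation across sites."""

    def __init__(self, site_id, remote_sites):
        self.site_id = site_id
        self.remote_sites = remote_sites          # other HybridLockService instances
        self.local_locks = {}                     # path -> (site, user) holding the lock
        self.mutex = threading.Lock()

    def acquire(self, path, user):
        # Strict consistency inside the site: this check-and-set is atomic.
        with self.mutex:
            if path in self.local_locks:
                return False                      # someone here already holds it
            self.local_locks[path] = (self.site_id, user)
        # Eventual consistency across sites: tell the others in the background.
        threading.Thread(target=self._propagate, args=(path, user), daemon=True).start()
        return True

    def _propagate(self, path, user):
        for site in self.remote_sites:
            site.remote_lock_hint(path, self.site_id, user)   # may arrive late

    def remote_lock_hint(self, path, site_id, user):
        # A hint, not a guarantee: two sites can briefly believe they both hold
        # the lock, which is the cross-site tradeoff described above.
        with self.mutex:
            self.local_locks.setdefault(path, (site_id, user))
```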

At the end of their session Aron Brand got up and took us into a deep dive on select portions of their system software. One thing I noticed is that the portal is NOT in the data path. When an edge filer wants to access a file, the Portal provides credential verification and points the filer(s) to the appropriate object, and the filers take it from there.

CTERA’s customer list is very impressive. It seems that many (50 of the worldwide Fortune 500) large enterprises are customers of theirs. Some of the more prominent include GE, McDonald’s, the US Navy, and the US Air Force.

Oh, and besides supporting potentially 1000s of sites and 100K users in the same namespace, they also have intrinsic support for multi-tenancy and offer cloud data migration services. For example, one can use Portal services to migrate cloud data from one cloud object storage provider to another.

They also mentioned they are working on supplying K8S container access to CTERA’s global file system data.

There’s a lot to like in CTERA. We hadn’t heard of them before but they seem focused on enterprises with lots of sites, boatloads of users and massive amounts of data. It seems like our kind of storage system.

Comments?

Storageless data!?

I (virtually) attended SFD21 earlier this year, and a company called Hammerspace presented, discussing their vision for storageless data (see videos of their session at SFD21).

We’ve talked about them before, but now they have something to offer the enterprise – data mobility, or storageless data.

The white board after David Flynn’s session at SFD8

In essence, customers want to be able to run their workloads wherever it makes the most sense: on prem, in a private cloud, or in the public cloud, among other places. Historically, it’s been relatively painless to transfer an application’s binary from one data center to another, to a managed service provider or to the public cloud.

And with VMware Cloud Foundation, Kubernetes, Docker and Linux operating everywhere, the runtime environment and other OS services that applications depend on are pretty much available in any of those locations. So now customers have 2 out of 3, what’s left?

It’s all about the Data

Data can take a very long time to move around a data center, let alone across the web between locations. MBs and even GBs of data may be relatively painless to move, but TBs of data can take days, and moving PBs of data is suicidal.

For instance, when we signed up for a globally accessible file synch and share storage service, I probably had 75GB or so of data I wanted managed. It took literally several days to upload. Yes, I didn’t have data center class internet access, but even that might have only sped things up 2-5X. Now try this with 1TB or more and it’s pretty much going to take days, and you can easily multiply that by 1,000 to do a PB or more. And that’s assuming the transfer continues without disruption.
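To put some rough numbers behind this (my own back-of-the-envelope arithmetic, not anything from Hammerspace), here’s the kind of calculation involved; the link speeds and efficiency factor below are assumptions.

```python
def upload_days(capacity_tb, link_mbps, efficiency=0.7):
    """Rough upload time in days for a given capacity and link speed.
    efficiency accounts for protocol overhead, retries and competing traffic."""
    bits = capacity_tb * 1e12 * 8                 # TB -> bits (decimal TB)
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86_400                       # seconds per day

# 75GB, 1TB and 1PB over a ~5 Mbps home uplink vs a 1 Gbps data center link
for tb in (0.075, 1, 1000):
    for mbps in (5, 1000):
        print(f"{tb:8.3f} TB over {mbps:5d} Mbps ≈ {upload_days(tb, mbps):8.1f} days")
```

Even at data center link speeds, a PB-scale transfer runs to months, which is the point: moving the compute (and only the working set of data) beats moving all the data.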

So what’s Hammerspace’s storageless data got to do with any of this?

Hammerspace’s idea

It’s been sort of a ground truth of storage, since I’ve been in the industry (40+ years now), that not all random IO data is accessed at the same frequency. That is, some data is accessed a lot and other data accessed hardly at all. That’s why DRAM caching of data can be so important to a host or storage system.

Similarly for sequential access, if you can get the first blocks of data to the host and then stream the rest in time, a storage system can appear to read fast.

Now I won’t go into all the tricks of good data caching (the secret sauce of every vendor’s enterprise storage), but if you cache data well, you don’t actually need to transfer all of an application’s data to the location it’s running in; you can appear as if all the data is there when only some of it is actually present.

Essentially, Hammerspace creates a global file system for your data, across any locations you wish to use it, with great caching, optimized data transfer and with real storage behind it. Servers running your applications mount a Hammerspace file system/share that stitches together all the file storage behind it, across all the locations it’s operating in.

An application request goes to Hammerspace and if the data is not present there, Hammerspace goes and fetches and caches blocks of data as fast as it can. This will let the application start performing IO while the rest of the data is being cached and if allowed, moved to the new location.
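This is essentially the classic read-through caching pattern. Below is a generic sketch of that pattern, not Hammerspace’s code; `fetch_block` stands in for whatever call pulls a block from the remote source of truth.

```python
from collections import OrderedDict

class ReadThroughCache:
    """Generic read-through block cache sketch (illustrative, not Hammerspace's code)."""

    def __init__(self, fetch_block, capacity_blocks=1024):
        self.fetch_block = fetch_block            # callable that pulls a block remotely
        self.capacity = capacity_blocks
        self.cache = OrderedDict()                # (path, block_no) -> bytes, in LRU order

    def read(self, path, block_no):
        key = (path, block_no)
        if key in self.cache:                     # cache hit: serve locally
            self.cache.move_to_end(key)
            return self.cache[key]
        data = self.fetch_block(path, block_no)   # miss: fetch from the remote location
        self.cache[key] = data
        if len(self.cache) > self.capacity:       # evict the least recently used block
            self.cache.popitem(last=False)
        return data
```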

Storage can be unmanaged by Hammerspace, read-write managed by Hammerspace or read-only managed by Hammerspace. Customers who want the full Hammerspace storageless data functionality would use read-write mode. Customers who want to continue to access their data directly but want global read access to it would use read-only mode.

Once read-write storage is assigned to it, Hammerspace grabs all the file metadata on that storage system. Once this process completes, customers no longer access this file data directly, but rather must access it through Hammerspace. At that point, the data is essentially storageless and can be accessed wherever Hammerspace services are available.

How does Hammerspace do it

Behind the scenes is a lot of technology, some of which is discussed in the SFD21 sessions (see videos above). Hammerspace is not in the data path but rather in the control path of data access. But it does orchestrate data movement, and it does route data IO requests from an application to where the data (currently) resides.

Hammerspace also supports Service Level Objectives (SLOs) for performance, geolocation, security, data protection options, etc. These can be used to keep data in particular regions, to encrypt data (using KMIP), to ensure high performance, high data availability, etc.

Hammerspace can manage data across 32 separate sites. It takes a couple of hours to deploy per site. Each site has a Hammerspace metadata service with standalone access to all data within that site. For example, standalone access could be used in the event of a network loss.

At the moment, they support eventual consistency and don’t support a global lock service. Rather, Hammerspace uses a conflict resolution service in the event data is overwritten by two or more applications. Any file that was being updated in two or more locations would be flagged as in conflict; Hammerspace would provide snapshots of the various versions of the file(s), and some sort of manual intervention would be required to resolve the conflict. Each location would have (temporary) access to the data it had written directly, but at some point the conflict would need resolution.
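Hammerspace’s actual mechanism isn’t described in this much detail, but as a hedged illustration, this kind of conflict detection can be sketched with per-site version counters: if more than one site has written past the last common version, the file is flagged, each divergent copy is preserved, and resolution is left to a person.

```python
def detect_conflict(base_version, site_updates):
    """site_updates: {site_name: version_written_at_that_site}.
    Illustrative only: flag a conflict when more than one site wrote past the
    last common version, keep a snapshot label per divergent copy, and defer
    to manual resolution."""
    divergent = {site: v for site, v in site_updates.items() if v > base_version}
    if len(divergent) <= 1:
        return {"conflict": False, "winner": next(iter(divergent), None)}
    return {
        "conflict": True,
        "snapshots": [f"snapshot-of-{site}-version-{v}" for site, v in divergent.items()],
        "resolution": "manual",
    }

# Example: two sites both updated the file past version 7 -> conflict flagged.
print(detect_conflict(7, {"nyc": 9, "london": 8, "tokyo": 7}))
```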

They also support NFS and SMB file access on the front end and use object storage services for backend data. Data is copied on demand to the local site’s storage when accessed, based on the SLO policies in effect for it. During data movement it is copied up, temporarily, into objects on AWS, Microsoft Azure, or GCP, and then copied down to the location it’s being moved to. I believe this temporary object data is encrypted and compressed. Hammerspace supports KMIP key providers.

Pricing for Hammerspace is on a managed capacity basis. But anyone can use Hammerspace for up to 10TB for free. Hammerspace is available in AWS marketplace for configuration there.

~~~~

Well, it’s been a long time coming, but it appears to be here. Any customers wanting hybrid-cloud operations or global access to their data would be remiss not to check out Hammerspace.

[Edited after posting, The Eds.]

Data Science storage with NetApp’s Python Toolkit

I’ve got a book someplace (yet to be read completely) with the title Data science with Python. At Storage Field Day 21 last month, NetApp discussed a number of their product offerings, one of which was their Python SDK to manage NetApp storage for data scientists and AI researchers (see videos of their sessions here).

I’m not a data science expert, but a Python SDK for storage management makes so much sense to me that I just had to take a look. Their GitHub repo is available online and they call it the NetApp Data Science Toolkit.


The challenge for data science and AI researchers is that it’s all about the data. How do you find the data, gain access to it, clean it, and process it quickly so you can do it all over again? Having some sort of Python SDK that allows you to do rudimentary storage volume configuration, access, snapshotting, etc. can make these sorts of pipelines self-service rather than requiring back and forth with operations to get volumes configured, mounted, and services established.

NetApp Data Science Toolkit

The NetApp Data Science Toolkit can be pip-installed into anything running Python 3.5 or later and can be invoked via a command line utility or as a library of Python functions. The command line utility and the Python calls appear to be functionally equivalent.

pip3 install netapp-ontap pandas tabulate requests boto3

The Toolkit must be configured for your environment and NetApp storage, but once that’s done you’re ready to rock and roll.

MLOps pipeline from Google

The command line is invoked with

./ntap_dsutil.py

Following that command are subcommands and parameters specifying which ONTAP operation you want to perform and how it is to be done. Python function calls seem to follow the same parameterization as the CLI.

The CLI and Python function calls can run on MacOS or any Linux distribution. There’s a paper that discusses how to use the SDK to accelerate AI pipelines, as well as another ReadMe that describes its use in Kubernetes with NetApp’s Trident CSI plugin.

The toolkit supports NetApp AFF, FAS, Cloud Volumes ONTAP and ONTAP Select systems running ONTAP 9.7 or later. For a current list of available functions, check out the toolkit, but as an overview, these ONTAP functions were available:

  • For Volume Management – cloning, creating, listing all, deleting or mounting a volume,
  • For Snapshot Management – creating, deleting, listing and restoring snapshots (of volumes)
  • For Data Fabric Management – listing all cloud sync relationships, triggering a cloud sync operation, multi-thread pulling a bucket down from S3 storage (into a NetApp volume directory), pulling a single object down from S3 into a file, pushing the contents of a directory to bucket on S3 and pushing a file into an object on S3.
  • For Advanced Data Fabric Management – listing all SnapMirror relationships and triggering a sync operation for an existing SnapMirror relationship.

This is a pretty comprehensive list of NetApp ONTAP storage functionality. Having all this under the control of Python and a CLI for a data scientist or AI researcher seems pretty impressive.
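To make the self-service idea concrete, here’s a minimal sketch of what a pipeline step could look like using the toolkit as a Python library. The function and argument names below are hypothetical stand-ins, not the toolkit’s documented API; the actual signatures are in the toolkit’s GitHub README.

```python
# Illustrative only: function and argument names below are hypothetical stand-ins,
# not the toolkit's documented API -- check the NetApp Data Science Toolkit README
# on GitHub for the real function names and signatures.
from ntap_dsutil import create_volume, create_snapshot, clone_volume  # hypothetical names

# 1. Carve out a working volume for a training run and mount it locally.
create_volume(volume_name="project1_train", volume_size="10TB",
              mountpoint="/mnt/project1_train")

# 2. Snapshot the dataset before cleaning/feature engineering, so the run is reproducible.
create_snapshot(volume_name="project1_train", snapshot_name="raw_ingest_v1")

# 3. Clone that snapshot for a teammate's experiment without copying any data.
clone_volume(new_volume_name="project1_exp2", source_volume_name="project1_train",
             source_snapshot_name="raw_ingest_v1", mountpoint="/mnt/project1_exp2")
```

The point isn’t the exact names; it’s that volume provisioning, snapshotting and cloning become a few lines in a notebook or pipeline script rather than a ticket to operations.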

Of course, not every option for all those functions is supported, but it’s just a start (V1.1 of the toolkit). I’m sure there’s more to come, especially if customers demand it.

However, it would be nice to have an ONTAP simulator available with the toolkit that could be used to test out your Python code and CLI commands before using real NetApp storage. This would be very useful for those of us lacking our own test ONTAP storage, just hanging around on prem or in the cloud.

As Python becomes the language of choice for AI and now data science, it seems only natural that storage and data protection companies would start releasing Python SDKs/APIs for their product functionality. That way AI and data science researchers could embed any storage functionality they needed directly into their Python code or Jupyter Notebook application.

Having a Python SDK for NetApp ONTAP storage means using data storage in your MLOps or data science pipelines is that much easier.

Great move by NetApp. Ok where’s the rest of the industry?


cOAlition S requires open access to funded research

I read a Science article this last week (A new mandate highlights costs and benefits of making all scientific articles free) about a group of funding organizations that have come together under a mandate, called Plan S, requiring open access to all peer-reviewed research they fund. The list of organizations in cOAlition S is impressive, including national R&D funding agencies from the UK, Ireland, Norway, and a number of other countries, charitable R&D funders such as the WHO, the Wellcome Trust, and the Bill & Melinda Gates Foundation, and the group is also being funded by the EU. Plan S takes effect this year.

Essentially, all research funded by these organizations must be immediately published in an open access forum or open access journal, or be freely available in an open access section of a publisher’s website, which means it can be read for free by anyone worldwide with access to the web. Authors and institutions will retain copyright for the work and the work will be published under an open access license such as the CC BY (Creative Commons Attribution) license.

Why open access is important

At this blog, we frequently find ourselves writing about research that is only available via a paid subscription or on a pay-per-article basis. However, sometimes, if we search long enough, we find a duplicate of the article published in pre-print form on some preprint server or in an open access journal.

We have written about open access journals before (see our New Science combats Coronavirus post). Much of what we do on this blog would not be possible without open access resources like PLoS, bioRxiv, and PubMed.

Open access mandates are trending

Open access mandates have been around for a while now. And even the US Gov’t got into the act, mandating that all research funded by the NIH be open access by 2008, with the Depts of Agriculture and Energy following later (see the wikipedia article on Open access mandates).

In addition, given the pandemic emergency, many research publishers like Nature and Elsevier made any and all information about the coronavirus freely accessible on their websites.

Impacts on the R&D research publishing business model

Although research is funded by public organizations such as charities and government agencies, prior to open access mandates most research was published in peer-reviewed journals which charged a fee for access. For many research organizations, those fees were a cost of doing research. If you were an independent researcher or at an institution that couldn’t afford these fees, attempting to do cutting edge research was impossible without this access.

Yes, in some cases those journal repositories waived these fees for deserving institutions and organizations, but this wasn’t the case for individual researchers. Or, if you were truly diligent, you could request a copy of a paper from an author and wait.

Of course, journal publishers have real expenses they need to cover, as well as a reasonable profit to make. But due to business consolidation, there were fewer independent journals around and, as a result, they charged bundled license fees for vast swathes of research articles. Such a wide bundle may or may not be of interest to an individual or an institution. And with consolidation, profits became a more significant consideration.

So open access mandates often included funding to cover the fees publishers charge to supply open access. Such fees varied widely, so open access mandates also began to require that fees be published along with a description of how prices were calculated. The hope was that doing so would make such costs more transparent.

Impacts on authors of research articles

Somewhere there’s an aphorism for researchers that says “publish or perish“, which means you must publish research in order to become a recognized expert in your field. Recognition is often the main driver behind better academic employment and more research funding.

However, it’s not just about the volume of published papers; the quality of the research also matters. And the more highly regarded publishing outlets have an advantage here, in that they are de facto gatekeepers to what’s published in their journals. As such, where you publish can often lend credibility to any research.

Another thing has changed over the last few decades: judging the quality of research has become more quantitative. Nowadays, research quality also depends on the number of citations it receives. The more popular a publisher is, the more readers it has, which increases the possibility of citations.

Thus, most researchers try to publish their best work in highly regarded journals. And of course, these journals have a high cost to provide open access.

Successful research institutions can afford to pay these prices but those further down the totem pole cannot.

Most mandates come with additional funding to help pay the cost of supplying open access. But they also require these costs to be published and justified, in the belief that doing so will lend some transparency to them.

So the researcher is caught in the middle. Funding organizations want open access to research they fund. And publishers want to be paid a profit for that access.

History of research publication

Nature magazine first started publishing research in 1869, Science magazine first published in 1880, and the Royal Society first published research in 1665. So publishing research has been going on for over 350 years, and at least as a for-profit business model, since the mid-1800s.

Research prior to being published in journals was only available in books. And more than likely, the author of the research had to pay to have a book published and the publisher made money only when those books were sold. And prior to that, scientific research was mostly only available in a course of study, also mostly paid for by the student.

So science has always had a cost to access. What open access mandates are doing is moving this cost to something added to the funding of research.

Now if open access can only solve the reproducibility crisis in science we could have us a real scientific revolution.

Comments?


Google Docs as subversive technology

Read an article the other day in TechReview (How Google Docs became the social media of the resistance) about how Google Docs was being used to help coordinate and promote the resistance surrounding the recent Black Lives Matter movement.

The article points out that Google Docs are being used to share resources around anti-racism, email templates, bail resources, pro-bono legal assistance, etc. to help inform and coordinate the movement’s actions and activities.

Social unrest, the killer app for Google Docs

Protests could be the killer app for shared Google Docs. Facebook and other social media sites are better used for documenting the real time interactions during protests, but coordinating, motivating and informing the protests and protestors is better accomplished using Google Docs, a simple web-based document editor and sharing service.

In pre-internet days, I suppose all this would have been done with hand-copied, typeset, carbon-copied or photocopied theses/pamphlets/fliers/printouts. From Luther’s list of grievances nailed to the cathedral door, to the Common Sense pamphlet during the American Revolution, to countless fliers during the 60’s protests, all these used the technology of the day to promote protest and revolution.

Nowadays all it takes is a shared Google Doc and a Google (drive) account.

Google Docs are everywhere

The high school that one of my kids went to uses Google Docs for sharing and submitting homework assignments.

Google Docs are shareable because they are hosted on Google Drives. Docs is just one component of the Google (G-)suite of web based apps that includes Google Sheets (spreadsheets), Google Slides (presentations) and Google Drives (object storage).

Moreover, any Google Doc, Sheet or Slide file can be shared and edited by anyone. Google services like Docs, Sheets, and Slides are usable anonymously; anyone online can make a change to a shareable/editable doc, sheet, or slide, and their changes are automatically saved to the Google Drive file.

Another thing is that any Google Doc can be shared with just a URL. And they can also be made read-only (or uneditable) by their owner at any time. And of course any Google Doc is backed up automatically by Google drive services.

Owners of documents can revert to previous versions of a Doc file. So if someone incorrectly (or maliciously) changes a doc, the originator can revert it back to a prior version.

Why not use a Wiki

I would think a Wiki would be better to use to coordinate, motivate and inform a protest. Once a Wiki is set up and started, it can be much easier to navigate, as easy to update, and can become a central repository of all information about a movement/protest.

But it takes a lot more effort and IT-web knowledge to set up a Wiki. And it has to have its own web address.

Another problem with a Wiki, is that it can become a central point which can be more easily attacked or disturbed. And Wiki edit wars are pretty common, so they too are not immune to malicious behavior.

But with 10s to 100s of Google Docs spread across a similar number of user Google Drives, Google Docs are a much more distributed resource, less prone to a single point of attack. They can be created and edited almost on a whim. And the only thing it takes is a Google login and Google Drive.

~~~~

Photocopiers were a controlled technology in the old Soviet Union, and even today Facebook and Twitter are restricted in China and other authoritarian states.

But Google Docs seems to have become a much more ubiquitous tool and the latest technology to aid, abet and support social resistance.


New science used to combat COVID-19 disease

Read an article last week in Science Magazine (A completely new culture of doing research…) on how the way science is done to combat disease has changed over the last few years.

In the olden days (~3-5 years ago), disease outbreaks would generate a slew of research papers to be written, submitted for publication and there they would sit, until peer-reviewed, after which they might get published for the world to see for the first time. Estimates I’ve seen say that the scientific research publishing process takes anywhere from one month (very fast) to 4-8 months, assuming no major revisions are required.

With the emergence of the Zika virus and recent Ebola outbreaks, more and more biological research papers have become available through pre-print servers. These are websites which accept research before publication (pre-print), posting the research for all to see, comment on and understand.

Open science via pre-print

Most of these pre-print servers focus on specific areas of science. For example bioRxiv is a pre-print server focused on Biology and medRxiv is for health sciences. On the other hand, arXiv is a pre-print server for “physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.” These are just a sampling of what’s available today.

In the past, scientific journals would not accept research that had been published before. But this has slowly changed as well. Now most scientific journals have policies for pre-print publication and will also publish pre-printed research if they deem it worthwhile (see the wikipedia article List of academic journals by preprint policies).

As of today (9 March 2020), on bioRxiv there are 423 papers with the keyword “coronavirus” and 52 papers with the keyword “COVID-19”; some of these may be the same. The newest (Substrate specificity profiling of SARS-CoV-2 Mpro protease provides basis for anti-COVID-19 drug design) was published on 3/7/2020. The last sentence in its abstract says “The results of our work provide a structural framework for the design of inhibitors as antiviral agents or diagnostic tests.” The oldest on bioRxiv is dated 23 January 2020. Similarly, there are 326 papers on medRxiv with the keyword “coronavirus”, the newest published 5 March 2020.

Pre-print research is getting out in the open much sooner than ever before. But the downside is that pre-print papers may have serious mistakes or omissions in them, as they are not peer-reviewed. So the cost of rapid openness is the possibility that some research may be outright wrong, badly done, or lead researchers down blind alleys.

However, the upside is that any bad research can be vetted sooner if it’s open to the world. We see similar problems with open source software: some of it can be buggy or outright failure-prone. But because it’s open, if it’s popular, many people will see the problems (or bugs) and fixes will rapidly be created to solve them. With pre-print research, the comment list associated with a pre-print can be long and will often identify problems in the research.

Open science through open journals

In addition to pre-print servers, we are also starting to see the increasing use of open scientific journals such as PLOS to publish formal research.

PLOS has a number of open journals focused on specific arenas of research, such as PLOS Biology, PLOS Pathogens, PLOS Medicine, etc.

Researchers or their institutions have to pay a nominal fee to publish in PLOS, but all PLOS publications are fully peer-reviewed by experts. Unlike research from, say, Nature, IEEE or other scientific journals, PLOS papers are free to anyone and are widely available. (However, I just saw that SpringerNature is making all their coronavirus research free.)

Open science via open data(sets)

Another aspect of scientific research that has undergone change of late is the sharing and publication of data used in the research.

Nature has a list of recommended data repositories. All these data repositories seem to be hosted by FAIRsharing at the University of Oxford and run by their Data Readiness Group. They list 1,349 databases, of which the vast majority (1,250) are for the natural sciences, with over 1,380 standards used for data registered with FAIRsharing.

We’ve discussed similar data repositories in the past (please see Data banks, data deposits and data withdrawals, UK BioBank, Big open data leads to citizen science, etc). Having a place to store data used in research papers makes it easier to understand and replicate science.

Collaboration software

The other change to research activities is the use of collaborative software such as Slack. Researchers at UW Madison were already using Slack to collaborate on research, but when the coronavirus outbreak went public, they realized Slack could help here too. So they created a group (or channel) on their Slack site called “Wu-han Clan” and invited 69 researchers from around the world. The day after they created it, they held their first teleconference.

Other collaboration software exists today but Slack seems most popular. We use Slack for communications in our robotics club, blogging group, a couple of companies we work with, etc. Each has a number of invite-only channels, where channel members can post text, (data) files, links and just about anything else of interest to the channel.

Although I have not been invited to participate in Wu-han Clan (yet), I assume they use Slack to discuss and vet (pre-print) research, discuss research needs, and other ways to avert the pandemic.

~~~~

So there you have it. Coronavirus scientific research is happening at warp speed compared to diseases of yore. Technologies to support this sped up research have all emerged over the last five to 10 years but are now being put to use more than ever before. Such technological advancement should lead to faster diagnosis, lower worldwide infection/mortality rates and a quicker medical solution.


All that AI DL training data comes from us

Read a couple of articles over the past few weeks that highlighted something not many of us are aware of: most of the data used to train AI deep learning (DL) models comes from us.

That is, through our ignorance or tacit acceptance of licenses for apps we use every day, and just by walking around and interacting with the world.

The article in Atlantic, The AI supply chain runs on ignorance, talks about Ever, a picture sharing app (like Flickr), where users opted in to its facial recognition software to tag people in pictures. Ever also used that (tagged by machine or person) data to train its facial recognition software which it sells to government agencies throughout the world.

The second article, in Engadget, Colorado College students were secretly used to train AI facial recognition (software), talks about a group using a telephoto security camera that was pointed at a high-traffic area on campus. The data obtained was used to help train an AI DL model to identify facial characteristics from far away.

The article went on to say that gathering photos from people in public places is not against the law. The study was also cleared by the school. The database was not released until after the students graduated but it did have information about the time and date the photos were taken.

But that’s nothing…

The same thing applies to video sharing and photo animation models, podcasting and text speaking models, blogging and written word generation models, etc. All this data is just lying around the web, freely available for any AI DL data engineer to grab and use to train their models. The article which included the image below talks about a new dataset of millions of webpages.

From an OpenAI paper on better language models showing the accuracy of some AI DL models “trained on a new dataset of millions of webpages called WebText.”

Google photo search is scanning the web and has access to any photo posted to use for training data. Facebook, IG, and others have millions of photos that people post online every day, many of which are tagged with information identifying the people in the photos. I’m sure somewhere there’s a clause in a license agreement that says your photos, when posted on our app, no longer belong to you alone.

As security cameras become more pervasive, camera data will readily be used to train even more advanced facial recognition models without your say so, approval or even awareness that it is happening. And this is in the first world, where data privacy and identity security protections are paramount; imagine how the rest of the world’s data will be used.

With AI DL models, it’s all about the data. Yes much of it is messy and has to be cleaned up, massaged and sometimes annotated to be useful for DL training. But the origins of that training data are typically not disclosed to the AI data engineers nor the people that created it.

We all thought China would have a lead in AI DL because of their unfettered access to data, but the west has its own way to gain unconstrained access to vast amounts of data. And we are living through it today.

Yes AI DL models have the potential to drastically help the world, humanity and government do good things better. But a dark side to AI DL models also exist to help bad actors, organizations and even some government agencies do evil.

Caveat usor (May the user beware)

~~~~

Comments?

Photo Credit(s): “Still Watching You” by jhcrow is licensed under CC BY-NC 2.0 

“Computational Photography Homework 1 Results.” by kscottz is licensed under CC BY-NC 2.0

From Language models are unsupervised multi-task learners OpenAI research paper

For data that never rests, NetApp NDAS

NetApp co-founder Dave Hitz announced he was becoming a NetApp Founder Emeritus at the Storage Field Day (SFD18) show. He gave a great session about what he and his Hitz Foundation have been doing (for one example, see our Archeology meets big data post). He also discussed at length what he felt the storage world (and NetApp) must do to address the opportunities of the new cloud world. But this post isn’t about Dave, it’s about NetApp Data Availability Services (NDAS).

NetApp NDAS, currently in Beta but GAing (hopefully) later this year, is an AWS marketplace data orchestration solution that manages primary to secondary to S3 movement for ONTAP data. Essentially, NetApp Data Availability Services extends ONTAP data lifecycle management to AWS cloud. But it’s more than just a way to archive ONTAP data.

NDAS orchestrates Snapmirror services across ONTAP systems and AWS. But once your ONTAP data is in S3 it supplies access to that data for authorized AWS applications and services. That way one can use their ONTAP data to provide data analytics, train AI models, and do just about anything you can do with AWS applications today. By using NDAS, customers can extract more value from their ONTAP data.

NDAS is not just copying data to S3 but is also copying ONTAP metadata, catalogues and other information that provides context for that data. By copying ONTAP catalog information, customers and authorized end users can have file level access to ONTAP data residing in S3 objects.

NDAS today only supports copying data from secondary ONTAP systems to S3. But a future enhancement will expand this to copy primary ONTAP data to S3.

How does NDAS work

NDAS provisions (your) EC2 instances and middleware to read the data from the secondary systems and copy it to S3 buckets you provide. After initial configuration pointing it to your ONTAP secondary storage systems, NDAS will autodiscover all the data available to be copied to the cloud.

NDAS will start cataloguing your ONTAP data. NDAS EC2 instances support the NDAS copy, view and Google-like search processes.

NDAS search presents a simplified file system view into your ONTAP data copied to S3. That way customers can identify data that could be used for AI training or data analytics running in the cloud.

There’s extensive security to ensure that NDAS is properly authorized to access your ONTAP data. Normal S3 security options also apply, such as having the data encrypted on S3. NDAS data is automatically encrypted in flight.

Moreover, NDAS S3 bucket data can be replicated across AWS regions. Also, serverless/Lambda functionality is fully supported from NDAS S3 buckets.

What can it do with the data

AWS applications can access the data directly through NDAS APIs. Or customers can manually extract data they want to further process, using the NDAS GUI to identify and copy data of interest. NDAS essentially creates a small app layer that allows users to view and access the ONTAP data in S3 as a file system.
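As a hedged illustration of the downstream access side, the sketch below uses plain boto3 against the customer-provided S3 bucket that NDAS copies data into; it is not the NDAS API itself, and the bucket and prefix names are made up.

```python
import boto3

# Generic S3 access to the bucket NDAS copies ONTAP data into.
# Bucket name and prefix are hypothetical; the object layout under an NDAS copy
# is NetApp's, so treat this only as a sketch of downstream cloud access.
s3 = boto3.client("s3")

# Browse what has been copied up from the secondary ONTAP systems.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-ndas-copy-bucket", Prefix="secondary-ontap/volume1/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# Pull one object down for local analytics or AI training.
s3.download_file("my-ndas-copy-bucket", "secondary-ontap/volume1/dataset.csv", "/tmp/dataset.csv")
```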

One can have different NDAS AMIs operating in different regions for faster access or to support GDPR compliance requirements. Alternatively, a customer could have one NDAS AMI accessing all their secondary ONTAP instances.

NDAS is intended to provide a data analyst or IT generalist access to ONTAP data. This way AI training and big data analytics applications which run easily in the cloud, can have access to ONTAP data. In this way, customers can more effectively utilize data that IT has been storing and maintaining, since time began.

One NDAS beta customer is an MLB team. They have, over time, instrumented their stadiums to generate lots of data about pitch speed, rotation, ball location as it crosses the plate, etc. The problem is that all this data is siloed in the on-prem or IoT systems that generated it. But the customer wants to use the data to improve players, coaches and the viewer experience. And all that needs tools, applications and software that’s just not available to run in the data center. But with NDAS, all this data is now available to cloud applications.

NDAS is supported by any ONTAP 9.5 or later (FAS, AFF, Cloud ONTAP, ONTAP Select) secondary storage system. ONTAP 9.5 software contains all the services required to support NDAS, including the copy-to-cloud APIs as well as the NDAS proxy, which supplies the secure interface to NDAS operating in the cloud.

NetApp’s NDAS sessions are pretty informative. Anyone interested in finding out more should checkout the videos available on TechFieldDay website and Dave’s session is also worth a view.

For more information on Dave’s session and NDAS check out:

NetApp, Cloudier than ever by Enrico Signoretti (@ESignoretti)

NetApp and the space in between by Dan Frith (@PenguinPunk)

~~~~

Comments?