Can we back up a PB?

Tradition says no way. IT backup history says not on your life. Common sense would say never in a million years.

Most organizations with a PB of data or more depend on remote replication to protect against data center outages or massive data loss. This, of course, costs roughly 2X your original data center. And for some organizations one copy is not enough, so make that ~3X.

I don't know what PB-scale data storage costs these days, but I can't believe it's under a couple million USD in hardware and software, and probably at least another million or so per year in OpEx. Multiply that by 2 or 3X and you're talking real money.

How could backup help?

Well for one, you wouldn't need replicas, so that would cut your hardware and software acquisition costs by a factor of 2 or 3. But backup storage is not free either, so you'd probably need to add back 30-50% of the original data center in hardware and software costs for backups.

You certainly wouldn’t need as many admins. And power for backup storage should also be substantially less. So maybe your OpEx would only be 1.5X in total for the original PB and its backups.
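To make the replication-vs-backup tradeoff concrete, here's a minimal back-of-the-envelope cost model in Python. Only the multipliers (2-3X for replicas, 30-50% extra for backup storage, ~1.5X OpEx) come from the estimates above; the base dollar figures and the function names are hypothetical placeholders.

```python
# Back-of-the-envelope comparison of replication vs. backup for a PB-scale repository.
# All dollar figures are hypothetical placeholders; only the multipliers come from the text above.

BASE_CAPEX = 2_000_000   # hw + sw for the original PB (assumed)
BASE_OPEX = 1_000_000    # admin + power per year for the original PB (assumed)

def replication_cost(copies=2, years=3):
    """Original site plus `copies` full remote replicas."""
    capex = BASE_CAPEX * (1 + copies)
    opex = BASE_OPEX * (1 + copies) * years
    return capex + opex

def backup_cost(backup_fraction=0.4, opex_multiplier=1.5, years=3):
    """Original site plus backup storage at ~30-50% of original capex."""
    capex = BASE_CAPEX * (1 + backup_fraction)
    opex = BASE_OPEX * opex_multiplier * years
    return capex + opex

if __name__ == "__main__":
    print(f"1 remote replica, 3 yrs:  ${replication_cost(copies=1):,.0f}")
    print(f"2 remote replicas, 3 yrs: ${replication_cost(copies=2):,.0f}")
    print(f"backup instead, 3 yrs:    ${backup_cost():,.0f}")
```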

But what could possibly back up a PB of data?

We were talking with Igneous at Cloud Field Day 8 (CFD8, see their video here) a couple of weeks back and they said they could, and do, back up PBs of data for customers today. A while back, we also talked with them on a GreyBeards on Storage podcast.

The problems with backing up a PB seem insurmountable. First you have to be able to scan a PB of data. This means looking into multiple file systems on many different hardware platforms, across potentially multiple data centers, and that’s just to get a baseline of what all needs to be backed up.

Then at some point you actually have to store all that data on backup storage. So, to gain some cost advantage, you’d want to compress and deduplicate a PB of data, so that the first full backup wouldn’t take a full PB of backup storage.
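To illustrate why dedupe keeps that first full backup well under a PB, here's a minimal content-hash chunking sketch. It is not Igneous's implementation (they haven't published one); it just shows how chunks that hash identically are stored only once. The 4MB chunk size is an arbitrary assumption.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4MB chunks, an arbitrary choice for illustration

def dedupe_file(path, chunk_store):
    """Split a file into chunks; store each unique chunk once, keyed by its hash."""
    recipe = []  # ordered list of chunk hashes needed to reconstruct the file
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in chunk_store:      # only new data consumes backup space
                chunk_store[digest] = chunk    # real systems would also compress here
            recipe.append(digest)
    return recipe

# chunk_store = {}
# recipe = dedupe_file("/data/big_file.bin", chunk_store)
# unique bytes stored = sum(len(c) for c in chunk_store.values())
```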

Then of course you have to transfer a PB of data to your backup storage, in something that wouldn’t take months to perform. And that just gets you the first full backup.

Next comes the daily scan of what's changed. This has to re-scan your PB of data to find the 100TB or so that's changed over the last 24 hrs. Sometime after that scan completes, all that 100TB or so of changed data needs to be compressed, deduped and transferred again to backup storage.
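A hedged sketch of how that daily incremental pass might find the changed data: compare each file's current size and mtime against the index captured by the previous scan. This is a generic approach, not Igneous's actual change-detection logic.

```python
import os

def find_changed(root, prev_index):
    """Yield paths whose size or mtime differs from the previous scan's index.

    prev_index maps path -> (size, mtime) from the last backup run."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished between scan and stat
            if prev_index.get(path) != (st.st_size, st.st_mtime):
                yield path  # new or modified since the last scan
```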

And if that's not enough, you have to do it all over again, every day, from now on, almost forever. And data continues to grow. So 1PB today is likely to be 2PB or more in 12 months (it's great to be in the storage business).

So those are the challenges. How can it be done, effectively, day in and day out, reliably enough that IT can depend on their data being backed up?

Igneous to the rescue…

First, Igneous came out of stealth a while back (listen to our podcast) with a couple of unique capabilities needed for massive data repository discovery and analysis. That is, they built a unique engine to scan and index PB-scale data repositories. This was so they could provide administrators better visibility into their PB-scale data repositories. But this isn't about that product, it's about backup.

But some of the capabilities they needed to support that product helped them perform backups as well. For instance, their scan needed to handle PBs of data. They came up with AdaptiveSCAN, which doesn't use the standard NFS and SMB data transfer paths to gain access to file metadata. Opening a file over NFS or SMB takes quite a lot of protocol transactions. But to access metadata only, one doesn't need all those NFS and SMB operations; it can be done with much less overhead, even when using NFS or SMB.
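The metadata-only point can be illustrated generically: listing a directory with Python's os.scandir returns names, sizes and timestamps without ever opening file contents, which is far cheaper than a full open/read per file. This is only an analogy for what AdaptiveSCAN does over NFS/SMB, not its actual protocol handling; the index it builds is the kind of thing the previous sketch's prev_index would come from.

```python
import os

def scan_metadata(root):
    """Walk a tree collecting per-file metadata without opening any file contents."""
    index = {}
    stack = [root]
    while stack:
        current = stack.pop()
        with os.scandir(current) as entries:   # one readdir-style call per directory
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                elif entry.is_file(follow_symlinks=False):
                    st = entry.stat(follow_symlinks=False)  # metadata only, no open()
                    index[entry.path] = (st.st_size, st.st_mtime)
    return index
```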

Of course, having a way to scan billions of files was a major accomplishment, but then where do you put all that metadata? And how can you access it effectively to support backing up a PB-scale data repository? They needed some serious data indexing capabilities, and so came up with InfiniteINDEX.

Now a trillion-item index seems a bit much, even for PB-scale repositories. But my guess is they have eyes on taking their PB-scale backups and going after even bigger fish, that is, offering backups for EB-scale data repositories. And that might just take a trillion-item index.

Next, moving PBs or even TBs of data quickly is no small trick. As the development team at Igneous mostly came from unstructured data providers, they understood and have access to the APIs of most storage vendors (NetApp, Dell-EMC Isilon, Pure FlashBlade, Qumulo, etc.). As such, where available, they utilize those native vendor storage API calls to help them move data rather than having to open an NFS or SMB file and read it.

Of course, even doing all that, moving 100TBs of data around or scanning PB sized data repositories is going to take a lot of processing and IO bandwidth to do in a reasonable period of time. 

So another capability they developed is massive parallelism. That is, being able to distribute scan, indexing or data movement work out to multiple systems. In that fashion it can be accomplished in significantly less wall clock time.
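A minimal sketch of that kind of fan-out, using Python's multiprocessing to partition top-level directories across worker processes so scan (or copy) work proceeds in parallel. The real product distributes work across multiple systems, not just processes on one box; this is only the shape of the idea.

```python
import os
from multiprocessing import Pool

def scan_subtree(subtree):
    """Worker: count files and bytes under one subtree (stand-in for scan/index/copy work)."""
    files, total_bytes = 0, 0
    for dirpath, _dirs, names in os.walk(subtree):
        for name in names:
            try:
                total_bytes += os.stat(os.path.join(dirpath, name)).st_size
                files += 1
            except OSError:
                pass
    return subtree, files, total_bytes

def parallel_scan(root, workers=8):
    """Hand each top-level directory to its own worker process."""
    subtrees = [e.path for e in os.scandir(root) if e.is_dir(follow_symlinks=False)]
    with Pool(processes=workers) as pool:
        return pool.map(scan_subtree, subtrees)

# results = parallel_scan("/mnt/filer_export", workers=16)
```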

Well, with all that they pretty much had the guts of a backup application for PB-scale data repositories, but they still didn't have the glue to put it all together. Recently they announced just that: Igneous DataProtect, a full-scale backup application for PBs of data.

I suppose I haven’t done justice to all of what they have developed or talked about at their session, so I would suggest viewing their talk at CFD8 and listening to our GBoS podcast to learn more. They did demo their product at CFD8 but I believe it was a canned demo.

I didn’t think I’d see the day when some vendor would offer backup services for PBs of data let alone be shooting for more, but there you have it. Igneous means to take your PB scale data repositories and make them as easy to operate as TB scale data repositories. They call that democratizing data.

Comments?

See these other CFD8 bloggers write ups on Igneous.

CFD8  – Igneous Follow Up  by Nate Avery (@Nathaniel_Avery)

Picture credit(s): All from screen saves during Igneous’s session at CFD8

Open source ASICs – Hardware vs. Software innovation (round 5)

A good friend of mine sent me an article yesterday (Produce your own physical chips for free, in the open) that announced a collaboration between Google, Skywater Technology Foundry and the FOSSi (Free and Open Source Silicon) Foundation that ultimately supplies a completely open source set of tools to create ASICs at the 130nm node. The last piece of this toolkit was open source PDK (Process Design Kit) data produced by Google and Skywater Technologies, together with an offer of free fab services to manufacture chips designed with the tool set.

Layout snapshots of 2D and 3D ICs designed in 130-nm process technology: (a) 2D IC (2D-130); (b) the top and bottom tiers of a 3D IC using macro-level partitioning (3D-MP-130); and (c) the top and bottom tiers of a 3D IC using pipeline-level partitioning (3D-PP-130). 

The industry and I have had a long term discussion in this blog and elsewhere about the superiority of hardware innovation vs. software innovation using commodity hardware (e.g., see TPU and hardware vs. software innovation (Round 3) and Hardware vs. software innovation – Round 4). Most of the tech industry believes that software innovation on commodity hardware is better than hardware innovation. We beg to differ and in our mind, it’s the combination of hardware AND software innovation that is remaking the world.

Much of this can be seen with smart phone technology. The smart phone would not be possible without significant hardware innovation and has supplied ubiquitous computing for the world. That is, it has connected billions of people to the internet who had no connection before.

But historically, hardware innovation has been hard to do, took a long time, and costs a lot vs. software innovation with commodity hardware, which by definition, is easier to do, takes almost no time (with continuous innovation even less) and costs almost nothing, especially when using open source.

The one innovation that emerged over the last few decades to make new hardware creation easier has been the FPGA. FPGAs allow for "programming" hardware logic in the lab (sometimes in the field) rather than having it be set in silicon in the fab. The toolchains to support FPGA programming can be proprietary but some are also available in open source. For example, SymbiFlow (open source) takes in Verilog (the IEEE standard hardware description language) and converts it into a binary bit stream used to program most (Xilinx-7 and Lattice) FPGAs.

But this recent announcement makes the process to create ASICs completely open source and much easier and cheaper to do.

ASIC design flow

Prior to this announcement, most PDKs were expensive and specific to a particular fab and process node. With Google's and Skywater's release of open source PDK data (on GitHub), designers and engineers now have a completely open source tool kit: RTL (register-transfer level, hardware description logic) design tools, EDA tools and PDK data to create their own ASICs. And with this toolkit, Skywater together with Google will manufacture ASICs for you, at no cost.

The FOSSi dial-up talk (embedded in the announcement above) goes into much detail about the FPGA and ASIC toolchain. But prior to this announcement, the PDK data, which is used to help the RTL and EDA tools simulate, verify and determine the optimum layout for the hardware design, was always proprietary.

Open source RTL has been available for years now, starting with OpenCores, OpenRISC, RISC-V and now OpenPOWER. RISC-V and OpenPOWER include RTL to implement sophisticated instruction set CPUs. OpenRISC is RTL for a precursor to RISC-V and OpenCores supplies the RTL for a number of other (CPU) cores. But this is just a sample of the RTL that's available in open source.

EDA tools are also available in open source. The most recent incarnation would be the DARPA-funded OpenROAD project. OpenROAD will ultimately provide a completely open source EDA tool set for electronic design. The first component of this is a set of EDA tools that convert RTL to GDSII (the industry standard graphical design stream description of an IC chip's componentry and layout). GDSII streams are used to create masks for IC fabrication.

And now with the open source Google-Skywater PDK data, one has a complete open source tool chain to create ASICs at the 130nm node level for the Skywater Fab in Minnesota.

A PDK contains a lot of data about the ASIC fabrication process including process design rules, analog and digital design cells and models, behavioral models for analog and digital design, extracted data for simulation and other supporting functionality.

The Google-Skywater Technologies open source PDK is Apache 2.0 Licensed. The PDK is used in the SKY130 process node, which includes 130nm technologies, high voltage support, 5 metal layers and one interconnect layer.

At the moment the PDK includes standard digital cell support ("nor" gates, "and" gates, flip flops, etc.) but over time they are planning to add analog cells, IO & periphery cells and analog RF, as well as fully automated design rule checking and SRAM/flash build spaces.

The PDK does include standard SRAM bit cells, and in combination with the OpenRAM project one can use those cells to create SRAM memory for the ASIC.

Google and Skywater are going to be fabricating, for free, up to 40 ASIC designs starting in the Fall of 2020, and then, six months later, they will start fabricating ~40 ASICs every 3 months.

However, to qualify for free fabrication, your design has to be completely open source (located on GitHub). To submit your ASIC you send your public GitHub repository URL to efabless and they will perform verification processes on it. If it works, they will respond with an email that it was accepted. If more than 40 designs are submitted for a run, the Google-Skywater team will decide which 40 will be manufactured.

The 16mm**2 ASIC automatically comes with a RISC-V CPU, RAM and power plus ~40 IOs. There is another 10mm**2 space for all of your ASIC specific logic. If successful, you will get back ~100 to 400 packaged chips.

~~~~

ASICs were always lengthy and costly to design, and then fabrication took more money and time before you got anything back to test. With open source tool kits, design should no longer cost anything but engineering time, and with the sophistication available in today's toolchain, it should not be that lengthy. And if you're one of the lucky 40 designs, ASIC fabrication is free. And then, starting next year, fabrication runs will occur every 3 months. So you could potentially get your design back in an ASIC in as little as 3 months.

And while the 130nm technology node dates back to 2001-2003, there were plenty of sophisticated ASICs made during those years (at a previous job, we did a couple ourselves). And of course, with your very own RISC-V CPU inside, you could pretty much do anything you want with your ASIC. Yes, RAM, SRAM and other constraints may limit you, but that's what hardware innovation is all about: deal with the physical constraints but open up a whole new architectural world.

Welcome to a new era of ASIC (hardware) innovation.

Photo Credit(s):

Software defined power grid

Read an article this past week in IEEE Spectrum (The Software Defined Power Grid is here) about a company that has been implementing software defined power grids throughout the USA and the world to better integrate and utilize renewable energy alongside conventional power generation equipment.

Moreover, within the last year or so, Tesla has installed a Virtual Power Plant (VPP) using residential solar and grid scale batteries to better manage the electrical grid of South Australia (see Tesla’s Australian VPP propped up grid during coal outage). VPP use to offset power outages would necessitate something like a software defined power grid.

Software defined power grid

Not sure if there's a real definition somewhere, but from our perspective, a software defined power grid is one where power generation and control are all done through programmatic automation. The human operator still exists to monitor and override when something goes wrong, but they are not involved in the moment-to-moment control of which power is saved vs. fed into the grid.

About a decade ago, we wrote a post about smart power meters (Smart metering’s data storage appetite) discussing the implementation of smart meters for home owners that had some capabilities to help monitor and control power use. But although that technology still exists, the software defined power grid has moved on.

The IEEE Spectrum article talks about phasor measurement units (PMUs) that are already installed throughout most power grids. It turns out that most PMUs are capable of transmitting phasor power status 60 times a second, and each status report is time stamped with high-accuracy, GPS-synchronized time.

On the other hand, most power grids today use SCADA (supervisory control and data acquisition) systems to monitor and manage the grid. But SCADA systems only send data every 2-4 seconds. PMUs are installed in most power grids, but their information is not as central as SCADA data to the monitoring, management and control of most (non-software defined) power grids.
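To get a feel for the data-rate difference, here's a toy Python sketch that consumes GPS-timestamped phasor samples 60 times a second and flags frequency deviations an automated controller could act on, versus a SCADA poll every 2-4 seconds. It is purely illustrative: the sample generator, threshold and controller hook are all made up, and real PMU streams use the IEEE C37.118 synchrophasor protocol rather than a Python generator.

```python
import time
import random

NOMINAL_HZ = 60.0
DEVIATION_LIMIT = 0.05  # Hz; an arbitrary threshold for illustration

def pmu_samples():
    """Fake PMU stream: one GPS-timestamped (frequency, phase angle) reading, 60x per second."""
    while True:
        yield {
            "timestamp": time.time(),                       # stands in for GPS-disciplined time
            "freq_hz": NOMINAL_HZ + random.gauss(0, 0.02),  # simulated grid frequency
            "angle_deg": random.uniform(-180, 180),
        }
        time.sleep(1 / 60)

def monitor(stream, controller):
    """Automated response loop: no human in the moment-to-moment decision."""
    for sample in stream:
        deviation = sample["freq_hz"] - NOMINAL_HZ
        if abs(deviation) > DEVIATION_LIMIT:
            # e.g. discharge batteries when frequency sags, charge when it rises
            controller(deviation, sample["timestamp"])

# monitor(pmu_samples(), controller=lambda dev, ts: print(f"{ts:.3f}: act on {dev:+.3f} Hz"))
```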

One software defined power grid

PXiSE, the company in the IEEE Spectrum article, implemented their first demonstration project in Hawaii. That power grid had reached the limit of wind and solar power that it could support with human management. The company took their time and implemented a digital simulation of the power grid. With the simulation in hand, battery storage and an off-the-shelf PC, the company was able to manage the grid's power generation mix in real time with complete automation.

After that success, the company next turned to a micro-grid (building-level power) with electric vehicles, batteries and solar power. Their software defined power grid reduced peak electricity demand within the building, saving significant money. With that success, the company took their software defined power grid on the road to South Korea, Chile, Mexico and a number of other locations around the world.

Tesla’s VPP

The Tesla VPP in South Australia is planned to consist of up to 50K houses with solar PV panels and 13.5kWh of battery storage each, able to deliver up to 250MW of power generation and 650MWh of power storage.
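A quick sanity check on those numbers, under my assumption (not stated in the article) that each house contributes one 13.5kWh battery and roughly 5kW of inverter output:

```latex
50{,}000 \times 13.5\,\mathrm{kWh} = 675\,\mathrm{MWh} \approx 650\,\mathrm{MWh}\ \text{(storage)}, \qquad
\frac{250\,\mathrm{MW}}{50{,}000\ \text{houses}} = 5\,\mathrm{kW\ per\ house}\ \text{(generation)}.
```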

At the present time, the system has ~1000 house systems installed, but even with that limited generation and storage capability it has already been called upon at least twice to compensate for coal generation power outages. To manage each and every household, they'd need something akin to the smart meters mentioned above in conjunction with a plethora of PMUs.

Puerto Rico’s power grid problems and solutions

There was an article not so long ago about the disruption to Puerto Rico's power grid caused by Hurricanes Irma and Maria in IEEE Spectrum (Rebuilding Puerto Rico's Power Grid: The Inside Story) and a subsequent article on making Puerto Rico's power grid more resilient to hurricanes and other natural disasters (How to harden Puerto Rico's power grid). The latter article talked about creating micro-grids, community PV and battery storage that could be disconnected from the main grid in times of disaster but also used to distribute power generation throughout the island.

Although the researchers didn’t call for the software defined power grid, it is our understanding that something similar would be an outstanding addition to their work there.

~~~~

As the use of renewables goes up and the price of batteries decreases while their capabilities go up over time, more and more power grids will need to become software defined. In the end, more software defined power grids with increasing renewables power generation and storage will make any power grid, more resilient and more fault tolerant.

Photo Credit(s):

Silq and QUA vie for Quantum computing language

Read a couple of articles this past week on new quantum computing programming languages. Specifically, one in ScienceDaily on Silq (The 1st intuitive programming language for quantum computers) and another in TechCrunch (Quantum Machines announces QUA, its universal lang. for quantum computing). The Silq discussion is based on an ACM SIGPLAN paper (Silq: A High-Level Quantum Language with Safe Uncomputation and Intuitive Semantics).

Up until this point there have been a couple of SDK’s for various quantum computers, most notably QASM for IBM’s, Q# for Microsoft’s and ? for Google’s Quantum Computers. We have discussed QASM in a prior post (see: Quantum Computer Programming post).

But both QUA and Silq are steps up the stack from QASM and Q#, both of which are more realistically likened to machine microcode than assembly code. For example, with QASM you are talking directly to the mechanisms that cohere qubits, the electronics needed to connect qubits, the electronics to excite qubit states, etc.


QUA and Silq seem to take different tacks to providing their services.

Silq control flow
  • Silq is trying to abstract itself above the hardware layer and to provide some underlying logical constructs and services that any quantum programmer would want to use. Most notably, Silq mentions that they provide automatic erasure of intermediate calculation results, which can impact future quantum calculations if they are not erased. They call this "safe uncomputation" (a hand-coded analogue appears in the sketch after this list). Silq also offers types, loops, conditionals, superposition (the adding together of two quantum states) and diffusion (spreading quantum states out).
  • QUA, on the other hand, is Quantum Machines' full-stack implementation for quantum computer orchestration. QUA is only one component of this stack (the highest level); underneath it is a compiler and the Quantum Machines OPX box, a hardware appliance that interfaces with quantum computers of various types. There's not much detail about QUA other than that it offers conditionals, looping constructs and internal error detection.
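Silq's automatic uncomputation replaces a pattern that lower-level kits make you write by hand. Here's a hedged sketch of that manual compute/use/uncompute pattern, written with Qiskit purely as a familiar stand-in (this is not Silq syntax, and assumes Qiskit is installed):

```python
from qiskit import QuantumCircuit

# Manual uncomputation: qubits 0 and 1 are inputs, qubit 2 is a scratch ancilla.
qc = QuantumCircuit(3)
qc.h([0, 1])         # put the inputs into superposition
qc.ccx(0, 1, 2)      # compute: ancilla = q0 AND q1
qc.z(2)              # use the ancilla, e.g. apply a phase conditioned on it
qc.ccx(0, 1, 2)      # uncompute: return the ancilla to |0>, disentangling it
# Forgetting the second ccx leaves the ancilla entangled and corrupts later interference;
# Silq's type system inserts and checks this uncomputation step automatically.
print(qc.draw())
```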

From what I see, we are a long way away from having a true programming language for quantum computers. Quantum Machines sees the problem with today's quantum computers as the lack of a full stack.

The Silq group sees the problem with today's quantum computers as the lack of any useful abstraction. Silq is trying to provide simpler semantics and control structures that maybe someday could become the foundation of a true quantum computing programming language.

Silq has compared itself to Q#, used in Microsoft’s Quantum Computing solution. So our guess is it works only with Microsoft’s quantum computer.

In contrast, QUA offers an orchestration solution for many different quantum computers but you have to buy into their orchestration hardware and stack.

Who will win out in the end is anyone’s guess. There’s a great need for something that can abstract the quantum hardware from the quantum algorithms being implemented. At the moment I like what I see in Silq just wish it was applied more generically.

At press time there were not many details available on Quantum Machines' QUA language. Their stack approach may be better in the long run, but having to use their hardware appliance to run it seems counterproductive.

~~~~

However, if the programming gods were to ask my opinion as to where a new programming language was really needed, I'd have to say neuromorphic computing (see our Are neuromorphic chips a dead end? post). Neuromorphic computing really needs abstraction help. Without some form of suitable abstraction layer, neuromorphic computing seems dead as it stands.

Comments?

Picture Credit(s):

Thoughts on my first virtual conference

I attended a virtual event this week. It was scheduled to last 3 hours, but I only stayed for 2.5 hours. Below I describe the event from my perspective and, after that, some notes on how it could be made better.

The virtual event experience

The event home page had a welcome video that you could start when you got there. I didn't have any idea what to expect, so this was nice. It could have spent time discussing the mechanics of the site and how to attend the event, but it was just a welcome video, welcoming me to the event and letting me know they appreciated me being able to attend.

Navigation on the site wasn’t that easy to figure out at first. It was at the bottom of the page not at the top or the side. And the navigation home button brought up a list of videos that you could watch (or attend). And that page was in front of the conference page.

I launched the 1st video (actually the 2nd, after the welcome video), which was the CEO keynote session. I thought this was good, and the occasional interruption by executives ringing the CEO's doorbell asking for toilet paper was entertaining. Again he welcomed us to the event and discussed how the pandemic has changed their world and ours. He thanked the customers in attendance and made brief mention of the video tracks that one could follow. As I recall, the CEO keynote didn't seem to have any (or many) slides; it was just an informal, if scripted, talk.

It took me a while to figure out how to get back to the main agenda page but once there I proceeded on my chosen track to watch the next video. When I was finished with that I watched the other 3 track videos. The video tracks were not as good as the CEO keynote session and some of them had many more slides than they needed.

They also had a customer interview with an exec which was great and well done. Especially given it seemed to have been recorded over the prior 48 hours.

Somewhere in all of this, I happened to reach the Expo floor. It had a series of technical break out sessions and then the exhibitor buttons which had their own videos, reports, webinars that one could watch/read.

I watched most of the technical breakouts (at least part way through). The tech breakouts were OK, but also of mixed quality as I remember it. That is, some had more or fewer slides and were more or less webinar-like.

I also watched a few of the exhibitor videos. Some of these auto started when you clicked on their expo buttons, some did not. Some videos were very loud while others were fine.

I'd say the mixed quality of the exhibits was similar to what one might see at any conference, with bigger vendors having more polished content while smaller vendors had less polished content.

The conference had a public chat channel, but there was only one channel for the whole conference and it didn't appear until much later (maybe when I entered the first breakout sessions or the expo "hall").

How to make our next virtual conference better

Below are my thoughts on ways to improve the virtual conference experience.

• Have real scheduled times to watch the videos/webinars/tech sessions. Yes, they're all online and can truly be watched at any time you want. But I expected a scheduled agenda with breaks between sessions and to have to pick which one I wanted to go to, meaning that some would have to be unattended. I would suggest that the videos only be available during the event at scheduled time slots and that the event organizers build in breaks between each session. They could always be made available at a later date under a conference media page for further viewing, but having them scheduled to run in a conference room would make it more conference-like. The tracks could be scheduled in other side rooms of the conference.

• Also, would it be too much to ask that they have some sort of video roll call of participants with headshots and maybe a title. Something akin to a conference badge. Perhaps they could show this during the breaks between sessions. Even if you rolled through the virtual badge shots quickly, during breaks, it would act as sort of an analog of walking from one session to another.

• I don't know whether there was any interest in social media, but having a Twitter, Facebook or other social media event hash tag prominently displayed on the bottom 1/3rd of the screen or on some early slide deck would have been useful, to generate some social buzz.

• Also, at conferences, one can typically see a screen which tracks the social media hash tag. I saw none of this at the event. Having some small panel running social media activity might have led to more social media interaction. It could be along the side of the main page, viewable during all videos, breaks and other sessions.

• As for the public chat, I think it would have been better to have a separate chat channel for each video, breakout, exhibit, etc. rather than a single chat room for the whole conference. It would have been great if the separate chat window popped up when you started viewing a video, entered a breakout or visited an exhibit.

• Have lots more technical breakouts. I didn't see a great quantity of these, maybe 5-7 tech breakouts plus the 4 original tech track videos. Again, separate chat channels so one could ask questions pertaining to the session would have been great.

• The exhibits were all other vendors (sponsors) showing their stuff. I didn't see any show and tell for the conference event organizers that one would see at any conference if you walked out on the show floor. Would it have been too much to ask to have a virtual walk-through tour of the conference organizer's products and a couple of demos of their products/services? Just like one could see at any conference.

• The expo floor exhibitor sessions could be left available to view anytime the event was "open", but the tech breakout sessions should be available multiple times a day and scheduled just like any other event sessions. And it would be nice to have a separate chat channel for each expo exhibitor and tech breakout session, so we could ask questions of their staff.

• Another thing available at most conference events is a social media booth where bloggers, podcasters, and vloggers could sit around and talk about the event and their products and whatever else came to mind. I didn’t see anything like this and having a separate chat window for these booths would be useful.

• Also, it would be nice if one could obtain vendor certifications or detailed tutorials on some product/service.

• On a personal note, I am an industry analyst, so it would be nice to have a separate analyst track. I come to these events to have face time with execs and get a download on what their upcoming strategy is and how they did over the last year or so. Yes, these could all be done offline, but they could also be accomplished during the event with its own secure chat channel.

• I'm also an influencer, so having a separate press track would have been great as well. Often the analyst and press tracks overlap for a couple of sessions and then go their separate (NDA) ways.

• For both the analysts and the press/influencers, having a live Q&A session with the execs, technical team and select customers would have been great. But alas, there was nothing like this. With a separate secure chat room this could also have been done.

• I can’t stress enough that the conference event navigation needs to be better and more intuitive.

I know that there’s a lot here and there’s probably a whole bunch more that could be done better. Other people will no doubt have their own opinions. But these are mine.

It was the first virtual conference (I attended) and the vendor sort of played it by ear, designing it almost in real time. Given all that, they did a great job. Now it's time to do better.

I’m a conference geek. I go to an average of 10 or more vendor conferences a year so this is a major part of what I do.

IMHO, nothing besides ubiquitous, true virtual reality will ever replace the effectiveness of in real life conferences. That being said, there are ways to make current virtual events come closer to real conferences.

~~~~

I thought about sending this to the conference organizers but their conference is over, and hopefully next year it will be back IRL. But there’s plenty more virtual conferences left on my schedule for this year.

I would prefer all of them to be done better, for me, analysts, press/influencers and ultimately customers.

We're all in this together.

Comments?

Google Docs as subversive technology

Read an article the other day in TechReview (How Google Docs became the social media of the resistance) about how Google Docs was being used to help coordinate and promote the resistance surrounding the recent Black Lives Matter movement.

The article points out that Google Docs are sharing resources around anti-racism, email templates, bail resources, pro-bono legal assistance, etc. to help inform and coordinate the movement's actions and activities.

Social unrest, the killer app for Google Docs

Protests could be the killer app for shared Google Docs. Facebook and other social media sites are better used for documenting the real time interactions during protests, but coordinating, motivating and informing the protests and protestors is better accomplished using Google Docs, a simple web based, document editor and sharing service.

In pre-internet days, I suppose all this would have been done on hand-copied, typeset-printed, carbon-copied or photocopied theses/pamphlets/fliers/printouts. From Luther's list of grievances nailed to the cathedral door, to the Common Sense pamphlet during the American Revolutionary War, to countless fliers during the protests of the 60s, all these used the technology of the day to promote protest and revolution.

Nowadays all it takes is a shared Google Doc and a Google (drive) account.

Google Docs are everywhere

The high school that one of my kids went to uses Google Docs for sharing and submitting homework assignments.

Google Docs are shareable because they are hosted on Google Drives. Docs is just one component of the Google (G-)suite of web based apps that includes Google Sheets (spreadsheets), Google Slides (presentations) and Google Drives (object storage).

Moreover, any Google Doc, Sheet or Slide file can be shared and edited by anyone. And Google services like Docs, Sheets and Slides are usable anonymously. Anyone online can make a change to a shareable/editable doc, sheet or slide and their changes are automatically saved to the Google Drive file.

Another thing is that any Google Doc can be shared with just a URL. And they can also be made read-only (or uneditable) by their owner at any time. And of course any Google Doc is backed up automatically by Google drive services.

Owners of documents can revert to previous versions of a Doc file. So if someone incorrectly (or maliciously) changes a doc, the originator can revert it back to a prior version.

Why not use a Wiki

I would think a Wiki would be better to use to coordinate, motivate and inform a protest. Once a Wiki is setup and started, it can be much easier to navigate, as easy to update, and can become a central repository of all information about a movement/protest.

But it takes a lot more effort and IT/web knowledge to set up a Wiki. And it has to have its own web address.

Another problem with a Wiki, is that it can become a central point which can be more easily attacked or disturbed. And Wiki edit wars are pretty common, so they too are not immune to malicious behavior.

But with 10s to 100s of Google Docs spread across a similar number of user Google Drives, Google Docs are a much more distributed resource, less prone to a single point of attack. And they can be created and edited almost on a whim. And the only thing it takes is a Google login and Google Drive.

~~~~

Photo copiers were a controlled technology in the old Soviet Union and even today facebook and twitter are restricted in China and other authoritarian states.

But Google Docs seem to have become a much more ubiquitous tool and the latest technology to aid, abet and support social resistance.

Photo credit(s):

Societal growth depends on IT

Read an interesting article the other day in ScienceDaily (IT played a key role in growth of ancient civilizations) and a Phys.org article (Information drove development of early states), both of which were reporting on a Nature article (Scale and information processing thresholds in Holocene social evolution) which discussed how the growth of societies during ancient times was directly correlated to the information processing capabilities they possessed. In these articles IT meant writing, accounting, currency, etc., relatively primitive forms of IT, but IT nonetheless.

Seshat: Global History Databank

What the researchers were able to do was to use the Seshat: Global History Databank which “systematically collects what is currently known about the social and political organization of human societies and how civilizations have evolved over time” and use the data to analyze the use of IT by societies.

We have talked about Seshat before (see our Data Analysis of History post).

The Seshat databank holds information on 30 (natural) geographical areas (NGA), ~400 societies and, their history from 4000 BCE to 1900CE.

Seshat has a ~100 page Code Book that identifies what kinds of information to collect on each society and how it is to be estimated, identified, listed, etc. to normalize the data in their databank. Their Code Book provides essential guidelines on how to gather the ~1500 variables collected on societies.

IT drives society growth

The researchers used the Seshat DB and ran a statistical principal component analysis (PCA) of the data to try to ascertain what drove society’s growth.

PCA (see the Wikipedia Principal Component Analysis article) essentially combines a set of variables into a smaller number of components based on their inter-relationships. Each component explains a percentage (%Var) of the total variance across all the original variables. PCA can be one, two, three or N-dimensional.

The researchers took 51 Seshat society variables, combined them into 9 (societal) complexity characteristics (CCs), and did a PCA of those CCs across all 285 societies with information available at the time.
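For readers unfamiliar with the technique, here's a minimal PCA sketch using scikit-learn. The 285×9 shape mirrors the paper's societies × complexity characteristics, but the matrix below is random stand-in data, not Seshat's; it just shows where explained variance and component loadings come from.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for the Seshat data: 285 societies x 9 complexity characteristics (CCs).
rng = np.random.default_rng(0)
cc_matrix = rng.normal(size=(285, 9))

# Standardize, then project onto the first two principal components (PC1, PC2).
scaled = StandardScaler().fit_transform(cc_matrix)
pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)          # each society's (PC1, PC2) coordinates

print("explained variance:", pca.explained_variance_ratio_)  # %Var per component
print("PC2 loadings:", pca.components_[1])  # how each CC (writing, money, ...) weights into PC2
```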

Fig. 2 shows that the average PC1 component of all societies is driven by the changes (increases and decreases) in PC2 components. Decreases in PC2 depend on those elements of PC2 which are negative and increases in PC2 depend on those elements of PC2 which are positive.

The elements in PC2 that provide the largest positive impacts are writing (.31), texts (.24), money (.28), infrastructure (.12) and gvrnmnt (.06). The elements in PC2 that provide the largest negative impacts are PolTerr (polity area, -0.35), CapPop (capital population, -0.27), PolPop (polity population, -0.25) and levels (?, -0.15). Below is another way to look at this data.

The positive PC2 CC’s are tracked with the red line and the negative PC2 CC’s are tracked with the blue line. The black line is the summation of the blue and red lines and is effectively equal to the blue line in Fig 2 above.

The researchers suggest that the inflection points in Fig. 2 (and in the black line in Fig. 3) represent societal information processing thresholds. Once these IT thresholds are passed, they change the direction that PC2 takes after that point.

In Fig. 4 they have disaggregated the information averaged in Figs. 2 & 3 and show PC2 and PC1 trajectories for all 285 societies tracked in the Seshat DB. Over time, as PC1 goes more positive, societies start to converge on effectively the same level of PC2. At earlier times, societies tend to be more heterogeneous, with varying PC2 (and PC1) values.

Essentially, societies' IT processing characteristics tend to start out highly differentiated, but over time, as societies grow, IT processing capabilities tend to converge and lead to the same levels of societal growth.

Classifying societies by IT

The Kardashev scale (see the Wikipedia Kardashev scale article) identifies levels or types of civilizations by their energy consumption. For example, the Kardashev scale lists the types of civilizations as follows:

  • Type I Civilization can use and control all the energy available on its planet,
  • Type II Civilization can use and control all the energy available in its planetary system (its star and all the planets/other objects in orbit around it).
  • Type III Civilization can use and control all the energy available in its galaxy

I can't help but think that a more accurate scale for a civilization, society or polity's level would be a scale based on its information processing power.

We could call this the Shin scale (named after the primary author of the Nature paper or the Shin-Price-Wolpert-Shimao-Tracy-Kohler scale). The Shin scale would list societies based on their IT levels.

  • Type A Societies have non-existent IT (no writing, texts, money or infrastructure), which severely limits their population and territorial size.
  • Type B Societies have primitive forms of IT (writing, texts, money & infrastructure, ~MB (10**6 bytes) of data), which allows these societies to expand to their natural boundaries (with a pop of ~10M).
  • Type C Societies have normal (2020) levels of IT (a world wide Internet with billions of connected smart phones, millions of servers, ZB (10**21 bytes) of data, etc.), which allows societies to expand beyond their natural boundaries across the whole planet (pop of ~10B).
  • Type D Societies have high levels of IT (speculation here, but quintillions of connected smart dust devices, a trillion (10**12) servers, 10**36 bytes of data), which allows societies to expand beyond their home planet (pop of ~10T).
  • Type E Societies have even higher levels of IT (more speculation here: 10**36 smart molecules, a quintillion (10**18) servers, 10**51 bytes of data), which allows societies to expand beyond their home planetary system (pop of ~10Q).

I'd list Type F societies here, but I can't think of anything smaller than a molecule that could potentially be smart; perhaps this signifies a lack of imagination on my part.

Comments?

Photo Credit(s):

Hybrid digital training-analog inferencing AI

Read an article from IBM Research, Iso-accuracy DL inferencing with in-memory computing, the other day that referred to an article in Nature, Accurate DNN inferencing using computational PCM (phase change memory, a memristive technology), which discussed using a hybrid digital-analog computational approach to DNN (deep neural network) training-inferencing AI systems. It's important to note that the PCM device is both a storage device and a computational device, thus performing two functions in one circuit.

In the past, we have seen PCM circuitry used in neuromorphic AI. The use of PCM here is not that (see our Are neuromorphic chips a dead end? post).

Hybrid digital-analog AI has the potential to be more energy efficient and use a smaller footprint than digital AI alone. Presumably, the new approach is focused on edge devices for IoT and other energy or space limited AI deployments.

What's different in hybrid digital-analog AI

As researchers began examining the use of analog circuitry for AI deployments, the nature of analog technology led to inaccuracy and underperformance in DNN inferencing. This was because of the "non-idealities" of analog circuitry. In other words, analog electronics has intrinsic characteristics (noise, drift, limited precision) that make it difficult to reproduce digital exactitude in analog circuitry.

The caption for Figure 1 in the article runs to great length, but to summarize: (a) is the DNN model for an image classification DNN, with fewer inputs and outputs so that it can ultimately fit on a 512×512 PCM array; (b) shows how noise is injected during the forward propagation phase of DNN training, and how the DNN weights are flattened into a 2D matrix and programmed into the PCM device using differential conductance with additional normalization circuitry.

As a result, the researchers had to come up with some slight modifications to the typical DNN training and inferencing process to improve analog PCM inferencing. Those changes involve:

  • Injecting noise during DNN training, so that the resultant DNN model becomes more noise resistant;
  • Flattening the resultant DNN model from 3D to 2D so that neural network node weights can be implemented as differential conductances in the analog PCM circuitry; and
  • Normalizing the internal DNN layer outputs before input to the next layer in the model.

Analog devices are intrinsically noisier than digital devices, so the DNN's noise sensitivity had to be reduced. During normal DNN training there is both a forward pass of inputs to generate outputs and a backward propagation pass (to adjust node weights) to fit the model to the required outputs. The researchers found that by injecting noise during the forward pass they were able to create a more noise-resistant DNN.
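A minimal sketch of noise-injected training in PyTorch (my generic rendering of the idea, not IBM's code): Gaussian noise is added to each layer's output on the forward pass while training, and backpropagation proceeds as usual, so the learned weights tolerate analog-like perturbations. The noise level is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer that perturbs its output during training to mimic analog noise."""
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std  # relative noise level, an assumed hyperparameter

    def forward(self, x):
        out = super().forward(x)
        if self.training:  # inject noise only on training forward passes
            out = out + torch.randn_like(out) * self.noise_std * out.detach().abs().mean()
        return out

# model = nn.Sequential(NoisyLinear(512, 256), nn.ReLU(), NoisyLinear(256, 10))
# Train as usual; at inference time (model.eval()) no noise is added in software --
# the analog PCM hardware supplies the real noise instead.
```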

Differential conductance uses the difference between the conductances of two circuits. So a single node weight is mapped to two different conductance values in the PCM device. By using differential conductance, the impact of the PCM device's inherent noisiness on DNN node propagation can be reduced.
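Conceptually, the differential mapping looks something like this numpy sketch (an illustration of the idea, not the actual conductance-programming procedure; the conductance range and noise model are assumptions):

```python
import numpy as np

def to_differential(weights, g_max=25.0):
    """Map signed weights onto pairs of non-negative conductances, w proportional to G_plus - G_minus."""
    scale = g_max / np.max(np.abs(weights))      # fit the largest |w| into the assumed conductance range
    g_plus = np.clip(weights, 0, None) * scale   # positive part stored on one device
    g_minus = np.clip(-weights, 0, None) * scale # negative part stored on the paired device
    return g_plus, g_minus, scale

def read_back(g_plus, g_minus, scale, noise_std=0.5):
    """Simulate a noisy read: noise common to both devices largely cancels in the difference."""
    noise = np.random.normal(0, noise_std, size=g_plus.shape)
    return ((g_plus + noise) - (g_minus + 0.8 * noise)) / scale

# w = np.random.randn(4, 4)
# gp, gm, s = to_differential(w)
# w_hat = read_back(gp, gm, s)   # approximately recovers w despite device noise
```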

In addition, each layer's outputs are normalized via additional circuitry before being used as input for the next layer in the model. This has the effect of counteracting PCM circuitry drift over time (see below).

Hybrid AI results

The researchers modeled their new approach and also performed some physical testing of a digital-analog DNN, using CIFAR-10 image data and the ResNet-32 DNN model. The process began with an already trained DNN, which was then retrained while injecting noise during forward pass processing. The resultant DNN was then modeled and programmed into a PCM circuit for implementation testing.

Part D of Figure 4 shows: the Baseline, which represents a completely digital implementation using FP32 multiplication logic; Experiment, which represents the actual use of the PCM device with a global drift calibration performed on each layer before inferencing; and Model, which represents their digital model of the PCM device and its expected accuracy. The blue band is one standard deviation on the modeled result.

One challenge with any memristive device is that over time its functionality can drift. The researchers implemented global drift calibration (normalization circuitry) to counteract this. One can see evidence of drift in the experimental results between ~20 and ~60 seconds into testing. During this interval PCM inferencing accuracy dropped from 93.8% to 93.2%, but then stayed there for the remainder of the experiment (~28 hrs). The baseline noted in the chart used digital FP32 arithmetic for inferencing and achieved ~93.9% for the duration of the test.

Certainly not as accurate as the baseline all-digital implementation, but implementing the DNN inferencing model in PCM and only losing 0.7% accuracy seems more than offset by the clear gains in energy and footprint.

While the simple global drift calibration (GDC) worked fairly well during testing, the researchers developed another, adaptive approach (adaptive batch normalization statistics [AdaBS]) that uses a calibration image set drawn from the training data: at idle times, these images are fed through the PCM device to calculate an average error used to adjust the PCM circuitry. As modeled and tested, the AdaBS approach increased accuracy and retained it (at least in modeling) over longer time frames.
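The adaptive calibration idea can be sketched generically; this is my reading of the AdaBS concept, not the published algorithm, and every name and statistic below is an assumption: at idle time, run a held-out calibration batch through the drifting layer, compare its output statistics to those recorded when the weights were programmed, and derive a corrective scale and shift.

```python
import numpy as np

def calibrate_layer(layer_fn, calib_inputs, ref_mean, ref_std, eps=1e-6):
    """Estimate per-layer drift from calibration data and return a corrective scale/shift.

    layer_fn: callable that runs the (drifting) analog layer on a batch of inputs.
    ref_mean, ref_std: output statistics captured right after programming the PCM weights."""
    outputs = layer_fn(calib_inputs)             # run calibration images at idle time
    cur_mean, cur_std = outputs.mean(), outputs.std()
    scale = ref_std / (cur_std + eps)            # undo drift-induced gain change
    shift = ref_mean - cur_mean * scale          # undo drift-induced offset
    return scale, shift

# At inference: corrected = layer_output * scale + shift, recomputed periodically as the device drifts.
```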

The researchers were also able to show that implementing part (first and last layers) of the DNN model in digital FP32 and the rest in PCM improved inferencing accuracy even more.

~~~~

As shown above, a hybrid digital-analog PCM AI deployment can provide similar accuracy (at least for CIFAR-10/ResNet-32 image recognition) to an all-digital DNN model, while the efficiencies of the PCM analog circuitry allow for a more energy-efficient DNN deployment.

Photo Credit(s):