More women in tech

Read an interesting article today in the NY Times on how Some Universities Crack Code in Drawing Women to Computer Science. The article discusses how Carnegie Mellon University, Harvey Mudd College and the University of Washington have been successful at attracting women into their Computer Science (CompSci) programs.

When I was more active in IEEE, there was an affinity group called Women In Engineering (WIE) that worked towards encouraging female students to go into science, technology, engineering and math (STEM). I also attended a conference for school-age girls interested in science and helped get the word out about IEEE and its activities. WIE is still active in encouraging girls to go into STEM fields.

However, as I visit startups around the Valley and elsewhere, I see lots of coders who are male and very few who are female. On the other hand, the marketing and PR groups skew the other way, toward women, although not nearly as heavily as engineering skews toward men (roughly 5:6 male to female in marketing/PR versus 7:1 in engineering).

Some companies in the Valley are starting to report on the diversity of their ranks and are saying that only 15 to 17% of their technology employees are women.

On the other hand, bigger companies seem to do a little better than startups by encouraging more diversity in their technical ranks. But the problem is prevalent throughout the technical industry in the USA, at least.

Universities to the rescue

The article goes on to say that some universities have been more successful than others at recruiting women into CompSci, and these schools have a number of attributes in common:

  • They train female teachers at the high school level in how to teach science better.
  • They host camps and activities where they invite girls to learn more about technology.
  • They provide direct mentors to supply additional help to girls in computer science.
  • They directly market to females by changing brochures and other material to show women in science.

Some universities eliminated prior programming experience as an entry criterion. They also broadened introductory CompSci courses to show real-world applications of technology, figuring this would appeal more to women. Another university re-framed some of its course work to focus on creative problem solving rather than pure coding.

Other universities are not changing their programs at all and are finding that with better marketing, more mentorship support and earlier training they can still attract more women to computer science.

The article did mention one other thing attracting more women to CompSci: the plentiful, high-paying jobs currently available in the field.

From my perspective, more women in tech is a good thing, and we as an industry should do all we can to encourage it.

~~~~

Comments?

Photo credits: Circuit Bending Orchestra: Lara Grant at Diana Eng’s Fairytale Fashion Show, Eyebeam NYC / 20100224.7D.03621.P1.L1.SQ.BW / SML


Vacuum tubes on silicon

Read an interesting article the other day about researchers at NASA having invented a vacuum tube on a chip (see ExtremeTech, Vacuum tube strikes back). Their report was based on an IEEE Spectrum article called Introducing the Vacuum Transistor.

Computers started out early in the last century as mechanical devices (card sorters), moved up to electronic sorters/calculators/computers built with vacuum tubes, and eventually transitioned to solid state devices with the silicon transistor. Since then, MOS and CMOS transistors have pretty much ruled the world of electronic devices.

Vacuum tube?

Vacuum tubes had a number of problems, not the least of which were power consumption, size and reliability. It was nothing for a vacuum tube to burn out after being powered on only a few times, and the ENIAC (panel pictured here) had over 17,000 of them, took up over 200 square meters of space, used 150KW of power and weighed 27 metric tons.

Of course, each vacuum tube was the equivalent of just one transistor, and the latest generation Intel Quad Core processors have over 2B transistors in them. So implementing an Intel Quad Core processor with vacuum tubes might take over 3,000 football fields of space and over 17GW of power/cooling.
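
Just to show where those numbers come from, here's a minimal back-of-envelope sketch in Python, using the ENIAC figures above (17,000 tubes, 200 square meters, 150KW) and assuming an American football field of roughly 5,350 square meters:

```python
# Back-of-envelope scaling: an ENIAC built from enough vacuum tubes
# to match a modern 2B-transistor processor (one tube per transistor).
ENIAC_TUBES = 17_000        # from the text above
ENIAC_AREA_M2 = 200         # square meters, from the text above
ENIAC_POWER_KW = 150        # kilowatts, from the text above
TRANSISTORS = 2_000_000_000 # ~2B transistors in a quad core chip
FOOTBALL_FIELD_M2 = 5_350   # assumed area of an American football field

scale = TRANSISTORS / ENIAC_TUBES                 # ~118,000 ENIACs worth of tubes
area_fields = scale * ENIAC_AREA_M2 / FOOTBALL_FIELD_M2
power_gw = scale * ENIAC_POWER_KW / 1_000_000     # kW -> GW

print(f"~{area_fields:,.0f} football fields, ~{power_gw:.1f} GW")
# -> roughly 4,400 football fields and ~17.6 GW, consistent with
#    "over 3,000 football fields" and "over 17GW" above.
```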

There were plenty of niceties to vacuum tubes as well, not the least of which were their ruler-flat frequency response, their ability to support much higher frequencies, their lower susceptibility to noise and their fewer problems with radiation compared to transistors. This last item meant that vacuum tubes were less susceptible to electromagnetic pulses. Many musical instrument amplifiers are still made today using vacuum tube technology because of their perceived better sound.

But the main problems were their size and power consumption. If you could only shrink a vacuum tube to the size of a MOS field effect transistor (FET) and correspondingly reduce its power consumption, then you would have something.

NASA shrinks the vacuum tube

NASA researchers have shrunk the vacuum tube to nanometer dimensions in a vacuum-channel transistor. They believe it can be fabricated on standard CMOS technology lines and that it can operate at 460GHz.

This new vacuum-channel transistor marries the benefits of vacuum tubes to the fabrication advantages of MOSFET technology. Making them as small as MOSFET transistors eliminates all of the problems with vacuum tube technology and handily solves a serious problem or two with MOSFETs.


One problem with MOSFET technology today is that we can no longer speed it up much beyond 4-5GHz. This limit was reached in 2004, when Intel and others determined that clock speeds couldn't be pushed much higher without serious problems, so they started using additional transistors to offer multi-core processor chips instead. A lot of time and money continues to be spent on how best to offer even more cores, but in the end there's only so much parallelism available in most applications, and that limits the speed-ups that can be attained with multi-core architectures.
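
That parallelism ceiling is just Amdahl's law. A minimal sketch, assuming a hypothetical application that is 90% parallelizable:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup from running the parallel fraction on N cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 90% of the work can be parallelized (hypothetical application).
for cores in (2, 4, 10, 100, 1000):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
# -> 1.82, 3.08, 5.26, 9.17, 9.91 ... the speedup can never exceed
#    1/(1-0.9) = 10x no matter how many cores you add, whereas a faster
#    clock speeds up serial and parallel work alike.
```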

But a shrunken vacuum tube doesn’t seem to have the same issues with higher clock speeds.  Also, there is a serious reduction in power consumption that accrues along with reduction in size.

The vacuum in a vacuum tube was there to keep electrons from being interfered with by gases. With the vacuum-channel transistor, the researchers don't think they need a true vacuum anymore because of the reduction in size and power; instead there's the small matter of creating a helium-filled enclosure, which they feel will work in place of a vacuum. NASA feels that with today's chip packaging this shouldn't be a problem.

Also, their current prototypes use 10V, but other researchers have gotten their own vacuum-channel transistors down to only 1-2V. As yet the NASA researchers haven't fabricated their vacuum-channel transistors on a real CMOS line, but that's the next major hurdle.

Imagine a much faster IT

A 400GHz processor in your desktop and maybe a 200GHz processor in your phone/tablet could all be possible with vacuum-channel transistors. They would be so much faster than today's multi-core systems that it would be almost impossible to compare the two. Yes, there are some apps where multi-core can speed things up considerably, but something that's just 10X faster than today's processors would operate faster than a 10-core CPU on most workloads. And it still doesn't mean you couldn't have multi-core vacuum-channel systems as well.

SSD or NAND flash storage is essentially based on CMOS transistors, and the speed of flash is partly a function of the speed of its transistors. A 400GHz vacuum-channel transistor could speed up flash storage by an order of magnitude or more. Flash access times are already at the 7µsec level (see my posts on MCS and UltraDIMM storage here and here). How much of that 7µsec access time is due to the memory channel and how much is a function of the SanDisk SSD storage is an open question. But whatever portion is on the SSD side could potentially be reduced by a factor of 10 or more with vacuum-channel transistors.
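
To get a feel for what that could buy, here's a minimal sketch assuming a few hypothetical splits of the 7µsec between the memory channel and the SSD side (the real split is the open question above) and a 10X speedup on the SSD portion only:

```python
TOTAL_US = 7.0       # current UltraDIMM-class flash access time, µsec
SSD_SPEEDUP = 10.0   # assumed vacuum-channel improvement on the SSD side

# Hypothetical fractions of the 7µsec spent on the SSD side of the path.
for ssd_fraction in (0.5, 0.7, 0.9):
    ssd_us = TOTAL_US * ssd_fraction
    channel_us = TOTAL_US - ssd_us
    new_total = channel_us + ssd_us / SSD_SPEEDUP
    print(f"SSD share {ssd_fraction:.0%}: {TOTAL_US}µs -> {new_total:.2f}µs")
# -> 3.85µs, 2.59µs and 1.33µs respectively: the memory channel portion
#    quickly becomes the bottleneck.
```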

From a disk perspective, there are myriad issues that affect how much data can be stored linearly on a disk platter. But one of them is the switching speed of the electromagnetic (GMR) head and its electronics. Vacuum-channel transistors should be able to eliminate that issue in the electronics at least, and maybe with some work in the head as well, so disk densities would no longer be limited by switching speeds. Similar issues apply to magnetic tape densities as well.

It's unclear to me how faster switching times would impact network transmission speeds. It seems apparent that optical transmission has already reached some sort of limit based on the light frequencies used for transmission. However, electronic networking transfer speeds might be enhanced significantly with faster switching.

Naturally, WiFi and other forms of radio transmission are limited by the current frequency and power of electronic switching. That's one of the reasons why radio stations still depend somewhat on vacuum tubes. With vacuum-channel transistors, those switching-speed problems go away. Indeed, NASA researchers believe their vacuum-channel transistors should be able to reach terahertz (1000GHz) switching, which might make WiFi faster than almost any direct-connect networking today.

~~~~
Comments?

Photo Credit(s): ENIAC panel (rear) by Erik Pittit, The Vacuum Tube Transistor from IEEE Spectrum


MCS, UltraDIMMs and memory IO, the new path ahead – part 2

In part 1 (see previous post here), we discussed the underlying technology for SanDisk's UltraDIMMs, based on Diablo Technologies' MCS hardware and software. IBM will be shipping UltraDIMMs in their high-end servers later this year as their new eXFlash.

In this segment we will discuss what SanDisk has put on top of Diablo Technologies' MCS to supply SSD storage.

SanDisk UltraDIMM SSD storage

In the UltraDIMM package, SanDisk supports 200 or 400GB of 19nm MLC NAND SSD storage that is accessed internally via SATA [corrected after this went out, Ed.], but the main interface to the UltraDIMMs is 1600MHz DDR3. As each UltraDIMM card plugs into any DDR3 memory slot, you can potentially support multiples of these cards in a single server. I believed the maximum was 7 UltraDIMMs [corrected after this went out, Ed.]; in fact it depends on the number of memory slots in your server, and IBM on their x3850 and x3950 can support up to 32 UltraDIMMs per server.

SanDisk uses their Guardian Technology to enhance NAND endurance beyond what's possible with native NAND controllers. One of the things Guardian Technology does is vary the voltage used to program the NAND bits over the life of the bit cells/pages. Early on, when a cell is fresh, they can use less voltage, and as it ages they increase the voltage to ensure the bits are properly programmed. Other NAND controllers use the same voltage across the whole NAND lifetime, which unduly stresses the NAND bits early on; later, as the cells age, that fixed voltage can no longer program them properly and they have to be flagged as bad. The NAND chips/bits are characterized so that Guardian Technology can use an optimum voltage curve over the chip's lifetime.
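
Just to illustrate the general idea (this is my own sketch with made-up numbers, not SanDisk's actual algorithm or voltages), a controller could interpolate the program voltage from a characterized wear curve:

```python
# Hypothetical characterization curve: (program/erase cycle count, volts).
# The actual curve and voltages used by Guardian Technology are not public;
# these numbers are made up for illustration only.
WEAR_CURVE = [(0, 14.0), (1_000, 15.0), (3_000, 16.5), (10_000, 18.0)]

def program_voltage(pe_cycles: int) -> float:
    """Linearly interpolate a program voltage for the cell's current wear."""
    for (c0, v0), (c1, v1) in zip(WEAR_CURVE, WEAR_CURVE[1:]):
        if pe_cycles <= c1:
            return v0 + (v1 - v0) * (pe_cycles - c0) / (c1 - c0)
    return WEAR_CURVE[-1][1]   # fully worn: use the maximum voltage

print(program_voltage(0), program_voltage(2_000), program_voltage(10_000))
# -> 14.0, 15.75, 18.0: a fresh cell gets gentler programming and an aged
#    cell gets a stronger pulse, instead of one fixed voltage for life.
```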

The UltraDIMMs also have power-loss protection. This means that any write to UltraDIMM memory that's been acknowledged to the server is guaranteed to have sufficient power to make it all the way to the SSD storage.

Another thing the MCS memory interface brings to the picture is Error Correction Circuitry (ECC). Data written to UltraDIMMs has ECC protection throughout the data path, from the server DRAM memory, through the DIMM socket, all the way to the SSD flash.

As discussed extensively in Part 1 of this post, access times for UltraDIMM storage are on the order of 7µsec, which is ~7X faster than best-of-class PCIe flash storage, and a single UltraDIMM card is capable of sustaining 20GB/second of throughput. I know of enterprise-class storage systems that can't do half that.

On the other hand, one problem with UltraDIMM storage is that it is not hot swappable. This is primarily a memory interface limitation and not an UltraDIMM issue, but nonetheless you can't swap an UltraDIMM module until the server is powered down. And who would want to do such a thing while the server is powered on anyway?

SanDisk's long history in NAND

As you can see from the three photos at right, SanDisk seems to have been involved in flash/NAND technology innovation since the early 1990's. At the time, NOR and NAND were competing for almost the same market.

But sometime in the mid-to-late 1990's, NAND found a niche in consumer cameras and never looked back. I'm not sure where the NOR market is today, but it's a drop in the bucket compared to the NAND market.

UltraDIMMs are just the latest platform to support NAND storage access. They happen to offer blazingly fast access times and high IO parallelism, but in the end they just represent another way for IT customers to obtain the benefits of NAND.

Also, SanDisk’s commercial NAND (Memory Card) business seems to be very healthy. What with higher resolution photos/video/audio coming online over the next decade or so it doesn’t seem to be going away anytime soon.

SanDisk is in a new joint venture (JV) with Toshiba to produce 3D NAND flash, but in the meantime they are still using 2D flash for their current SSD storage. Together, Toshiba and SanDisk's current JV manufactures about half the NAND bits in the world today.

The rest of SanDisk's NAND business also seems to be doing well. And the aforementioned JV with Toshiba on 3D NAND looks positioned to take all of this NAND to the next level of density, which should make all of us happy.

SanDisk acquiring FusionIO

SanDisk has been in the news lately, having recently filed to acquire FusionIO, a prominent and early PCIe flash supplier that in recent years broadened its portfolio to include enterprise storage with its acquisition of NexGen Storage (renamed IO Control).

When FusionIO IPO'd, the stock sold at ~$19/share; SanDisk is purchasing the company in an all-cash deal for $11.25/share, almost a 40% reduction in share price in three years (June '11 IPO) – ouch. At IPO the company was valued at ~$2B (some pundits said ~$1.5B, so there's some debate on the original valuation). SanDisk is buying the company for ~$1.1B in cash. Any way you look at it, they paid significantly less than what the company was worth at IPO. Granted, it was valued at 41X earnings then, and its recent stock price of $11.59 represents a 3.3 P/E (ttm).

Not exactly certain what happened. Analysts seem to indicate that Apple and Facebook, FusionIO's biggest customers, were buying less FusionIO product. I also happen to think that the PCIe flash space has gotten pretty crowded over the last three years, with entrants from Micron Technologies, Intel, LSI, Verident/Western Digital, and others.

In addition, for PCIe flash to broaden its market there's a serious need to surround it with sophisticated caching software that enables a more general-purpose IO solution (see Pernix Data, Proximal Data, and others). These general-purpose caching solutions have finally reached high levels of sophistication and are just now becoming more widely available.

~~~~

Originally, part 3 of this series was going to be on IBM's release of the UltraDIMM technology as their new eXFlash. However, I am somewhat surprised not to see other vendors taking up the MCS/UltraDIMM technology; IBM may have limited exclusivity to it.

The only other thing this interesting happening in solid state storage is HP's Memristor Machine, which is still a ways off.

Nonetheless, a new, much faster, memory-DIMM-based SSD is hitting the market, and if history is any indication, it won't be long before the data storage world sits up and takes notice.

Comments?


Computational Anthropology & Archeology

Read an article this week from Technology Review on The Emerging Science of Computational Anthropology. It was about using raw social media feeds to study patterns of human behavior and how they change over time. In this article, the researchers had come up with heuristics that could be used to identify when people are local to an area and when they are visiting or new to it.

Also this past week, there was an article in the Economist, Mining for Tweets of Gold, about the startup DataMinr, which uses raw Twitter feeds to supply information about what's going on in the world today. Apparently DataMinr is used by quite a few financial firms, news outlets and others, and has a good reputation for surfacing news items that have not been reported yet. DataMinr is just one of a number of commercial entities doing this sort of analysis on Twitter data.

A couple of weeks ago I wrote a blog post on Free Social and Mobile Data as a Public Good. In that post I argued that social and mobile data should be published periodically, in an open format, so that any researcher around the world could examine it.

Computational Anthropology

Anthropology is the comparative study of the human culture and condition, both past and present. There are many branches to the study of anthropology, including but not limited to physical/biological, social/cultural, archeological and linguistic anthropology. Using social media/mobile data to understand human behavior, development and culture would fit into the social/cultural branch.

I have also previously written about some recent computational anthropology research (although I didn't call it that); please see my Cheap phones + big data = better world and Mobile phone metadata underpins a new science posts. The fact is that mobile phone metadata can be used to create a detailed and deep understanding of a society's mobility. A better understanding of human mobility in a region can be used to create more effective mass transit and more efficient road networks, and to reduce pollution and energy use, among other things.

Social media can be used in a similar manner, but it offers more than just location information: some of it captures how people describe events and how they interact through text and media technologies. One research paper discussed how tweets could be used to detect earthquakes in real time (see: Earthquake Shakes Twitter Users: Real-time Event Detection by Social Sensors).
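
To give a feel for the general social-sensor idea (a minimal keyword-burst sketch of my own, not the method used in the paper), you could count event keywords in a sliding time window and flag a spike above the normal rate:

```python
from collections import deque
import time

# Hypothetical social-sensor sketch: flag an "event" when mentions of a
# keyword in the last minute jump well above an assumed baseline rate.
WINDOW_SECS = 60
BURST_FACTOR = 5          # how many times the baseline counts as a burst
BASELINE_PER_MIN = 2.0    # assumed normal mention rate for the keyword

recent = deque()          # timestamps of matching tweets

def observe(tweet_text: str, keyword: str = "earthquake") -> bool:
    """Record one tweet; return True if mention volume looks like an event."""
    now = time.time()
    if keyword in tweet_text.lower():
        recent.append(now)
    while recent and now - recent[0] > WINDOW_SECS:
        recent.popleft()  # drop mentions that fell out of the window
    return len(recent) >= BURST_FACTOR * BASELINE_PER_MIN

# e.g. observe("Whoa, big earthquake just now!") stays False until enough
# mentions pile up inside the one-minute window.
```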

Although the location information provided by mobile phone data is more important to governments and transportation officials, it appears as if social media data is more important to organizations seeking news, events, or sentiment trending analysis.

Sources of the data today

Recently, Twitter announced that it would make its data available to a handful of research organizations (see: Twitter releasing trove of user data …).

On the other hand, Facebook and LinkedIn seem a bit more restrictive about allowing access to their data. They have a few data scientists on staff, but if you want access to their data you have to apply for it, and only a few applicants are accepted.

Although Google, Twitter, Facebook, LinkedIn and the telecoms represent the lion's share of social/mobile data out there today, plenty of other potentially useful sources of information come to mind. Notwithstanding the NSA, there is currently limited research access to the actual content of mobile phone texts/messaging and, god forbid, emails. Although privacy concerns are high, I believe this ultimately needs to change.

Imagine if researchers had access to all the texts of a high school student body. Yes, much of it would be worthless, but some of it would tell a very revealing story about teenage relationships, interests and culture, among other things. And having this sort of information over time could reveal the history of teenage cultural change. Much of this would previously have been gleaned from magazines, but today texts would provide a much more granular view of the same information.

Computational Archeology

Archeology is just anthropology from a historical perspective, i.e., the study of the history of cultures, societies and life. Computational archeology, in this sense, would apply to the history of the use of computers, social media, telecommunications, the Internet/WWW, etc.

There are only a few widely available resources for this data, such as the Internet Archive. But much of the history of WWW, social media and telecom use sits inside current and defunct organizations that, aside from Twitter, continue to be very stingy with their data.

Over time all such data will be lost or become inaccessible unless something is done to make it available to research organizations. I believe sooner or later humanity will wise up to the loss of this treasure trove of information, create some sort of historical archive for it, and require companies to supply their data to it over time.

Comments?

Photo Credit(s): State of the Linked Open Data Cloud (LOD), September 2011 by Duncan Hull


Google cloud offers SSD storage

Read an article the other day on Google Cloud tests out fast, high I/O SSD drives. I suppose it was only a matter of time before cloud services included SSDs in their I/O mix.

Yet it doesn't seem to me to be as simple as adding SSDs to the storage catalog. Enterprise storage vendors have had SSDs arguably since January of 2008 (see my EMC introduced SSDs to DMX dispatch). And although there is certainly a class of applications that can take advantage of SSD low latency/high IOPS, the vast majority of applications don't seem to require these capabilities.

Storage systems use of SSDs today

That's why most enterprise storage vendors support some form of automated storage tiering or flash caching of normal I/O in their high-end storage systems, together with offering plain old SSDs as data storage. In this more sophisticated solution, customers have the option to assign application data to SSD-only, hybrid SSD-disk, or disk-only storage. In this way the customer gets to decide whether they want some sort of mix or just pure SSD or disk IO to satisfy their application IO requirements.

Storage startups have emerged that take on both the hybrid SSD-disk and all-flash models and add quality of service (QoS) to the picture. An example of an all-flash array that supplies QoS is SolidFire (learn more about SolidFire in our GreyBeardsOnStorage podcast with Dave Wright). An example that does the same sort of thing for hybrid storage is Fusion IOcontrol (formerly NexGen) storage.

Storage system QoS

In the case of SolidFire, one can limit volumes or volume groups with an IOPS max, a throughput max and a burst max. The burst is a sort of credit that accrues over time if the application doesn't use its maximum IOPS/throughput; the application can then consume that credit above its maximums, up to the burst max, for a limited timeframe.
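
As a rough illustration of how such burst credits generally work (a sketch of my own, not SolidFire's actual implementation), unused headroom accrues as credit, capped at some maximum, and can be spent later to exceed the steady-state limit up to the burst max:

```python
class BurstLimiter:
    """Toy IOPS limiter with burst credits (illustrative only)."""
    def __init__(self, iops_max: int, burst_max: int):
        self.iops_max = iops_max      # steady-state IOPS ceiling
        self.burst_max = burst_max    # absolute ceiling while credit remains
        self.credit = 0               # accrued, unused IOPS

    def allowed_this_second(self, requested: int) -> int:
        # Unused headroom from this second accrues as credit (capped).
        if requested < self.iops_max:
            self.credit = min(self.credit + (self.iops_max - requested),
                              10 * self.iops_max)   # arbitrary credit cap
            return requested
        # Over the limit: spend credit, but never exceed the burst max.
        allowed = min(requested, self.iops_max + self.credit, self.burst_max)
        self.credit -= allowed - self.iops_max
        return allowed

qos = BurstLimiter(iops_max=2_000, burst_max=4_000)
print(qos.allowed_this_second(500))    # 500  (accrues 1500 of credit)
print(qos.allowed_this_second(5_000))  # 3500 (2000 max + 1500 credit)
print(qos.allowed_this_second(5_000))  # 2000 (credit exhausted)
```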

QoS capabilities are slowly making their way into enterprise storage systems as well, but it will take some time for the instrumentation and capabilities to be put in place. One can already see limited QoS in IBM DS8000 priority IO, NetApp Storage QoS, EMC Unisphere QoS Manager for VNX & SMC QoS for VMAX, and HDS SVOS QoS via partitioning. Most of these capabilities control access to, or partition, cache, backend and frontend resources for host volumes. As such, they are not nearly as sophisticated or as easy to use as what SolidFire and other startups are offering, but they are getting there.

Cloud SSD pricing

Back to the cloud offering. According to the GigaOm article, Google SSD volumes can sustain up to 15K IOPS, and Google is charging a premium price for this storage ($0.325/GB-month). Apparently Amazon AWS offers high-IO EC2 storage as well, with a maximum of 4K IOPS, but charges a premium both for the storage ($0.125/GB-month) and on an IOPS basis ($0.10/IOPS-month). GigaOm had a pricing comparison for 500GB and 2000 IOPS indicating that the Google SSD storage would cost $163/month and the AWS provisioned SSD storage would cost $263/month ($62.50 for storage and $200 for the 2000 IOPS).
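
The arithmetic behind that comparison, using the per-unit prices quoted above:

```python
GB = 500
IOPS = 2_000

google = GB * 0.325                      # capacity-only pricing
aws = GB * 0.125 + IOPS * 0.10           # capacity plus provisioned IOPS

print(f"Google SSD: ${google:.2f}/month")  # $162.50 (~$163)
print(f"AWS SSD:    ${aws:.2f}/month")     # $262.50 ($62.50 + $200.00)
```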

The fact that you can drive the Google SSD to its limits without incurring any extra cost seems like a serious advantage to me, and one that would be very appealing to most enterprise customers.

But where’s latency

It seems to me that after some IOPS level is attained, most mission-critical applications care more about low-latency IO (for more on why low latency matters, see my IO throughput vs. low latency post…). Many storage systems are capable of maximums in the 100,000s of IOPS, but most shops never run them that hard. And with proper use of SSDs, most enterprise storage is now clocking sub-msec, low-latency IO.

However, I have yet to see any cloud storage pricing, or QoS for that matter, based on latency guarantees. I think this is a serious omission.

In any event, SSDs in the cloud are a good thing; now they just need to offer flash caching, automated storage tiering and sophisticated QoS. I realize this is partially re-inventing enterprise storage in the cloud, but isn't that what everyone actually wants, at cloud storage pricing of course?

Comments?


MCS, UltraDIMMs and memory IO, the new path ahead – part 1

I was at Storage Field Day 5 (SFD5) last month and got a chance to talk with SanDisk and Diablo Technologies. It turns out that SanDisk's UltraDIMM product is based on Diablo Technologies' MCS hardware, so the two of them provided a pretty deep dive into the technology and where they want to go with it. Before we go any deeper: the UltraDIMMs will be released to the field by IBM under the eXFlash name.

Diablo Technologies

The team at Diablo have been focused on the standard x86 memory channel for a while now, and lately have been trying out different sorts of technologies to connect to it as CPU memory. Their first Memory Channel Storage (MCS) product converts memory channel IO to SATA IO. This allows any SATA device to be attached as memory and enjoy lightning-fast memory access times: access times are clocked at 7µsec. Most PCIe flash cards have an access latency of 50µsec or more, so this is ~7X faster than PCIe flash. They also claim the MCS is capable of 20GB/sec; I know enterprise-class storage systems that can't do that. The MCS utilizes 2 memory channels.
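
A quick sanity check on those claims, assuming DDR3-1600 memory channels (1600 MT/s at 8 bytes per transfer, or about 12.8GB/sec peak per channel):

```python
# Back-of-envelope check on the MCS claims above.
DDR3_1600_GBPS = 1600e6 * 8 / 1e9   # ~12.8 GB/sec peak per channel
channels = 2

peak_bw = channels * DDR3_1600_GBPS
print(f"Peak across {channels} channels: {peak_bw:.1f} GB/sec")  # ~25.6
# So a claimed 20GB/sec fits under the theoretical peak of two channels.

pcie_flash_latency_us = 50
mcs_latency_us = 7
print(f"Latency advantage: ~{pcie_flash_latency_us / mcs_latency_us:.1f}X")  # ~7.1X
```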

Diablo delivers a chip (that converts memory IO to SATA IO) and software that provides block IO access to the MCS device. Customers of MCS supply their own SATA flash storage device and presumably package it all together in a DIMM-compatible card.

But the main constraint is that the whole MCS chip and SATA flash device has to fit in the form factor of a DIMM, and cannot draw any more power than a memory device would, ~10-15W, with the corresponding thermal load.

But this seems plenty for a small flash drive. The MCS is configured as a 4GB DDR3 DIMM. The BIOS has to be patched so that it doesn't run diagnostic memory tests against the MCS device, and Diablo's software needs to be loaded to access the device as a block device. I believe they currently support Linux, with more O/Ss on the way.

Diablo has looked at other applications for their technology; a memory-IO-accessed Ethernet NIC was mentioned. But it seems flash storage is a great first application. It's not clear to me, but SAS would also seem to be something that could be done.

Whatever comes after NAND in next-generation semiconductor storage (see my The end of NAND is near post), it seems to me that accessing it as memory IO would make an awful lot of sense. Using MCS as the access channel would seem to be a logical next step.

Part 1 of this story is on Diablo Technologies, Part 2 will be on SanDisk, and there may be a Part 3 on IBM eXFlash, though I'm not sure yet. So stay tuned.

Comments?


Releasing social and mobile data as a public good

I have recently been reading a book called Uncharted: Big Data as a Lens on Human Culture, by Erez Aiden and Jean-Baptiste Michel, which discusses the use of Google's Ngram search engine, which counts phrases (Ngrams) used in all the books Google has digitized. Ngram phrases can be charted against other Ngrams and plotted in real time.

It's an interesting concept. One example they use is the 3-gram "United States are" vs. "United States is", which shows that the singular version of the phrase, often said to have emerged immediately after the Civil War, was actually in use before the war and didn't really take off until the 1880's, some 15 years after the war's end.

I haven't finished the book yet, but it got me thinking. The authors petitioned Google for access to the Ngram data, which led to their original research. Then, after their research time was up, they convinced Google to release the information to the general public. Great for them, but it was a one-time event that happened to work out through luck and persistence.

The world needs more data

But there's plenty of other data out there, buried away in corporate databases, that we could use to learn an awful lot about human social interaction and other aspects of the world. Yes, sometimes this information is made public (as Google did), or made available for specific research (see my post on using mobile phone data to understand mobility in an urban environment), but these are special situations. Once the research is over, the data is typically no longer available to the general public, and getting future or past data outside the research boundaries requires yet another research proposal.

And yet books and magazines are universally available to anyone for a fair price, and can be found in most research libraries for free as a general public good. Why should electronic data be any different?

Social and mobile data as a public good

What I would propose is that the Library of Congress and other research libraries around the world have access to all corporate data that documents interactions between humans, between humans and the environment, between humanity and society, etc. This data would be freely available to anyone with library access and could be used to support research activities that have yet to be envisioned.

Hopefully all of this data would be released to these institutions free of charge (or for some nominal fee) after some period of time has elapsed. For example, if we were talking about Twitter, Facebook or Instagram feeds, the data from, say, 7 years back would be provided on a recurring yearly or quarterly basis. I'm not sure whether the delay should be 7, 10 or 15 years, but after some judicious period of time, the data would be released and made publicly available.

There are a number of other considerations:

  • Anonymity – somehow any information about a person's identity, actual location or other potentially identifying characteristics would need to be removed from all the data. I realize this may reduce the value of the data to future researchers, but it must be done. I also realize this may not be an easy thing to accomplish, which is why the data could potentially be sold to research libraries for a fair price. Perhaps after 35 to 100 years or so the identifying information could be re-incorporated into the original data set, but I think that's highly unlikely.
  • Accessibility – somehow the data would need an easily accessible and understandable description that would enable any researcher to understand its underlying format. This description should probably be in XML or some other universal description language. At a minimum it would need to include meta-data descriptions of the structure of the data, with all the tables, rows and fields completely described. This could be in SQL format or just XML, but it needs to be made available. The data release itself would then need to be available in a database or in flat-file formats that could be uploaded by the research libraries and accessed by researchers. I would expect this to use some sort of open source database/file service such as MySQL or another database engine. These databases represent the counterpart to the bookshelves in today's libraries and have to be universally accessible and forever available.
  • Identifiability – somehow the data releases would need to be universally identifiable, not unlike the ISBN scheme currently in use for books and magazines and the ISRC scheme used for recordings. This would allow researchers to uniquely refer to any data set used to underpin their research. It would also allow the world's research libraries to ensure that they purchase and maintain all the data that becomes available, by using some sort of master worldwide catalog holding pointers to all the data currently held in research institutions. Such a catalog entry would represent additional meta-data for the data release and would be the counterpart to an online library card catalog.
  • Legality – somehow any data release would need to respect the local data privacy and protection laws of the country where the data resides. This could potentially require data generated in one country, say Germany, to be held in that country only. I would think this could be accomplished easily enough, as long as that country is willing to host all its data in its research institutions.

I am probably forgetting a dozen more considerations but this covers most of it.

How to get companies to release their data

One question that quickly comes to mind is how to compel companies to release their data in a timely fashion. I believe that data such as this is inherently valuable to a company, but that its corporate value diminishes over time and eventually goes to zero.

However, the value of such data to the world follows the inverse curve. That is, the further we get from the time period when the data was created, the more value it has for future research. Just consider what researchers today do with letters, books and magazine articles from the past when they are studying a specific period in history.

But we need to act now. We are already over 7 years into the Facebook era, and mobile phones have been around for decades. We have probably already lost much of the mobile phone tracking information from the '80s, '90s and '00s, and may already be losing the data from the early '10s. Some social networks have already risen and gone into a long eclipse where historical data is probably their lowest concern. Today, nothing compels organizations to keep this data around.

Types of data to release

Obviously, any social networking data, mobile phone data or email/chat/texting data should be made available to the world after 7 or more years. Private photo libraries, video feeds, audio recordings, etc. should also be released if not already readily available. Less clear to me is utility data, such as smart power meter readings, water consumption readings, tollway activity, etc.

One standard might be: if there is any current research activity based on private corporate data, then that data should ultimately become available to the world. The downside is that companies may be more reluctant to grant research access if doing so becomes a criterion for releasing the data.

But maybe researchers themselves should be able to submit requests for data releases; that way it wouldn't matter whether the companies declined.

There is no way anyone could possibly identify all the data that future researchers will need. So I would err on the side of being more inclusive rather than less inclusive in identifying classes of data to be released.

The dawn of Psychohistory

The Uncharted book above seems to me to represent a first step toward realizing a science of psychohistory, as envisioned in Asimov's Foundation trilogy. It's unclear whether this will ever be a true, quantified scientific endeavor, but with appropriate data releases readily available for research, perhaps someday we can help create the science of psychohistory. In the meantime, through judicious, periodic data releases and appropriate research, we can certainly better understand how the world works and, just maybe, improve its internal workings for everyone on the planet.

Comments?

Picture Credit(s): Amazon and Wikipedia 


Veeam’s upcoming V8 virtues

[Not] Vamoosing VMworld

We were at Storage Field Day 5 (SFD5, see the videos here) last month and had a briefing on Veeam’s upcoming V8 release.

They also told us (news to me) that they are leaving VMworld [I sit corrected: I have been informed after this went to press that Veeam is not leaving VMworld 2014 and never said anything about it at the session - my mistake and I take full responsibility, sorry for any confusion] (sigh, now who's going to have THE after-conference KILLER PARTY at VMworld) and moving to [but they did say they are definitely starting up] their own VeeamON conference at The Cosmopolitan in Las Vegas on October 6, 7 & 8 this year. If their VMworld parties are any indication, the conference at the Cosmo should be a fun and rewarding time for all. Pre-registration is open and they have a call out for papers.

Doug Hazelman (@VMDoug), Rick Vanover (@RickVanover) and Luca Dell'Oca (@dellock6) all presented, although Luca's session was under strict NDA, to be revealed later, I think sometime this summer.

Doug mentioned that after 6 years, Veeam now has over 100,000 customers worldwide. One of their more popular early innovations was the ability to run a VM directly off of a backup, and sometime over the past couple of years they moved from a VMware-only backup & replication solution to also supporting Microsoft Hyper-V (more news to me).

V8's virtues

Veeam V8 will add some interesting capabilities to the Veeam product line:

  • (VMware only) Built-in backups from storage snapshots – (Enterprise Plus edition only) Backups from VMware snapshots can sometimes impact app performance, especially when it comes time to commit changes. But starting with V7, Veeam offers backup utilizing VMware's Change Block Tracking (CBT) while taking the backup directly from storage snapshots, for HP 3PAR StoreServ, HP (LeftHand) StoreVirtual/StoreVirtual VSA and, in the soon-to-be-available V8, NetApp FAS (Data ONTAP 8.1 or above, 7- or cluster-mode, clones too) storage systems. First, Veeam does its application-level processing (under Windows Server this means VSS operations); once that completes, it tells VMware to take a (VMware) snapshot; when that completes, it tells the storage to take a (storage) snapshot; and when that completes, it releases the VMware snapshot (see the sketch just after this list). All this lets them use VMware CBT as well as storage snapshots, which makes it up to 20 times faster than normal VMware snapshot backups, since they can back up directly from the storage snapshot using the Veeam proxy. And because the VMware snapshot is so short-lived, there is little overhead for committing any changes. There is also no need to use a proxy ESX server to do this, i.e., promote the VMware snapshot to a LUN, add it to an ESX host, resignature it, add the VM and do the backups there, which, of course, destroys CBT. This works for FC, iSCSI and NFS data stores. With NetApp storage you can also take the (VSS) application-consistent snapshot and copy it to SnapVault.
  • Veeam Explorer (recovery) for storage snapshots – (Free backup edition) Recovery from storage snapshots (HP in V7 & NetApp in V8) is yet another feature; it provides item-level (e.g., emails, contacts, email folders for Exchange), granular (VM-level or file-level) or full (volume) recovery from storage-based snapshots, regardless of how those storage snapshots were created.
  • Veeam Explorer for SQL Server (V8 only) – (unsure what license is required) Similar to the Explorer for snapshots discussed above, this allows a Veeam admin to do item-level recovery for a SQL database, from Veeam backup repositories as well as from storage snapshots. That means you could restore a ROW of a SQL table, a whole SQL TABLE, or an entire SQL database. DBAs have always had these sorts of abilities, which required using log services. But allowing a Veeam admin to do them seems like putting a gun in the hands of a child (or maybe a bazooka in the hands of an untrained civilian).
  • Veeam Explorer for Active Directory (V8 only) – (unsure what license is required) You've seen what's available above; just consider the same capabilities applied to Active Directory. This means you can restore a password hash, user, group or organizational unit (OU). I don't know about you, but this seems more akin to a howitzer in the hands of a civilian.
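
For what it's worth, here is a rough sketch of the backup-from-storage-snapshot sequence described in the first bullet above. The helper names are hypothetical stand-ins, not Veeam's, VMware's or any array's actual APIs; the point is just the ordering that keeps the VMware snapshot short-lived while preserving CBT:

```python
# Illustrative ordering only: these helpers are stubs standing in for
# hypothetical Veeam/VMware/array operations, not real product APIs.

def quiesce_applications(vm):
    print(f"app-level processing (VSS) for {vm}")

def take_vmware_snapshot(vm):
    print(f"VMware snapshot of {vm}")
    return "vm-snap"

def release_vmware_snapshot(snap):
    print(f"release {snap} (short-lived, so little to commit)")

def take_storage_snapshot(volume):
    print(f"array snapshot of {volume}")
    return "array-snap"

def backup_vm(vm, volume, changed_blocks):
    quiesce_applications(vm)                    # 1. quiesce apps first
    vm_snap = take_vmware_snapshot(vm)          # 2. brief VMware snapshot (CBT stays usable)
    array_snap = take_storage_snapshot(volume)  # 3. array snapshot of the datastore volume
    release_vmware_snapshot(vm_snap)            # 4. release right away
    print(f"proxy backs up {changed_blocks} from {array_snap}")  # 5. CBT-changed blocks only

backup_vm("sql-vm-01", "datastore-vol-3", ["block-7", "block-42"])
```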

They showed an example of a competitive situation running V8 (in beta?) with NetApp snapshot-based backups against some unnamed competitor. They were able to complete a full backup in 1/4 the time of the competition (2hrs vs. 8hrs) and completed incremental backups in 35min vs. 2hrs for the competition.

“Thar be dragons there …”

OK, maybe I am a little more paranoid than the average IT guy/gal. But in my (old-world, greybeard) view, SQL databases belong in the realm of DBAs and Active Directory databases belong to domain controller admins. Messing around with production SQL or AD databases seems hazardous to a data center's health. We're not just talking files anymore here, guys.

In Veeam's defense, these new Explorer recovery tools are probably only going to be used for something that needs to be done right away, to get things operating again, and would not be used unless there's a real need/emergency. Otherwise, let the DBAs and security admins do it with their log recovery tools. And another thing: they have had similar capabilities for Exchange emails, folders, contacts, etc. for a while and no one's shot their foot off yet, so why the concern?

Nonetheless, I feel strongly that these tools ought to be placed under lock and key and the key put in a safe with the combination under a glass case labeled IN CASE OF EMERGENCY, BREAK GLASS.

Comments?
