To partition this space a bit, there is unstructured data analysis and structured data analysis. Hadoop is used to analyze unstructured data (although part of what Hadoop does is parse and impose structure on that data).
On the other hand, for structured data there are a number of other options currently available. Namely:
EMC Greenplum – a relational database available as software-only and now also as a hardware appliance. Greenplum supports both row- and column-oriented data structuring and has support for policy-based data placement across multiple storage tiers. There is a packaged solution that combines Greenplum software with a Hadoop distribution running on a Greenplum appliance.
HP Vertica – a column-oriented, relational database currently available as a software-only distribution. Vertica supports aggressive data compression and provides high-throughput query performance. They were early supporters of Hadoop integration, providing Hadoop MapReduce and Pig API connectors that give Hadoop access to data in Vertica databases, along with job scheduling integration.
IBM Netezza – a relational database system based on a proprietary hardware analysis engine configured in a blade system. Netezza is the second oldest solution on this list (see Teradata for the oldest). Since the acquisition by IBM, Netezza provides their highest performing solution on IBM blade hardware, but all of their systems depend on purpose-built FPGA chips designed to perform high-speed queries across relational data. Netezza has a number of partner and/or homegrown solutions that provide specialized analysis for specific verticals such as retail, telecom, financial services, and others. Netezza also provides tight integration with various Oracle functionality, but there doesn't appear to be much direct integration with Hadoop on their website.
ParAccel – a column-based, relational database available as a software-only solution. ParAccel offers a number of storage deployment options, including an all in-memory database, a DAS database, or an SSD database. In addition, ParAccel offers a Blended Scan approach providing a two-tier database structure with DAS and SAN storage. There appears to be some integration with Hadoop, indicating that data stored in HDFS and structured by MapReduce can be loaded and analyzed by ParAccel.
Teradata – a relational database based on proprietary, purpose-built appliance hardware. Teradata recently came out with an all-SSD solution which provides very high performance for database queries. The company was started in 1979, has been very successful in the retail, telecom, and financial services verticals, and offers a number of special-purpose applications supporting data analysis for these and other verticals. There appears to be some integration with Hadoop, but it's not prominent on their website.
I'm probably missing a few other solutions, but these appear to be the main ones at the moment.
In any case, both Hadoop and most of its software-only, structured competition are based on a massively parallelized, shared-nothing set of Linux servers. The two hardware-based solutions listed above (Teradata and Netezza) also operate in a massively parallel processing mode to load and analyze data. Such solutions provide scale-out performance at a reasonable cost to support very large databases (PBs of data).
Now that EMC owns Greenplum and HP owns Vertica, we are likely to see more appliance-based packaging options for both of these offerings. EMC has taken the lead here and has already announced Greenplum-specific appliance packages.
One lingering question about these solutions is why customers don't use traditional database systems (Oracle, DB2, Postgres, MySQL) to do this analysis. The answer seems to lie in the fact that these traditional solutions are not massively parallelized. Thus, doing this analysis on TBs or PBs of data would take too long. Moreover, the cost to support data analysis with traditional database solutions over PBs of data would be prohibitive. For these reasons, and because compute power has become so cheap nowadays, structured data analytics for large databases has migrated to these special-purpose, massively parallelized solutions.
I was talking the other day with another analyst, John Koller of Kai Consulting, who specializes in the medical space, and he mentioned the rise of electronic pathology (e-pathology). I hadn't heard about this one.
He said that just like radiology had done in the recent past, pathology investigations are moving to make use of digital formats.
What does that mean?
The biopsies taken today for cancer and disease diagnosis, which involve one or more specimens of tissue examined under a microscope, will now be digitized, and the digital files will be inspected instead of the original slide.
Apparently microscopic examinations typically use a 1×3 inch slide that can have the whole slide devoted to some tissue matter. To do a pathological examination, one has to digitize the whole slide, under magnification, at various depths within the tissue. According to Koller, any tissue is essentially a 3D structure, and pathological exams must inspect different depths (slices) within this sample to form a diagnosis.
I was struck by the need for different slices of the same specimen. I hadn’t anticipated that but whenever I look in a microscope, I am always adjusting the focal length, showing different depths within the slide. So it makes sense, if you want to understand the pathology of a tissue sample, multiple views (or slices) at different depths are a necessity.
So what does a slide take in storage capacity?
Koller said an uncompressed, full slide will take about 300GB of space. However, with compression, and given that most often the slide is not completely used, a more typical space consumption would be on the order of 3 to 5GB per specimen.
As for volume, Koller indicated that a medium hospital facility (~300 beds) typically does around 30K radiological studies a year but about 10X that in pathological studies. So at 300K pathological examinations a year, at 3 to 5GB each, we are talking about 0.9 to 1.5PB of digitized specimen images a year for a mid-sized hospital.
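The arithmetic, using the figures Koller quoted, works out as follows (study counts and per-specimen sizes are his estimates, not measured data):

```python
# Back-of-the-envelope check of the e-pathology storage numbers above.
# Assumptions (from the conversation): a mid-sized (~300 bed) hospital does
# ~30K radiology studies/year, ~10X that in pathology, at 3-5GB per specimen.

radiology_studies_per_year = 30_000
pathology_studies_per_year = 10 * radiology_studies_per_year   # 300K

gb_per_specimen_low, gb_per_specimen_high = 3, 5

low_tb = pathology_studies_per_year * gb_per_specimen_low / 1_000    # GB -> TB
high_tb = pathology_studies_per_year * gb_per_specimen_high / 1_000

print(f"{low_tb:,.0f} TB to {high_tb:,.0f} TB per year")
```

That is roughly a petabyte a year of new specimen images for a single mid-sized hospital.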
Why move to e-pathology?
It can open up a myriad of telemedicine offerings similar to the radiological study services currently available around the globe. Today, non-electronic pathology involves sending specimens off to a local lab for examination by medical technicians under a microscope. But with e-pathology, the specimen gets digitized (where? the hospital, the lab?) and then the digital files can be sent anywhere around the world, wherever someone is qualified and available to scrutinize them.
At a recent analyst event we were discussing big data, and aside from the analytics component and other markets, the vendor mentioned that content archives are starting to explode. Given where e-pathology is heading, I can understand why.
I was at another conference the other day where someone showed a chart that said the world will create 35ZB (1ZB = 10**21 bytes) of data and content in 2020, up from 800EB (1EB = 10**18 bytes) in 2009.
Every time I see something like this I cringe. Yes, lots of data is being created today, but what does that tell us about corporate data growth? Not much, I'd wager.
That being said, I have a couple of questions I would ask of the people who estimated this:
How much is personal data and how much is corporate data?
Did you factor in how entertainment data growth rates will change over time?
These two questions are crucial.
Entertainment dominates data growth
Just as personal entertainment is becoming the major consumer of national bandwidth (see study [requires login]), it’s clear to me that the majority of the data being created today is for personal consumption/entertainment – video, music, and image files.
I look at my own office: our corporate data (office files, PDFs, text, etc.) represents ~14% of the data we keep. Images, music, video, and audio take up the remainder of our data footprint. Is this data growing? Yes, faster than I would like, but the corporate data is only averaging ~30% YoY growth, while the overall data growth for our shop is averaging ~116% YoY. [As I interrupt this activity to load up another 3.3GB of photos and videos from our camera]
Moreover, although some media content is of significant external interest to select companies today (media and entertainment, social media photo/video sharing sites, mapping/satellite, healthcare, etc.), most corporations don't deal with lots of video, music, or audio data. Thus, I personally see the 30% growth as a more realistic growth rate for corporate data than 116%.
Will entertainment data growth flatten?
Will we see a drop in entertainment data growth rates over time? Undoubtedly.
Two factors will reduce the growth of this data.
What happens to entertainment data recording formats? I believe media recording formats are starting to level out. The issue here is one of fidelity to nature, in terms of how closely a digital representation matches reality as we perceive it. For example, most digital projection systems in movie theaters today run from ~2 to 8TB per feature-length motion picture, which seems to indicate that at some point further gains in fidelity (or more pixels per frame) may not be worth it. Similar issues will ultimately lead to a slowing down of other media encoding formats.
When will all the people that can create content be doing so? Recent data indicates that more than 2B people will be on the internet this year, or ~28% of the world's population. But at some point we must reach saturation on internet penetration, and when that happens data growth rates should also start to level out. Let's say, for argument's sake, that 800EB in 2009 was correct, and let's assume there were 1.5B internet users in 2009. As such, every 1B internet users corresponds to a data and content footprint of about 533EB, or ~0.5TB per internet user, which seems high but certainly doable.
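The per-user footprint falls out of simple division (the 800EB and 1.5B user figures are the post's assumptions, not verified data):

```python
# Rough data-and-content footprint per internet user under the assumptions
# above: 800EB created in 2009, ~1.5B internet users in 2009.
total_eb_2009 = 800
internet_users_2009 = 1.5e9

tb_per_user = total_eb_2009 * 1e6 / internet_users_2009   # 1 EB = 1e6 TB
print(f"~{tb_per_user:.2f} TB per internet user")         # ~0.53 TB
```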
Once these two factors level off, we should see world data and content growth rates plummet. Nonetheless, internet user population growth could be driving data growth rates for some time to come.
The scary part is that 35ZB represents only a ~41% compound annual growth rate over the period against the baseline 2009 data and content creation levels.
But I must assume this estimate doesn't consider much growth in the number of digital content creators; otherwise these numbers should go up substantially. In the last week, I ran across someone who said there would be 6B internet users by the end of the decade (can't seem to recall where, but it was a TEDx video). I find that a little hard to believe, but it was based on the assumption that most people will have smart phones with cellular data plans by that time. If that's the case, 35ZB seems awfully short of the mark.
A previous post blows this discussion completely away with just one application (see my Yottabytes by 2015 for the NSA post; a Yottabyte (YB) is 10**24 bytes of data), and I had already discussed an Exabyte-a-day and 3.3 Exabytes-a-day in prior posts. [Note, those YB by 2015 are all audio (phone) recordings, but if we start using Skype Video, FaceTime, and other video communications technologies, can Nonabytes (10**27 bytes) be far behind… BOOM!]
I started out thinking that 35ZB by 2020 wasn’t pertinent to corporate considerations and figured things had to flatten out, then convinced myself that it wasn’t large enough to accommodate internet user growth, and then finally recalled prior posts that put all this into even more perspective.
I have had this conversation before (and have blogged about it with Crowdsourcing business analyst …) where lots of time and effort (person years?) is devoted to scheduling one-on-one meetings between analyst firms and corporate executives. I may be repeating my earlier post, but the problem persists and I see an obviously easier way to solve it.
Auction off 1 on 1 time slots
Doing this puts the burden on the analyst community: the company gives every firm some amount of "analyst bucks" (A$) and then auctions off executive meeting slots. In this way the crowd of analysts would determine who best meets with whom (putting crowdsourcing to work).
Consider today’s solution:
Send out a list of topics to be discussed at the meeting,
Have the analyst firm select their top 3 or 5 topics, and
Have analyst relations sift the requests and executive availability to schedule the meetings.
For analyst events with 100s of analyst firms, 20 or more executives, and 10 or more time slots, the scheduling activity can become quite complex and time consuming.
I understand a corporation's need to make the most effective use of analysts' and executive management's time, but what better way to make this determination than to let the (analyst) market decide?
How an executive 1 on 1 auction could work
The way I see it is to hold some sort of Dutch or Japanese auction (see Wikipedia's auction entry) where all the analyst firm representatives attend a WebEx session and bid for 1-on-1 time slots with various executives. In this fashion the company could have the whole schedule laid out in a single day, with the only effort being identifying executives and time slots and supplying A$s to analyst firms.
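As a simpler illustration of the allocation mechanics, here's a one-shot sealed-bid variant rather than the Dutch/Japanese format described above; the firm names, slots, and bids are all made up:

```python
# Hypothetical sealed-bid allocation of executive time slots: each analyst
# firm bids some A$ per slot; the highest bidder wins each slot.

def allocate_slots(bids):
    """bids: {slot: {firm: A$ bid}} -> {slot: winning firm}"""
    return {slot: max(firm_bids, key=firm_bids.get)
            for slot, firm_bids in bids.items()}

bids = {
    "CEO-9am":  {"FirmA": 40, "FirmB": 55, "FirmC": 30},
    "CTO-10am": {"FirmA": 25, "FirmB": 10, "FirmC": 35},
}
print(allocate_slots(bids))  # {'CEO-9am': 'FirmB', 'CTO-10am': 'FirmC'}
```

A real Dutch or Japanese auction would adjust prices round by round, but the end result is the same: analyst demand, not analyst relations staff, decides the schedule.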
It doesn't even need to be that sophisticated: potentially it could be done on eBay with real money supplied by the company (usable only for bidding in executive time slot auctions) and donated to charity when the process is finished. There are any number of ways to do this on the quick and cheap. However, using eBay may be a bit too public; doing this over a conference call with WebEx would probably suffice just as well and could be totally private.
Of course, with this approach the company may find that there are some executives in higher demand than others. If so, perhaps a secondary auction could be supplied with more time slots. Ditto for executives whose time slots are not in demand – they could be released from providing time for 1-on-1 meetings.
In my prior post I mentioned the option that the corporation might want more control over who meets whom. In that case, allocating some A$s to the corporate executives (or A/R as their proxy) to augment analyst firm bids might do the trick. Of course, providing some firms more A$s would also give them preferential access. Obviously, this wouldn't provide as much absolute control as spending person-years of effort doing 1-on-1 scheduling, but it would provide a quick and relatively easy solution to the problem for both the analyst firms and analyst relations.
But how much to grant to each analyst firm?
The critical question is the amount of A$s to provide each firm. This might take some thought, but there is an easy solution: just use last year's analyst spend as the amount of A$s to provide the firm. Another option is to provide some base level of analyst bucks to any firm invited to attend and then add more based on the prior year's spend.
Possibly a less appealing approach (to me at least) is to give each analyst firm an amount proportional to its annual revenue, regardless of company spending with the firm. But perhaps some combination of the above, say:
1/3 base amount for any invitee + 1/3 proportional to annual spend +1/3 proportional to annual firm revenue = A$s
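In code, the blended allocation might look like this (the firm names and dollar figures below are hypothetical, just to show the split):

```python
# Sketch of the blended A$ allocation: 1/3 flat base split among invitees,
# 1/3 proportional to last year's spend with the company, 1/3 proportional
# to each firm's annual revenue.

def analyst_bucks(firms, total_pool):
    total_spend = sum(f["spend"] for f in firms.values())
    total_revenue = sum(f["revenue"] for f in firms.values())
    third = total_pool / 3
    return {
        name: round(third / len(firms)
                    + third * f["spend"] / total_spend
                    + third * f["revenue"] / total_revenue)
        for name, f in firms.items()
    }

firms = {
    "FirmA": {"spend": 100_000, "revenue": 10_000_000},
    "FirmB": {"spend": 50_000,  "revenue": 40_000_000},
}
print(analyst_bucks(firms, total_pool=300))  # {'FirmA': 137, 'FirmB': 163}
```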
In my previous post I suggested so many A$s per analyst. As such, bigger firms with more analysts would get more than firms with fewer analysts. But the formula described above makes more sense to me.
Information provided to facilitate the 1-on-1 auction
For the auction to work well, analyst firms would need some information about the executives whose time is being auctioned off; beyond that, just a schedule of the available time slots would allow the auction to work. Some idea of the company's org chart, and where each executive fits in it, would also be very useful to facilitate the auction.
That’s it, pretty simple, set up a conference call, send out executive information and org chart, allocate analyst bucks and let the bidding begin.
Auctioning off Lot-132: 30 minutes of Ray Lucchesi’s time …, let the bidding begin.
I was invited to the SNIA tech center to witness the CDMI (Cloud Data Management Interface) plugfest that was going on down in Colorado Springs.
It was somewhat subdued. I always imagine racks of servers, with people crawling all over them with logic analyzers, laptops and other electronic probing equipment. But alas, software plugfests are generally just a bunch of people with laptops, ethernet/wifi connections all sitting around a big conference table.
The team was working to define an errata sheet for CDMI v1.0 to be completed prior to ISO submission for official standardization.
CDMI is an interface standard for clients talking to cloud storage servers and provides a standardized way to access all such services. With CDMI you can create a cloud storage container, define its attributes, and deposit and retrieve data objects within that container. Mezeo had announced support for CDMI v1.0 a couple of weeks earlier at SNW in Santa Clara.
CDMI provides for attributes to be defined at the cloud storage server, container or data object level such as: standard redundancy degree (number of mirrors, RAID protection), immediate redundancy (synchronous), infrastructure redundancy (across same storage or different storage), data dispersion (physical distance between replicas), geographical constraints (where it can be stored), retention hold (how soon it can be deleted/modified), encryption, data hashing (having the server provide a hash used to validate end-to-end data integrity), latency and throughput characteristics, sanitization level (secure erasure), RPO, and RTO.
A CDMI client is free to implement compression and/or deduplication as well as other storage efficiency characteristics on top of CDMI server characteristics. Probably something I am missing here but seems pretty complete at first glance.
SNIA has defined a reference implementation of a CDMI v1.0 server [and I think client] which can be downloaded from their CDMI website. [After filling out the "information on me" page, SNIA sent me an email with the download information, but I could only recognize the CDMI server in the download, not the client (although it could have been there). The CDMI v1.0 specification is freely available as well.] The reference implementation can be used to test your own CDMI clients if you wish. It is Java based and apparently runs on Linux systems but shouldn't be too hard to run elsewhere (one CDMI server at the plugfest was running on a Mac laptop).
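For a flavor of what CDMI looks like on the wire, here's a sketch that builds a "create container" request per the CDMI v1.0 spec; nothing is actually sent, and the container name and metadata value are illustrative:

```python
import json

# Sketch of a CDMI v1.0 "create container" request: a PUT with CDMI-specific
# content types and a JSON body. We only construct the pieces here rather
# than sending them to a live server.

def cdmi_create_container_request(name, metadata=None):
    headers = {
        "X-CDMI-Specification-Version": "1.0",
        "Accept": "application/cdmi-container",
        "Content-Type": "application/cdmi-container",
    }
    body = json.dumps({"metadata": metadata or {}})
    return "PUT", f"/{name}/", headers, body

method, path, headers, body = cdmi_create_container_request(
    "MyContainer", {"cdmi_data_redundancy": "2"})
print(method, path, headers["Content-Type"])
```

Because it is all plain HTTP plus JSON, any browser or HTTP library can act as a rudimentary CDMI client, which is part of the "lightweight" appeal discussed below.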
There were a number of people from both big and small organizations at SNIA's plugfest.
Mark Carlson from Oracle was there and seemed to be leading the activity. He said I was free to attend but couldn't say anything about what was and wasn't working. I didn't have the heart to tell him I couldn't tell what was working or not from my limited time there. But everything seemed to be working just fine.
Carlson said that SNIA's CDMI reference implementations had been downloaded 164 times, with the majority of the downloads coming from China, the USA, and India, in that order. But he said there were people in just about every geo looking at it. He also said this was the first annual CDMI plugfest, although they had CDMI v0.8 running at other shows (e.g., SNIA SDC) before.
David Slik, from NetApp's Vancouver Technology Center, was there showing off his demo CDMI Ajax client and laptop CDMI server. He was able to use the Ajax client to access all the CDMI capabilities of the cloud data object he was presenting and to display the binary contents of an object. Then he showed me that the exact same data object (file) could be easily accessed by just typing the proper URL into any browser; it turned out the binary was a GIF file.
The other thing Slik showed me was a cloud data object created via a cron job referencing a satellite image website and depositing the data directly into cloud storage, entirely at the server level. Slik said that CDMI also specifies a cloud-storage-to-cloud-storage protocol which could be used to move cloud data from one cloud storage provider to another without having to retrieve the data back to the user. Such a capability would be ideal for exporting user data from one cloud provider and importing it to another over their high speed backbone, rather than having to transmit the data to and from the user's client.
Slik was also instrumental in SNIA's XAM interface standard for archive storage. He said that CDMI is much more lightweight than XAM, as there is no requirement for a runtime library whatsoever; it depends only on HTTP as the underlying protocol. From his viewpoint, CDMI is almost XAM 2.0.
Gary Mazzaferro from AlloyCloud was talking like CDMI would eventually take over not just cloud storage management but local data management as well. He called CDMI a strategic standard that could potentially be implemented in OSs, hypervisors, and even embedded systems to provide a standardized interface for all data management – cloud or local storage. When I asked what happens to SMI-S in this future, he said they would co-exist as independent but cooperative management schemes for local storage.
Not sure how far this goes. I asked if he envisioned a bootable CDMI driver. He said yes, a BIOS CDMI driver is something that will come once CDMI is more widely adopted.
Other people I talked with at the plugfest consider CDMI as the new web file services protocol akin to NFS as the LAN file services protocol. In comparison, they see Amazon S3 as similar to CIFS (SMB1 & SMB2) in that it’s a proprietary cloud storage protocol but will also be widely adopted and available.
There were a few people from startups at the plugfest, working on various client and server implementations. Not sure they wanted to be identified nor for me to mention what they were working on. Suffice it to say the potential for CDMI is pretty hot at the moment as is cloud storage in general.
But what about cloud data consistency?
I had to ask how the CDMI standard deals with eventual consistency – it doesn't. The crowd chimed in: relaxed consistency is inherent in any distributed service. You really have three characteristics – Consistency, Availability, and Partition tolerance (CAP) – for any distributed service, and you can elect to have any two of these but must give up the third. Sort of like the Heisenberg uncertainty principle applied to data.
They all said that consistency is mainly a CDMI client issue outside the purview of the standard, associated with server SLAs, replication characteristics and other data attributes. As such, CDMI does not define any specification for eventual consistency.
Slik did say that the standard guarantees that if you modify an object and then request a copy of it from the same location during the same internet session, it will be the one you last modified. Seems like long odds in my experience. It's unclear how CDMI, with relaxed consistency, can ever take the place of primary storage in the data center, but maybe it's not intended to.
Nonetheless, what I saw was impressive: cloud storage from multiple vendors, all being accessed from the same client, using the same protocols. And if that isn't simple enough for you, just use your browser.
If CDMI can become popular it certainly has the potential to be the new web file system.
I wrote a post a while back about how interplanetary commerce could be stimulated through the use of information commerce (see my Information based inter-planetary commerce post). Last week I saw an article in the Economist magazine that discussed new 3D-printers used to create products with just the design information needed to describe a part or product. Although this is only one type of information commerce, cultivating such capabilities can be one step to the future information commerce I envisioned.
3D Printers Today
3D printers grew up from the 2D inkjet printers of the last century. If 2D printers can precisely spray ink on a surface, it stands to reason that similar technology could build up a 3D structure one plane at a time. After each layer is created, a laser, infrared light, or some other technique is used to set the material into its proper form, and then the part is incrementally lowered so that the next layer can be created.
Such devices use a form of additive manufacturing which adds material to the exact design specifications necessary to create one part. In contrast, normal part manufacturing activities such as those using a lathe are subtractive manufacturing activities, i.e., they take a block of material and chip away anything that doesn’t belong in the final part design.
3D printers started out making cheap, short-life plastic parts but recently, using titanium alloy powders, have been used to create extremely long-lived, metal aircraft parts, and nowadays they can create any short- or long-lived plastic part imaginable. A few limitations persist: the size of the printer determines the size of the part or product, and 3D printers that can create multi-material parts are fairly limited.
Another problem is the economics of 3D printing of parts, in both time and cost. Volume production using subtractive manufacturing is probably still a viable alternative, i.e., if you need to manufacture 1000 or more of the same part, it probably still makes sense to use standard manufacturing techniques. However, the boundary where it makes economic sense to 3D print a part rather than machine it is gradually moving upward. Moreover, as more multi-material capable 3D printers come online, the economics of volume product manufacturing (not just a single part) will cause a sea change in product construction.
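The break-even boundary mentioned above comes down to fixed tooling cost versus per-unit cost. Here's a toy calculation; all the dollar figures are made up purely for illustration:

```python
import math

# Hypothetical break-even between subtractive manufacturing (high fixed
# tooling/setup cost, low per-unit cost) and 3D printing (no tooling, but a
# higher per-unit cost).

def breakeven_volume(tooling_cost, unit_cost_machined, unit_cost_printed):
    """Smallest production volume at which machining becomes cheaper."""
    per_unit_saving = unit_cost_printed - unit_cost_machined
    return math.ceil(tooling_cost / per_unit_saving)

# e.g., $50K of tooling, $10/part machined vs $60/part printed
print(breakeven_volume(50_000, 10, 60))   # 1000 parts
```

As printers get faster and material costs drop, the per-unit printing cost falls and this break-even volume climbs, which is exactly the upward-moving boundary described above.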
Information based, intra-planetary commerce
The Economist article discussed some implications of the sophisticated 3D printers available in the near future. Specifically, with 3D printers, manufacturing can be done locally rather than having to ship parts and products from one country to another. Using 3D printers, all one needs to do is transmit the product design to wherever it needs to be produced and sold. The article argued this would eliminate most of the cost advantages available today to low-wage countries that manufacture parts and products.
The other implication of newer 3D printers is that product customization becomes much easier. I envision clothing, furnishings, and other goods literally tailor-made for an individual through the proper use of design-rule-checking CAD software together with local, sophisticated 3D printers. How Joe Consumer fires up a CAD program and tailors their product is another matter. But with 3D printers coming online, sophisticated, CAD-knowledgeable users could almost do this today.
In the end, the information needed to create a part or a product will be the key intellectual property. It’s already been happening for years now but the dawn of 3D printers will accelerate this trend even more.
Also, 3D printers will expand information commerce, joining the information activities already provided by the finance, research/science, media, and other information purveyors around the planet today. Anything that makes information more a part of everyday commerce can be beneficial whenever we ultimately begin to move off this world to the next planet – let alone when I want to move to Tahiti…
I was reading a book the other day that suggested sometime in the near future we will all have a personal medical record archive. Such an archive would be a formal record of every visit to a healthcare provider, with every x-ray, MRI, CAT scan, doctor's note, blood analysis, etc. that's ever done to a person.
Such data would be our personal record of our life’s medical history usable by any future medical provider and accessible by us.
Who owns medical records?
Healthcare is unusual. In any other discipline, like accounting, you provide information to the discipline expert and you get all the information you could possibly want back, to store, send to the IRS, or do with as you want. If you decide to pitch it, you can pretty much request a copy (at your cost) of anything for a certain number of years after the information was created.
But in medicine, X-rays are owned and kept by the medical provider, same with MRIs, CT scans, etc., and you hardly ever get a copy. Occasionally, if the physician deems it useful for explicative reasons, you might get a grainy copy of an X-ray that shows a break or something, but other than that and possible therapeutic instructions, typically nothing.
Getting doctors' notes is another question entirely. They are mostly text records in some sort of database online to the medical unit. But mainly what we get as patients is a verbal diagnosis to take in and mull over.
Personal experience with medical records
I worked for an enlightened company a while back that had their own onsite medical practice providing all sorts of healthcare to their employees. Over time, new management decided this service was not profitable and terminated it. As they were winding down the operation, they offered to send patient medical information to any new healthcare provider or to us. Not having a new provider, I asked that they send it to me.
A couple of weeks later, a big brown manila envelope was delivered. Inside was a rather large, multi-page printout of notes taken by every medical provider I had visited throughout my tenure with this facility. What was missing from this assemblage was the lab reports, x-rays, and other ancillary data taken in conjunction with those office visits. I must say the notes were comprehensive, if somewhat laden with medical terminology, but they were all there to see.
Printouts were not very useful to me and probably wouldn't be to any follow-on medical group caring for me. Moreover, the lack of x-rays, blood work, etc. might be a serious deficiency for any follow-on treatment. But as far as I was concerned, it was the first time any medical entity had even offered me information like this.
Making personal medical records usable, complete, and retrievable
To take this to the next level and provide something useful for patients and follow-on healthcare, we need some sort of standardization of medical records across the healthcare industry. This need not be that difficult, given where we are today. Standards for most medical data already exist, specifically:
DICOM or Digital Imaging and Communications in Medicine – a standard file format used to digitally record X-rays, MRIs, CT scans, and more. Most digital medical imaging technology (except for ultrasound) out there today optionally records information in DICOM format. There also happens to be an open source DICOM viewer that anyone can use to view these sorts of files.
Ultrasound imaging – typically rendered and viewed as a sort of movie, often used for soft tissue imaging and prenatal care. I don't know for sure, but I cannot find any standard like DICOM for ultrasound images. However, if they are truly movies, perhaps HD movie files would suffice as a standard ultrasound imaging format.
Audiograms, blood chemistry analysis, etc. – provided by many technicians or labs, and could all easily be represented as PDFs, scanned images, JPEG/MPEG recordings, etc. Doctors or healthcare providers often discuss salient items from these reports that are of specific interest to the patient's condition. Such affiliated notes could all be in an associated text file, or even a recording of the doctor discussing the results that somehow references the other artifact ("Blood chemistry analysis done on 2/14/2007 indicates …").
Other doctor/healthcare provider notes – I find that every time I visit a healthcare provider these days, they either take copious notes on WiFi-connected laptops, record verbal notes to some voice recorder later transcribed into notes, or some combination of the two. Any such information could be provided as standard RTF (text) files or MPEG recordings and viewed as is.
How patients can access medical data
Most voice recordings or text notes could easily be emailed to the patient. As for DICOM images, ultrasound movies, etc., they could all be readily provided on DVDs or other removable media sent to the patient.
Another, and possibly better, alternative is to have all this data uploaded to a healthcare provider’s designated URL and stored in a medical record cloud someplace, allowing patient access for viewing, downloading and/or copying. I envision something akin to a photo sharing site, uploadable by any healthcare provider but accessible for download by any authorized user/patient.
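To make the photo-sharing-site analogy concrete, here is a minimal sketch of what a per-visit manifest in such a medical record cloud might look like. All field names, file names, and the JSON layout here are purely illustrative assumptions, not any existing standard:

```python
import json

# Hypothetical manifest a healthcare provider might upload alongside the
# actual record files -- every field name below is illustrative only.
manifest = {
    "patient_id": "ANON-0001",      # provider-assigned identifier
    "visit_date": "2007-02-14",
    "provider": "Example Clinic",
    "artifacts": [
        {"file": "chest-xray.dcm",  "type": "DICOM image"},
        {"file": "bloodwork.pdf",   "type": "lab report"},
        {"file": "visit-notes.rtf", "type": "provider notes"},
    ],
}

# Serializing the manifest lets the patient (or a follow-on provider)
# see what a visit contains without opening every file.
print(json.dumps(manifest, indent=2))
```

Something this simple, agreed on across providers, would already let a patient’s download tool enumerate and fetch every artifact from a visit.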
Medical information security
Any patient data stored in such a medical record cloud would need to be secured, and possibly encrypted with a healthcare-provider-supplied passcode which the patient could then use for downloading/decrypting. There are plenty of open source cryptographic algorithms which would suffice to encrypt this data (see GNU Privacy Guard for instance).
As for access passwords, possibly some form of public key cryptography would suffice, but it need not be that sophisticated. I prefer open source tools for these security mechanisms, as they would then be readily available to the patient or any follow-on medical provider to access and decrypt the data.
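As a rough sketch of how a provider-supplied passcode could drive the encryption, the passcode would typically be run through a key derivation function to produce the actual symmetric key. The snippet below uses Python’s standard library PBKDF2; the function name and parameters are illustrative choices, and in practice a tool like GnuPG would handle all of this internally:

```python
import hashlib
import os

def derive_key(passcode: str, salt: bytes) -> bytes:
    """Derive a 256-bit symmetric key from a provider-supplied passcode.

    The salt would be stored alongside the encrypted record so the
    patient can re-derive the identical key later for decryption.
    """
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

salt = os.urandom(16)                        # random salt per record
key1 = derive_key("passcode-from-provider", salt)
key2 = derive_key("passcode-from-provider", salt)
assert key1 == key2      # same passcode + salt -> same key
assert len(key1) == 32   # 256-bit key, suitable for a symmetric cipher
```

The point is simply that the patient never needs the key itself, only the passcode the provider hands them; the key can always be re-derived.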
Medical information retention period
The patient would have a certain amount of time to download these files. I lean towards months, just to ensure it’s done in a timely fashion, but maybe it should be longer; something on the order of 7 years after a patient’s last visit might work. This would allow the patient sufficient time to retrieve the data and supply it to any follow-on medical provider, or store it in their own personal medical record archive. There are plenty of cloud storage providers, I know, that would be willing to store such data at a fair, but high, price for any period of time desired.
Medical information access credentials
All the patient would need is an email and/or possibly a letter that provides the access URL, access password and encryption passcode for the files. Possibly such information could be provided in plaintext, appended to any bill that is cut for the visit, which is sure to find its way to the patient or some financially responsible guardian/parent.
How do we get there
Bootstrapping this personal medical record archive shouldn’t be that hard. As I understand it, Electronic Medical Record (EMR) legislation in the US and elsewhere has provisions stating that any patient has a legal right to copies of any medical record a healthcare provider holds on them. If this is true, all we need do is institute some additional legislation requiring the healthcare provider to make those records available in a standard format, in a publicly accessible place, access controlled/encrypted via a password/passcode, downloadable by the patient, and to provide the access credentials to the patient in a standard form. Once that is done, we have all the pieces needed to create the personal medical record archive I envision here.
While such legislation may take some time, one thing we could all do now, at least in the US, is request access to all the medical records/information that is legally ours already. Once healthcare providers start getting inundated with requests for this data, they might figure that having some easy, standardized way to provide it would make sense. Then the healthcare organizations could get together and work to finalize the better solution/legislation needed to provide this in some standard way. I would think university hospitals could lead this endeavor and show us how it could be done.
Ran across a web posting yesterday providing information on a University of Illinois summer program in Data Science. I had never encountered the term before, so I was intrigued. When I first saw the article I immediately thought of data analytics, but data science should be much broader than that.
What exactly is a data scientist? I suppose someone who studies what can be learned from data but also what happens throughout data lifecycles.
Data science is like biology
I look to biology for an example. A biologist studies all sorts of activity/interactions, from what happens in a single-cell organism to the plant and animal kingdoms. They create taxonomies which organize all biological entities, past and present. They study current and past food webs, ecosystems, and species. They work in an environment of scientific study where results are openly discussed and repeatable. In peer reviewed journals, they document everything from how a cell interacts within an organism, to how an organism interacts with its ecosystem, to whole ecosystem lifecycles. I fondly remember my high school biology class covering DNA, the life of a cell, biological taxonomy and dissection.
Where are these counterparts in Data Science? Not sure but for starters let’s call someone who does data science an informatist.
What constitutes a data ecosystem in data science? Perhaps an informatist would study the IT infrastructure(s) where a datum is created, stored, and analyzed. Such infrastructure (especially with cloud) may span data centers, companies, and even the whole world. That’s fine; migratory birds cover large distances, across multiple ecosystems, and are still valid subjects for biologists.
So where a datum exists, where/when it’s moved throughout its lifecycle, and how it interacts with other data is a proper subject for data ecosystem study. I suppose my life’s study of storage could properly be called the study of data ecosystems.
Next, what’s a reasonable way for an informatist to organize data, like a biological taxonomy with domain, kingdom, phylum, class, order, family, genus, and species (see Wikipedia)? Seems to me that the applications that create and access the data represent a rational way to organize it. However, my first thought was structured versus unstructured data as the defining first-level breakdown (maybe Phylum). Order could be general application type, such as email, ERP, office documents, etc. Family could be application domain, genus could be application version, and species could be application data type. So something like an Exchange 2010 email would be Order=EMAILus, Family=EXCHANGius, Genus=E2010ius, and Species=MESSAGius.
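The proposed taxonomy is easy to render as a data structure. Here is a toy sketch, purely for illustration: the rank names mirror biology and the values follow the Exchange 2010 email example above, but the class and its fields are my own invention:

```python
from dataclasses import dataclass

# A toy rendering of the proposed data taxonomy. The ranks mirror the
# lower biological ranks; higher ranks (kingdom, domain) are omitted.
@dataclass(frozen=True)
class DataTaxon:
    order: str    # general application type (email, ERP, office docs, ...)
    family: str   # application domain
    genus: str    # application version
    species: str  # application data type

# The Exchange 2010 email example from the text.
exchange_2010_email = DataTaxon(
    order="EMAILus",
    family="EXCHANGius",
    genus="E2010ius",
    species="MESSAGius",
)

print(exchange_2010_email)
```

An informatist could then group or query data holdings by any rank, much as a biologist works at the genus or family level.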
I think higher classifications need to consider things such as oral history, hand-copied manuscripts, movable-type printed documents, IT, etc., at the Kingdom level. Maybe Domain would be such things as biological domain, information domain, physical domain, etc., although where oral history fits in such a scheme is debatable.
When first thinking of higher taxonomical designations I immediately went to the O/S, but now I think of an O/S as part of the ecological niche where data temporarily resides.
I could go on; there are probably hundreds if not thousands of other aspects of data science that need to be discussed – data lifecycle, the data cell, information use webs, etc.
Another surprise is how well the study of biology fits the study of data science. Counterparts to biology seem to exist everywhere I look. At some deep level, biology is information, wet-ware perhaps, but information nonetheless. It seems to me that the use of biology to guide our elaboration of data science can be very useful.