New science used to combat COVID-19 disease

Read an article last week in Science Magazine (A completely new culture on doing research…) on how the way science is done to combat disease has changed over the last few years.

In the olden days (~3-5 years ago), disease outbreaks would generate a slew of research papers that were written, submitted for publication, and then sat waiting until peer review was complete, after which they might be published for the world to see for the first time. Estimates I've seen say that the scientific publishing process takes anywhere from one month (very fast) to 4-8 months, assuming no major revisions are required.

With the emergence of the Zika virus and recent Ebola outbreaks, more and more biological research papers have become available through pre-print servers. These are websites that accept research before publication (pre-print), posting it for all to see, comment on and understand.

Open science via pre-print

Most of these pre-print servers focus on specific areas of science. For example, bioRxiv is a pre-print server focused on biology and medRxiv is for health sciences. On the other hand, arXiv is a pre-print server for "physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics." These are just a sampling of what's available today.

In the past, scientific journals would not accept research that had been published before. But this has slowly changed as well. Now most scientific journals have policies covering pre-print publication and will also publish such papers if they deem them worthwhile (see the Wikipedia article List of academic journals by preprint policy).

As of today (9 March 2020), on bioRxiv there are 423 papers with the keyword "coronavirus" and 52 papers with the keyword "COVID-19"; some of these may be the same. The newest (Substrate specificity profiling of SARS-CoV-2 Mpro protease provides basis for anti-COVID-19 drug design) was published on 7 March 2020. The last sentence of its abstract says, "The results of our work provide a structural framework for the design of inhibitors as antiviral agents or diagnostic tests." The oldest on bioRxiv is dated 23 January 2020. Similarly, there are 326 papers on medRxiv with the keyword "coronavirus", the newest published 5 March 2020.
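
As an aside, here's a rough Python sketch of how one might reproduce such a keyword count by scraping bioRxiv's public search page. The search URL pattern is real, but the result-count markup the regex looks for is an assumption about the page and may well change:

```python
import re
import requests

def biorxiv_result_count(keyword: str) -> int:
    """Fetch bioRxiv's public search page for a keyword and scrape the
    reported result count. The 'N Results' text is an assumed page detail
    and may change; treat this as illustrative only."""
    resp = requests.get(f"https://www.biorxiv.org/search/{keyword}")
    resp.raise_for_status()
    # Assumes the results page reports something like "423 Results"
    match = re.search(r"([\d,]+)\s+Results", resp.text)
    return int(match.group(1).replace(",", "")) if match else 0

print(biorxiv_result_count("coronavirus"))  # e.g. 423 as of 9 March 2020
print(biorxiv_result_count("COVID-19"))     # e.g. 52 as of 9 March 2020
```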

Pre-print research is getting out in the open much sooner than ever before. But the downside is that pre-print papers may have serious mistakes or omissions in them, as they are not peer-reviewed. So the cost of rapid openness is the possibility that some research may be outright wrong, badly done, or may lead researchers down blind alleys.

However, the upside is that any bad research can be vetted sooner if it's open to the world. We see similar problems with open source software: some of it can be buggy or outright failure prone. But when it's open, and if it's popular, many people will see the problems (or bugs) and fixes will be rapidly created to solve them. With pre-print research, the comment list associated with a pre-print can be long and often identifies problems in the research.

Open science through open journals

In addition to pre-print servers, we are also starting to see increasing use of open scientific journals such as PLOS to publish formal research.

PLOS has a number of open journals focused on specific arenas of research, such as PLOS Biology, PLOS Pathogens, PLOS Medicine, etc.

Researchers or their institutions have to pay a nominal fee to publish in PLOS, but all PLOS publications are fully peer-reviewed by experts. And unlike research from, say, Nature, IEEE or other scientific journals, PLOS papers are free to anyone and are widely available. (However, I just saw that SpringerNature is making all their coronavirus research free.)

Open science via open data(sets)

Another aspect of scientific research that has undergone change of late is the sharing and publication of data used in the research.

Nature has a list of recommended data repositories. All these data repositories seem to be registered with FAIRsharing at the University of Oxford, run by their Data Readiness Group. FAIRsharing lists 1349 databases, of which the vast majority (1250) are for the natural sciences, along with over 1380 standards for the data registered there.

We’ve discussed similar data repositories in the past (please see Data banks, data deposits and data withdrawals, UK BioBank, Big open data leads to citizen science, etc). Having a place to store data used in research papers makes it easier to understand and replicate science.

Collaboration software

The other change to research activities is the use of collaboration software such as Slack. Researchers at UW Madison were already using Slack to collaborate on research, but when the coronavirus outbreak went public, they realized Slack could help here too. So they created a group (or channel) under their Slack site called "Wu-han Clan" and invited 69 researchers from around the world. The day after they created it, they held their first teleconference.

Other collaboration software exists today, but Slack seems most popular. We use Slack for communications in our robotics club, blogging group, a couple of companies we work with, etc. Each has a number of invite-only channels, where channel members can post text, (data) files, links and just about anything else of interest to the channel.
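
For the curious, setting up such an invite-only channel takes only a couple of API calls. Below is a minimal sketch using Slack's Python SDK (slack_sdk); the token, channel name and user IDs are placeholders, and I'm not claiming this is how the UW Madison group set theirs up:

```python
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # placeholder bot token

# Create a private (invite-only) channel, like the "Wu-han Clan" group
channel = client.conversations_create(name="wu-han-clan", is_private=True)
channel_id = channel["channel"]["id"]

# Invite collaborators by their Slack user IDs (placeholders here)
client.conversations_invite(channel=channel_id, users="U012ABC,U034DEF")

# Post a first message to kick off the collaboration
client.chat_postMessage(channel=channel_id,
                        text="First teleconference tomorrow!")
```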

Although I have not been invited to participate in Wu-han Clan (yet), I assume they use Slack to discuss and vet (pre-print) research, discuss research needs, and explore other ways to avert the pandemic.

~~~~

So there you have it. Coronavirus scientific research is happening at warp speed compared to diseases of yore. The technologies supporting this sped-up research have all emerged over the last five to ten years but are now being put to use more than ever before. Such technological advancement should lead to faster diagnosis, lower worldwide infection/mortality rates and a quicker medical solution.

UK Biobank & the data economy – part 2

A couple of weeks back I wrote a post about repositories for all the data that users generate these days and what to do with it.  (See our post on Data banks, deposits, … data economy – part 1).

This past week I read an article (see the ScienceDaily Genetics of brain structure… article) which partially exemplifies what that post talked about. The research used publicly available genetic information to tease out hereditary characteristics of brain structure. The ScienceDaily article was a summary of research done at the University of Oxford using information provided by the UK Biobank.

Biobank as a data bank

The Biobank recruited 500K participants from the UK, aged 40-69, between 2006 and 2010, to share their anonymized health data with researchers and scientists around the world. The Biobank is set up as a Scottish charity, funded by various health organizations in the UK, both government and private.

In addition to information collected during the baseline assessment: 

  • 100K participants have worn a 24-hour health monitoring device for a week and 20K have signed up to repeat this activity.
  • All 500K participants have been genotyped (DNA sequenced to determine hereditary genes).
  • 100K participants will be medically scanned (brain, heart, abdomen, bones, carotid artery) with images stored in the Biobank.
  • 100K participants have signed up to receive questionnaires asking about diet, exercise, work history, digestive health and other medical indicators.

There’s more. Biobank is linking to electronic health records (EHR) of participants to track their health over time. The Biobank is also starting to provide blood analysis and other detailed medical measures of subjects in the study.

UK Biobank (data bank) information uses

“UK Biobank is an open access resource. The Resource is open to bona fide scientists, undertaking health-related research that is in the public good. Approved scientists from the UK and overseas and from academia, government, charity and commercial companies can use the Resource. ….” (from UK Biobank scientists page).

Somewhat like open source code, the Biobank resource is made available to anyone (academia as well as industry) that can make valid use of its data, BUT any research derived from its data must be published and made freely available to the Biobank and the world.

Biobank's papers page documents some of the research that has already been published using their data. It lists the genetics of brain structure paper mentioned above and dozens more.

Differences from Data Banks

In the original data bank post:

  1. We thought data was only needed for AI/deep learning. That seems naive now. The Biobank shows that AI/deep learning is not the only application/research that needs data.
  2. We thought data would be collected only by hyper-scalers and other big web firms during normal user web activity. But their data is not the only data that matters.
  3. We thought data would be gathered for free. Good data can take many forms, and some may cost money.
  4. We thought profits from selling data would be split between the bank and users and could fund data bank operations. But for the Biobank, funding came from charitable contributions and data is available for free (to valid researchers).

Data banks can be an invaluable resource and may take many forms. Data that’s difficult to find can be gathered by charities and others that use funding to create, operate and gather the specific information needed for targeted research.

Comments?

Photo Credit(s): Bank on it by Alan Levine

Latest MRI – two screws in the kneecap by Becky Stern

Other graphics from the Genetics of brain structure… paper

Data banks, data deposits & data withdrawals in the data economy – part 1

Facebook friend carrousel by antjeverena (cc) (from flickr)

Read an interesting article this week in The Atlantic, Why Technology Favors Tyranny by Yuval Noah Harari, about the inevitable future of technology and how the use of data will drive it.

At the end of the article Harari talks about the need to take back ownership of our data in order to gain some control over the tech giants that currently control our data.

In part 3, Harari discusses the coming AI revolution and its impact on humanity. Yes, there will still be jobs, but early on fewer jobs for unskilled labor and, over time, fewer jobs for skilled labor.

Yet, our data continues to be valuable. AI neural net (NN) accuracy increases as a function of the amount of data used to train it. As a result, he who has the most data creates the best AI NN. This means our data has value and can be used over and over again to train other AI NNs. This all sounds like data is just another form of capital, at least for AI NN training.

Safe by cjc4454 (cc) (from flickr)

If only we could own our data, then there would still be value in people's (digital) exertions (labor), regardless of how much AI has taken over the reins of production or reduced the need for human work.

What we need is data (savings) banks. These banks would hold people's data, gathered from social media likes/dislikes, cell phone metadata, app/web history, search history, credit history, purchase history, photo/video streams, email streams, lab work, X-rays, wearables info, etc. Probably many more categories need to be identified, but ultimately ALL the digital data we generate today would need to be owned by people and deposited in their digital bank accounts.

Data deposits?

Social media companies, telecoms, search companies, financial services app companies, internet providers and anywhere else you do business should supply a copy of the digital data they gather about a person back to that person's data bank account.

There are many technical problems to overcome here, but it could be as simple as an object storage bucket assigned to each person, into which each digital business deposits (XML versions of) the digital data it creates for everyone that uses its service. They would do this as compensation for using our data in their business activities.
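
To make the bucket idea concrete, here's a minimal sketch of what a deposit could look like with S3-style object storage (via boto3). The bucket-per-person naming, key layout and account IDs are illustrative assumptions, not a proposed standard:

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

def deposit(person_account: str, business: str, category: str,
            xml_payload: bytes) -> None:
    """Deposit one (XML) data record into a person's data-bank bucket.
    Bucket-per-person and the key layout are illustrative assumptions."""
    bucket = f"databank-{person_account}"           # one bucket per person
    timestamp = datetime.now(timezone.utc).isoformat()
    key = f"{business}/{category}/{timestamp}.xml"  # e.g. search/history/...
    s3.put_object(Bucket=bucket, Key=key, Body=xml_payload,
                  Metadata={"depositor": business, "category": category})

# A search company depositing a day's search history for one (made-up) user
deposit("u123456", "searchco", "search-history", b"<searches>...</searches>")
```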

How to change data ownership?

Today, we all sign user agreements which essentially give a company the rights to our data in perpetuity. That needs to change. I see a few ways this change could come about:

  1. Countries could enact laws to ensure personal data ownership resides with the person generating it and enforce periodic distribution of this data.
  2. Market dynamics could impel data distribution, e.g. if some search firm supplied our data back to us, we would be more likely to use them.
  3. Societal changes: as AI becomes more important to profit-making activities and reduces the need for human work, and as data continues to be an important factor in AI success, data ownership becomes essential to retaining the value of human labor in society.

Probably, all of the above and maybe more would be required to change the ownership structure of data.

How to profit from data?

Technical entities needing data to train AI NNs could solicit data contributions through an Initial Data Offering (IDO). An IDO would specify the types of data required and the proportion of AI NN ownership it would cede to data providers. Data providers would be apportioned ownership based on the % ceded and the number of IDO data subscribers.
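
The apportionment arithmetic would be simple: if an IDO cedes 20% of AI NN ownership and 1,000 data providers subscribe, each ends up with 0.02%. A sketch (even splitting is my assumption; an IDO could just as well weight by data volume):

```python
def ido_ownership_share(ceded_fraction: float, num_providers: int) -> float:
    """Each data provider's ownership stake in the AI NN: the fraction
    ceded by the IDO, split evenly across all subscribing providers.
    (Even splitting is an assumption; weighting by data volume is another
    plausible scheme.)"""
    return ceded_fraction / num_providers

# An IDO ceding 20% ownership with 1,000 data subscribers
share = ido_ownership_share(0.20, 1_000)
print(f"{share:.4%} per provider")  # 0.0200% per provider
```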

perspective by anomalous4 (cc) (from Flickr)

Data banks would extract the data requested by the IDO and supply it to the IDO entity for use. For IDOs, just like ICOs or IPOs, some would fail and others would succeed. But the data used in them would represent an ownership share, sort of like a stock (data) certificate in the AI NN.

Data bank responsibilities

Data banks would have various responsibilities and would need to collect fees to perform them. For example, data banks would be responsible for:

  1. Protecting data deposits – to ensure data deposits are never lost, are never accessed without permission, and are always trackable as to how they are used.
  2. Performing data deposits – to verify that data is deposited by proper digital entities, to validate that data deposits are in a usable form, and to properly store the data in a customer's object storage bucket.
  3. Performing data withdrawals – upon customer request, to extract all the appropriate data requested by an IDO, anonymize it, secure it, package it and send it to the IDO originator.
  4. Reconciling data accounts – to track data transactions, data banks would supply a monthly statement that identifies all data deposits and data withdrawals, data revenues and data expenses/fees.
  5. Enforcing data withdrawal types – data withdrawals can have many different characteristics, such as exclusivity, expiration, geographic bounds, etc. Data banks would need to enforce withdrawal characteristics, at least to the extent they can (see the sketch after this list).
  6. Auditing data transactions – to ensure that data is used properly, a consortium of data banks or possibly data accountancies would need to audit AI training data sets to verify that only data that has been properly withdrawn is used in training the NN.
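
To sketch what enforcing withdrawal types (item 5) might involve, a withdrawal record could carry its restrictions with it so a data bank can check them mechanically. All field names here are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DataWithdrawal:
    """One withdrawal of a customer's data to an IDO, carrying the
    restrictions (item 5 above) the data bank must enforce."""
    withdrawal_id: str
    ido_id: str
    categories: list[str]             # e.g. ["search-history", "wearables"]
    exclusive: bool                   # may the same data serve other IDOs?
    expires: Optional[date]           # None = no expiration
    geographic_bounds: Optional[str]  # e.g. "EU"; None = worldwide

    def is_valid_on(self, day: date, region: str) -> bool:
        """Check expiration and geography before releasing the data."""
        if self.expires is not None and day > self.expires:
            return False
        return self.geographic_bounds in (None, region)

w = DataWithdrawal("wd-001", "ido-42", ["search-history"], exclusive=True,
                   expires=date(2035, 3, 9), geographic_bounds=None)
print(w.is_valid_on(date(2025, 1, 1), "US"))  # True
```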

AI NN, tools and framework responsibilities

In order for personal data ownership to work well, AI NNs, tools and frameworks used today would need to change to account for data ownership.

  1. Generate, maintain and supply immutable data ownership digests – data ownership digests would be a sort of stock registry for the data used in training the AI NN. They would need to be part of any AI NN and be viewable by proper data authorities (a sketch of one follows this list).
  2. Track data use – any and all data used in AI NN training should be traceable, so that proper data ownership can be guaranteed.
  3. Identify AI NN revenues – NN revenues would need to be isolated, identified and accounted for, so that data owners could be rewarded.
  4. Identify AI NN data expenses – NN data costs would need to somehow be isolated, identified and accounted for, so that data expenses could be properly deducted from data owner awards.
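
To illustrate item 1, an immutable ownership digest could be as simple as a hash chain: each entry records one data owner's contribution and folds the previous entry's hash into its own, making the registry tamper-evident. A minimal sketch with made-up fields:

```python
import hashlib
import json

def append_digest_entry(chain: list[dict], owner: str, withdrawal_id: str,
                        share: float) -> None:
    """Append one ownership record, chaining it to the previous entry's
    hash so the registry is append-only and tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"owner": owner, "withdrawal_id": withdrawal_id,
             "share": share, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

registry: list[dict] = []
append_digest_entry(registry, "u123456", "wd-001", 0.0002)
append_digest_entry(registry, "u789012", "wd-002", 0.0002)
# Altering any earlier entry breaks every later hash link in the chain.
```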

At some point there's a need for almost a data profit and loss statement, as well as a data balance sheet, at an AI NN level. The information supplied above should make auditing data ownership, use and rewards much more feasible. But it all starts with identifying data ownership and the data used in training the AI.

~~~~

There are a thousand more questions that come to mind. For example:

  • Who owns earth-sensing satellite, IoT sensor, weather sensor, car sensor, etc. data? Everyone in the world (or country) being monitored is laboring to create the environment sensed by these devices. Shouldn't this sensor data be apportioned to the people of the world or of the country where these sensors operate?
  • Who pays data bank fees? The generators/extractors of the data could pay, in addition to providing data deposits, for the privilege of using our data. I could also see the people paying. Having the company pay would give it an incentive to make the data load as efficient and complete as possible. Having the people pay would induce them to use their data more productively.
  • What's a decent data expiration period? Given application time frames these days, 7-15 years would make sense. But what happens to the AI NN when data expires? Some way would need to be created to extract data from a NN, or the AI NN would need to cease being used and a new one would need to be created with new data.
  • Can data deposits be rented/sold to data aggregators? Sort of like an AI VC partnership, only using data deposits rather than money to fund AI startups.
  • What happens to data deposits when a person dies? Can one inherit a data deposit? Would a data deposit inheritance be taxable as part of an estate transfer?

In the end, as data is required to train better AI, ownership of our data makes us all capitalists (datalists) in the creation of new AI NNs and the subsequent advancement of society. And that's a good thing.

Comments?