Made in space

Read an article in IEEE Spectrum recently titled, 4 Products it makes sense to manufacture in space. The 4 products identified in the article include:

1) Metal alloys – in micro-gravity, the metals that go into an alloy should mix much more evenly and, as a result, should yield a more uniform alloy at the end of the process.

2) Fibre optic cables – the article says ZBLAN, a heavy-metal fluoride glass fibre, could have 1/10th the signal loss of current cable but is hard to manufacture on earth due to micro-crystal formation. Apparently, when manufactured (mixed and drawn) in micro-gravity, there are fewer of these defects in the glass.

3) Printed human organs – the problem with printing biological organs (hearts, lungs, livers, etc.) is that they require bio-degradable scaffolding, shaped like whatever organ is needed, for the cells to adhere to. However, in micro-gravity there should be less need for any scaffolding.

4) Artificial meat – similar to human organs above, by being able to build (3D print) biological products in micro-gravity, one could create a steak or other cuts of meat via biological 3D printing.

Problems with space manufacture

One problem with manufacturing metal alloys and fibre optic cable in space is the immense heat required. Glass melts at ~1400°C and metals anywhere from ~650°C to ~3400°C. Getting rid of all that heat in space could present a significant problem, not to mention that the vessels required to hold molten materials weigh a lot.

And metal and glass manufacturing processes can also create waste, such as hot metal/glass particulates that settle on the floor on earth but could end up who knows where in space. Manufacturing metal or glass on the ISS would require a very heat-tolerant, protected environment or capsule, lots of power to provide heat, and radiator surfaces to release said heat.
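To put some rough numbers on the radiator problem, here's a minimal back-of-the-envelope sketch (mine, not the article's), assuming a hypothetical 100 kW of furnace waste heat and an idealized radiator that only loses heat by radiation (Stefan-Boltzmann law), ignoring any sunlight or earthshine falling on it:

```python
# Rough radiator sizing sketch -- hypothetical numbers, not from the article.
# Radiated power: P = emissivity * sigma * Area * T^4 (Stefan-Boltzmann law).

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.9    # assumed value for a good radiator coating

def radiator_area_m2(heat_load_w: float, radiator_temp_k: float) -> float:
    """Radiator area needed to reject heat_load_w at radiator_temp_k."""
    return heat_load_w / (EMISSIVITY * SIGMA * radiator_temp_k ** 4)

if __name__ == "__main__":
    heat_load = 100_000.0                  # 100 kW of waste heat (hypothetical)
    for temp_k in (400, 600, 800):         # radiator surface temperatures
        print(f"{temp_k} K radiator -> ~{radiator_area_m2(heat_load, temp_k):.0f} m^2")
```

Even with these idealized assumptions, a modest furnace needs tens of square meters of radiator unless the radiator itself runs very hot.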

And of course, delivering raw materials for metals and glass to space (LEO) would cost a lot (SpaceX $2.7K/kg, Atlas V $13.2K/kg). As such, the business case for metal alloy manufacturing in space doesn't appear positive.

But given the reduced product weight and the potentially higher prices one can charge for the product, fibre optic glass may make business sense, especially if you could get by with 1/10th the glass because it has 1/10th the signal loss.
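As a rough illustration of why the fibre case looks better than bulk alloys, here's a toy sketch of the launch-cost math. Only the $/kg launch prices come from the post; the product values per kg and the raw-material mass ratio are made-up placeholders, just to show the shape of the calculation:

```python
# Toy business-case sketch: finished-product value vs. launch cost of raw material.
# Launch $/kg figures are from the post; product values are hypothetical.

LAUNCH_COST_PER_KG = {"SpaceX": 2_700.0, "Atlas V": 13_200.0}   # $/kg to LEO

def margin_per_kg(product_value_per_kg: float,
                  raw_mass_per_finished_kg: float,
                  launcher: str = "SpaceX") -> float:
    """Value of 1 kg of finished product minus the launch cost of the raw
    material needed to make it (ignores every other cost)."""
    return product_value_per_kg - LAUNCH_COST_PER_KG[launcher] * raw_mass_per_finished_kg

if __name__ == "__main__":
    # Hypothetical: a bulk alloy worth ~$50/kg vs. a speciality fibre worth ~$20K/kg.
    print("bulk alloy :", margin_per_kg(50.0, 1.1))
    print("ZBLAN fibre:", margin_per_kg(20_000.0, 1.1))
```

With these (made-up) values, the alloy is underwater on launch cost alone, while a high-value, low-mass product like speciality fibre can absorb it.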

And if you don't have to ship raw materials from earth (using the moon or asteroids instead), it would improve both business cases. That is, assuming raw material discovery and shipping costs are 1/6th or less of those for shipping from earth.

As for organs, since they can't be manufactured on earth (yet), they could be the "killer app" for made in space. But it's sort of a race against time. Doing this in space may be a lot easier today, but more research is going on to create organs on earth than in space, and eventually manufacturing them on earth could be a lot cheaper and just as effective.

But I don’t see a business case for meat in space unless it’s to support making food for astronauts on ISS. Even then, it might be cheaper to just ship them some steak.

Products hard to make in space

I would think anything that doesn't require gravity to work should be easier to produce in space.

But that eliminates distillation, e.g., fossil fuel refining, fermentation, and many other chemical distillation processes (see Wikipedia article on Distillation).

But gravity is also used in depositing and holding multiple layers onto one another. So manufacturing paper, magnetic/optical disk platters, magnetic tapes, or any other product built up layer by layer, may not be suitable for space manufacture.

Not sure about semiconductors, as deposition steps make use of chemical vapors, and that seems to require gravity. But it's conceivable that, in the absence of gravity, chemicals may still adhere to the wafer surface, as it's an easier location to combine with than other surfaces in the chamber. On the other hand, they may just as likely stay mixed in the vapor.

Growing extremely pure silicon ingots may be something better done in space. However, it may suffer from the same problems as metal alloy manufacturing. Given the need for extreme purity and the price paid for pure silicon, I would think this would be something to research ahead of metal alloys.

For further research

But in the end, if and when we become a space faring people, we will need to manufacture everything in space, as well as grow or find raw materials more easily than shipping them from earth.

So, some research ought to be directed at how to perform distillation and multi-layer product manufacturing in space/micro-gravity. Such processes could potentially be done in a centrifuge, if they truly can't be done without gravity.

It's also unclear how to boil any liquid in 0g or micro-g without convection (see the Bizarre Boiling NASA Science article). According to the article, boiling in micro-gravity creates one big bubble that stays where it is formed. Extracting this bubble in place would seem difficult. Boiling liquids in a centrifuge may work.

In any case, I’m sure the ISS crew would be more than happy to do any research necessary to figure out how to brew beer, let alone, distill vodka in space.


Where should IoT data be processed – part 1

I was at FlashMemorySummit 2019 (FMS2019) this week and there was a lot of talk about computational storage (see our GBoS podcast with Scott Shadley, NGD Systems). There was also a lot of discussion about IoT and the need for data processing done at the edge (or in near-edge computing centers/edge clouds).

At the show, I was talking with Tom Leyden of Excelero and he mentioned there was a real need for some insight on how to determine where IoT data should be processed.

For our discussion let’s assume a multi-layered IoT architecture, with 1000s of sensors at the edge, 100s of near-edge processing/multiplexing stations, and 1 to 3 core data center or cloud regions. Data comes in from the sensors, is sent to near-edge processing/multiplexing and then to the core data center/cloud.

Data size

Dans la nuit des images (Grand Palais) by dalbera (cc) (from flickr)

When deciding where to process data, one key aspect is the size of the data. This could be in GB or TB, but in today's world it can be PB as well. This lone parameter has multiple impacts and affects many other considerations, such as the cost and time to transfer the data, the cost of data storage, the amount of time to process the data, etc. All of these sub-factors depend on the size of the data to be processed.

Data size can be the largest single determinant of where to process the data. If we are talking about GB of data, it could probably be processed anywhere, from the sensor edge, to the near-edge station, to the core. But if we are talking about TB, the processing requirements and time go up substantially and are unlikely to be available at the sensor edge, and may not be available at the near-edge station. And PB takes this up to a whole other level and may require processing only at the core due to the infrastructure requirements.
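As a minimal sketch of this rule of thumb (the tier names and GB/TB/PB thresholds are just the ones used in this post, nothing standard):

```python
# Map data size to the lowest hierarchy tier that can plausibly process it,
# following the GB/TB/PB rule of thumb described above.

def lowest_tier_by_size(data_size_bytes: float) -> str:
    TB, PB = 1e12, 1e15
    if data_size_bytes < TB:
        return "sensor-edge"    # GBs: could be processed anywhere
    if data_size_bytes < PB:
        return "near-edge"      # TBs: near-edge station or above
    return "core"               # PBs: core data center/cloud only

if __name__ == "__main__":
    for size in (5e9, 3e12, 2e15):
        print(f"{size:.0e} bytes -> {lowest_tier_by_size(size)}")
```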

Processing criticality

Human or machine safety may depend on quick processing of sensor data, e.g., in a self-driving car, on a factory floor, for flood gauges, etc. In these cases, some amount of data processing (sufficient to ensure human/machine safety) needs to be done at the lowest point in the hierarchy that has the processing power to perform this activity.

This could be in the self-driving car or the factory automation that controls a mechanism. Similar situations would probably apply for any robots and autopilots. Anywhere an IoT sensor array is used to control an entity that could jeopardize human life or the safety of machines, safety-level processing would need to be done at the lowest level in the hierarchy.

If processing doesn't involve safety, then it could potentially be done at the near-edge stations or at the core.

Processing time and infrastructure requirements

Although we talked about this in data size above, infrastructure requirements must also play a part in where data is processed. Yes, sensors are getting more intelligent, and the same goes for near-edge stations. But if you're processing the data multiple times, say for deep learning, it's probably better to do this where there's a bunch of GPUs and some way of keeping the data pipeline running efficiently. The same applies to any data analytics that distributes workloads and data across a gaggle of CPU cores, storage devices, network nodes, etc.

There's also an efficiency component to this. Computational storage is all about how some workloads can be better accomplished at the storage layer. But the concept applies throughout the hierarchy. Given the infrastructure requirements to process the data, there's probably one place where it makes the most sense to do it. If it takes 100 CPU cores to process the data in a timely fashion, it's probably not going to be done at the sensor level.

Data information funnel

We make the assumption that raw data comes in through the sensors and that progressively more processed data is sent to the higher layers. This means that, at a minimum, some sort of data compression/compaction would need to be done at each layer below the core.

We were at a conference a while back where they talked about updating deep learning neural networks. It's possible that each near-edge station could perform a mini deep learning training cycle and periodically share its learning with the core, which could then send this information back down to the lowest level to be used (see our Swarm Intelligence @ #HPEDiscover post).

All this means that there's a minimal level of data processing that needs to go on throughout the hierarchy.

Pipe availability


The availability of a networking access point may also have some bearing on where data is processed. For example, a self-driving car could generate TB of data a day, but access to a high-speed, inexpensive data pipe to send that data may be limited to a service bay and/or a garage connection.

So some processing may need to be done between access point connections, and this will need to take place at the lower levels. That way, there would be no need to send the data while the car is out on the road; rather, it could be sent whenever the car is attached to an access point.
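To make this concrete, a tiny arithmetic sketch (the 4 TB/day figure and the offload windows are hypothetical, illustrative values) of the link speed needed to drain a day's worth of car data while parked:

```python
# Link speed needed to offload a day's worth of data during a parking window.
# The data volume and window lengths are hypothetical, illustrative values.

def required_link_gbps(data_per_day_tb: float, window_hours: float) -> float:
    bits = data_per_day_tb * 1e12 * 8       # TB -> bits
    return bits / (window_hours * 3600.0) / 1e9

if __name__ == "__main__":
    print(f"overnight (8 h)   : {required_link_gbps(4.0, 8.0):.1f} Gbps")   # ~1.1 Gbps
    print(f"service stop (1 h): {required_link_gbps(4.0, 1.0):.1f} Gbps")   # ~8.9 Gbps
```

Which suggests either the car/near-edge does a lot of data reduction first, or the garage connection needs to be a serious pipe.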

Compliance/archive requirements

Any sensor data probably needs to be stored for a long time and as such will need access to a long-term archive. Depending on the extent of this data, that may help dictate where processing is done. That is, if all the raw data needs to be held, then maybe the processing of that data can be deferred until it's already at the core and on its way to the archive.

However, any safety-oriented data processing needs to be done at the lowest level and may need to be reprocessed higher up in the hierarchy, to verify that proper safety decisions were made. And needless to say, all this data would need to be held.

~~~~

I started this post with 40 or more factors, but that was overkill. In the above, I tried to summarize the 6 critical factors I would use to determine where IoT data should be processed.
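Here's a rough sketch of how those factors might be combined into a placement decision. The tier names, factor flags, and rule ordering are just my reading of the post, not a formal framework:

```python
# Toy placement decision combining the factors discussed above.
# Rule priorities are illustrative, not prescriptive.

from dataclasses import dataclass

TIERS = ["sensor-edge", "near-edge", "core"]      # lowest to highest

@dataclass
class Workload:
    size_tier: str            # lowest tier able to handle the data volume
    safety_critical: bool     # processing criticality
    needs_big_infra: bool     # processing time / infrastructure (GPU farms, many cores)
    pipe_always_on: bool      # pipe availability back to the core
    archive_everything: bool  # compliance: all raw data heads to the archive anyway

def place(w: Workload) -> str:
    if w.safety_critical:
        return "sensor-edge"        # safety decisions stay at the lowest level
    if w.needs_big_infra:
        return "core"               # heavy infrastructure lives at the core
    if not w.pipe_always_on:
        # intermittent connectivity: reduce/process below the core,
        # but no lower than the data volume allows
        return max("near-edge", w.size_tier, key=TIERS.index)
    if w.archive_everything:
        return "core"               # defer processing until it's at the core/archive
    return w.size_tier              # otherwise data size decides

if __name__ == "__main__":
    car_control = Workload("near-edge", True, False, False, False)
    dl_training = Workload("core", False, True, True, True)
    print(place(car_control), place(dl_training))   # sensor-edge core
```

The data-funnel factor shows up implicitly: whatever tier is chosen, the layers below it still do at least some compression/compaction before forwarding data upward.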

My intent, in part 2 of this post, is to work through some examples. If there's any example that you feel may be instructive, please let me know.

Also, if there are other factors that you would use to determine where to process IoT data, let me know.

Polarized laser light speeds up data center networks


Read an article the other day, Polarizing the data center, from IEEE Spectrum, on new optical technology that has the potential to boost data center networking speeds ~7x beyond what they are today. The research was published in a Nature article, Ultrafast spin lasers (paywall), but a previous version of the paper released on PLOS (Ultrafast spin lasers) is freely available.

It's still in lab demonstration at this point, but if it does make it into the data center, it has the potential to remove local networking as a bottleneck for application workloads, at least for the foreseeable future.

The new technology is based on polarizing (right or left circular) laser light and using that polarization to encode ones and zeros. Today's optical transceivers use on-off or brightness levels to encode data signals, which requires a lot of power (and, by definition, cooling) to work. In contrast, polarizing laser light takes ~7% of the power (and cooling) of the old style of on-and-off laser light.

How it works

Not sure I understand all the physics, but it appears that if you are able to control the carrier spin within a semiconductor Vertical-Cavity Surface-Emitting Laser (VCSEL), the laser transmutes carrier spin into photon polarization and, by doing so, emits polarized laser light. With appropriate sensors, this polarization can be detected and decoded.

In addition, due to physical constraints, modulating (encoding) laser intensity will never be as fast as modulating (encoding) carrier spin. This has something to do with cycling the laser on and off vs. the polarization process. As such, one should be able to transmit more information with polarized laser light than with intensity-modulated laser light.

Moreover, polarization can be done at room temperature. Apparently, VCSELs operating today typically hit 70°C in normal high-speed operation, vs. ~21°C for VCSELs using polarization.

Lab results

In the lab, they are using (I believe) mechanical bending in combination with a pulsed laser to create the spin carriers in the VCSELs that polarize the laser light. This is just for demonstration purposes; it's unclear whether this approach will be usable in a data center application of the technology.

In their lab experiments, they were able to demonstrate VCSEL polarization cycles (how quickly they could change polarization) in the 5 ps (picosecond, trillionths of a second) range. This resulted in transmitting something on the order of 214GHz of polarized light cycles. Somewhere in the PLOS article, they mentioned transmitting a random bit string using the technology, not just cycling through 1s and 0s over and over again.

The researchers believe that moving from mechanical bending to photonic crystal or strained quantum well-based VCSELs will allow them to move from signaling at 214GHz to 1THz, or ~28X what can be done with laser intensity signaling today.
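Putting the article's numbers together, a quick arithmetic sketch (nothing beyond the figures quoted above): a 5 ps switching cycle works out to roughly 200 GHz, and if 1 THz is ~28X today's intensity signaling, that implies intensity signaling sits somewhere around 36 GHz, which is roughly consistent with the ~7x boost quoted at the top of the post for the 214 GHz demonstration.

```python
# Sanity-check arithmetic on the signaling rates quoted in the article.

cycle_time_s = 5e-12                      # 5 ps polarization switching time
raw_rate_hz = 1.0 / cycle_time_s          # ~200 GHz, close to the reported 214 GHz

demo_rate_hz = 214e9                      # demonstrated polarization signaling
future_rate_hz = 1e12                     # projected with better VCSELs
intensity_rate_hz = future_rate_hz / 28   # implied by the "~28X" claim, ~36 GHz

print(f"1 / 5 ps          = {raw_rate_hz / 1e9:.0f} GHz")
print(f"implied intensity = {intensity_rate_hz / 1e9:.1f} GHz")
print(f"demo speed-up     = {demo_rate_hz / intensity_rate_hz:.1f}x")
print(f"future speed-up   = {future_rate_hz / intensity_rate_hz:.0f}x")
```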

I don't know whether the technology will get out of the lab anytime soon, but 1THz (~1Tbps) signaling seems like something most IT organizations would want, especially if the price is similar to today's technology.

The research mentioned this would be more suitable for data center networking than for long-range data transfers. Not sure why, but it could be because 1) it's still relatively experimental and 2) they have yet to determine distance degradation parameters.

Of course, normal (on-off) signaling technology using VCSELs is not standing still. There's always the potential to move beyond current physical constraints and boost a technology's capabilities. Just witness the superparamagnetic barrier in magnetic disks over the years. That physical barrier has moved multiple times during my career.

However, a nearly order-of-magnitude speed improvement and a more than order-of-magnitude power/cooling improvement are hard to come by with mature technology. I see polarized optical fiber networking in the data centers of the future.

~~~~

Comments?


IT in space

Read an article last week about all the startup activity that’s taking place in space systems and infrastructure (see: As rocket companies proliferate … new tech emerges leading to a new space race). This is a consequence of cheap(er) launch systems from SpaceX, Blue Origin, Rocket Lab and others.

SpaceBelt, storage in space

One startup that caught my eye was SpaceBelt from Cloud Constellation Corporation, which is planning to put PBs (4X the Library of Congress) of data storage in a constellation of LEO satellites.

The LEO storage pool will be populated by multiple nodes (satellites), with a set of geo-synchronous access points to the LEO storage pool. Customers use ground-based secure terminals to talk with the geo-synchronous access satellites, which communicate with the LEO storage nodes to access data.

Their main selling points appear to be data security and availability. The only way to access the data is through secured satellite downlinks/uplinks and then you only get to the geo-synchronous satellites. From there, those satellites access the LEO storage cloud directly. Customers can’t access the storage cloud without going through the geo-synchronous layer first and the secured terminals.

The problem with terrestrial data is that it is prone to security threats as well as natural disasters that can take out a data center or a region. But with all your data residing in a space cloud, such concerns shouldn't be a problem. (However, gaining access to your ground stations is a whole different story.)

AWS and Lockheed-Martin supply new ground station service

The other company of interest is not a startup but a link-up between Amazon and Lockheed Martin (see: Amazon-Lockheed Martin …) that supplies a new cloud-based, satellite ground station as a service offering. The new service will use Lockheed Martin ground stations.

Currently, the service is limited to S-Band and antennas located in Denver, but plans are to expand to X-Band and locations throughout the world. The plan is to have ground stations located close to AWS data centers, so data center customers can have high-speed access to satellite data.

There are other startups in the ground station as a service space, but none with the resources of Amazon-Lockheed. All of this competition is just getting off the ground, but a few have been leasing idle ground station resources to customers. The AWS service already has a few big customers, like DigitalGlobe.

One thing we have learned is that the appeal of cloud services is as much about the ecosystem that surrounds them as about the service offering itself. So having satellite ground stations as a service is good, but having these services tied directly into other public cloud computing infrastructure is much, much better. Google, Microsoft, IBM, are you listening?

Data centers in space

Why stop at storage? Wouldn't it be better to support both storage and computation in space? That way, access latencies wouldn't be a concern. When terrestrial disasters occur, it's not just data that's at risk. Ditto for security threats.

Having whole data centers in space would represent a whole new stratum of cloud computing. Also, IT could now implement space-native applications.

If Microsoft can run a data center under the oceans, I see no reason they couldn’t do so in orbit. Especially when human flight returns to NASA/SpaceX. Just imagine admins and service techs as astronauts.

And yet, security and availability aren't the only threats one has to deal with. What happens to the space cloud when war breaks out and satellite killers are set loose?

Yes, space infrastructure is not subject to terrestrial disasters or internet-based security risks, but there are other problems besides those and war, such as solar storms and space debris clouds.

In the end, it's important to have multiple, non-overlapping risk profiles for your IT infrastructure. That is, each IT deployment may be subject to one set of risks, but those sets should be disjoint from those of another IT deployment option. IT in space, subject to solar storms, space debris, and satellite killers, is a nice complement to terrestrial cloud data centers, subject to natural disasters, internet security risks, and other earth-based, man-made disasters.

On the other hand, a large solar storm like the 1859 one could knock out every data system in the world or in orbit. As for under the sea, it probably depends on how deep it's submerged!

Photo Credit(s): Screen shots from SpaceBelt youtube video (c) SpaceBelt

Screen shots from AWS Ground Station as a Service sign up page (c) Amazon-Lockheed

Screen shots from Microsoft’s Under the sea news feature (c) Microsoft

Screaming IOP performance with StarWind’s new NVMeoF software & Optane SSDs

I was at SFD17 last week in San Jose, where we heard from StarWind SAN (@starwindsan) about the latest NVMeoF storage system they have been working on. Videos of their presentation are available here. StarWind is this amazing company from Ukraine that has been developing software-defined storage.

They have developed their own NVMe SPDK for Windows Server. Intel doesn't offer SPDK for Windows today, so they developed their own. They also developed their own NVMeoF initiator (for CentOS Linux). The target system they used to test their software was a multicore server running Windows Server with a single Optane SSD.

Extreme IOP performance consumes cores

During their development activity, they tested various configurations. At the start of their development, they used a Windows Server with their NVMeoF target device driver. With this configuration, on a bare metal server, they found that they could max out a single Optane SSD at 550K 4K random write IOPS at 0.6msec.

When they moved this code to run under a Hyper-V environment, they were able to come close to this performance, at 518K 4K write IOPS at 0.6msec. However, this level of IO activity pegged 8 cores at 100% on their 40-core server.

More IOPs/core performance in user mode

Next, they decided to optimize their driver code and move as much as possible into user space and out of kernel space, while continuing to use Hyper-V. With this level of code, they were able to achieve the same performance as bare metal, ~551K 4K random write IOPS at 0.6msec RT and 2.26 GB/sec, while pegging only 2 cores. They expect to release this initiator and target software in mid-October 2018!

They converted this functionality to run under ESX/VMware and were able to see much the same results: 2 cores pegged, ~551K 4K random write IOPS at 0.6msec RT and 2.26 GB/sec. They will have the ESXi version of their target driver code available sometime later this year.

Their initiator was running CentOS on another server. When they decided to test how far they could push their initiator, they were able to drive 4 Optane SSDs at up to ~1.9M 4K random write IOP performance.

At SFD17, I asked what they could do at 100 usec RT, and Max said about 450K IOPS. This is still surprisingly good performance. With 4 Optane SSDs and ~8 cores, you could achieve 1.8M IOPS and ~7.4GB/sec. Doubling the Optane SSDs, one could achieve ~3.6M IOPS and ~14.8GB/sec, given sufficient initiator and target cores.

Optane based super computer?

The ORNL Summit supercomputer, the current number one supercomputer in the world, has a sustained throughput of 2.5 TB/sec over 18.7K server nodes. You could do much the same with 337 CentOS initiator nodes, 337 Windows Server target nodes, and ~1350 Optane SSDs.

This assumes that StarWind's initiator and target NVMeoF systems can scale, but they've already shown they can do 1.8M IOPS across 4 Optane SSDs on a single initiator server, and I assume a single target server with 4 Optane SSDs and at least 8 cores to service the IO. Multiplying this by 4 or 400 shouldn't be much of a concern, except for the increasing networking bandwidth.
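A quick sketch of the scaling arithmetic behind those node counts, using only the per-node figures above:

```python
# Scaling arithmetic: matching Summit's sustained throughput with StarWind
# NVMeoF target nodes. Per-node figures are taken from the post.

iops_per_node = 1.8e6              # 4K random write IOPS per 4-Optane target node
gb_s_per_node = 7.4                # GB/sec per 4-Optane target node
optane_per_node = 4

summit_gb_s = 2_500.0              # Summit's 2.5 TB/sec sustained throughput

nodes = summit_gb_s / gb_s_per_node
print(f"target nodes needed : {nodes:.0f}")                     # ~338 (post says 337)
print(f"Optane SSDs needed  : {nodes * optane_per_node:.0f}")   # ~1350
print(f"aggregate 4K IOPS   : {nodes * iops_per_node:.2e}")     # ~6.1e8
```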

Of course, with StarWind's Virtual SAN, there's no data management, no data protection, and probably very little in the way of logical volume management. And the ORNL Summit supercomputer is accessing data as files in a massive file system, while the StarWind Virtual SAN is a block device.

But if I wanted to rule the supercomputing world, in a somewhat smallish data center, I might be tempted to put together 400 StarWind NVMeoF target storage nodes with 4 Optane SSDs each, convert their initiator code to work on IBM Spectrum Scale nodes, and let her rip.

Comments?

Cloudlets at the edge

Read an article (Never heard of Edge Computing…) this week on ATT's presentations at their Spark Conference. Apparently, ATT is saying that the problem with AR, VR/immersive gaming, self-driving cars, drones, etc. has been twofold: lack of bandwidth and processing latency.

The long-latency issue comes from current processing for these devices being done mostly at cloud data centers, 100s of miles away from the device doing the work.

The upcoming 5G rollout should hopefully solve the bandwidth problem (for now at least), but the processing latency issue can only be dealt with by moving compute closer to where it's needed.
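To see why "closer" matters, here's a small sketch of just the speed-of-light (in fiber) round-trip delay; the distances are hypothetical, and real latency adds routing, queuing, and processing on top of this floor:

```python
# Round-trip propagation delay in fiber, ignoring routing/queuing/processing.
# Light in fiber travels at roughly 2/3 the speed of light in vacuum.

SPEED_IN_FIBER_KM_S = 200_000.0     # ~2e5 km/s

def rtt_ms(one_way_km: float) -> float:
    return 2 * one_way_km / SPEED_IN_FIBER_KM_S * 1000.0

if __name__ == "__main__":
    for label, km in [("neighborhood cloudlet", 2),
                      ("regional cloud DC (~300 mi)", 480),
                      ("distant cloud DC (~1500 mi)", 2400)]:
        print(f"{label:28s} ~{rtt_ms(km):5.2f} ms RTT")
```

Even before any compute time, hundreds of miles of fiber adds several milliseconds per round trip, which is exactly what AR/VR and vehicle control loops can't afford.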

A couple of weeks back, I was at VMworld, and one of the big announcements there was vSphere support for 64-bit ARM processors. Pat and others talked up the coming edge processing tsunami that will overtake IT as we know it today and bring significant benefits to everything from traffic management, to infrastructure maintenance, to better security for all. Windows Server has been ported to ARM for Azure apps for a while now, but I don't know if it's been slated for external release.

The new edge

Up until this point, I had always considered edge devices to be sensors and other equipment embedded in buildings, land, sea, air, machinery, etc., that provide useful, real-world information/status about their environments and flag when something's gone wrong that has to be fixed. I hadn't really seen AR and VR/immersive gaming as an edge issue. However, drones and self-driving cars are edge devices.

AR seems to rely on smartphone levels of computation, and VR today is usually tethered to a desktop PC or Mac. But to take AR and VR to the next level, processing requirements need to go up.

Self-driving cars have their own army of compute processing and sensors to deal with real-time road recognition and accident avoidance. Drones have smartphone levels of compute onboard and a nearby laptop for additional processing and control support. Not sure that edge processing requirements for these devices are increasing, but I'm no expert.

But they all need more low-latency computation to become more effective, they all require lots of bandwidth, and some of them, at least, can only perform well if both of these requirements are met.

Cloudlets

ATT has been experimenting with neighborhood data centers, test zones, or cloudlets to supply this new, low-latency processing.

These are apparently local (edge) mini-datacenters that host edge electronics gear for low-latency processing. ATT has one test zone (or cloudlet) currently set up in Silicon Valley and has plans to roll out more across the US.

Up until this point, I thought edge processing would be solved by moving AI and other compute resources out to the devices themselves (see my AI processing at the edge post). Moore’s law would allow today’s compute capabilities to be embedded in low-power edge devices in a decade or so.

But why wait? If you can set up a mini (ARM-based) data center in a neighborhood cell-phone/telephone/cable/electrical cabinet, running vSphere or Windows virtualization, with high-speed networking connections to edge devices and the cloud, you can get by with less compute processing at the edge devices, enjoy low-latency responsiveness, and use fewer cloud resources to boot.

~~~~

Doesn't this mean we need mini-racklets to stack our mini cloudlets' compute resources, something like 9.5″-wide, 0.5U shelving?

Just when I thought (edge) decentralization would take over compute again, cloudlets come to take it back again.

Photo Credit(s): L10000901-Edit|Guide van Nispen

Augmented Reality RFid Cup|JeanBaptisteParis

The Great Escape|Edward Webb

 

Photonic or Optical FPGAs on the horizon

Read an article this past week (Toward an optical FPGA – programable silicon photonics circuits) on a new technology that could underpin optical FPGAs. The technology is based on implantable waveguides and uses silicon-on-insulator technology, which is compatible with current chip fabrication.

How does the Optical FPGA work

Their Optical FPGA is based on an erasable directional coupler (DC) built using Ge (germanium) ion implantation. A DC is formed when two optical waveguides are placed close enough together that optical energy (photons) on one waveguide is switched over to the other, nearby waveguide.

As can be seen in the figure, the red (erasable, implantable) and blue (conventional) waveguides are fabricated on the FPGA. The red waveguide performs the function of a DC between the two conventional waveguides. The diagram shows both a single-stage and a dual-stage DC.

By using implantable (erasable) DCs, one can change the path of a photonic circuit just by erasing the implantable waveguide(s).

The Ge ion-implanted waveguides are erased by passing a laser over them, thus annealing (melting) them.

Once erased, the implantable waveguide DC no longer works. The chart on the left of the figure above shows how long the implantable waveguide needs to be to work; once erased to shorter than 4-5µm, it no longer acts as a DC.

It's not clear how one directs the laser to the proper place on the Optical FPGA to anneal the implantable waveguide, but that's a question of servos and mirrors.

Previous attempts at optical FPGAs required applying a continuous voltage to maintain the switched photonic circuits. Once the voltage was withdrawn, the photonics reverted back to the original configuration.

But once an implantable waveguide is erased (annealed) in their approach, the changes to the Optical FPGA are permanent.

FPGAs today

Electronic FPGAs have never gone out of favor with customers doing hardware innovation. By making Optical FPGAs possible, the techniques in the paper would allow for much more photonics innovation as well.

Optics are primarily used in communications and storage (CDs/DVDs) today. But quantum computing could potentially use photonics, and there's been talk of a 100% optical computer for a long time. As more and more photonics circuitry comes online, the need for an optical FPGA grows. The fact that it can be fabricated on today's fab lines makes it even more appealing.

But an FPGA is more than just directional control over (electronic or photonic) energy. One needs to have other circuitry in place on the FPGA for it to do work.

For example, if this were an electronic FPGA, gates, adders, muxes, etc. would all be somewhere on the FPGA.

However, once additional optical componentry is placed on the FPGA, photonic directional control would be the glue that makes the Optical FPGA programmable.

Comments?

Photo Credit(s): All photos from Toward an optical FPGA – programable silicon photonics circuits paper

 

Information flows everywhere – part 1

Read an article today from Scientific American (Sewage is helping cities flush out the opioid crisis) about how chemical analysis of wastewater can be used to assess the extent of the opioid crisis in a city.

Wastewater information highway

There's a lab at ASU (Arizona State University) that chemically analyzes samples of wastewater to determine the amount of drugs that a city's population excretes. They can provide a near real-time assessment of the proportion of drugs in city sewage and thereby in a city's population.

The problem with public drug use surveys and hospital data gathering is that they take time. Moreover, surveys and hospital data gathering typically come long after drugs have become a serious problem in a city's population.

Wastewater sample drug analysis can be done in a matter of days and can be redone as often as needed. Such data could be used to track intervention activities and see if they have a real impact (positive or negative) on drug use in a population.

Neighborhood health

In addition, by sampling sewage at a neighborhood level, one can gain an assessment of drug problems at whatever sub-division of a city is needed.

The above article talks about an MIT program with Cary, NC (from Biobot.io)  that is designing robots to traverse sewer pipes and analyze wastewater chemical makeup in real time, reporting this back to ground stations around the city.

With such an approach, one could almost zero in (depending on the sewer pipe network) on any neighborhood in a city, target specific interventions at that level, and measure impact in (digestion-delayed) real time. Doing so, cities, or states for that matter, could experiment with different interventions on a neighborhood-by-neighborhood basis and gain statistical evidence of intervention effectiveness.

But you can analyze wastewater for any number of variables, such as viruses, bacteria, enzymes, etc., any of which can lead to a better understanding of a population's health.

~~~~

Two things I want to leave you with:

First, public health has had a major impact on human health and has doubled our lifespan in 200 years. All modern cities have water treatment plants today to ensure water quality and have thereby reduced the incidence of cholera and other waterborne epidemics. Wastewater analysis has the potential for significant improvements in population health monitoring. Just like water treatment, wastewater analysis will someday become common public health practice in modern cities throughout the world.

Second, I was at a conference this week (Pure//Accelerate 2018) that presented a slide saying there is no cold data anymore. This was a reference to how re-analyzing old, cold data can often lead to insights and process improvements that were not obvious at first glance.

But it’s not just data anymore. Any activity done by man needs to be analyzed for (inherent & invisible) information flows that could be extracted to make the world a better place.
