I read an article the other day in the Christian Science Monitor (CSM) on the Be My Eyes app. The app is from BeMyEyes.com and is available for iPhone and Android smartphones.
Essentially there are two groups of people that use the app:
Sighted volunteers – these people sign up for the app and, when a visually impaired person needs help, provide visual assistance by speaking with the person on the other end.
Visually impaired individuals – these people sign up for the app and, when they have trouble identifying what they are (or are not) looking at, they can turn on their phone's camera and stream video to a volunteer, then ask the volunteer for help in deciding what they are looking at.
So, the visually impaired ask questions about the scenes they are shooting with their phone camera and volunteers will provide an answer.
It’s easy to register as Sighted and, I assume, as Blind. I downloaded the app, registered, and tried a test call within minutes. You have to enable notifications, microphone access, and camera access on your phone to use the app. The camera access is required to display the scene/video on your phone.
According to the app there are 492K sighted individuals, 34.1K blind individuals and they have been helped 214K times.
Sounds like an easy way to help the world.
There was no request to identify a language to use, so it may only work for English speakers. And there was no way to disable/enable it for periods when you don’t want to be disturbed. But maybe you would just close the app.
But other than that it was simple to use and seemed effective.
Now if there were only an app that would provide the same service for the hearing impaired to supply captions or a “filtered” audio feed to ear buds.
Blockchains are a software-based, permanent ledger which can be used to record anything. Bitcoin, Ethereum, and Hyperledger are all based on blockchains that provide similar digital information services with varying security, programmability, consensus characteristics, etc.
Blockchains represent an entirely new way of doing business in the digital world and have the potential to take over many financial services and other contracting activities that are done today between organizations.
Blockchain services provide the decentralized recording of transactions into an immutable ledger. The decentralized nature of blockchains makes it difficult (if not impossible) to game the system to record an invalid transaction.
Ethereum supports the Ethereum Virtual Machine (EVM), which offers customers and users a more programmable blockchain. That is, rather than just updating accounts with monetary transactions as Bitcoin does, one can implement specialized transaction processing that updates the immutable ledger. It’s this programmability that allows for the creation of “smart contracts”, which can be programmatically verified and executed.
Ethereum miner nodes are responsible for validating transactions and the state transition(s) that update the ledger. Transactions are grouped in blocks by miners.
Miners are responsible for validating the transaction block and performing a hard mathematical computation, or proof of work (PoW), which is used to validate the block of transactions. Once the PoW computation is complete, the block is packaged up and the miner node updates its database (ledger) and communicates its result to all the other nodes on the network, which update their transaction ledgers as well. This constitutes one state transition of the Ethereum ledger.
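To make the PoW idea concrete, here's a minimal sketch of hash-based proof of work. This is a toy: Ethereum's actual PoW (Ethash) is far more elaborate and memory-hard, and the block structure here is invented for illustration.

```python
import hashlib
import json

def mine_block(transactions, prev_hash, difficulty=2):
    """Toy proof of work: find a nonce so the block hash starts with
    `difficulty` zero hex digits. Only illustrates the basic idea --
    not Ethash, Ethereum's real (memory-hard) PoW algorithm."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        digest = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            # block validated; the miner would now broadcast it to its peers
            return block, digest
        nonce += 1

block, block_hash = mine_block([{"from": "A", "to": "B", "amount": 5}], "genesis")
```

Because each block embeds the previous block's hash, rewriting an old transaction would force redoing the PoW for every subsequent block, which is what makes the ledger hard to game.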
Miners that validate Ethereum transactions get paid in Ethers, which are a form of currency throughout the Ethereum ecosystem.
Ethereum ledger consensus is based on the miner nodes executing the PoW algorithm properly. The current Ethereum PoW algorithm is Ethash, which is an “ASIC resistant” algorithm. What this means is that standard GPUs and (less so) CPUs are already very well optimized to perform this algorithm, and any potential ASIC designer, if they could do better, would make more money selling their design to GPU and CPU designers than trying to game the system.
One problem with Bitcoin is that its PoW is more ASIC friendly, which has led some organizations to develop special-purpose ASICs in an attempt to dominate Bitcoin mining. If they can dominate Bitcoin mining, this could be used to game the Bitcoin consensus system and potentially implement invalid transactions.
Ethereum has two types of accounts:
Contract accounts that are controlled by the EVM application code, or
Externally owned accounts (EOA) that are controlled by a set of private keys and represent external agents (miner nodes, people, transaction generating entities)
Contract accounts really are code and data which constitute the EVM bytecode (application). Contract account bytecode is also stored on the Ethereum ledger (when deployed?) and is associated with an EOA that initiated the Contract account.
Contract functionality is written in Solidity, Serpent, Lisp Like Language (LLL) or other languages that can be compiled into EVM bytecode. Smart contracts use Ethereum Contract accounts to validate and execute contract actions.
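A smart contract is ultimately just code plus state that guards ledger updates. As a loose illustration (in Python rather than Solidity, with a hypothetical escrow scenario rather than any real deployed contract), a contract account might look like:

```python
class EscrowContract:
    """Toy 'contract account': code + data. A real Ethereum contract
    would be EVM bytecode compiled from Solidity and deployed by an EOA."""
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.released = False

    def fund(self, sender, value):
        # only the buyer may fund, and only with the agreed amount
        if sender == self.buyer and value == self.amount and not self.funded:
            self.funded = True
        return self.funded

    def release(self, sender):
        # only the buyer may release funds to the seller, and only once
        if sender == self.buyer and self.funded and not self.released:
            self.released = True
            return (self.seller, self.amount)  # payout recorded on the ledger
        return None

c = EscrowContract("alice", "bob", 10)
c.fund("alice", 10)
payout = c.release("alice")
```

The point is that the validation rules live in the contract code itself, so every node executing the contract reaches the same verdict on whether a transaction is valid.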
Ethereum gas pricing
As EVM contract accounts can consume arbitrary amounts of computation, bandwidth, and storage to process transactions, Ethereum uses a concept called “gas” to pay for their resource consumption.
When a contract account transaction is initiated, it identifies a gas price (in Ethers) and a maximum gas amount that it is willing to consume to process the transaction.
When a contract transaction takes place:
If the maximum gas amount is sufficient to cover what the transaction consumes, then the transaction is executed and applied to the ledger. Any leftover or remaining gas is credited back, in Ethers, to the EOA.
If the maximum gas amount is not enough to execute the transaction, then the transaction fails and no state update occurs.
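The two outcomes above can be sketched as a small settlement function. This is a simplification of my own (note, for instance, that in real Ethereum a failed out-of-gas transaction still forfeits the gas it consumed, even though state changes are reverted):

```python
def settle_gas(gas_price, max_gas, gas_used):
    """Toy gas settlement. Returns (applied, refund, fee) in Ethers.
    Simplified illustration, not Ethereum's actual fee mechanics."""
    if gas_used > max_gas:
        # out of gas: the transaction fails and no state update occurs,
        # but the gas consumed up to the limit is still forfeited
        return False, 0, max_gas * gas_price
    fee = gas_used * gas_price                 # paid to the miner
    refund = (max_gas - gas_used) * gas_price  # leftover gas back to the EOA
    return True, refund, fee

# e.g. gas price of 2, max of 10 gas, transaction needing 7 gas:
applied, refund, fee = settle_gas(2, 10, 7)
```

Pricing resource use this way is what stops a buggy or malicious contract from consuming unbounded computation on every node in the network.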
Enterprise Ethereum Alliance
What’s new to Ethereum is that Accenture, Bank of New York Mellon, BP, Credit Suisse, Intel, Microsoft, JP Morgan, UBS, and many others have joined together to form the Enterprise Ethereum Alliance. The alliance intends to create a standard version of the Ethereum software that enterprise companies can use to manage smart contracts.
Microsoft has had an Azure Blockchain-as-a-Service offering online since 2015, based on an earlier version of Ethereum (Project Bletchley).
Ethereum seems to be an alternative to IBM Hyperledger, which offers another enterprise-class blockchain for smart contracts. As enterprise-class blockchains look likely to transform the way companies do business in the future, having multiple enterprise-class blockchain solutions seems smart to many companies.
The article reported on what are called disengagement events that occurred on CA roads. This is where a driver has to take over from the self-driving automation to deal with a potential miscue, mistake, or accident.
Waymo (Google) way out ahead
It appears as if Waymo, Google’s self-driving car spin-out, is way ahead of the pack. It reported only 124 disengages over 636K mi (~1M km), or ~1 disengage every ~5.1K mi (~8K km). This is a ~4.3X better rate than last year’s 1 disengage for every ~1.2K mi (~1.9K km).
Competition far behind
Below I list some comparative statistics (from the DMV/CA report, noted above), sorted from best to worst:
BMW: 1 disengage for 638 mi (~1027 km);
Ford: 3 disengages for 590 mi (~950 km) or 1 disengage every ~197 mi (~317 km);
Nissan: 23 disengages for ~3.5K mi (~5.6K km) or 1 disengage every ~151 mi (~243 km);
Cruise (GM) automation: 181 disengagements for ~9.8K mi (~15.8K km) or 1 disengage every ~54 mi (~87 km);
Delphi: 149 disengages for ~3.1K mi (~5.0K km) or 1 disengage every ~21 mi (~34 km);
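The per-vehicle rates above are simple division against the reported totals; a quick sketch to sanity-check the arithmetic:

```python
# Reported CA miles driven and disengage counts (from the DMV report above)
reports = {
    "Waymo":  (636000, 124),
    "Ford":   (590, 3),
    "Cruise": (9800, 181),
    "Delphi": (3100, 149),
}

def miles_per_disengage(miles, disengages):
    """Average miles driven between disengagement events."""
    return miles / disengages

rates = {name: round(miles_per_disengage(mi, d))
         for name, (mi, d) in reports.items()}
# Waymo works out to roughly one disengage every ~5.1K miles
```
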
There was no information on previous years’ activities, so there’s no data on how competitors had improved over the last year.
Please note: the report only applies to travel on California (CA) roads. Other competitors are operating in other countries and other states (AZ, PA, & TX to name just a few). However, these rankings may hold up fairly well when combined with other state/country data. Thousands of kilometers of driving should be adequate to assess a self-driving car’s disengagement rate.
Apparently, Google has figured out how to reduce their sensor (hardware) costs by a factor of 10X, bringing the sensor package down from $75K to $7.5K, (most probably due to a cheaper way to produce Lidar sensors – my guess).
So now Waymo is doing about ~65 to ~1000 X more (CA road) miles than any competitor, has a much (~8 to ~243 X) better disengage rate and is moving to become a major auto supplier in both hardware and software.
It’s going to be an interesting century.
If the 20th century was defined by the emergence of the automobile, the 21st will probably be defined by dominance of autonomous operations.
This work was all inspired when the lead researcher saw an electronic blood centrifuge being used as a doorstop in a remote clinic due to lack of electricity.
So they started looking at various pre-electricity toys that rotate quickly to see if they could come up with an alternative.
The surprising thing is that they clocked a toy whirligig at over 10K RPM, a speed no one had documented before. The team worked on the device using experimentation, computer simulation, and mathematical analysis of its various aspects, such as string elasticity, in order to improve its speed and reliability. Finally, they were able to get their device to spin at 125K RPM.
They mounted a capillary (tube), where the blood is placed, onto a paper disk; then they just start pulling and pushing on the device to have it centrifuge the blood into its various components.
Blood centrifuges for anywhere
A blood centrifuge separates blood components into layers based on the density of blood elements. Red blood cells are the heaviest so they end up at the bottom of the tube, blood plasma is the lightest so it ends up at the top of the tube and parasites like malaria settle in the middle. Blood centrifuges help in diagnosing disease.
Any device spinning at 125K RPM is more than adequate to centrifuge blood. As such, the PaperFuge competes with electronic blood centrifuges that cost $1000-$5000 and, of course, take electricity to run.
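For a sense of why 125K RPM is more than adequate, the standard centrifugation formula converts spin speed and radius into relative centrifugal force (RCF, in multiples of g). The 1 cm radius below is an assumed illustrative value, not a measurement from the PaperFuge paper:

```python
def relative_centrifugal_force(radius_cm, rpm):
    """Standard RCF formula: RCF (in g) = 11.18 * r_cm * (RPM/1000)^2."""
    return 11.18 * radius_cm * (rpm / 1000) ** 2

# At 125K RPM with an (assumed) 1 cm effective radius to the capillary:
g_force = relative_centrifugal_force(1.0, 125000)
```

Even at a small radius, the g-forces at 125K RPM are orders of magnitude beyond what's needed to separate blood into layers.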
The PaperFuge is currently in field testing, but at $0.20 each, it would be a boon to many clinics and remote medical personnel, both on and off the grid.
Earlier this week I attended Hitachi Summit 2016, along with a number of other analysts, where Hitachi executives discussed their current and ongoing focus on the IoT (Internet of Things) business.
Hitachi is uniquely positioned to take on the IoT business over the coming decades, having a number of current businesses in industrial processes, transportation, energy production, water management, etc. Over time, all these industries and more are becoming much more data driven and smarter as IoT rolls out.
Some metrics indicating the scale of Hitachi’s current IoT business include:
Hitachi is #79 in the Fortune Global 500;
Hitachi generated $5.4B in IoT revenue (FY15);
Hitachi’s IoT R&D investment is $2.3B (over 3 years);
Hitachi has 15K customers worldwide and 1400+ partners; and
Hitachi spends ~$3B on R&D annually and has 119K patents.
Hitachi has been in the OT (Operational [industrial] Technology) business for over a century now. Hitachi has also had a very successful and ongoing IT business (Hitachi Data Systems) for decades. Their main competitors in the IoT business are GE and Siemens, but neither has the extensive history in IT that Hitachi has had. Both are working hard to catch up, though.
For one example of what Hitachi is doing in IoT, they have recently won a 27.5-year Rail-as-a-Service contract to upgrade, ticket, maintain, and manage all new trains for UK Rail. This entails upgrading all train rolling stock and providing upgraded rail signaling, traffic management systems, depot and station equipment, and ticketing services for all of UK Rail.
The success and profitability of this Hitachi service offering hinges on their ability to provide more cost efficient rail transport. A key capability they plan to deliver is predictive maintenance.
Today, in the UK and most other major rail systems, train high availability is often achieved by using spare rolling stock that’s pre-positioned and available to call into service when needed. With Hitachi’s new predictive maintenance capabilities, the plan is to reduce, if not totally eliminate, the need for spare rolling stock inventory and keep the new trains running 7X24.
Hitachi said their new trains capture 48K data items and generate over ~25GB per train per day. All this data will be fed into their new Hitachi Insight Group Lumada platform, which includes Pentaho, HSDP (Hitachi Streaming Data Platform), and their Content Analytics, to analyze train data and determine how best to keep the trains running. Behind all this analytical power will no doubt be the HDS HCP object store, used to keep track of all the train sensor data and other information, Hitachi UCP servers to process it all, and other Hitachi software and hardware to glue it all together.
The new trains and services will be rolled out over time, but there’s a pretty impressive timetable. For instance, Hitachi will add 120 new high-speed trains to UK Rail by 2018. About the only thing Hitachi is not directly responsible for in this Rail-as-a-Service offering is the communications network for the trains.
Hitachi’s other IoT offerings
Hitachi is actively seeking other customers for their Rail-as-a-Service IoT offering. But it doesn’t stop there; they would like to offer smart-water-as-a-service, smart-city-as-a-service, digital-energy-as-a-service, etc.
There’s almost nothing that Hitachi currently supplies as industrial products that they wouldn’t consider offering in an X-as-a-service solution. With HDS Lumada Analytics, HCP and HDS storage systems, Hitachi UCP converged infrastructure, Hitachi industrial products, and Hitachi consulting services, together they are primed to take over the IoT-industrial products/services market.
Scality was on the 25th floor of a downtown San Francisco office tower. And the view outside the conference room was great. Giorgio Regni, CTO, Scality, said on the two days a year it wasn’t foggy out, you could even see Golden Gate Bridge from their conference room.
As you may recall, Scality is an object storage solution that came out of the telecom/consumer networking industry to provide Google/Facebook-like storage services to other customers.
Scality RING is a software-defined object storage that supports a full complement of legacy and advanced interface protocols including NFS, CIFS/SMB, Linux FUSE, RESTful native, SWIFT, CDMI, and Amazon Web Services (AWS) S3. Scality also supports replication and erasure coding based on object size.
RING 6.0 brings AWS IAM style authentication to Scality object storage. Scality pricing is based on usable storage and you bring your own hardware.
Giorgio also gave a session on the RING’s durability (reliability), showing that they support 13 nines of data durability. He flashed up the math on this but it was too fast for me to take down :)
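Since I didn't capture Giorgio's actual math, here's a generic back-of-envelope version of that kind of durability calculation. It assumes independent fragment failures and a hypothetical erasure-coding layout (14 fragments, any 9 sufficient to rebuild, 1% fragment loss probability), none of which are Scality's actual numbers:

```python
from math import comb, log10

def object_loss_probability(n, k, p_frag):
    """P(object lost) when stored as n fragments, any k of which can
    rebuild it: loss requires more than n-k fragments to fail.
    Assumes independent fragment failures -- a simplification,
    and not necessarily the model Giorgio presented."""
    return sum(comb(n, f) * p_frag**f * (1 - p_frag)**(n - f)
               for f in range(n - k + 1, n + 1))

def nines(p_loss):
    """Express a loss probability as a 'number of nines' of durability."""
    return -log10(p_loss)

# Hypothetical 9+5 erasure code (14 fragments, any 9 rebuild the object),
# with a 1% chance of losing any single fragment:
p = object_loss_probability(14, 9, 0.01)
```

Even these made-up inputs give nearly 9 nines; with realistic disk failure rates, repair processes, and wider codes, very high durability figures become plausible.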
Scality has been on the market since 2010 and has had a lot of success lately, growing revenue 150% this past year. In the media and entertainment space, Scality has won a lot of business with their S3 support. But their other interface protocols are also very popular.
It looks as if AWS S3 is becoming the de facto standard for object storage. AWS S3 is the largest current repository of objects. As such, other vendors and solution providers now offer support for S3 services whenever they need an object/bulk storage tier behind their appliances/applications/solutions.
This has driven every object storage vendor to also offer S3 “compatible” services to entice these users to move to their object storage solution. In essence, the object storage industry, like it or not, is standardizing on S3 because everyone is using it.
But how can you tell if a vendor’s S3 solution is any good? You could always try it out to see if it works properly with your S3 application, but that involves a lot of heavy lifting.
However, there is another way: take an S3 driver and run your application against that. Assuming your vendor supports all the functionality used by the S3 driver, it should all work with the real object storage solution.
Open source S3 driver
Scality open sourced their S3 driver just to make this process easier. Now, one could just download their S3server driver (available from Scality’s GitHub) and start it up.
I used Docker for Mac, but I assume the terminal CLI is the same for both Mac and PC. Downloading and installing Docker for Mac was pretty straightforward. Starting it up took just a double click on the Docker application, which generates a toolbar Docker icon. You do need to enter your login password to run Docker for Mac, but once that’s done, you have Docker running on your Mac.
Open up a terminal window and you have the full Docker CLI at your disposal. You can download the latest S3server from Scality’s Docker Hub by executing a pull command (docker pull scality/s3server). To fire it up, you need to define a new container (docker run -d --name s3server -p 8000:8000 scality/s3server) and then start it (docker start s3server).
It’s that simple to have an S3server running on your Mac. The toolbox approach for older Macs and PCs is a bit more complicated but seems simple enough.
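With the container running, an application can talk to it through any ordinary S3 client library just by pointing at localhost:8000. Here's a sketch using Python's boto3; the accessKey1/verySecretKey1 credentials are the defaults I understand the S3server docs list, so check your container's documentation if they don't work:

```python
def local_s3server_config(host="localhost", port=8000):
    """Connection settings for a local Scality S3server container.
    The credentials are the documented S3server defaults (assumption --
    verify against your container's docs)."""
    return {
        "endpoint_url": f"http://{host}:{port}",
        "aws_access_key_id": "accessKey1",
        "aws_secret_access_key": "verySecretKey1",
    }

def smoke_test(cfg):
    """Round-trip one small object through the local S3server."""
    import boto3  # pip install boto3; speaks the same S3 API as AWS
    s3 = boto3.client("s3", **cfg)
    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello S3server")
    return s3.list_objects_v2(Bucket="test-bucket")["KeyCount"]

# With the container up: smoke_test(local_s3server_config())
```

The nice part is that moving the same application to real AWS S3 (or Scality RING) should mostly be a matter of changing the endpoint and credentials.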
The data is stored in the container and persists until you stop/delete the container. However, there’s an option to store the data elsewhere as well.
I tried to use CyberDuck to load some objects into my Mac’s S3server but couldn’t get it to connect properly. I wrote up a ticket to the S3server community. It seemed to be talking to the right port, but maybe I needed to do an S3cmd to initialize the bucket first – I think.
[Update 2016Sep19: Turns out the S3 server getting started doc said you should download an S3 profile for Cyberduck. I didn’t do that originally because I had already been using S3 with Cyberduck. But did that just now and it now works just like it’s supposed to. My mistake]
Anyways, it all seemed pretty straightforward to run S3server on my Mac. If I were an application developer, it would make a lot of sense to try S3 this way before doing anything on the real AWS S3. And some day, when I grew tired of paying AWS, I could always migrate to Scality RING S3 object storage – or at least that’s the idea.
Google calls the new chip a Tensor Processing Unit (TPU). According to Google, the TPU provides an order of magnitude more power-efficient machine learning than what’s achievable via off-the-shelf GPUs/CPUs. TensorFlow is Google’s open-sourced machine learning software.
A couple of weeks back I was at Intel Cloud Day 2016 with the rest of the TFD team. We listened to a number of presentations from Intel Management team mostly about how the IT world was changing and how they planned to help lead the transition to the new cloud world.
The view from Intel is that any organization with 1200 to 1500 servers has enough scale to do a private cloud deployment that would be more economical than using public cloud services. Intel’s new goal is to facilitate 10,000 (private) clouds being deployed across the world.
In order to facilitate the next 10,000, Intel is working hard to introduce a number of new technologies and programs that they feel can make it happen. One discussed at the show was the new OpenStack scheduler based on Google’s open-sourced Kubernetes technology, which provides container management for Google’s own infrastructure but now supports the OpenStack framework.
Another way Intel is helping is by building a new 1000-server (500 now) cloud test lab in San Antonio, TX. Of course the servers will use the latest Xeon chips from Intel (see below for more info on the latest chips). The other enabling technology discussed a lot at the show was software-defined infrastructure (SDI), which applies across the data center, networking, and storage.
According to Intel, security isn’t the number 1 concern holding back cloud deployments anymore. Nowadays it’s more the lack of skills that’s governing how quickly the enterprise moves to the cloud.
At the event, Intel talked about a couple of verticals that seemed to be ahead of the pack in adopting cloud services, namely education and healthcare. They also spent a lot of time talking about the new technologies they were introducing today.