Know Fortran, optimize NASA code, make money

Read a number of articles this past week about NASA offering a Fortran optimization contest, the High Performance Fast Computing Challenge (HPFCC), for their computational fluid dynamics (CFD) program. They want to speed up CFD by 10X to 1000X and are willing to pay for it.

The contest is being run through HeroX and TopCoder, which together are offering $55K in prizes across the various levels of the contest.

The FUN3D CFD code (manual) runs on NASA’s Pleiades Linux supercomputer complex, which sports over 245K cores. Even on that complex, a typical FUN3D CFD run takes thousands to millions of core hours!

The program(s)

FUN3D does a hypersonic fluid analysis over a (fixed) surface which includes a “simulation of mixtures of thermally perfect gases in thermo-chemical equilibrium and non-equilibrium. The routines in PHYSICS_DEPS enable coupling of the new gas modules to the existing FUN3D infrastructure. These algorithms also address challenges in simulation of shocks and boundary layers on tetrahedral grids in hypersonic flows.”

Not sure what all that means, but I am certain there are a number of iterations across multiple Fortran modules, all computed over a 3D grid of points that corresponds to both the surface being modeled and the gas mixture it’s flying through at hypersonic speeds. Sounds easy enough.

The contest(s)

There are two levels to the contest: an Ideation phase (at HeroX) and an Architecture phase (at TopCoder). The $55K is split between the HeroX Ideation phase, which awards a total of $20K ($10K to the winner plus two $5K runner-up prizes), and the TopCoder Architecture phase, which awards a total of $35K ($15K for the winner, $10K for 2nd place and another $10K for a “Qualified improvement candidate”).

The (HeroX) Ideation phase looks for specific new or faster algorithms that could replace current ones in FUN3D, including “exploiting algorithmic developments in such areas as grid adaptation, higher-order methods and efficient solution techniques for high performance computing hardware.”

The (TopCoder) Architecture phase looks specifically at speeding up actual FUN3D code execution. “Ideal submission(s) may include algorithm optimization of the existing code base, inter-node dispatch optimization or a combination of the two. Unlike the Ideation challenge, which is highly strategic, this challenge focuses on measurable improvements of the existing FUN3D suite and is highly tactical.”

Sounds to me like the Ideation phase is selecting algorithm designs, while the Architecture phase is implementing the new algorithms or just generally speeding up FUN3D code execution.

The equation(s)

There’s a Navier-Stokes equation solver that gets called maybe a trillion times during a run, until the flow settles down, and any minor improvement here would obviously be significant. Perhaps there are algorithmic changes to be made, if you’re an aeronautical engineer, or perhaps there are compiler speedups to be found, if you’re a Fortran expert. Both approaches can be validated/debugged/proved out on a desktop computer.

You have to be a US citizen to access the code, and you can apply here. You will receive an email to verify your email address, and then, once you’re validated and back on the website, you need to approve the software use agreement. NASA will verify your physical address by sending you a letter with a passcode to use to finally access the code. The process may take weeks to complete, so if you’re interested in the contest, best to start now.

The Fortran(s)

I learned Fortran 66 a long time ago and may have dabbled in Fortran 77, but that’s the last time I touched Fortran. Still, it’s like riding a bike: once you’ve done it, it’s easy to do again.

As I understand it, FUN3D uses Fortran 2003, and NASA suggests you use the GNU Fortran (gfortran) compiler, as the Intel one has some bugs in it. There’s also a Fortran 2015 standard in the works, but it’s not in mainstream use just yet.

A million core hours, just amazing. If you could shave a millisecond off a routine that’s called a trillion times, you’d save a billion seconds, or ~280K core hours.
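To sanity-check that back-of-the-envelope math, here’s a quick sketch (Python, just for the arithmetic; the trillion-call count is the guess above, not a NASA figure):

    # Savings from shaving 1 millisecond off a routine called a trillion times
    calls_per_run = 10**12               # ~a trillion solver calls (a guess)
    savings_per_call_s = 0.001           # 1 millisecond saved per call

    saved_seconds = calls_per_run * savings_per_call_s
    saved_core_hours = saved_seconds / 3600
    print(f"{saved_seconds:.0e} seconds = ~{saved_core_hours:,.0f} core hours")
    # -> 1e+09 seconds = ~277,778 core hours, i.e., ~280K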

Coders, start your engines…

 

Crowdsourced vision for visually impaired

Read an article the other day in the Christian Science Monitor (CSM) on the Be My Eyes app. The app is from BeMyEyes.com and is available for iPhone and Android smartphones.

Essentially there are two groups of people that use the app:

  • Sighted volunteers – these people sign up for the app, and when a visually impaired person needs help, they provide visual assistance by speaking with the person on the other end.
  • Visually impaired individuals – these people sign up for the app, and when they are having trouble understanding what they are (or are not) looking at, they can turn on their phone’s camera and take video that is sent to a volunteer, then ask the volunteer for help in deciding what they are looking at.

So, the visually impaired ask questions about the scenes they are shooting with their phone camera and volunteers will provide an answer.

It’s easy to register as Sighted and, I assume, as Blind. I downloaded the app, registered and tried a test call within minutes. You have to enable notifications, microphone access and camera access on your phone to use the app. Camera access is required to display the scene/video on your phone.

According to the app, there are 492K sighted volunteers and 34.1K blind individuals registered, and blind individuals have been helped 214K times.

Sounds like an easy way to help the world.

There was no request to identify a language to use, so it may only work for English speakers. And there was no way to disable/enable it for periods when you don’t want to be disturbed. But maybe you would just close the app.

But other than that it was simple to use and seemed effective.

Now if only there were an app that provided the same service for the hearing impaired, supplying captions or a “filtered” audio feed to earbuds.

The world needs more apps like this…

Comments?

Ethereum enters the enterprise

Read an article the other day in the NYT (Business Giants Announce Creation of … Ethereum).

In case you don’t know, Ethereum is an open source blockchain solution that’s different from the software behind Bitcoin and IBM’s Hyperledger (for more on Hyperledger, see our Blockchains at IBM post or our GreyBeardsOnStorage podcast with Donna Dillenberger, IBM Fellow).

Blockchains are software-based, permanent ledgers which can be used to record anything. Bitcoin, Ethereum and Hyperledger are all based on blockchains and provide similar digital information services, with varying security, programmability, consensus characteristics, etc.

Blockchains represent an entirely new way of doing business in the digital world and have the potential to take over many financial services and other contracting activities that are done today between organizations.

Blockchain services provide the decentralized recording of transactions into an immutable ledger.  The decentralized nature of blockchains makes it difficult (if not impossible) to game the system to record an invalid transaction.

Miners

Ethereum supports an Ethereum Virtual Machine (EVM) application environment, which offers customers and users a more programmable blockchain. That is, rather than just updating accounts with monetary transactions as Bitcoin does, one can implement specialized transaction processing for updating the immutable ledger. It’s this programmability that allows for the creation of “smart contracts”, which can be programmatically verified and executed.

Ethereum miner nodes are responsible for validating transactions and the state transition(s) that update the ledger. Transactions are grouped into blocks by miners.

Miners are responsible for validating a block of transactions and performing a hard mathematical computation, or proof of work (PoW), which is used to validate the block. Once the PoW computation is complete, the block is packaged up, and the miner node updates its database (ledger) and communicates the result to all the other nodes on the network, which update their transaction ledgers as well. This constitutes one state transition of the Ethereum ledger.
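To make the PoW idea concrete, here’s a toy sketch in Python (a generic hash-based proof of work along Bitcoin’s lines, not Ethereum’s actual Ethash algorithm, and the difficulty setting is arbitrary):

    import hashlib

    def mine(block_data, difficulty=4):
        """Find a nonce making the block's hash start with `difficulty` zeros."""
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce, digest   # hard to find, trivial for peers to verify
            nonce += 1

    nonce, digest = mine("alice->bob:10;bob->carol:3")
    print(nonce, digest)

The asymmetry is the point: finding the nonce takes many hash attempts, but any node can verify the result with a single hash.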

Miners that validate Ethereum transactions get paid in Ethers, which are a form of currency throughout the Ethereum ecosystem.

Blockchain consensus

Ethereum ledger consensus is based on the miner nodes executing the PoW algorithm properly. The current Ethereum PoW algorithm is Ethash, which is an “ASIC resistant” algorithm. What this means is that standard GPUs and (less so) CPUs are already very well optimized to perform this algorithm, and any potential ASIC designer, if they could do better, would make more money selling their design to GPU and CPU designers than trying to game the system.

One problem with Bitcoin is that its PoW is more ASIC friendly, which has led some organizations to develop special-purpose ASICs in an attempt to dominate Bitcoin mining. If they can dominate Bitcoin mining, this can be used to game the Bitcoin consensus system and potentially implement invalid transactions.

Ethereum Accounts

Ethereum has two types of accounts:

  • Contract accounts that are controlled by the EVM application code, or
  • Externally owned accounts (EOA) that are controlled by a set of private keys and represent external agents (miner nodes, people, transaction generating entities)

Contract accounts really are code and data which constitute the EVM bytecode (application). Contract account bytecode is also stored on the Ethereum ledger (when deployed?) and is associated with the EOA that initiated the Contract account.

Contract functionality is written in Solidity, Serpent, Lisp Like Language (LLL) or other languages that can be compiled into EVM bytecode. Smart contracts use Ethereum Contract accounts to validate and execute contract actions.

Ethereum gas pricing

As EVM contract accounts can consume arbitrary amounts of computation, bandwidth and storage to process transactions, Ethereum uses a concept called “gas” to pay for their resource consumption.

When a contract account transaction is initiated, it identifies a gas price (in Ethers) and a maximum gas amount that it is willing to consume to process the transaction.

When a contract transaction takes place:

  • If the maximum gas amount covers what the transaction consumes, then the transaction is executed and applied to the ledger, and any leftover or remaining gas is refunded (in Ethers) to the EOA (see the sketch below).
  • If the maximum gas amount is not enough to execute the transaction, then the transaction fails and no ledger update occurs, but the gas consumed is not refunded.
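A minimal sketch of that gas accounting (the numbers are made up for illustration; real gas costs are defined per EVM operation):

    def settle_gas(gas_limit, gas_used, gas_price_ether):
        """Return the Ether a transaction's sender ends up paying."""
        if gas_used > gas_limit:
            # Out of gas: the transaction fails and state changes revert,
            # but the sender still pays for the gas limit provided.
            return gas_limit * gas_price_ether
        # Success: pay only for gas actually used; the rest is refunded.
        return gas_used * gas_price_ether

    print(settle_gas(21_000, 20_000, 2e-8))   # succeeds: 0.0004 Ether
    print(settle_gas(21_000, 25_000, 2e-8))   # fails:    0.00042 Ether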

Enterprise Ethereum Alliance

What’s new to Ethereum is that Accenture, Bank of New York Mellon, BP, Credit Suisse, Intel, Microsoft, JP Morgan, UBS and many others have joined together to form the Enterprise Ethereum Alliance. The alliance intends to create a standard version of the Ethereum software that enterprise companies can use to manage smart contracts.

Microsoft has had an Azure Blockchain-as-a-Service offering, based on Ethereum, online since 2015; its blockchain architecture work there goes under the name Project Bletchley.

Ethereum seems to be an alternative to IBM Hyperledger, which offers another enterprise-class blockchain for smart contracts. As enterprise-class blockchains look like they will transform the way companies do business in the future, having multiple enterprise-class blockchain solutions seems smart to many companies.

Comments?

Photo Credit(s): Miner by Mark Callahan; Gas prices by Corpsman.com; File: Ether pharmecie.jpg by Wikimedia

 

Crowdsourcing made better

Read an article the other day in MIT News (Better wisdom from crowds) about a new approach to drawing out better information from crowdsourced surveys. It’s based on something the researchers have named the “surprisingly popular” algorithm.

Normally, when someone performs a crowdsourced survey, the result is some statistically based (simple or confidence-weighted) average of all the responses. But this may not be correct: if the majority is ill-informed, then any average of their responses will most likely be incorrect as well.

Surprisingly popular?

What the surprisingly popular algorithm does is ask respondents what they believe will be the most popular answer to a question, and then ask what the respondent believes is the correct answer to the question. It’s these two answers that are then used to choose the surprisingly popular answer.

For example, let’s say the surveyors want to know whether or not Philadelphia is the capital of Pennsylvania (PA, a state in the eastern USA). They ask everyone what they think the most popular answer will be. In this case it’s yes, because Philadelphia is large, well known and historically important. They then ask each respondent for a yes or no on whether Philadelphia is the capital of PA. Of course, the majority answer that comes back from the crowd here is also yes.

But a sizable contingent would answer that Philadelphia is not the capital of PA (it is actually Harrisburg). And because “no” turns out to be more popular in the actual answers than the crowd predicted it would be, it becomes the “surprisingly popular” answer, and this is the answer the surprisingly popular algorithm would choose.
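Here’s a minimal sketch of the idea in Python (my own reconstruction from the description above, not the researchers’ code):

    def surprisingly_popular(votes, predictions):
        """votes: respondents' own yes/no answers.
        predictions: each respondent's predicted fraction answering yes."""
        actual_yes = sum(1 for v in votes if v == "yes") / len(votes)
        predicted_yes = sum(predictions) / len(predictions)
        # Pick the answer that does better than the crowd expected it to.
        return "yes" if actual_yes > predicted_yes else "no"

    # Is Philadelphia the capital of PA? Most vote yes, and nearly everyone
    # predicts a big yes majority; the knowledgeable minority votes no.
    votes = ["yes"] * 60 + ["no"] * 40
    predictions = [0.85] * 100           # crowd expects ~85% to say yes
    print(surprisingly_popular(votes, predictions))   # -> "no" (correct)

Even though “yes” wins the raw vote 60/40, it underperforms the crowd’s 85% expectation, so “no” is the surprisingly popular answer.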

What it means

The MIT researchers indicated that their approach reduced errors by 21.3% over a simple majority and 24.2% over a confidence weighted average.

What the researchers have found is that the surprisingly popular algorithm can be used to identify a knowledgeable subset of respondents that knows the correct answer. By knowing what the crowd predicts the most popular answer to be, the algorithm can discount this and identify the answer that is more popular than predicted, using that as the result of the survey.

Where might this be useful?

In our (USA) last election there were quite a few false news stories sent out via social media (Facebook and Twitter). If there were a mechanism to survey the readers of these stories that asked both whether the story was false/made up or not and what the most popular answer would be, perhaps the news story’s truthfulness could be better established by the crowd.

In the past, there have been a number of crowdsourced markets used to predict stock movements, commodity production and other securities market values. Crowdsourcing using surprisingly popular methods might better identify the correct answer from the crowd.

Problems with surprisingly popular methods

The one issue is that this approach could be gamed. If a group wanted some answer (let’s say that a news story was true), they could easily indicate that the most popular answer would be false, and then the method would fail. But it would fail in any case if the group could command a majority of responses, so it’s no worse than any other crowdsourced approach.

Comments?

Photo Credit(s): Crowd shot by Andrew West; Lost in the crowd by Eric Sonstroem

 

Blockchains at IBM

I attended IBM Edge 2016 (videos available here, login required) this past week and there was a lot of talk about their new blockchain service available on z Systems (LinuxONE).

IBM’s blockchain software/service is based on the open source Hyperledger project (originally announced as Open Ledger).

Blockchains explained

We have discussed blockchain before (see my post on BlockStack). Blockchains can be used to implement an immutable ledger useful for smart contracts, electronic asset tracking, secured financial transactions, etc.

BlockStack was being used to implement a Public Key Infrastructure (PKI) and a worldwide, distributed file system.

IBM’s Blockchain-as-a-Service offering has plugin-based consensus that can use supermajority rules (2/3+1 of the members of a blockchain must agree to ledger contents) or consensus based on the parties to a transaction (e.g., supplier and user of a component).

Bitcoin (an early form of blockchain) consensus uses miners (performing hard cryptographic calculations) to determine the shared state of its ledger.

There can be any number of blockchains in existence at any one time. Microsoft Azure also offers Blockchain as a service.

The potential for blockchains is enormous and very disruptive to middlemen everywhere. Anywhere ledgers are used to keep track of assets, information, money, etc., that undergo transformations, transitions or transactions as they are further refined, produced and change hands, blockchains can easily track them. The only question is whether those assets, information, currency, etc. can be digitally fingerprinted, and whether that fingerprint can be read/verified. If so, then blockchains can be used to track them.
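As a minimal illustration (my own sketch, not IBM’s Hyperledger code), chaining each record’s fingerprint to its predecessor’s is what makes such a ledger tamper-evident:

    import hashlib, json

    def fingerprint(record, prev_hash):
        """Hash a record together with its predecessor's hash (SHA-256)."""
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    chain = ["0" * 64]   # genesis entry
    for event in ({"part": "P123", "step": "shipped by supplier"},
                  {"part": "P123", "step": "received by carrier"},
                  {"part": "P123", "step": "arrived at port"},
                  {"part": "P123", "step": "cleared customs"}):
        chain.append(fingerprint(event, chain[-1]))

    # Altering any earlier record changes every later hash, exposing tampering.
    print(chain[-1])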

New uses for Blockchain

IBM showed a demo of their new supply chain management service, based on the z Systems blockchain, in action. IBM component suppliers record when they ship component(s), shippers record when they receive the component(s), port authorities record when components arrive at port, and shippers record when parts clear customs and when they arrive at IBM facilities. Not sure if every one of these transitions is recorded, but there were a number of records for each component shipment from supplier to IBM warehouse. This service is live and being used by IBM and its component suppliers right now.

Leanne Kemp, CEO of Everledger, presented another example at IBM Edge (presumably built on the z Systems Hyperledger service), used to track diamonds from mining, to cutting, to polishing, to wholesaler, to retailer, to purchaser, and beyond. Apparently each diamond has a digital barcode/fingerprint/signature that’s imprinted microscopically on the diamond during processing and can be used to track it throughout the processing chain, all the way to the end user. This diamond blockchain is used for fraud detection, verification of ownership and digital certification that the diamond was produced in accordance with the Kimberley Process.

Everledger can also be used to track any other asset that can be digitally fingerprinted as it flows from creation, to factory, to wholesaler, to retailer, to customer, and after purchase.

Why z System blockchains

What makes z Systems a great way to implement blockchains is its secure, isolated partitioning and its advanced cryptographic capabilities, such as accelerated hashing, signing & securing, and hardware-based encryption, all of which speed up blockchain processing. z Systems also has FIPS 140-2 Level 4 certification, which provides the highest security rating possible for blockchain and other security-based operations.

From IBM’s perspective blockchains speak to the advantages of the mainframe environments. Blockchains are compute intensive, they require sophisticated cryptographic services and represent formal systems of record, all traditional strengths of z Systems.

Aside from the service offering, IBM has made numerous contributions to the Hyperledger project. I assume one could just download the z Systems code and run it on any LinuxONE processing environment you want. Also, since Hyperledger is Linux based, it could just as easily run in any OpenPower server running an appropriate version of Linux.

Blockchains will be used to maintain the systems of record of the future, just like mainframes maintained the systems of record of today and the past.

Comments?

 

Scality’s Open Source S3 Driver

(Photo: the view from Scality’s conference room)

We were at Scality last week for Cloud Field Day 1 (CFD1) and one of the items they discussed was their open source S3 driver. (Videos available here).

Scality was on the 25th floor of a downtown San Francisco office tower, and the view outside the conference room was great. Giorgio Regni, CTO of Scality, said that on the two days a year it isn’t foggy out, you can even see the Golden Gate Bridge from their conference room.

Scality

As you may recall, Scality is an object storage solution that came out of the telecom/consumer networking industry to provide Google/Facebook-like storage services to other customers.

Scality RING is software-defined object storage that supports a full complement of legacy and advanced interface protocols including NFS, CIFS/SMB, Linux FUSE, RESTful native, SWIFT, CDMI and Amazon Web Services (AWS) S3. Scality also supports replication and erasure coding, selected based on object size.

RING 6.0 brings AWS IAM style authentication to Scality object storage. Scality pricing is based on usable storage and you bring your own hardware.

Giorgio also gave a session on the RING’s durability (reliability), which showed they support 13-9s of data durability. He flashed up the math on this but it was too fast for me to take down :)

Scality has been on the market since 2010 and has been having a lot of success lately, having grown 150% in revenue this past year. In the media and entertainment space, Scality has won a lot of business with their S3 support. But their other interface protocols are also very popular.

Why S3?

It looks as if AWS S3 is becoming the de facto standard for object storage. AWS S3 is currently the largest repository of objects. As such, other vendors and solution providers now offer support for S3 services whenever they need an object/bulk storage tier behind their appliances/applications/solutions.

This has driven every object storage vendor to also offer S3 “compatible” services to entice these users to move to their object storage solution. In essence, the object storage industry, like it or not, is standardizing on S3 because everyone is using it.

But how can you tell if a vendor’s S3 solution is any good? You could always try it out to see if it works properly with your S3 application, but that involves a lot of heavy lifting.

However, there is another way. Take an S3 Driver and run your application against that. Assuming your vendor supports all the functionality used in the S3 Driver, it should all work with the real object storage solution.

Open source S3 driver

Scality open sourced their S3 driver just to make this process easier. Now, one could just download their S3server driver (available from Scality’s GitHub) and start it up.

Scality’s S3 driver runs on top of a Docker Engine, so to run it on your desktop you would need to install Docker Toolbox for older Mac or Windows systems, or run Docker for Mac or Docker for Windows for newer systems. (We also talked with Docker at CFD1).

Firing up the S3server on my Mac

I used Docker for Mac, but I assume the terminal CLI is the same for both. Downloading and installing Docker for Mac was pretty straightforward. Starting it up took just a double click on the Docker application, which generates a toolbar Docker icon. You do need to enter your login password to run Docker for Mac, but once that’s done, you have Docker running on your Mac.

Open up a terminal window and you have the full Docker CLI at your disposal. You can download the latest S3server from Scality’s Docker Hub by executing a pull command (docker pull scality/s3server). To fire it up, you define a new container (docker run -d --name s3server -p 8000:8000 scality/s3server) and then start it (docker start s3server).

It’s that simple to have an S3server running on your Mac. The Toolbox approach for older Macs and PCs is a bit more complicated but seems simple enough.

The data is stored in the container and persists until you stop/delete the container. However, there’s an option to store the data elsewhere as well.

I tried to use CyberDuck to load some objects into my Mac’s S3server but couldn’t get it to connect properly. I filed a ticket with the S3server community. It seemed to be talking to the right port, but maybe I needed to use s3cmd to initialize the bucket first – I think.
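For anyone trying the same thing programmatically, a boto3 sketch along these lines should create the bucket and load an object (the accessKey1/verySecretKey1 test credentials are what I believe the S3server docs list as defaults; double-check there):

    import boto3   # pip install boto3

    # Point an S3 client at the local S3server instead of AWS
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:8000",
        aws_access_key_id="accessKey1",          # assumed default test key
        aws_secret_access_key="verySecretKey1",  # assumed default test secret
    )

    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"Hello, S3server!")
    print(s3.list_objects_v2(Bucket="test-bucket")["KeyCount"])   # -> 1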

[Update 2016Sep19: Turns out the S3 server getting started doc said you should download an S3 profile for Cyberduck. I didn’t do that originally because I had already been using S3 with Cyberduck. But did that just now and it now works just like it’s supposed to. My mistake]

~~~~

Anyways, it all seemed pretty straightforward to run S3server on my Mac. If I were an application developer, it would make a lot of sense to try S3 this way before doing anything on the real AWS S3. And some day, when I grew tired of paying AWS, I could always migrate to Scality RING S3 object storage – or at least that’s the idea.

Comments?

(#Storage-QoW 2015-002): Will we see 3D TLC NAND GA in major vendor storage products in the next year?


I was almost going to just say something about TLC NAND but there’s planar TLC and 3D TLC. From my perspective, planar NAND is on the way out, so we go with 3D TLC NAND.

QoW 2015-002 definitions

By “3D TLC NAND” we mean 3 dimensional (rather than planar or 2 dimensional) triple level cell (storing 3 bits per cell rather than two [MLC] or one [SLC]) NAND technology. It could show up in SSDs, PCIe cards and perhaps other implementations. At least one flash vendor is claiming to be shipping 3D TLC NAND, so it’s available to be used. We did a post earlier this year on 3D NAND, how high can it go. Rumors are out that startup vendors will adopt the technology, but I have heard nothing about any major vendor’s plans for it.

By “major vendor storage products” I mean EMC VMAX, VNX or XtremIO; HDS VSP G1000, HUS VM (or replacement), VSP-F/VSP G800-G600; HPE 3PAR; IBM DS8K, FlashSystem, or V7000 StorWize; & NetApp AFF/FAS 8080, 8060, or 8040. I tried to use block storage product lines supporting 700 drives or more from the major storage vendors.

By “in the next year” I mean between today (15Dec2015) and one year from today (15Dec2016).

By “GA” I mean a generally available product offering that can be ordered, sold and installed within the time frame identified above.

Forecasts for QoW 2015-002 need to be submitted via email (or via twitter with email addresses known to me) to me before end of day (PT) next Tuesday 22Dec2015.

Thanks to Howard Marks (DeepStorage.net, @DeepStorageNet) for the genesis of this week’s QoW.

We are always looking for future QoW’s, so if you have any ideas please drop me a line.

Forecast contest – status update for prior QoW(s):

(#Storage-QoW 2015-001) – Will 3D XPoint be GA’d in enterprise storage systems within 12 months? There are 2 active forecasters; current forecasts are:

A) YES with 0.85 probability; and

B) NO with 0.62 probability.

These can be updated over time, so we will track current forecasts for both forecasters with every new QoW.

 

An analyst forecasting contest ala SuperForecasting & 1st #Storage-QoW

I recently read the book Superforecasting: The Art and Science of Prediction by P. E. Tetlock & D. Gardner. Their Good Judgment Project has been running for years now, and the book presents the results of their experiments. I thought it was a great book.

But it also got me to thinking, how can industry analysts do a better job at forecasting storage trends and events?

Impossible to judge most analyst forecasts

One thing the book mentioned was that analyst/pundit forecasts are typically too infrequent, too vague and too time-independent for their accuracy to ever be judged. I have committed this fault as much as anyone on this blog and on our GreyBeards on Storage podcast (e.g., see our Yearend podcast videos…).

What do we need to do differently?

The experiments documented in the book show us the way. One suggestion is to start putting time durations/limits on all forecasts so that we can better assess analyst accuracy. Another is to estimate a probability for a forecast and update that estimate periodically as new information becomes available. Another is to document your rationale for making your forecast. Also, do post mortems on both correct and incorrect forecasts to learn how to forecast better.

Finally, make more frequent forecasts so that accuracy can be assessed statistically. The book discusses Brier scores as a way of scoring the accuracy of forecasters.
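For reference, a Brier score for yes/no questions is just the mean squared difference between the forecast probability and the outcome (0 is perfect; lower is better). A quick sketch:

    def brier_score(forecasts):
        """forecasts: list of (probability of YES, whether YES happened)."""
        return sum((p - (1.0 if happened else 0.0)) ** 2
                   for p, happened in forecasts) / len(forecasts)

    # e.g., one forecast of 0.85 YES on an event that happened, and one
    # of 0.62 NO (i.e., 0.38 YES) on an event that also happened:
    print(brier_score([(0.85, True), (0.38, True)]))   # -> ~0.2035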

How to be better forecasters?

In the back of the book the authors publish a list of helpful hints or guidelines for better forecasting, which I will summarize here (read the book for more information):

  1. Triage – focus on questions where your work will pay off.  For example, try not to forecast anything that’s beyond say 5 years out, because there’s just too much randomness that can impact results.
  2. Split intractable problems into tractable ones – the authors call this Fermi-izing (after the physicist Enrico Fermi, who loved to ballpark answers to hard questions by breaking them down into easier questions to answer). So decompose problems into simpler (answerable) problems.
  3. Balance inside and outside views – search for comparisons (outside) that can be made to help estimate unique events and balance this against your own knowledge/opinions (inside) on the question.
  4. Balance over- and under-reacting to new evidence – as forecasts are updated periodically, new evidence should impact your forecasts. But a balance has to be struck as to how much new evidence should change forecasts.
  5. Search for clashing forces at work – in storage there are many ways to store data and perform faster IO. Search out all the alternatives, especially ones that can critically impact your forecast.
  6. Distinguish all degrees of uncertainty – there are many degrees of knowability, try to be as nuanced as you can and properly aggregate your uncertainty(ies) across aspects of the question to create a better overall forecast.
  7. Balance under/over confidence, prudence/decisiveness – rushing to judgment can be as bad as dawdling too long. You must get better at both calibration (how accurate your probability estimates prove to be over many forecasts) and resolution (decisiveness in forecasts). For calibration, think weather forecasts: if rain tomorrow is forecast at 80% probability, then over time such estimates should on average prove correct. Resolution is no guts, no glory: if all your estimates are between 0.4 and 0.6 probability, you’re probably being too conservative to really be effective.
  8. During post mortems, beware of hindsight bias – e.g., of course we were going to have flash in storage because the price was coming down, controllers were becoming more sophisticated, reliability became good enough, etc., represents hindsight bias. What was known before SSDs came to enterprise storage was much less than this.

There are a few more hints than the above. In the Good Judgment Project, forecasters were put on teams, and there’s one guideline that deals with how to be a better forecaster on a team. Then there’s another that says don’t treat these guidelines as gospel. And a third, on trying to balance between over- and under-compensating for recent errors (which sounds like #4 above).

Again, I would suggest reading the book if you want to learn more.

Storage analysts forecast contest

I think we all want to be better forecasters. At least I think so. So I propose a multi-year contest, where someone provides a storage question of the week and analysts, such as myself, provide forecasts. Over time we can score the forecasts by computing a Brier score for each analyst’s set of forecasts.

I suggest we run the contest for 1 year to see if there’s any improvement in forecasting, and decide again next year whether we want to continue.

Question(s) of the week

But the first step in better forecasting is to have more frequent and better questions to forecast against.

I suggest that the analyst community come up with a question of the week. Then everyone would get one week from publication to record their forecasts. Over time, as results come in, we can score analysts on their forecasting ability.

I would propose we use some sort of hash tag to track new questions, “#storage-QoW” might suffice and would stand for Question of the week for storage.

Not sure if one question a week is sufficient but that seems reasonable.

(#Storage-QoW 2015-001): Will 3D XPoint be GA’d in enterprise storage systems within 12 months?

3D XPoint NVM was announced last July by Intel-Micron (I wrote a post about it here). By enterprise storage I mean enterprise and mid-range class, shared storage systems that are accessed as block storage via Ethernet or Fibre Channel using SCSI device protocols, or as file storage using SMB or NFS file access protocols. By 12 months I mean by EoD 12/8/2016. By GA’d, I mean announced as generally available and sellable in any of the major IT regions of the world (USA, Europe, Asia, or Middle East).

I hope to have my prediction in by next Monday with the next QoW as well.

Anyone interested in participating please email me at Ray [at] SilvertonConsulting <dot> com and put QoW somewhere in the title. I will keep actual names anonymous unless told otherwise. Brier scores will be calculated starting after the 12th forecast.

Please email me your forecasts. Initial forecasts need to be in by one week after the QoW goes live.  You can update your forecasts at any time.

Forecasts should be of the form “[YES|NO] Probability [0.00 to 0.99]”.

Better forecasting demands some documentation of your rationale for your forecasts. You don’t have to send me your rationale, but I suggest you document it someplace you can refer back to during post mortems.

Let me know if you have any questions and I will try to answer them here.

I could use more storage questions…

Comments?

Photo Credits: Renato Guerreiro, Crystalballer