cOAlition S requires open access to funded research

I read a Science article this past week (A new mandate highlights costs and benefits of making all scientific articles free) about Plan S, a mandate from a group of funding organizations, called cOAlition S, that have come together to require open access to all peer-reviewed research they fund. The list of organizations in cOAlition S is impressive, including national R&D funding agencies from the UK, Ireland, Norway, and a number of other countries, plus charitable R&D funders such as the WHO, the Wellcome Trust, and the Bill & Melinda Gates Foundation, and the group is also supported by the EU. Plan S takes effect this year.

Essentially, all research funded by these organizations must be immediately published in an open access forum, either in an open access journal or freely available in an open access section of a publisher's website, which means it can be read by anyone worldwide with access to the web. Authors and institutions retain copyright for the work, and the work must be published under an open access license such as CC BY (Creative Commons Attribution).

Why open access is important

On this blog, we frequently find ourselves writing about research that is only available via a paid subscription or on a pay-per-article basis. Sometimes, if we search long enough, we find a duplicate of the article published in pre-print form on a preprint server or in an open access journal.

We have written about open access journals before (see our New Science combats Coronavirus post). Much of what we do on this blog would not be possible without open access outlets like PLoS, bioRxiv, and PubMed.

Open access mandates are trending

Open access mandates have been around for a while now. Even the US government got into the act, mandating in 2008 that all research funded by the NIH be open access, with the Departments of Agriculture and Energy following later (see the Wikipedia entry on open access mandates).

In addition, given the pandemic emergency, many research publishers, such as Nature and Elsevier, made any and all information about the coronavirus freely accessible on their websites.

Impacts and R&D research publishing business model

Although research is funded by public organizations such as charities and government agencies, prior to open access mandates most research was published in peer-reviewed journals that charged a fee for access. For many research organizations, those fees were simply a cost of doing research. But if you were an independent researcher, or at an institution that couldn't afford those fees, doing cutting-edge research without that access was nearly impossible.

Yes, in some cases journal publishers waived these fees for deserving institutions and organizations, but this wasn't the case for individual researchers. Or, if you were truly diligent, you could request a copy of a paper from an author and wait.

Of course, journal publishers have real expenses they need to cover, as well as a reasonable profit to make. But due to business consolidation, there are fewer independent journals around, and as a result publishers charge bundled license fees for vast swathes of research articles. Such a wide bundle may or may not be of interest to an individual or an institution. And with consolidation, profits became a more significant consideration.

So open access mandates often include funding to cover the fees publishers charge to supply open access. Such fees vary widely, so mandates have also begun to require that fees be published along with a description of how prices are calculated. The hope is to make such costs more transparent.

Impacts on authors of research articles

There's an aphorism for researchers that says "publish or perish", which means you must publish research in order to become a recognized expert in your field. That recognition is often the main driver behind better academic employment and more research funding.

However, it's not just about the volume of published papers; the quality of research also matters. And the more highly regarded publishing outlets have an advantage here, in that they are de facto gatekeepers for what's published in their journals. As such, where you publish can lend credibility to your research.

Another thing has changed over the last few decades: judging the quality of research has become more quantitative. Nowadays, research quality is also measured by the number of citations it receives. The more popular a publisher is, the more readers it has, which increases the possibility of citations.

Thus, most researchers try to publish their best work in highly regarded journals. And of course, these journals charge a high price to provide open access.

Successful research institutions can afford to pay these prices but those further down the totem pole cannot.

Most mandates come with additional funding to help pay the cost of supplying open access. But they also require that these charges be published and justified, in the belief that doing so will lend some transparency to these costs.

So the researcher is caught in the middle. Funding organizations want open access to research they fund. And publishers want to be paid a profit for that access.

History of research publication

Nature magazine first started publishing research in 1869, Science magazine first published in 1880, and the Royal Society first published research in 1665. So publishing research has been going on for over 350 years, and at least as a for-profit business model, since the mid-1800s.

Prior to journals, research was only available in books. More than likely, the author of the research had to pay to have a book published, and the publisher made money only when those books were sold. And before that, scientific research was mostly available only through a course of study, also mostly paid for by the student.

So science has always had a cost to access. What open access mandates do is fold this cost into the funding of the research itself.

Now if only open access could also solve the reproducibility crisis in science, we could have ourselves a real scientific revolution.

Comments?


Services and products, a match made in heaven

wrench rust by HVargas (cc) (from Flickr)

In all the hoopla about companies' increasing services revenues, what seems to be missing is that hardware and software sales automatically drive lots of services revenue.

A recent Wikibon post by Doug Chandler (see Can cloud pull services and technology together …) showed a chart of leading IT companies percent of revenue from services.  The percentages ranged from a high of 57% for  IBM to a low of 12% for Dell, with the median being ~26.5%.

In the beginning, …

It seems to me that services started out being an adjunct to hardware and software sales – i.e., maintenance, help to install the product, provide operational support, etc. Over time, companies like IBM and others went after service offerings as a separate distinct business activity, outside of normal HW and SW sales cycles.

This turned out to be a great revenue booster, and practically turned IBM around in the 90s. However, one problem with hardware and software vendors' reporting of service revenue is that they also embed break-fix, maintenance and infrastructure revenue streams in these line items.

The Wikibon blog mentioned StorageTek's great service revenue business when Sun purchased them. I recall that at the time, this was primarily driven by break-fix, maintenance and infrastructure revenues, not by other non-product related revenues.

Certainly companies like EDS (now with HP), Perot Systems (now with Dell), and other pure service companies generate all their revenue from services not associated with selling HW or SW.  Which is probably why HP and Dell purchased them.

The challenge for analysts is to try to separate ongoing maintenance, break-fix and infrastructure revenues from other service activity, in order to understand what is actually driving service revenue growth:

  • IBM seems to break out their GBS (consulting and application management) from their GTS (outsourcing, infrastructure, and maintenance) revenues (see IBM's 10-K).  However, extracting break-fix and maintenance revenues from the other GTS revenues is impossible outside IBM.
  • EMC has no breakdown whatsoever for their services revenue line item in their 10-K.
  • HP, similarly, has no breakdown for their service revenues in their 10-K.

Some of this may be discussed in financial analyst calls, but I could locate nothing but the above in their annual reports/10Ks.

IBM and Dell to the rescue

So we are all left to wonder how much of reported services revenue is ongoing maintenance and infrastructure business versus other services business.  Certainly IBM, in reporting both GBS and GTS, gives us some inkling of what this might be in their annual report: GBS is $18B and GTS is $38B, so maintenance and break-fix must be some portion of that GTS line item.

Perhaps we could use Dell as a proxy to determine break-fix, maintenance and infrastructure service revenues. I'm not sure where Wikibon got the reported service revenue percentage for Dell, but their most recent 10-K shows services are more like 19% of annual revenues.

Dell had a note in their "Results of operations" section that said Perot Systems was 7% of this, which means pre-existing services, primarily break-fix, maintenance and other infrastructure support revenues, accounted for something like 12% (maybe this is what Wikibon is reporting).

It's unclear how representative Dell's revenue percentages are of the rest of the IT industry, but if we take their ~12% of revenues off the percentages reported by Wikibon, then the new range runs from 45% for IBM down to 7% for Dell, with a median around 14.5% for non-break-fix, maintenance and infrastructure service revenues.
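
For anyone who wants to reproduce that adjustment, here's a minimal sketch of the arithmetic, assuming the ~12% Dell proxy applies uniformly across vendors (the percentages are the ones quoted above; everything else is just subtraction):

```python
# Back-of-the-envelope: strip an assumed ~12% break-fix/maintenance/infrastructure
# share (the Dell proxy above) from reported services-revenue percentages.
reported_services_pct = {"IBM": 57.0, "median vendor": 26.5, "Dell": 19.0}  # % of revenue
break_fix_proxy_pct = 12.0  # Dell-derived proxy for product-driven service revenue

for company, pct in reported_services_pct.items():
    other_services = max(pct - break_fix_proxy_pct, 0.0)
    print(f"{company}: {pct:.1f}% reported services, "
          f"~{other_services:.1f}% after removing the break-fix proxy")
```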

Why is this important?

Break-fix, maintenance revenues and most infrastructure revenues are entirely associated with product (HW or SW) sales, representing an annuity once original product sales close.  The remaining service revenues are special purpose contracts (which may last years), much of which are sold on a project basis representing non-recurring revenue streams.

—-

So the next time some company tells you their service revenues are up 25% YoY, ask them how much of this is due to break-fix and maintenance.  This may tell you whether their product footprint expansion or their service offerings success is driving service revenue growth.

Comments?

Information commerce – part 2

3d personal printer by juhansonin (cc) (from Flickr)

I wrote a post a while back about how interplanetary commerce could be stimulated through the use of information commerce (see my Information based inter-planetary commerce post).  Last week I saw an article in the Economist magazine that discussed new 3D-printers used to create products with just the design information needed to describe a part or product.  Although this is only one type of information commerce, cultivating such capabilities can be one step to the future information commerce I envisioned.

3D Printers Today

3D printers grew up from the 2D inkjet printers of last century.  It turns out that if 2D printers can precisely spray ink onto a surface, similar technology can build up a 3D structure one plane at a time.  After each layer is created, a laser, infrared light or some other technique is used to set the material into its proper form, and then the part is incrementally lowered so that the next layer can be created.

Such devices use a form of additive manufacturing which adds material to the exact design specifications necessary to create one part. In contrast, normal part manufacturing activities such as those using a lathe are subtractive manufacturing activities, i.e., they take a block of material and chip away anything that doesn’t belong in the final part design.

3D printers started out making cheap, short-life plastic parts but recently, using titanium powders, have been used to create extremely long-lived metal aircraft parts, and nowadays can create virtually any short- or long-lived plastic part imaginable.  A few limitations persist: the size of the printer determines the size of the part or product, and 3D printers that can create multi-material parts are fairly limited.

Another problem is the economics of 3D printing of parts, both in time and cost.  Volume production, using subtractive manufacturing of parts is probably still a viable alternative, i.e., if you need to manufacture 1000 or more of the same part, it probably still makes sense to use standard manufacturing techniques.   However, the boundary as to where it makes economic sense to 3D print a part or whether to use a lathe to manufacture a part is gradually moving upward.  Moreover, as more multi-material capable 3D printers start coming online, the economics of volume product manufacturing (not just a single part) will cause a sea change in product construction.
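
To make that moving breakeven boundary concrete, here is a toy sketch with entirely made-up costs (not industry figures): subtractive manufacturing carries a high fixed tooling/setup cost with a low per-part cost, while 3D printing has almost no setup cost but a higher per-part cost.

```python
# Toy breakeven model with hypothetical numbers: the crossover volume is where
# subtractive manufacturing's tooling investment starts paying for itself.
subtractive_setup, subtractive_per_part = 50_000.0, 10.0   # hypothetical $
printing_setup, printing_per_part = 500.0, 75.0            # hypothetical $

crossover = (subtractive_setup - printing_setup) / (printing_per_part - subtractive_per_part)
print(f"With these assumptions, 3D printing is cheaper below ~{crossover:.0f} parts per run")
# Cheaper per-part printing (or pricier tooling) pushes this crossover volume upward,
# which is the "boundary gradually moving upward" argument made above.
```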

Information based, intra-planetary commerce

The Economist article discussed some implications of the sophisticated 3D printers coming in the near future.  Specifically, manufacturing can be done locally rather than having to ship parts and products from one country to another.  With 3D printers, all one needs to do is transmit the product design to wherever the product needs to be produced and sold.  The article suggested this would eliminate most of the cost advantage available today to low-wage countries that manufacture parts and products.

The other implication of newer 3D printers is that product customization becomes much easier.  I envision clothing, furnishings, and other goods literally tailor-made for an individual through the proper use of design-rule-checking CAD software together with local, sophisticated 3D printers.  How Joe Consumer fires up a CAD program and tailors their product is another matter.  But with 3D printers coming online, sophisticated, CAD-knowledgeable users could almost do this today.

—-

In the end, the information needed to create a part or a product will be the key intellectual property.  It’s already been happening for years now but the dawn of 3D printers will accelerate this trend even more.

Also, 3D printers will expand information commerce, joining the information activities already provided by finance, research/science, media, and other information purveyors around the planet today.  Anything that makes information more a part of everyday commerce can be beneficial, whenever we ultimately begin to move off this world to the next planet – let alone when I want to move to Tahiti…

Comments?

Real-time data analytics from customer interactions

the ghosts in the machine by MelvinSchlubman (cc) (From Flickr)

At a recent EMC product launch in New York, there was a customer question and answer session for industry analysts with four of EMC’s leading edge customers. One customer, Marco Pacelli, was the CEO of ClickFox, a company providing real-time data analytics to retailers, telecoms, banks and other high transaction volume companies.

Interactions vs. transactions

Marco was very interesting, mostly because at first I didn't understand what his company was doing or how they were doing it.  He made the statement that for every transaction (customer activity that generates revenue) a company processes – and there are millions of them – there can be literally 10 to 100 distinct customer interactions.  And it's the information in these interactions that can most help companies maximize transaction revenue, volume and/or throughput.

Tracking and tracing all these interactions in real time, to try to make sense of the customer interaction sphere, is a new and emerging discipline.  Apparently, ClickFox makes extensive use of Greenplum, one of EMC's recent acquisitions, to do all this, but I was more interested in what they were trying to achieve than in the products used to accomplish it.

Banking interactions

For example, the websites, bank tellers, ATMs and myriad other devices one uses to interact with a bank are all capable of recording any interaction or action we perform. What ClickFox seems to do is track customer interactions across all these mechanisms, trace what transpired leading up to any transaction, and determine how it could be done better. The fact that most banking interactions are authenticated to one account, regardless of origin, makes tracking interactions across all facets of customer activity possible.

By doing this, ClickFox can tell companies how to generate more transactions, faster.  If a bank can change its interactions with a customer across websites, bank tellers, ATMs, phone banking and any other touchpoint so that more transactions can be done with less trouble, that can be worth a lot of money.
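
As a purely illustrative sketch (nothing to do with ClickFox's or Greenplum's actual implementation), the core idea of tracing authenticated interactions back to a transaction might look something like this: group events by account, keep them in time order, and emit the interaction path whenever a revenue-generating event appears.

```python
from collections import defaultdict

# Toy interaction log: (account, timestamp, touchpoint, event). A real system would
# ingest millions of these from websites, ATMs, tellers, phone banking, etc.
events = [
    ("acct1", 1, "web",    "login"),
    ("acct1", 2, "web",    "rate_lookup"),
    ("acct1", 3, "atm",    "balance_check"),
    ("acct1", 4, "teller", "open_cd"),          # the revenue-generating transaction
    ("acct2", 1, "phone",  "password_reset"),
]
TRANSACTIONS = {"open_cd"}  # hypothetical set of events that count as transactions

paths = defaultdict(list)
for acct, ts, touchpoint, event in sorted(events, key=lambda e: (e[0], e[1])):
    paths[acct].append((touchpoint, event))
    if event in TRANSACTIONS:
        print(acct, "->", paths[acct])  # the interaction path that led to this transaction
        paths[acct] = []
```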

How all that data is aggregated and sent offsite or processed onsite is yet another side to this problem, but ClickFox is able to do all this with the help of Greenplum database appliances.  Moreover, ClickFox can host interaction data and perform analytics at their own secure site(s) or perform the analysis on customer premises, depending on company preference.

—-

Marco's closing comments were something like this: the days of offloading information to a data warehouse, asking a question and waiting weeks for an answer are over; the time when a company can optimize its customer interactions using data just gathered, across every touchpoint it supports, is upon us.

How all this works for non-authenticated interactions was another mystery to me.  Marco indicated in later discussions that it was possible to identify patterns of behavior that led to transactions and that this could be used instead to help trace customer interactions across company touchpoints for similar types of analyses!?  Sounds like AI on top of database machines…

Comments?

Whatever happened to holographic storage?

InPhase Technologies Drive & Media (c) 2010 InPhase Technologies, All Rights Reserved (From their website)

Although InPhase Technologies and a few other startups had taken a shot at holographic storage over time, there has not been any recent innovation here that I can see.

Ecosystems matter

The real problem (which InPhase was trying to address) is building up an ecosystem around the technology.  In magnetic disk storage, you have media companies, head companies, and interface companies; in optical disk (Blu-ray, DVDs, CDs) you have drive vendors, media vendors, and laser electronics providers; in magnetic tape, you have drive vendors, tape head vendors, and tape media vendors, etc.  All of these corporate ecosystems are driving their respective technologies with joint and separate R&D funding, as fast as they can, and gaining economies of scale from specialization.

Any holographic storage or any new storage technology for that matter would have to enter into the data storage market with a competitive product but the real trick is maintaining that competitiveness over time. That’s where an ecosystem and all their specialized R&D funding can help.

Market equivalence is fine, but technology trend parity is key

So let's say holographic storage enters the market with a 260GB disk platter to compete against something like Blu-ray. Today, Blu-ray technology supports 26GB of data storage on single-layer media costing about $5 each, and a drive costs roughly $60-$190.   So to match today's Blu-ray capabilities, holographic media would need to cost ~$50 and the holographic drive about $600-$1,900.  But that's just today: dual-layer Blu-ray is already coming online and, in the labs, a 16-layer Blu-ray recording was demonstrated in 2008.  To keep up with Blu-ray, holographic storage would need to demonstrate more than 4TB of data on a platter in the lab and maintain similar cost multipliers for media and drives.  That's hard to do with limited R&D funding.

As such, I believe it's not enough to achieve parity with technologies currently available; any new storage technology really has to be at least (in my estimation) 10x better in cost and performance right at the start in order to gain a foothold that can be sustained.  To do this against Blu-ray, optical holographic storage would need to start at a 260GB platter for $5 with a drive at $60-$190 – it's just not there yet.
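
The multipliers in that argument reduce to simple arithmetic; here's a quick sketch using only the Blu-ray figures quoted above:

```python
# Parity math using the Blu-ray figures quoted above (capacities in GB, prices in $).
bluray_gb, bluray_media_cost, bluray_drive_cost = 26, 5, (60, 190)
capacity_multiple = 10  # 260GB holographic platter vs 26GB single-layer Blu-ray

# Matching Blu-ray's $/GB lets 260GB holographic media and drives cost ~10x as much:
print(f"Parity pricing: media ~${bluray_media_cost * capacity_multiple}, "
      f"drive ~${bluray_drive_cost[0] * capacity_multiple}-${bluray_drive_cost[1] * capacity_multiple}")

# The '10x better' entry argued for above means Blu-ray prices at 10x the capacity:
print(f"10x target: 260GB media ~${bluray_media_cost}, "
      f"drive ~${bluray_drive_cost[0]}-${bluray_drive_cost[1]}")

# A 16-layer Blu-ray recording (demonstrated in 2008) raises the bar further:
sixteen_layer_gb = 16 * bluray_gb
print(f"16-layer Blu-ray: {sixteen_layer_gb}GB, so holding the 10x edge needs "
      f"~{sixteen_layer_gb * capacity_multiple / 1000:.1f}TB per platter in the lab")
```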

But NAND Flash/SSDs did it!

Yes, but the secret with NAND/SSDs was that they emerged from EPROMs, a small but lucrative market, and later the technology was used in consumer products as a lower-cost, lower-power, more rugged alternative to the extremely small form factor disk drives that were just starting to come online.  We don't hear about extremely small form factor disk drives anymore because NAND flash won out.  Once NAND flash held that market, consumer product volumes were able to drive costs down and entice the creation of a valuable multi-company, multi-continent ecosystem.  From there, it was only a matter of time before NAND technologies became dense and cheap enough to be used in SSDs, addressing the more interesting and potentially more lucrative enterprise data storage domain.

So how can optical holographic storage do it?

Maybe the real problem for holographic storage was its aim at the enterprise data storage market. Perhaps if it had gone after some specialized or consumer market and carved out a niche, it could have created an ecosystem.  Media and entertainment has some pretty serious data storage requirements which might be a good match.  InPhase was making some inroads there but couldn't seem to put it all together.

So what's left for holographic technology to go after – perhaps medical imaging.  It would play to holographic storage's strengths (the ability to densely record multiple photographs). It's very niche-like, with a few medical instrument players developing MRI, CAT scan and other imaging technology that all require lots of data storage, and long-term retention is a definite plus.  Perhaps, if holographic technology could collaborate with a medical instrument consortium to establish a beachhead and develop some sort of multi-company ecosystem, it could move out from there.  Of course, magnetic disk and tape are also going after this market, so this isn't a certainty, but there may be other markets like this out there, e.g., check imaging, satellite imagery, etc.  Something specialized like this could be just the place to hunker down, build an ecosystem and, in 5-7 years, emerge to attack general data storage again.

Comments?

Commodity hardware always loses

Herman Miller's Embody Chair by johncantrell (cc) (from Flickr)
A recent post by Stephen Foskett has revisited a blog discussion that Chuck Hollis and I had on commodity vs. special purpose hardware.  It's clear to me that commodity hardware is a losing proposition for the storage industry and for storage users as a whole.  I'm not sure why everybody else disagrees with me about this.

It's all about delivering value to the end user.  If one can deliver equivalent value with commodity hardware to what is possible with special purpose hardware, then obviously commodity hardware wins – no question about it.

But, and it’s a big BUT, when some company invests in special purpose hardware, they have an opportunity to deliver better value to their customers.  Yes it’s going to be more expensive on a per unit basis but that doesn’t mean it can’t deliver commensurate benefits to offset that cost disadvantage.

Supercar Run 23 by VOD Cars (cc) (from Flickr)

Look around – one sees special purpose hardware everywhere. For example, just check out Apple's iPad, iPhone, and iPod, to name a few.  None of these would be possible without special, non-commodity hardware.  Yes, if one disassembles these products, you may find some commodity chips, but I'd venture that the majority of the componentry consists of special purpose, one-off designs that aren't readily purchasable from any chip vendor.  And the benefit this brings, aside from the coolness factor, is significant miniaturization with advanced functionality.  The popularity of these products proves my point entirely – value sells, and special purpose hardware adds significant value.

One may argue that the storage industry doesn't need such radical miniaturization.  I disagree, of course, but even so, there are other more pressing concerns worthy of hardware specialization, such as reduced power and cooling, increased data density and higher I/O performance, to name just a few.   Can some of this be delivered with SBB and other mass-produced hardware designs? Perhaps.  But I believe that with judicious selection of special purpose hardware, the storage value delivered along these dimensions can be 10 times more than what can be done with commodity hardware.

Cuba Gallery: France / Paris / Louvre / architecture / people / buildings / design / style / photography by Cuba Gallery (cc) (from Flickr)

Special purpose HW cost and development disadvantages denied

The other claimed advantage of commodity hardware is the belief that it's just easier to develop and deliver functionality in software than in hardware.  (I disagree – software functionality can be much harder to deliver than hardware functionality, but that's maybe a subject for a different post.)  In any case, hardware development is becoming more software-like every day.  Most hardware engineers do as much coding as any software engineer I know, and then some.

Then there's the cost of special purpose hardware, but ASIC manufacturing is getting more commodity-like every day.   Several hardware design shops sell off-the-shelf processor and other logic blocks one can readily incorporate into an ASIC, and fabs can be found that will manufacture any ASIC design at a moderate price with reasonable volumes.  And, if one doesn't need the cost advantage of ASICs, FPGAs and CPLDs can be used to develop special purpose hardware with programmable logic.  This cuts engineering and development lead times considerably but costs commensurately more per unit than ASICs.

Do we ever  stop innovating?

Probably the hardest argument to counteract is that over time, commodity hardware becomes more proficient at providing the same value as special purpose hardware.  Although this may be true, products don’t have to stand still.  One can continue to innovate and always increase the market delivered value for any product.

If there comes a time when further product innovation is not valued by the market, then and only then does commodity hardware win.  However, chairs, cars, and buildings have all been around for many years – decades, even centuries now – and innovation continues to deliver added value.  I can't see why the data storage business will be any different a century or two from now…

Why cloud, why now?

Moore’s Law by Marcin Wichary (cc) (from Flickr)

I have been struggling for some time now to understand why cloud computing and cloud storage have suddenly become so popular.  We have previously discussed some of cloud's problems (here and here) but we have never touched on why cloud has become so popular.

In my view, SaaS, ASPs and MSPs have been around for a decade or more now and have simply been renamed cloud computing and storage, yet they have rapidly taken over the IT discussion.  Why now?

At first I thought this new popularity was due to the prevalence of higher bandwidth today, but later I decided that was too simplistic.  Now I would say the reasons cloud services have become so popular include:

  • Bandwidth costs have decreased substantially
  • Hardware costs have decreased substantially
  • Software costs remain flat

Given the above, one would think that non-cloud computing/storage would also be more popular today, and you would be right.  But there is something about the pricing reduction available from cloud services that substantially increases interest.

For example, at $10,000 per widget, a market may be of moderate size; at $100 per widget the market becomes larger still; and at $1 per widget the market can be huge.  This is what seems to have happened to cloud services.  Pricing has gradually decreased, brought about through hardware and bandwidth cost reductions, and has finally reached a point where the market has grown significantly.

Take email for example:

With Gmail or Exchange Online, all you have to supply is the internet access (bandwidth) required to reach the email accounts.  For on-premises Exchange, you would also need to provide the internet access to get email in and out of your environment, servers and storage to run Exchange Server, and the internal LAN resources to distribute that email to internally attached clients.  I would venture to say that similar pricing differences apply to CRM, ERP, storage, etc., which could be hosted in your data center or used as a cloud service.  Also, over the last decade these prices have been coming down for cloud services but have remained (relatively) flat for on-premises services.

How does such pricing affect market size?

Well, when it costs ~$1,034 (+ server costs + admin time) to field 5 Exchange email accounts vs. $250 for 5 Gmail accounts ($300 for 5 Exchange Online accounts), the assumption is that the market will grow – maybe not ~12X, but certainly 3X or more.  At ~$3,000 or more all-in, I need a substantially larger justification to introduce enterprise email services; at $250, justification becomes much simpler.

Moreover, because the entry price is substantially smaller – ~$2,800 for a single Exchange Standard Edition setup vs. $50 for one Gmail account – justification becomes almost a non-issue and the market size grows geometrically.  In the past, pricing for such services may have prohibited small business use, but today cloud pricing makes them very affordable and, as such, more widely adopted.
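
Here's a quick sketch of that comparison, using only the dollar figures quoted above (server hardware and admin time are deliberately left out of the license-only number):

```python
# Figures quoted in the post: 5 accounts, annual cost in $.
gmail_5 = 250
exchange_online_5 = 300
exchange_on_prem_licenses_5 = 1034   # licenses only, before servers and admin time
exchange_on_prem_all_in_5 = 3000     # rough all-in figure cited above

print(f"License-only gap vs Gmail: {exchange_on_prem_licenses_5 / gmail_5:.1f}x")  # ~4x
print(f"All-in gap vs Gmail:       {exchange_on_prem_all_in_5 / gmail_5:.1f}x")    # ~12x
print(f"Per seat: Gmail ${gmail_5 / 5:.0f}, Exchange Online ${exchange_online_5 / 5:.0f}, "
      f"on-prem all-in ~${exchange_on_prem_all_in_5 / 5:.0f}")
```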

I suppose there is another inflection point at  $0.50/mail user that would increase market size even more.  However, at some point anybody in the world with internet access could afford enterprise email services and I don’t think the market could grow much larger.

So there you have it.  Why cloud, why now?  Hardware and bandwidth pricing have come down, giving rise to much more affordable cloud services and opening the market up to more participants at the low end.  And it's not just SMB customers that can now take advantage of these lower-priced services; large companies can also now afford to implement applications that were too costly to introduce before.

Yes, cloud services can be slow and yes, cloud services can be insecure but, the price can’t be beat.

Why software pricing has remained flat must remain a mystery for now, but may be treated in some future post.

Any other thoughts as to why cloud’s popularity has increased so much?

Future iBook pricing

Apple's iPad iBook app (from Apple.com)

All the news about the iPad and the iBooks app got me thinking. There's been much discussion of e-book pricing, but no one is looking at what to charge for items other than books.  I look at this as something like what happened to albums when iTunes came out: individual songs became available without having to buy the whole album.

As such, I started to consider what iBooks should charge for items outside of books.  Specifically,

  • Poems – there's no reason the iBooks app should not offer poems as well as books, but what's a reasonable price for a poem?  I believe Natalie Goldberg, in Writing Down the Bones: Freeing the Writer Within, used to charge $0.25 per poem.  So this is a useful lower bound; however, considering inflation (and assuming $0.25 was 1976 pricing), in today's prices this would be closer to $1.66.  With the iBooks app's published commission rate (33% for Apple), future poets would walk away with $1.11 per poem (the arithmetic is sketched after this list).
  • Haiku – as a short-form poem, I would argue that a haiku should cost less than a poem.  So maybe $0.99 per haiku would be a reasonable price.
  • Short stories – as a short-form book, pricing for short stories needs to be somehow proportional to normal e-book pricing.  A typical book has about 10 chapters, and as such it might be reasonable to consider a short story as equal to a chapter.  So maybe 1/10th the price of an e-book is reasonable.  With the prices being discussed for books, this would be roughly the price we set for poems.  No doubt incurring the wrath of poets forevermore, I am willing to say this undercuts the worth of short stories and would suggest something more on the order of $2.49 for a short story.  (Poets, please forgive my transgression.)
  • Comic books – comic books seem close to short stories and, with their color graphics, would do well on the iPad.  It seems to me that these might be priced somewhere in between short stories and poems, perhaps at $1.99 each.
  • Magazine articles – I see no reason magazine articles shouldn't be offered outside the magazine itself, just as short stories are offered outside books. Once again, the color graphics found in most high-end magazines should do well on the iPad.  I would assume pricing similar to short stories makes sense here.
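
For the poem pricing in the first bullet above, the inflation and commission arithmetic works out roughly as follows (the 1976 baseline and the resulting ~$1.66 figure are the assumptions stated there):

```python
# Poem pricing from the first bullet: $0.25 in (assumed) 1976 dollars, inflated to
# roughly $1.66 today, with the 33% commission cited above taken off the top.
poem_1976 = 0.25
inflation_multiplier = 1.66 / 0.25   # ~6.6x, the inflation adjustment assumed above
apple_commission = 0.33              # the commission rate cited in the post

price_today = poem_1976 * inflation_multiplier
poet_keeps = price_today * (1 - apple_commission)
print(f"List price ~${price_today:.2f}, poet keeps ~${poet_keeps:.2f}")  # ~$1.66 / ~$1.11
```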

University presses, the prime outlet for short stories today, seem similar to small record labels.  Of course,  the iBooks app could easily offer to sell their production as e-books in addition to selling their stories separately. Similar considerations apply to poetry publishers. Selling poems and short stories outside of book form might provide more exposure for the authors/poets and in the long run, more revenue for them and their publishers.  But record companies will attest that your results may vary.

Regarding magazine articles and comic books, there seems to be a dependence on advertising revenue that may suffer from iBook publishing.  This could be dealt with by incorporating publisher advertisements in iBook displays of an article or comic book.   However, significant advertising revenue comes from ads placed outside the articles themselves, such as in back matter, around the table of contents, in between articles, etc.  This will need to change with the transition to e-articles – revenues may suffer.

Nonetheless, all these industries can continue to do what they do today.  Record companies still exist – perhaps not doing as well as before iTunes, but they still sell CDs.  So there is life after iTunes/iBooks, but one thing's for certain – it's different.

I'm probably missing whole categories of items that could be separated from the book form they're sold in today.  But in my view, anything that could be offered separately probably will be.  Comments?