AI reaches a crossroads

There’s been a lot of talk this past week about the extensibility of current AI, and it appears that while we may have a good deal of runway left with machine learning/deep learning/pattern recognition, there’s something ahead that we don’t yet understand.

Let’s start with MIT IQ (Intelligence Quest), which is essentially a moonshot project to understand and replicate human intelligence. The Quest is attempting to answer: “How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?”

Where’s HAL?

The problem with AI’s deep learning today is that it’s fine for pattern recognition, but it doesn’t appear to develop any basic understanding of the world beyond the patterns it recognizes.

Some AI scientists concede that there’s more to human/mammalian intelligence than just pattern recognition expertise, while others disagree. MIT IQ is trying to determine what’s beyond pattern recognition.

There’s a great article in Wired about the limits of deep learning, Greedy, Brittle, Opaque and Shallow: the Downsides to Deep Learning. The article says deep learning is greedy because it needs lots of data (training sets) to work, it’s brittle because if you step one inch beyond what it’s been trained to do it falls down, and it’s opaque because there’s no way to understand how it came to label something the way it did. Deep learning is great for recognizing known patterns, but outside of that there must be more to intelligence.

The limited steps taken so far with unsupervised learning don’t show a lot of promise, yet.

“Pattern recognition” all the way down…

There’s a case to be made that all mammalian intelligence is based on hierarchies of pattern recognition capabilities.

That is, at the bottom level, human intelligence consists of pattern recognition: vision, hearing, touch, balance, taste, etc. systems which are just sophisticated pattern recognition algorithms that label what we are hearing as Beethoven’s Ninth Symphony, tasting as grandma’s pasta sauce, and seeing as the Grand Canyon.

Then, at the next level, there’s another pattern recognition(-like) system that takes all these labels and somehow recognizes the scene as danger, romance, school, etc.

Then, at the next level, human intelligence just looks up what to do in this scene. It’s almost as if we have a defined list of action templates for what we do when we are in danger (fight or flight), in romance (kiss, cuddle or ?), in school (answer, study, view, hide, …), etc. Almost like a simple lookup table with procedural logic behind each entry.
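
To make that lookup-table view a bit more concrete, here’s a minimal sketch in Python. The sensory labels, scene patterns and action templates are all invented for illustration; it’s not a claim about how brains (or MIT IQ) actually implement any of this.

```python
# Hypothetical labels, scenes and action templates -- illustration only.

# Level 1: sensory recognizers turn raw input into labels (pre-labeled here).
def sensory_labels(inputs):
    """inputs: dict mapping modality -> recognized label."""
    return set(inputs.values())

# Level 2: a scene recognizer maps a set of sensory labels to a scene label.
SCENE_PATTERNS = {
    "danger":  {"growling", "large shape approaching"},
    "romance": {"soft music", "candlelight"},
    "school":  {"whiteboard", "lecturing voice"},
}

def recognize_scene(labels):
    # Pick the scene whose pattern overlaps most with the observed labels.
    return max(SCENE_PATTERNS, key=lambda s: len(SCENE_PATTERNS[s] & labels))

# Level 3: a simple lookup table with procedural logic behind each entry.
ACTION_TEMPLATES = {
    "danger":  lambda: "fight or flight",
    "romance": lambda: "kiss, cuddle or ?",
    "school":  lambda: "answer, study, view, hide, ...",
}

if __name__ == "__main__":
    scene = recognize_scene(sensory_labels(
        {"hearing": "growling", "vision": "large shape approaching"}))
    print(scene, "->", ACTION_TEMPLATES[scene]())  # danger -> fight or flight
```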

One question for this view is how these action templates are defined and how many there are. If, as it seems, there’s an almost infinite number of them, how are they selected (perhaps by some finer level of granularity in scene labeling – romance, but only flirting…)?

No, it’s not …

But to other scientists, there appears to be more going on inside our brains than just pattern recognition(-like) algorithms and lookup-and-act algorithms.

For example, once I interpret the scene surrounding me as danger, romance, school, etc., I believe I start to generate lists of possible actions I could take in that domain, and then somehow I select the one which makes the most sense, or rather gets me closer to my current goal (whatever that is) in this situation.

This is beyond just procedural logic and involves some sort of memory system, action generative system, goal generative/recollection system, weighing of possible action scripts, etc.
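
A minimal sketch of that generate-and-select loop might look like the following. The candidate actions, goal and scoring function are made up; the point is only to show how this differs from a straight lookup table.

```python
# Hypothetical generate-and-select loop -- illustration only.

CANDIDATE_ACTIONS = {
    "danger": ["run", "hide", "fight", "call for help"],
}

def score(action, goal, memory):
    # Stand-in for whatever weighing of action scripts the brain might do:
    # here, how often this action previously served this goal in "memory".
    return memory.get((action, goal), 0)

def choose_action(scene, goal, memory):
    # Generate candidates for the scene, then pick the best-scoring one.
    candidates = CANDIDATE_ACTIONS.get(scene, [])
    return max(candidates, key=lambda a: score(a, goal, memory))

if __name__ == "__main__":
    past_outcomes = {("run", "stay safe"): 5, ("fight", "stay safe"): 1}
    print(choose_action("danger", "stay safe", past_outcomes))  # run
```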

And what to make of the brain’s seemingly infinite capability to explain itself…

Baby intelligence

Most babies understand their parents’ language(s) and learn to crawl within months after birth. But they haven’t listened to thousands of hours of people talking or crawled thousands of miles. And yet, deep learning requires far larger training sets to label language properly or to learn how to crawl on four appendages. And of course, understanding language and speaking it are two different capabilities. Ditto for crawling and walking.

How does a baby learn to recognize these patterns without TB of data and millions of reinforcements (“Smile for Mommy”, say “Daddy”)? And what to make of the seemingly impossible-to-contain wanderlust of any baby given free rein of an area?

These questions are just scratching the surface of what it really means to engineer human intelligence.


MIT IQ is one attempt to answer the question: assuming we understand how pattern recognition can be made to work well on today’s computers, what else do we need to build a more general purpose intelligence?

There are obvious ethical questions on whether we want to engineer a human level of intelligence (see my Existential risks… post). Our main concern is what it does (to humanity) once we achieve it.

But assuming we can somehow contain it for the benefit of humanity, we ought to take another look at just what it entails.


Photo Credits: Tech trends for 2017: more AI…, the Next Silicon Valley website

HAL from 2001: A Space Odyssey

Design software test labeling… 

Exploration in toddlers…, Science Daily website

Collaboration as a function of proximity vs. heterogeneity, MIT research

Read an article the other week in MIT News on how Proximity boosts collaboration on MIT campus. Using MIT patents and papers published between 2004 and 2014, researchers determined how collaboration varied based on proximity, or physical distance.

What they found was that distance matters. The closer you are to a person, the more likely you are to collaborate with him or her (on papers and patents at least).

Paper results

In looking at the PLOS research paper (An exploration of collaborative scientific production at MIT …), one can see that the relative frequency of collaboration decays as distance increases (Graph A shows frequency of collaboration vs. proximity for papers and Graph B shows a similar relationship for patents).
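
To picture what that decay might look like, here’s a rough sketch of a distance-decay model. The exponential form and the numbers are assumptions for illustration only; the paper’s actual data and fitted curve may differ.

```python
# Illustrative distance-decay model -- not fitted to the paper's data.
import math

def collab_frequency(distance_m, f0=1.0, scale_m=100.0):
    """Relative collaboration frequency, decaying exponentially with distance."""
    return f0 * math.exp(-distance_m / scale_m)

for d in (10, 50, 100, 200, 400):
    print(f"{d:>4} m apart -> relative frequency {collab_frequency(d):.2f}")
```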


Other paper results

The two sets of charts below show the buildings where research (papers and patents) was generated. Building heterogeneity, crowdedness (lab space/researcher) and the number of papers and patents per building are displayed using the color of the building.

The number of papers and patents per building is self evident.

The heterogeneity of a building is a function of the number of different departments that use the building. The crowdedness of a building is an indication of how much lab space per faculty member a building has. So the more crowded buildings are lighter in color and less crowded buildings are darker in color.

I would like to point out Building 32. It seems to have high heterogeneity, moderate crowdedness and high paper production, but relatively low patent production. In contrast, Building 68 has low heterogeneity and low crowdedness, yet also shows high paper production and relatively low patent production. So similar results have been obtained from buildings with different crowdedness and different heterogeneity.

The paper specifically cites buildings 3 & 32 as being most diverse on campus and as “hubs on campus” for research activity.  The paper states that these buildings were outliers in research production on a per person basis.

And yet there’s no global correlation between heterogeneity (or crowdedness, for that matter) and (paper/patent) research production. I view crowdedness as a proxy for researcher proximity. That is, the more crowded a building is, the closer its researchers should be. Such buildings should theoretically be hotbeds of collaboration. But it doesn’t seem like they produce any more papers than non-crowded buildings.
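
For what it’s worth, checking for such a global correlation is straightforward; here’s a sketch using Pearson correlation. The per-building numbers below are invented to show the calculation, not taken from the paper.

```python
# Invented per-building data -- shows the calculation, not the paper's result.
from statistics import correlation  # Python 3.10+

heterogeneity = [1, 6, 2, 5, 3, 4]        # departments using each building
papers        = [40, 45, 90, 30, 70, 50]  # papers produced per building

r = correlation(heterogeneity, papers)
print(f"Pearson r = {r:.2f}")  # |r| near 0 would indicate no global relationship
```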

Also, heterogeneity is often cited as a generator of research. Steven Johnson’s Where Good Ideas Come From frequently mentions that good research often derives from collaboration outside your area of specialty. And yet, high heterogeneity buildings don’t seem to have a high production of research, at least for patents.

So I am perplexed and unsatisfied with the research. Yes, proximity leads to more collaboration, but it doesn’t necessarily lead to more papers or patents. The paper shows other information on the number of papers and patents by discipline, which may be confounding the results in this regard.

Telecommuting and productivity

So what does this tell us about the plight of telecommuters in today’s business and R&D environments? While the paper has shown that collaboration goes down as a function of distance, it doesn’t show that an increase in collaboration leads to more research or productivity.

This last chart from the paper shows how collaboration on papers is trending down and on patents is trending up. For both papers and patents, inter-departmental collaboration is more important than inter-building collaboration. Indeed, the sidebars seem to show that MIT faculty participation in papers and patents is flat over the whole time period, even though the number of authors (for papers) and inventors (for patents) is going up.

So I, as a one-person company, can be considered an extreme telecommuter for any organization I work with. I am often concerned that my lack of proximity to others adversely limits my productivity. Thankfully, the research is inconclusive at best on this and, if anything, tells me that proximity is not a significant factor in research productivity.

And yet, many companies (Yahoo, IBM, and others) have recently instituted policies restricting telecommuting because, they believe, it reduces productivity. This research does not show that.

So, IBM and Yahoo, I think what you are doing to concentrate your employee populations and reduce or outright eliminate telecommuting is wrong.

Picture credit(s): All charts and figures are from the PLOS paper. 


Zipline delivers blood 7X24 using fixed wing drones in Rwanda

Read an article the other day in MIT Tech Review (Zipline’s ambitious medical drone delivery in Africa) about a startup in Silicon Valley, Zipline, that has started delivering blood by drones to remote medical centers in Rwanda.

We’ve talked about drones before (see my Drones as a leapfrog technology post) and how they could help leapfrog 3rd world countries into the 21st century. Similar to cell phones, drones could be used to advance infrastructure without having to replicate the same path as 1st world countries, such as building roads/highways, trains and other transport infrastructure.

The country

Rwanda is a very hilly but small (10.2K SqMi/26.3K SqKm) and populous (pop. 11.3M) country in east-central Africa, just a few degrees south of the Equator. Rwanda’s economy is based on subsistence agriculture with a growing eco-tourism segment.

Nonetheless, with all its hills and poverty, roads in Rwanda are not the best. In the past, delivering blood supplies to remote health centers could often take hours or more. But with the new Zipline drone delivery service, technicians can order up blood products with an app on a smartphone and have them delivered via parachute to their center within 20 minutes.

Drone delivery operations

In the nest, a center for drone operations, there is a tent housing the blood supplies and the logistics for the drone force. Beside the tent are steel runway/catapults that launch the drones, and on the other side of the tent are brown inflatable pillows used to land them.

The drones take a pre-planned path to the remote health centers and drop their cargo via parachute to within a five meter diameter circle.

Operators fly the drones using an iPad, and each drone has an internal navigation system. Drones fly a pre-planned flight path augmented with real-time kinematic (RTK) satellite navigation. Drone travel is integrated within Rwanda’s controlled airspace. Routes are pre-mapped using detailed ground surveys.

Drone delivery works

Zipline drone blood deliveries have been taking place since late 2016. Deliveries started M-F, during daylight only. But by April, they were delivering 7 days a week, day and night.

Zipline currently only operates in Rwanda and only delivers blood but they have plans to extend deliveries to other medical products and to expand beyond Rwanda.

On their website they stated that before Zipline, delivering blood to one health center could take four hours by truck; the same delivery can now be done in 17 minutes. Their Muhanga drone center serves 21 medical centers throughout western Rwanda.
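
As a back-of-the-envelope sanity check on that 17 minute figure, flight time is just distance over cruise speed. The ~100 km/h cruise speed and the 28 km one-way distance below are my assumptions for illustration, not numbers from the article.

```python
# Assumed cruise speed and distance -- not figures from the article.
def flight_minutes(distance_km, cruise_kmh=100.0):
    """One-way flight time in minutes at a constant cruise speed."""
    return distance_km / cruise_kmh * 60

print(f"{flight_minutes(28):.0f} minutes one way")  # ~17 minutes at 100 km/h
```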


Axellio, next gen, IO intensive server for RT analytics by X-IO Technologies

We were at X-IO Technologies last week for SFD13 in Colorado Springs talking with the team and they showed us their new IO and storage intensive server, the Axellio. They want to sell Axellio to customers that need extreme IOPS, very high bandwidth, and large storage requirements. Videos of X-IO’s sessions at SFD13 are available here.

The hardware

Axellio comes in a 2U appliance with two server nodes. Each server node supports 2 sockets of Intel E5-26xx v4 CPUs (4 sockets total), supporting from 16 to 88 cores. Each server node can be configured with up to 1TB of DRAM, and it also supports NVDIMMs.

There are two key differentiators to Axellio:

  1. The FabricExpress™, a PCIe based interconnect which allows both server nodes to access dual-ported 2.5″ NVMe SSDs; and
  2. Dense drive trays; the Axellio supports up to 72 (6 trays with 12 drives each) 2.5″ NVMe SSDs, offering up to 460TB of raw NVMe flash using 6.4TB NVMe SSDs. Higher capacity NVMe SSDs, available soon, will increase Axellio capacity to 1PB of raw NVMe flash (see the quick capacity arithmetic below).
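
The raw capacity figures above are simple arithmetic; here’s a quick sketch (the ~14TB per-drive figure for the 1PB configuration is my inference, not an X-IO spec).

```python
# Raw capacity arithmetic for the drive counts quoted above.
drives_per_tray, trays = 12, 6
drives = drives_per_tray * trays          # 72 NVMe SSDs

print(f"{drives} x 6.4TB = {drives * 6.4:.1f}TB raw")  # ~460TB as quoted

# Reaching ~1PB raw implies roughly 14TB per drive (my inference, not a spec).
print(f"{drives} x 14TB = {drives * 14}TB raw")        # 1008TB, about 1PB
```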

They also probably spent a lot of time on packaging, cooling and power in order to make Axellio a reliable solution for edge computing. We asked if it was NEBS compliant and they told us not yet, but they are working on it.

Axellio can also be configured to replace 2 drive trays with 2 processor offload modules, such as 2x Intel Phi CPU extensions for parallel compute, 2x Nvidia K2 GPU modules for high end video or VDI processing, or 2x Nvidia P100 Tesla modules for machine learning processing. Anything else that fits into Axellio’s power, cooling and PCIe bus lane limitations would probably also work here.

At the front end of the appliance, each server node retains x16 PCIe lanes for networking that can support off-the-shelf NICs/HCAs/HBAs with HHHL or FHHL cards for Ethernet, InfiniBand or FC access to the Axellio. This provides up to 2x100GbE per server node of network access.

Performance of Axellio

With Axellio using all NVMe SSDs, we expect high IO performance. Further, they are measuring IO performance internally, from the CPUs on the Axellio server nodes. X-IO says the Axellio can hit >12 million IO/sec at 35µsec latencies with 72 NVMe SSDs.

Lab testing detailed in the chart above shows IO rates for an Axellio appliance with 48 NVMe SSDs. With that configuration, the Axellio can do 7.8M 4KB random write IOPS at 90µsec average response times and 8.6M 4KB random read IOPS at 164µsec latencies. We don’t know why reads would take longer than writes on Axellio, but it is doing 10% more of them.

Furthermore, the gap between read and write IOP rates isn’t anything like what we have seen with other AFAs. Typically, maximum write IOPS are much lower than read IOPS. Why Axellio’s read and write IOP rates are so close to one another (~10%) is a significant mystery.

As for IO bandwidth, Axellio supports up to 60GB/sec sustained, and in the 48 drive lab testing it generated 30.5GB/sec for random 4KB writes and 33.7GB/sec for random 4KB reads. Again, much closer together than what we have seen for other AFAs.
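
Those bandwidth numbers line up roughly with IOPS times block size; here’s a quick sketch of the conversion (decimal vs. binary KB/GB conventions shift the result by a few percent).

```python
# IOPS x block size, using decimal KB and GB.
def gb_per_sec(iops, block_kb=4):
    return iops * block_kb * 1_000 / 1e9

print(f"writes: {gb_per_sec(7.8e6):.1f} GB/s")  # ~31 GB/s vs. 30.5 reported
print(f"reads:  {gb_per_sec(8.6e6):.1f} GB/s")  # ~34 GB/s vs. 33.7 reported
```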

Also noteworthy: given PCIe’s bi-directional capabilities, X-IO said that there’s no reason the system couldn’t run a mixed IO workload of both random reads and writes at similar rates. However, they didn’t present any test data to substantiate that claim.

Markets for Axellio

They really didn’t talk about the software for Axellio. We would guess this is up to the customer/vertical that uses it.

Aside from the obvious use case as X-IO’s next generation ISE storage appliance, Axellio could easily be used as an edge processor for a massive fabric of IoT devices, an analytics processor for large real-time streaming data, a deep packet capture and analysis processor for cyber security/intelligence gathering, etc. X-IO seems to be focusing their current efforts on attacking these verticals and others with similar processing requirements.

X-IO Technologies’ sessions at SFD13

Other sessions at X-IO included: Richard Lary, CTO, X-IO Technologies, gave a very interesting presentation on a mathematically optimized way to do data dedupe (caution: some math involved); Bill Miller, CEO, X-IO Technologies, presented on edge computing’s new requirements; and Gavin McLaughlin, Strategy & Communications, talked about X-IO’s history and new approach to take the company into more profitable business.

Again, all the videos are available online (see link above). We were very impressed with Richard’s dedupe session and haven’t heard as much about bloom filters since Andy Warfield, CTO and Co-founder of Coho Data, talked at SFD8.


Full Disclosure

X-IO paid for our presence at their sessions and they provided each blogger a shirt, lunch and a USB stick with their presentations on it.


When 64 nodes are not enough

Why would VMware, with years of ESX development behind them, want to develop a whole new virtualization system for Docker and other container frameworks? Especially since they already have compatible Docker support in their current product line.

The main reason I can think of is that a 64 node cluster may be limiting to some container services, and VMware ESX/vSphere supporting 1000s of nodes in a single cluster seems pretty unlikely. So given that more and more cloud services are being deployed across 1000s of nodes using container frameworks, VMware had to do something or say goodbye to a potentially lucrative use case for virtualization.

Yes, over time VMware may indeed extend vSphere clusters to 128 or even 256 nodes, but by then the world will have moved beyond VMware for these services, and where will VMware be then – left behind.

Photon to the rescue

With the new Photon system, VMware has an answer for anyone that needs 1,000 to 10,000 server cluster environments. Now these customers can easily deploy their services on the VMware Photon Platform, which was developed off of ESX but doesn’t have ESX’s cluster limitations.

Thus, the need for Photon is now. Customers can easily deploy container frameworks that span 1000s of nodes. Of course it won’t be as easy to manage as a 64 node vSphere cluster, but it will be easily automated, easier to deploy and easier to scale when necessary, especially beyond 64 nodes.

The claim is that the new Photon will be able to support multiple container frameworks without modification.

So what’s stopping you from taking on the Amazons, Googles, and Apples of the world’s data centers?

  • Maybe storage, but then there’s ScaleIO and the other software defined storage solutions that can support local DAS clusters of almost incredible size.
  • Maybe networking. I am not sure just where NSX is in the scheme of things – maybe it’s capable of handling 1000s of nodes and maybe not – but networking could be a clear limitation on how many nodes can be deployed in this sort of environment.

Where does this leave vSphere? Probably continuing on its current trajectory, making it easier and more efficient to run VMware clusters and, over time, extending any current limitations. So for the moment there are two development streams based off of ESX, each being enhanced for its own market.

How much of ESX survived is an open question, but it’s likely that Photon will never see the familiar VMware services and operations that are readily available to vSphere clusters.


Photo Credit(s): A first look into Dockerfile system

Peak code, absurd

Read a post the other day that said we would soon reach Peak Code (see ROUGH TYPE’s Peak Code? post). In his post, Nick Carr discussed an NBER paper (see Robots Are Us: Some Economics of Human Replacement, purchase required). The paper implied we will shortly reach peak code because the proliferation of software reuse and durability will lead to less of a need for software engineers/coders.

Peak code refers to a maximum amount of code produced in a year that will be reached at some point; afterwards, code production will decline.

Software durability, hogwash

Let’s dispense with the foolish first – durability. Having been a software engineer, and having managed/supervised massive (>1MLoC) engineering developments over my 30 years in the industry, I can tell you code is anything but durable. Fragile yes, durable no.

Code fixes beget other bugs, often more substantial than the original. System performance is under constant stress, lest the competition take your market share. Enhancements are a never ending software curse.

Furthermore, hardware changes constantly, as components go obsolete, new processors come online, IO changes, etc. One might think new hardware would be easy to accommodate. But you would be sadly mistaken.

New processors typically come with added enhancements beyond speed or memory size that need to be coded for. New IO busses often require significant code “improvements” to use effectively. New hardware today is moving to more cores, which makes software optimization even more difficult.

On all the projects I was on, code counts never decreased. This was mostly due to enhancements, bug fixes, hardware changes and performance improvements.

Software’s essential difference is that it is unbounded by any physical reality. Yes it has to fit in memory, yes it must execute instructions, yes it performs IO with physical devices/memory/busses. But these are just transient limitations, not physical boundaries. They all go away or change after the next generation hardware comes out, every 18 months or so.

So software grows to accommodate any change, any fix, any enhancement that can be dreamed up by man, woman or beast. Software is inherently not durable and is subject to too many changes, which most often leads to increased fragility, not durability.

Software reuse, maybe

I am on less firm footing here. Code reuse is wonderful for functionality that has been done before. If adequate documentation exists, if interfaces are understandable, if you don’t mind including all the tag-along software needed to reuse the code, then reuse is certainly viable.

But reusing a software component often requires integration work, adding or modifying code to work with the module. Yes, there may be less code to generate and, potentially, validate/test. But you still have to use the new function somewhere.

And Linux, OpenStack, Hadoop, et al, are readily reusable for organizations that need OS, cloud services or big data. But these things don’t operate in a vacuum. Somebody needs to code a Linux application that views, adds, changes or deletes data somewhere.  Somebody needs to write that cloud service offering which runs under OpenStack that services and moves data across the network. Somebody needs to code up MapReduce, MapR or Spark modules to take unstructured data and do something with it.

Yes there are open source applications, cloud services, and MapReduce packages for standardized activities. But these are the easy, already done parts and seldom suffice in and of themselves for what needs to be done next. Often, even using these as is requires some modifications to run on your data, your applications, and in your environment.

So, does software reuse diminish the requirement for new coding? Yes. Does software reuse eliminate the need for new code? Definitely not.

Coding Automation, yes

Coding automation could diminish the need for new software engineers/coders. However, full coding automation would be equivalent to human-level artificial intelligence, and would eliminate the need for coders/software engineers if and when it becomes available. But if anything, this would lead to a proliferation of ever more (automated) code, not less. So it’s not peak code so much as peak coders. Hopefully, I won’t see this transpire anytime soon.

So as far as I’m concerned, peak code is never going to happen, and when peak coders does happen, if ever, we will have worse problems to contend with (see my post on Existential Threats).


Photo Credit(s): PDX Creative Coders by Bill Automata 

Apple SIM and more flexible data plans

The new US and UK iPad Air 2 and iPad Mini 3 now come with a new, programmable SIM (see the Wikipedia SIM article for more info) card for their cellular data services. This is a first in the industry and signals a new movement toward more flexible cellular data plans.

Currently, the iPad Air 2 Apple SIM supports AT&T, Sprint and T-Mobile in the US (what, no Verizon?) and EE in the UK. With this new flexibility, one can switch iPad data carriers anytime, seemingly right on the device, without having to get up from your chair at all. You no longer need to go into a cellular vendor’s store, get a new SIM card and insert it into your iPad Air 2.

It seems not many cellular carriers have signed up for the new programmable SIM cards yet. But with the new Apple SIM’s ability to switch data carriers in an instant, can the other data carriers hold out for long?

What’s a little unclear to me is how the new Apple SIM doesn’t show support for Verizon, yet the iPad Air 2 literature does show support for Verizon data services. After talking with Apple iPad sales: there is an actual SIM card slot in the new iPads that holds the new Apple SIM card, and if you want to use Verizon you would need to get a SIM card from them, swap out the Apple SIM card for the Verizon SIM card and insert it into the iPad Air 2.

Having never bought a cellular option for my iPads, this is all a little new to me. But it seems that when you purchase a new iPad Air 2 wifi + cellular, the list price doesn’t include any data plan. So you are free to go to whatever compatible carrier you want right out of the box. With the new Apple SIM, the compatible US carriers are AT&T, T-Mobile and Sprint. If you want a Verizon data plan you have to buy a Verizon iPad.

For AT&T, it appears that you can use their DataConnect cellular data service for tablets on a month by month basis. I assume the same is true for T-Mobile, who make a point of not having any service contract, even for phones. Not so sure about Sprint, but if AT&T offers it, can Sprint be far behind?

I have had a few chats with the cellular service providers and I would say they are not all up to speed on the new Apple SIM capabilities but hopefully they will get there over time.

Now if Apple could somehow do the same for cable data plans or cable TV providers, they really could change the world – Apple TV anyone?


The Mac 30 years on

I have to admit it: I have been a Mac and Apple bigot since 1984. I saw the commercial for the Mac and just had to have one. I saw the Lisa, a Mac precursor, at a conference in town and was very impressed.

At the time, we were using these green or orange screens at work connected to IBM mainframes running TSO or VM/CMS and we thought we were leading edge.

And then the Mac comes out with proportional fonts, graphics terminal screen, dot matrix printing that could print anything you could possibly draw, a mouse and a 3.5″ floppy.

Somehow my wife became convinced and bought our family’s first Mac for her accounting office. You could buy spreadsheet and WYSIWYG word processor software and run it all in 128KB. She ended up buying Mac accounting software, and that’s what she used to run her office.

She upgraded over the years and got the 512K Mac, but eventually, when she partnered with two other accountants, she changed to a Windows machine. And that’s when the Mac came home.

I used the Mac, spreadsheets and word processing for most of my home stuff and did some programming on it for odd jobs, but mostly it was just used for home office work. We upgraded it over the years, eventually getting a PowerMac which had a base station with a separate CRT above it, but somehow this never felt like a Mac.

Then in 2002 we got the 15″ new iMac. This came as a half basketball base with a metal arm emerging out of the top of it, with a color LCD screen attached. I loved this Mac. We still have it but nobody’s using it anymore. I used it to edit my first family films using an early version of iMovie. It took hours to upload the video and hours more to edit it. But in the end, you had a movie on the iMac or on CD which you could watch with your family. You can’t imagine how empowered I felt.

Sometime later I left corporate America for the life of an industry analyst/consultant. I still used the 15″ iMac for the first year after I was out, but ended up purchasing an aluminum PowerBook Mac laptop with my first check. This was faster than the 15″ iMac and had about the same size screen. At the time, I thought I would spend a lot of time on the road.

But as it turns out, I didn’t spend that much time out of the office, so when I generated enough revenue to start feeling more successful, I bought an iMac G5. The kids were using this until last year when I broke it. It had a bigger screen, was definitely a step up in power and storage, and had a SuperDrive which allowed me to burn DVD-Rs of our family movies. When I wasn’t working I was editing family movies in half an hour or less (after import) and converting them to DVDs. Somewhere during this time, GarageBand came out and I tried to record and edit a podcast; this took hours to complete and to export as a podcast.

I moved from the PowerBook laptop to a MacBook laptop. I don’t spend a lot of time out of the office, but when I do I need a laptop to work on. A couple of years back I bought a MacBook Air and have been in love with it ever since. I just love the way it feels, light to the touch, and it doesn’t take up a lot of space. I bought a special laptop backpack for the old MacBook, but it’s way overkill for the Air. Yes, it’s not that powerful, has less storage and has the smaller screen (11″), but in a way it’s more than enough to live with on long vacations or out of the office.

Sometime along the way I updated my desktop to the aluminum iMac. It had a bigger screen, more storage and was much faster. Now movie editing was a snap. I used this workhorse for four years before finally getting my latest generation iMac, with the biggest screen available and faster than I could ever need (he says now). Today, I edit GarageBand podcasts in a little over 30 minutes and it’s not that hard to do anymore.

Although these days Windows has as much graphics ability as the Mac, what really made a difference for me and my family is the ease of use, the multimedia support and the iLife software (iMovie, iDVD, iPhoto, iWeb, & GarageBand) over the years and yes, even iTunes. Apple’s Mac OS software has evolved over the years but still seems to be the easiest desktop to use, bar none.

Let’s hope the Mac keeps going for another 30 years.

Photo Credits:  Original 128k Mac Manual by ColeCamp

my original Macintosh by Blake Patterson

Brand new iMac, February 16, 2002 by Dennis Brekke

MacBook Air by

iMac Late 2012 by Cellura Technology