"When I helped design the PC, I didn't think I'd live long enough to witness its decline. But, while PCs will continue to be much-used devices, they're no longer at the leading edge of computing. They're going the way of the vacuum tube, typewriter, vinyl records, CRT and incandescent light bulbs."

So wrote Dr. Mark Dean in an IBM blog post last week marking the 30th anniversary of the IBM PC's debut at the Waldorf Astoria Ballroom in New York City. Dean, an IBM Fellow and now CTO for IBM's Middle East and Africa region, was the chief engineer for the IBM PC/AT. Naturally, his remarks have spurred a lot of additional commentary.

Dean's argument—and I would urge you to read it in full—in essence states:

 "PCs are being replaced at the center of computing not by another type of device—though there’s plenty of excitement about smart phones and tablets—but by new ideas about the role that computing can play in progress. These days, it’s becoming clear that innovation flourishes best not on devices but in the social spaces between them, where people and ideas meet and interact. It is there that computing can have the most powerful impact on economy, society and people’s lives."

However, according to this article in the San Francisco Chronicle, Frank Shaw, Microsoft's corporate vice president of communications, seemed to question Dean's assertion, saying that we have entered not the post-PC era but the PC-plus era, since some 400 million PCs will still be sold this coming year. Hardly an indication of imminent death.

Lending a bit of support to this argument but from a different perspective is a piece in the Washington Post by Joshua Topolsky, who argues that "the PC isn't dying; it is coming to life." His reasoning, if I understand it correctly, is that the interface revolution created by the iPhone and iPad and other smart devices along with the interconnectivity/storage capability created by the advent of the "cloud" will migrate back to PCs, making them even more useful—and needed—in the future.

Then there is this article in today's Wall Street Journal on how Microsoft is facing what the Journal calls the "post-PC" era, particularly on whether tablets are cannibalizing PC sales to Microsoft's detriment. The article states:

"Microsoft and Intel have long argued they represent an expansion of the computing market, focusing on tasks such as watching movies and reading online magazines rather than the work-related chores handled by PCs.

"Both say iPads have primarily hurt netbooks, the inexpensive notebook computers that rose to prominence but recently faded in popularity.

"Others are less sanguine: Goldman Sachs, in a research report from April, estimated that tablet computers such as the iPad will remain 'highly cannibalistic' to traditional PCs, stealing 35% and 33% of sales in 2011 and 2012, respectively."

Adding more confusion to the debate over whether PCs are indeed dying, some market research firms are now including tablet sales in their PC market-share figures, which suggests that some already consider tablets to be PCs in different packaging.

Another announcement this morning—that Google is buying Motorola Mobility for $12.5 billion in cash—will also no doubt influence the debate. Dean, for instance, speculated in March that smartphones would have more impact on computing than tablets.

Regardless of whether PCs are dying or merely transforming into some other device(s), it seems that as computing becomes ubiquitous, the "what" that does the computation is becoming a lot less important than the "how" and "where" involving "who" and "why."

Feel free to weigh in with your thoughts.

For those interested, a nice history of the IBM PC can be found at IBM's Archives Web site.

Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize they were cutting corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed: time-to-market pressures almost guarantee that software will contain more bugs than it otherwise would.
