Tech Talk

Apple Just Announced a Flip-killer, the iPod Nano Video Camera

I've been thinking about putting a Flip video camera high on my Christmas list; it would be so much more convenient than lugging around my old digital video cassette camera for family events. But Apple's intro today of its Flip-killer--a video camera that, oh by the way, is built into an iPod Nano--just sank that idea. Not just because it's an iPod too (I'm thinking I wouldn't use it for music; I'd be saving the memory for movies), but because I have complete faith in Apple making the user interface easy, I won't need to load more software (the Flip requires a special app), and it'll go right into iTunes without the conversion that Flip videos require. Plus it's thinner, boasts a five-hour battery life, and is about the same price ($149 for 8 GB). And oh yeah, I like the colors. Which could present a problem--do I want pink, or red, or blue...

Followup: I saw my first video Nano in the wild shortly after 7 p.m., just eight hours after the announcement--in the hands of a parent taking videos at a back-to-school event. It was a red one. It got away before I could check it out.

Tech Museum of Silicon Valley Announces 2009 Laureates


This week, the Tech Museum of Silicon Valley announced its 2009 laureates. Among the 15 honorees:
—Joseph Adelegan, whose project in Nigeria takes the waste stream from slaughterhouses and turns it into methane for electricity generation or cooking gas.
—Sean White, who is digitizing the plant collection of the Smithsonian to create an Electronic Field Guide that will identify species through object recognition.
—The Alternative Energy Development Corp. of South Africa, which is using zinc air fuel cells for household electricity.
—Solar Ear, a Brazilian company building inexpensive hearing aids that come with solar rechargers.
—Geogebra, an organization developing open-source software for teaching geometry, algebra, and calculus.

The Tech Awards annually honor efforts to use technology to improve the lives of people around the world. One laureate in each of five categories—environment, economic development, education, equality, and health—will receive a cash prize of $50,000, to be announced at a gala on November 19th. This year’s James C. Morgan Global Humanitarian Award recipient, Al Gore, will also be recognized at the gala.

The announcement came at the unveiling of a new Tech Museum gallery, "Technology Benefiting Humanity." The exhibit includes interactive looks at the inventions of eleven previous laureates, including Solar Sailor, a company that combines wind, solar, and hybrid technology to power boats, and Adaptive Eyecare, a company that is developing glasses with lenses whose power can be adjusted by the wearer.

Ray tracing, Parallel Computing and a Bugatti Veyron

At last week's Hot Chips symposium, Nvidia founder and CEO Jen-Hsun Huang delivered the first keynote about the GPU computing revolution.

The keynote was definitely the highlight of the conference, but before I get all swoony over the incredible directional flame sprites and the finger-licking Bugatti Veyron their GPUs can render, first I need to pick on Nvidia a little.

That’s because the company was selling $200 3-D glasses at their booth. Or, they were trying to. I didn’t see anyone buy them, and if anyone did, they didn’t tell me about it.

The glasses were supposed to augment a very engrossing 3-D Batman game Nvidia had nakedly set up to lure passers-by. Apparently they created a deeper z-space by giving each lens a different refresh rate. Something like that. I put on the glasses and played for a while. It says something, either about my unsophistication with games or about how unimpressive these glasses were, that I failed to notice you had to actually turn them on. When someone pointed out my mistake and I flipped the on switch, the only difference I noticed was a pretty blue LED light.

But enough: let’s make with the swooning.

First, Huang took the audience back to February of 1993, when he'd just finished his master’s in electrical engineering at Stanford, and Nvidia was just a gleam in a venture capitalist's eye. For perspective, 1993 is so long ago that there was no need to have a PC on your desktop even if you were trying to get people to invest in your computer company. “If we had told our investors at the time that we’d be using the same hardware to play games and try to cure cancer," he said, "I am sure we would not have been funded."

“The GPU will likely be the parallel processor for the future,” he told the crowd. Computers are being driven to parallel computing because people can do magical things with them.

Nvidia’s teraflop-capable GPUs can, in fact, do some things that would have literally appeared to be magic to a person in 1993: the augmented reality in Monday Night Football, where the 3-D rendered line of scrimmage is projected onto the field beneath the players, so they appear to stand on top of it; the flags rendered under the ice at Olympic hockey games; Ann Curry’s set during the 2008 election coverage. But you know all this stuff.

The point is this: The GPU has evolved faster than any other tech component, its complexity increasing from a few million transistors in 1994 to billions in 2009. That’s a thousand-fold increase in complexity in only 15 years.

What did they do with all that complexity? Shaders. Shaders and programmable pipelines made it possible for computer game designers to be artists. Let’s take an extreme example. Pacman and his attendant ghosts are lovable, clunkety and pixelated.

Pacman

Let's leave aside the fact that these were animated with pixels instead of polygons and that GPUs barely existed when Pacman was born. With the obscene amount of processing power GPUs now command, a programmer can create a specific mood for his or her game by automatically shading every scene and object with a hypercolor style, a sepia tint, you name it. The result can be anything from the eye-poppingly surreal textures of Super Mario Galaxy...

...to the otherworldly, overexposed dreamscape of Riven or Myst.
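For the non-gamers: a shader of the sort that sets those moods is just a tiny function the GPU runs on every pixel of every frame. Here's a minimal sketch in Python (the sepia weights are the commonly used ones, and the frame size is made up; nothing in this snippet comes from Huang's talk):

```python
def sepia_shader(r, g, b):
    """Toy per-pixel 'fragment shader': tint one RGB color sepia.
    A GPU runs a function like this in parallel for every pixel on screen."""
    new_r = min(1.0, 0.393 * r + 0.769 * g + 0.189 * b)
    new_g = min(1.0, 0.349 * r + 0.686 * g + 0.168 * b)
    new_b = min(1.0, 0.272 * r + 0.534 * g + 0.131 * b)
    return new_r, new_g, new_b

# One tiny program, executed hundreds of thousands of times per frame --
# exactly the kind of data-parallel work a GPU is built for.
frame = [[(0.5, 0.8, 0.3)] * 640 for _ in range(480)]   # a 640 x 480 test frame
tinted = [[sepia_shader(*px) for px in row] for row in frame]
```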

 

Shading is great, but Nvidia wanted to take it to the next level: articulate not just the surfaces but also the physics underlying what you see on them. Now you’re getting into computational visualization.

This is where ray tracing comes in. With ray tracing, an image is generated by tracing a path through each pixel in a virtual screen and calculating the color of the object visible through it. Huang showed us exactly what ray tracing can do by way of a Bugatti Veyron, rendered with 2 million polygons' worth of luscious, mouth-watering detail.

[This image was from the 2008 SIGGRAPH conference-- the image from Hot Chips isn't online yet but it's even prettier!]
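Stripped of the Bugatti, the core loop of a ray tracer is surprisingly small. Here is a toy sketch, with a made-up camera and a single sphere standing in for 2 million polygons: one ray per pixel of the virtual screen, and the pixel takes its "color" from whatever the ray hits. (A real ray tracer would then keep bouncing those rays to compute lighting, which is where the next paragraph picks up.)

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along a unit-length ray to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

WIDTH, HEIGHT = 80, 60                              # the "virtual screen"
sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0

image = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        # Shoot one ray from the eye through this pixel.
        u = (x + 0.5) / WIDTH * 2 - 1
        v = 1 - (y + 0.5) / HEIGHT * 2
        d = (u, v, -1.0)
        norm = math.sqrt(sum(c * c for c in d))
        d = tuple(c / norm for c in d)
        hit = ray_sphere_hit((0.0, 0.0, 0.0), d, sphere_center, sphere_radius)
        row.append('#' if hit else '.')             # color the pixel by what it hit
    image.append(''.join(row))

print('\n'.join(image))
```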

Because ray tracing constructs the entire image from the computed trajectories of light rays bouncing from surface to surface, you can light the scene, place your object into it, and then do a “walk through.” Panning inside the car, you can see details lit exclusively by ambient “light” rays diffracting and reflecting off the environment and streaming in through the windows; there is no independent lighting inside the car. The lighting was so complex and subtle that you begin to understand how the GPU could harness physics simulations as impossibly complex as molecular dynamics.

This animation was running on three GeForce GPUs, each with almost 1 Tflop of processing horsepower. That’s about 2.7 Tflops to sustain animation that was very close to photorealistic. (1,500 to 2,000 instructions per component, all in HD; 100 shader instructions per component, 4 components per pixel [R, G, B, alpha], 1.5 flops per instruction on average, 60 frames per second, and so on—that adds up to 500 shader Gflops. And if this sentence makes you want to die, read "Data Monster," the tutorial on GPUs and graphics processing in the September issue of Spectrum.) But that only represents, Huang said, about 10 percent of the total math capability of a GPU.

Meanwhile, let’s do a little side-by-side comparison. Intel's vaunted Nehalem CPU, trotted out earlier that day: 3 GHz, 4 cores, and a bunch of other stuff, for a theoretical peak performance of 96 Gflops. That's great for general-purpose computing, but two orders of magnitude short of being able to run the Bugatti animation in real time, which requires 5 Tflops. Nehalem—and the CPU in general—is designed for general-purpose computing, but not for graphics.
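If you want to check that gap for yourself, it's a one-liner (using the figures as quoted in the two talks):

```python
cpu_peak_gflops = 96     # Nehalem's theoretical peak, as trotted out that morning
needed_tflops = 5        # what Huang said real-time Bugatti rendering requires

gap = needed_tflops * 1000 / cpu_peak_gflops
print(f"Shortfall: roughly {gap:.0f}x")   # ~52x -- pushing two orders of magnitude
```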

Animators will be making increasingly photorealistic art for games: water, fire, clouds, smoke—anything that obeys the laws of physics can be rendered to look real, provided you have the right algorithms and a monster amount of GPU muscle. To prove that point, he showed a nice video of water gently rippling in the sunlit breeze. It was more than photorealistic. But to do all that, you’re using a 3-D fluid solver that renders, in agonizing detail, about 262,000 individual particles to generate fluid motion. Each particle has its own shadow and motion blur, not to mention color, alpha, and so on.
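For a sense of what 262,000 individual particles means in practice, here's a toy per-frame update loop. The gravity, damping, and time step are illustrative only, and a real solver would also compute pressure and viscosity between neighboring particles:

```python
import random

NUM_PARTICLES = 262_000          # the figure Huang quoted
DT = 1.0 / 60.0                  # one frame at 60 fps
GRAVITY = (0.0, -9.8, 0.0)       # illustrative constants, not from the talk

# Position and velocity for every particle.
pos = [[random.random(), random.random(), random.random()] for _ in range(NUM_PARTICLES)]
vel = [[0.0, 0.0, 0.0] for _ in range(NUM_PARTICLES)]

def step(pos, vel):
    """Advance every particle by one frame: the same tiny update, 262,000 times over.
    This is the kind of embarrassingly parallel inner loop a GPU chews through."""
    for p, v in zip(pos, vel):
        for axis in range(3):
            v[axis] += GRAVITY[axis] * DT
            p[axis] += v[axis] * DT
        if p[1] < 0.0:           # crude "floor": bounce with damping
            p[1] = 0.0
            v[1] = -0.5 * v[1]

step(pos, vel)
```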

But ray tracing has a way to go, Huang said. While it's great for photorealism, it’s not good for real-time rendering. The Bugatti, for example, was super-impressive in a still frame, but when you moved around it, it got grainy and monochrome. Not for long—as soon as you stopped, the image filled in remarkably fast. If you're just making a movie, you can take as long as you want to pre-bake the animation. For games, that's obviously a nonstarter.

To illustrate the true power of ray tracing, Huang showed us the directional flames Industrial Light & Magic did for the Harry Potter movie, which are apparently just unthinkable without monster processing power. Fire is amazingly complex because it’s alive, dynamic, moving and turbulent, so normally, to do fire special effects, animators use and sculpt sprites of real flames. But you can’t animate flame sprites directionally. The ILM fire simulator runs on top of CUDA, and the realistic flames shooting out of Dumbledore's hands are as good as any real-life flame thrower.

In addition, there are some things you can’t pre-animate because you don’t know how they will play out at game time. For example, a really awful tackle in a football video game. Animators combine physics simulations and morph them with motion capture, because even though motion capture is convincing to a certain extent, a brutal tackle would be really painful to motion-capture.

When a program is written to take full advantage of the GPU, obscene improvements are the norm, and not just for graphics. A certain unnamed quantum chemistry program, for example, got a 130X speedup when it was run properly on a GPU. It’s totally doable when an application is inherently parallelizable.
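"Inherently parallelizable" just means the work breaks into many independent pieces that never need to talk to each other. A stand-in sketch (the workload is invented; think of it as one independent calculation per grid point in a chemistry code):

```python
from concurrent.futures import ProcessPoolExecutor

def independent_work(x):
    """Each element's result depends only on that element -- no communication
    between pieces, so the work maps cleanly onto thousands of GPU threads
    (or, in this CPU-bound stand-in, onto a pool of processes)."""
    total = 0.0
    for i in range(1, 10_000):
        total += (x % i) / i
    return total

if __name__ == "__main__":
    data = range(1_000)
    # Serial version:
    serial = [independent_work(x) for x in data]
    # Parallel version: identical results, work spread across all cores.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(independent_work, data))
    assert serial == parallel
```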

The point is this: Moore’s law as applied to Intel’s CPUs can reap performance improvements of, on average, 20 percent per year.

By contrast, over the next 6 years, Huang predicted, a co-processing architecture (ganging together a CPU and one or more GPUs) would enable a performance improvement of 570X. Understandably, later blog posts that referenced this figure had people's heads exploding. But keep in mind, this is for specialized applications: graphics, oil & gas equations, seismic, molecular dynamics, quantum chemistry.
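Compounding the two claims over the same six years shows why heads exploded (taking the numbers from the talk at face value):

```python
cpu_yearly_gain = 1.20   # "about 20 percent per year" for CPUs under Moore's law
years = 6

print(f"CPU alone: ~{cpu_yearly_gain ** years:.1f}x over {years} years")   # about 3x
print("CPU + GPU co-processing, per Huang: 570x over the same period")
```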

I assume ray tracing lends itself to parallel computing, and also that with a 570X performance improvement, this Bugatti will look photorealistic in real time by 2015.  But I think the real issue is whether that 570X speedup will help humanoid characters be truly photorealistic by 2015.

Huang wrapped up the talk by wowing us with all manner of Star Trek daydreams—the real-time universal translator, the smartphone app that can tell you what you’re looking at if you just snap a picture of it (WANT).

But even with all those goodies, I’m still stuck on the Uncanny Valley problem. I wonder how far we’ll have to go into physics simulations before we break humanoid characters out of the Uncanny Valley. Even the most advanced animations—Beowulf and Digital Emily—are convincing until they start talking. There’s something impossible to render accurately about teeth, I think. Digital Emily was perfect until she showed her teeth, and the sad thing is, when I mentioned this to Paul Debevec, he looked crestfallen and explained that they had modeled the teeth exactly.

The upshot is this: I don’t think we’re going to get out of the Uncanny Valley until we can do essentially molecular dynamics on every part of the human face, and that includes building the teeth from the ground up.

The good news is, if Huang’s prediction proves true and GPU performance increases by 570X over the next six years, that’s not a crazy thing to aspire to do. Whether it’s worthwhile, that’s another story.

 

 

Becton: 8 cores, uncore and hardcore

On Monday at the Hot Chips conference, Intel shared the first details of the Beckton processor, which is now called Nehalem EX. (Gotta love those crazy Intel naming conventions—Lynnfield, Beckton, Clarksview. They sound like gated communities in hell.)

The 8-core, 2.3-billion transistor Nehalem EX debuted in a pretty white dress and 18-button white gloves. Charlie Demerjian (formerly with bright red web site The Inquirer, but who has recently started up his own SemiAccurate) has the roundup of what makes Becton interesting: “On the surface, Becton looks like a simple mashing together of two 4-core Nehalems. The specs are 8 cores, 16 threads, 4 DDR3 memory channels, 4 QPI links and 24MB of L3 cache all stuffed into a mere 2.3 billion transistors.”

“With 4 QPI links, 8 memory channels, 8 cores, 8 cache slices, 2 memory controllers, 2 cache agents, 2 home agents and a pony, this chip is getting quite complex… To make it all work, the center of the chip has a block called the router. It is a crossbar switch that connects all internal and external channels, up to eight at a time.

“With that many available inputs and outputs, you start to understand why the focus of Becton was on the uncore, and how things get moved around the die and the system in general. Without all the effort put in, just doubling up a Bloomfield or Lynnfield wouldn't scale at all, much less to the 2,000-plus cores Intel is claiming Becton will hit.”

Leaving aside for the moment the hair-raising projection of 2000 cores—let’s discuss the uncore.

Because, with the terms multicore and manycore shoved firmly down marketers’ throats, it’s time for a new buzzword: Uncore. That’s right, uncore: a term so new, its Wikipedia page doesn’t even have references. (disclaimer for anyone who is going to burst into tears of didactic rage telling me the term is not new: that was dry, sardonic humor.)

The uncore is just everything on a microprocessor that is not the core, where "the core" means the parts that actually execute instructions: the arithmetic logic unit, the floating point unit, and the innermost levels of cache. Everything else on the die (the memory controllers, the QPI links, the shared L3 cache, Becton's router) is uncore.

For reasons that make me sleepy, the uncore must run at twice the memory frequency. But the rest of the Hot Chips conference was much more multicore than uncore.

 

Medical Alerts in 140 Characters or Less

"@trialx CT looking for diabetes trials in new york for 55 yr old male"

This message was posted today on the TrialX Twitter feed. TrialX, which started as an online matchmaker between clinical trial organizers and participants, has expedited this courtship even further by opening up its forum to Twitter, and it's making a nice profit while doing it (the company won an award this year at New York City Entrepreneur Week's business plan competition). Clinical trial investigators pay $99 per month to post experiments they're running. Patients, on the other hand, can search the feed or post their own medical details and wait for a response from the program.
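TrialX doesn't publish a formal grammar for these tweets as far as I know, but the example above hints at the pattern: condition, location, age, sex. A purely hypothetical sketch of how a matching service might pull those fields out (the regex and field names are mine, not TrialX's):

```python
import re

TWEET = "@trialx CT looking for diabetes trials in new york for 55 yr old male"

# Guessed from the sample tweet above; TrialX's real matching logic is not public.
pattern = re.compile(
    r"looking for (?P<condition>.+?) trials in (?P<location>.+?) "
    r"for (?P<age>\d+) yr old (?P<sex>\w+)"
)

match = pattern.search(TWEET)
if match:
    print(match.groupdict())
    # {'condition': 'diabetes', 'location': 'new york', 'age': '55', 'sex': 'male'}
```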

In an article published yesterday in the journal Telemedicine and e-health (available here), Mark Terry catalogues the ways that TrialX and others in the medical field are using Twitter.

Clinical trials are only one of the areas getting a boost from Twitter, according to Terry. Many doctors have begun using it in their private practices and are offering advice to Twitter novices. There are a few medical Twitter pioneers out there worth noting: Phil Baumann and Michael Lara have both blogged about how the community should be using Twitter to do things like update physicians on conference highlights and keep up to date with fluctuations in the blood glucose levels of diabetic patients.

Of course, the most beneficial aspect of Twitter is that it communicates in real time, which lends itself very well to enhancing disaster alerts. The CDC has jumped on the bandwagon with three different feeds: one dedicated to emergency notifications, one specifically for information about the flu, and one that more generally redirects traffic to the CDC site.

Gadgets to fix problems I didn't know I had

I get a lot of press releases touting new gadgets. Most are minor improvements on existing technologies—Sonos has a new touch screen controller, Griffin has new iPhone cases. After all, how many truly different household or handheld gadgets can there be?

Turns out there are at least two more than I thought. In the past week I’ve heard from two companies offering gadgets that solve horrible household problems I confess I’d never before worried about—flooding from toilet overflows and death from killer icicles.

A company called AquaOne Technologies would like to stop my toilet from overflowing, wasting water and ruining my floors in the process, with a gadget called the H2Orb, a $130 gizmo that installs between the water line and the toilet. The device uses a low-power microcontroller from Texas Instruments and two wireless sensors: one in the tank that detects a slow leak, the other in the bowl that detects imminent overflow.

And Gutterglove would like me to install its heated gutter guard, the Gutterglove IceBreaker, which melts ice and prevents icicles from forming; the company points out that falling icicles can kill people. The IceBreaker uses a single self-regulating cable to generate heat. Pricing is available from local dealers.


How have I lived so long without these?

Photos: top left: H2Orb; bottom right: Gutterglove IceBreaker

Will Cooler Heads Prevail in Nanotoxicology Issue

Earlier this month, in a blog entry in which I bemoaned the rather polemical tone of the debate over nanotechnology's environmental, health, and safety (EHS) issues, I referenced an article whose author was getting a little fed up with all the talk about the threat nanotechnology poses to our health when there was not one example of anyone being harmed by nanotech.

Unfortunately, no sooner had he spoken than the first example of that harm was chronicled in a new study. The European study has linked seven cases of lung disease in China, two of them fatal, to working with nanoparticles.

Andrew Maynard, on his 2020 Science blog, has done a thorough job of analyzing this report and what its ramifications may be for nanotech, so I haven’t much to add.

But I would like to highlight a point Maynard makes that may be missed by some alarmists and shouldn’t be:

At the end of the day, the study says little about the potential hazards of nanoparticles in general, and next to nothing about the possible dangers of nanotechnology.  If the sad deaths of the two workers and the lung disease of their five colleagues were used to press home a preordained nanotechnology agenda, it would amount to little more than a cynical misuse of the data—not a move that is likely to encourage evidence-based decisions on either workplace safety or safe nanotechnology.
 

As well as his balancing thoughts:

Yet to dismiss the study as flawed and irrelevant would be equally foolish.  The reality is that two workers died and nanoparticles were implicated, at a time when increasing numbers of nanoparticle-containing products are entering the market.  As the details of the study become known, people are going to want to know what the findings mean for them—whether there are risks associated with emerging nanotechnologies, and what government and industry are doing about it.  If nanotech-promoters downplay or even discredit the work, the move is more likely to engender suspicion than allay fears in many quarters.  And once again, evidence-based decision-making will be in danger of being sacrificed in favor of maintaining a set agenda.

I am not so sure either side of the EHS/nanotechnology debate will be as balanced in its approach to this issue as Andrew Maynard, but I am hoping they will.


Hot Town

Appropriately enough, the three weeks that the song Light My Fire spent at #1 on the Billboard Hot 100 in 1967 were the dog days of that summer. That phrase, by the way, refers to

the ancient Romans, who noticed that Sirius rose with the sun from July 3 to Aug. 11. As the major star of the "Big Dog" constellation, Sirius is often called the "dog star." It's the brightest star in the nighttime sky. The Romans assumed that the two stars were acting in league to create the "days of great heat."

I was reminded of Light My Fire by an email from frequent Spectrum contributor Kieron Murphy. Another contributor, Brian Santo (author of our popular May 2009 feature, "25 Microchips That Shook the World" and its hilarious backstory sidebar, "Where in the World Wide Web Is Al Phillips?"), who also received the email, responded:

Even though I was still in elementary school in 1970, I had an intuitive grasp of what was going on with The Beatles, and The Stones, and Hendrix and Joplin and many of the other great artists I heard on the radio. But I never really got The Doors. More specifically, I never really got Morrison. I understand Krieger and Manzarek showed sparks of brilliance. I understand that Morrison's Lizard King schtick was dangerous/sexy. But IMHO, the man was a diffident poet/writer at best, and if he hadn't died (let's assume, for now, that he's really in the grave in Paris that I have actually visited) he would have been off the charts for years and doing "This is not your father's Oldsmobile" commercials with a couple of kids he'd been legally forced to adopt.

It's revealing that Brian would refer to the radio, and it's funny that he would picture the song in, of all things, a car commercial. According to Wikipedia,

when Buick wanted to buy the piece for use in a 1968 TV commercial ("Come on, Buick, light my fire") and Morrison, who had been out of town, learned that other group members agreed, Morrison called Buick and threatened to have a Buick smashed with a sledgehammer on a TV show should the (presumably ready) commercial be aired.

I'm a little older than Brian - not a lot, but perhaps just enough to feel very differently. I was sitting with a friend in a pizza place on 37th Road in Jackson Heights, Queens, my first week of 7th grade, when I first heard "Light My Fire" on the radio. The pizzeria was, literally, ovenlike, the pizza was thin, blistering, and delicious; the time was one of those proverbial fry-an-egg-on-a-New-York-sidewalk afternoons; the song was just as fiery hot and yet slow and lyrical; it was clearly about sex, something that, as a twelve-year-old, I was coming to understand the importance of, if I didn't quite understand it itself; the lyrics were kind of silly but the melody was big and ballad-like and beautiful and it went on forever — I had never heard a seven-minute-long song on the radio, and as the keyboard solo gave way to the guitar solo it seemed impossible to believe it was still the same song playing. It was, I now realize, opulent and yet not in the least self-indulgent. In the pizzeria, my friend and I both stopped talking somewhere during the guitar solo and just listened.

In a 13-minute radio story in 2000, NPR reporter Guy Raz said it "broke the mold of the conventional hit pop song when Light My Fire went to the top of the charts."

Light My Fire clocked in at just over seven minutes. No one in the music industry believed it could work at that length.

John Densmore, the Doors drummer, told Raz, "In those days, if you wanted to be on AM radio, you had to be at three minutes." Raz says the band cut out the solos and "whittled it down to three minutes. But fans who owned the album swamped radio stations with requests for the full seven-minute version."

I wasn't one of those album owners — not yet. Sixteen months later, I turned 13, and of the three birthday albums I got from my friends, "The Doors" was the only one I had requested. The other two were Cream's "Wheels of Fire" and the Beatles' White Album — by then the psychedelic movement was in full sway, led by The Doors' eponymous album and the Beatles' Sgt. Pepper's Lonely Hearts Club Band. Sure, Sgt. Pepper was the first rock album to win Album of the Year at the Grammy Awards, and Rolling Stone magazine has named it the greatest album of all time. But Light My Fire was the song that changed radio forever. Guy Raz again:

No one had ever heard a song like it - seven minutes, free-form, psychedelic, Light My Fire was dark and brooding, haunting and romantic, at the same time. The song is a demarcation point in rock 'n' roll history. It shattered the acceptable boundaries of popular music. Themes of love, mortality, intoxication, and recklessness. All offer a glimpse into the turbulent era that was to come soon after its release.

With satellite radio, digital radio, and podcasts, radio is metamorphosing today, as it did in the late 1960s. The changes today are technological, though, while back then, as AM gave way to FM, radio — and music itself — became both more personal and more political.

What's different between now and then is how important radio was - more important, to music at least, than television or any other medium. (When the Doors or the Beatles appeared on Ed Sullivan, it certified a popularity that had been created by radio.) In 10 or 15 years, surely all radio programming will be delivered by the Internet, which will be given the AM and FM frequencies. It seems odd to think that once the term "wireless" was synonymous with AM radio, and that the two leading communications technologies at the time of the seminal U.S. Communications Act of 1934, radio and telephony, will be digital afterthoughts, little more than a small fraction of the packets riding the TCP/IP radio waves.

For me and my friends of forty years ago, our favorite DJs defined more than our musical tastes, they helped us think about drugs and sex, philosophy and fashion, war and patriotism. They sometimes set the very calendar we lived by. I remember how, well into the 1970s, each year my friends and I would wait impatiently for the first hot late-spring day. The radio would be set to 102.7. WNEW-FM's afternoon DJ Dennis Elsas would come on the air and play The Lovin' Spoonful's Summer in the City, and so would begin the dog days of summer.


Hot Chips for Games

IMAGE CREDIT: Wikimedia Commons

I'll be covering the 21st annual Hot Chips conference for the next couple of days.

Hot Chips is an industry nerd-off that brings together designers and architects of high-performance hardware and software amid the Spanish colonial architecture and rarefied air of Stanford University every August. The logic-heavy PowerPoints are interspersed with a few keynotes to remind everyone what’s at stake in all these mind-numbing comparisons of SIMD vs. MIMD.

One of the big ideas this year appears to be the future of gaming. On Tuesday, Intel’s Pradeep Dubey will chair a keynote presented by Electronic Arts chief creative officer Rich Hilleman.

When I first saw the title of the keynote, “Let's Get Small: How Computers are Making a Big Difference in the Games Business,” I pinched my arm because I thought for a second I was experiencing some horrible Life on Mars style delusion/time travel. Computers making a big difference in the games business? Oh, you think so, doctor!

But it turns out to be more complicated than the title indicates. As usual, it all goes back to Moore’s Law.

Moore’s law says that as transistor size keeps shrinking, more of them can be squeezed onto a given area of silicon, and as long as the price of silicon remains the same, those transistors will just get cheaper as they get smaller. That means the chips will get cheaper too. That’s why you can get so much processing power for ever-decreasing amounts of outlay.

These days you can get the kind of processing power in your mother’s basement that, just 10 years ago, was reserved for the Crays and Blue Genes and other monstrosities available only to government research facilities.

However--the cost of developing a PC or console game increases exponentially alongside Moore's law. David Kanter, my go-to guru at Real World Technologies, explained it thus:

Moore's law says transistors double in density roughly every 18 months. Graphics performance is perfectly parallel with Moore’s law, which means graphics performance, too, roughly doubles every 18 months.

And when graphics performance doubles, you need higher-resolution artwork to render in a game. At that point, frame rates over 60 FPS aren’t helpful; what you really want is more detail and new effects to wow your gamers.

And if you want that higher-resolution artwork, you need to hire more artists. That, too, tracks with Moore’s law: more and more artists are necessary for each generation of chip.

The upshot is that the cost of developing artwork scales with transistor counts for GPUs, which are themselves driven by Moore's Law. This means that the cost of big-name games—like Grand Theft Auto, Quake, Doom, and the like—increases exponentially. That's a big problem for developers whose pockets are shallower than EA’s.
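To see how quickly that compounds, run Kanter's 18-month doubling forward. The starting budget here is arbitrary; only the doubling period comes from his explanation above:

```python
DOUBLING_PERIOD_YEARS = 1.5      # transistor density doubles roughly every 18 months
BASE_ART_COST = 1.0              # a game's artwork budget today, in arbitrary units

for years in (0, 3, 6, 9):
    doublings = years / DOUBLING_PERIOD_YEARS
    art_cost = BASE_ART_COST * 2 ** doublings
    print(f"{years:>2} years out: ~{art_cost:4.0f}x today's artwork budget")
# 0 -> 1x, 3 -> 4x, 6 -> 16x, 9 -> 64x: exponential, just like the GPU's
# transistor count.
```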

And that is one reason the games market for phones is exploding. For these little rinky-dink displays (iPhone = 480 x 320 pixels; my phone = 220 x 176), development costs are so low compared to a PC or console that anyone can make a game (maybe even in their spare time).
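The pixel arithmetic alone tells much of the story (the phone figures are from above; the 1080p figure is mine, added for comparison):

```python
displays = {
    "iPhone":        480 * 320,
    "my phone":      220 * 176,
    "1080p console": 1920 * 1080,   # for comparison, not from the post
}
for name, pixels in displays.items():
    print(f"{name:>13}: {pixels:>9,} pixels")
# ~154,000 and ~39,000 pixels versus ~2.1 million: one reason phone-game art
# budgets are a tiny fraction of console ones.
```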

Back in March, at the 2009 Game Developers Conference, ex-EA developer Neil Young (founder and CEO, ngmoco) delivered a keynote called “Why the iPhone just changed everything.”

He said that the iPhone--and the class of devices it represents--is a game changer on the order of the Atari 2600, Gameboy, PlayStation One, or Wii. He predicted that the iPhone will “emerge as a gaming device as compelling as existing dedicated handheld game devices.”

Kanter suspects that at Tuesday's keynote, EA and Intel may discuss how ray tracing could make developing artwork easier and less expensive. Thoughts? Comments? Predictions? Do you buy the idea that gaming is splitting the world into an empire of AAA game development versus a rebellion of mobile phone developers?

 

Business Articles on Nanotechnology Take on a Familiar Formula

I am simultaneously amazed and concerned when I see mainstream publications tackle the issue of business and nanotechnology. Last month we had the NY Times informing us that things were looking up for the commercialization of nanotech because it seemed industry and academic research centers were beginning to team up. What a novel idea.

Of course, this penetrating analysis followed the Grey Lady’s previous prediction over 18 months earlier that nanotech was going to finally experience its long-awaited boom with a series of IPOs…that never came.

But the latest bit of business journalism I’ve read on nanotech comes from one of the NY Times’ subsidiary publications, the Boston Globe. What is fascinating about this one is how it reads like a “Mad Lib” for nanotech articles formulated in 2001.

We get insights like: the nanotechnology market in 2015 will be worth (fill in blank with number) trillion. Then they even find ways to throw in all the favored terms used in 2001, like “nanobots,” or size definitions such as “a nanometer, equal to one-billionth of a meter.”

So, why am I amazed and concerned at the same time? I am amazed because we can still read articles that manage to repeat articles written nearly a decade before, or that have such a flimsy grasp of the mechanics of commercializing emerging technologies that they believe industry/lab partnerships are actually an innovative idea. And I am concerned because I also read business articles from publications like this on topics that I know far less about than nanotech. Should I be worried? I think maybe, yes.
