Tech Talk

FCC to Tackle Internet Rules

FCC Chairman Julius Genachowski announced that the Commission will kick off a rulemaking proceeding on Internet regulation shortly, as soon as the formality of a vote is completed. The long-anticipated announcement has more to do with cleaning up the state of Internet regulation, which the previous chairman left in a bit of a mess. The FCC is supposed to make rules in public proceedings before enforcing them, but the previous commission slapped Comcast's wrists with a set of rules, the Four Freedoms that make up the Internet Policy Statement, that it had declared unenforceable when they were written.

As expected, Genachowski announced that he intends to propose an anti-discrimination rule and a transparency rule. These had been considered mutually exclusive, so the combination is a bit of a surprise.

As the purpose of the speech was to announce the rulemaking procedure and not the precise nature of the rules themselves, it wasn't the most stirring piece of oratory. There were some curious moments early in the narrative when the chairman walked through the history of ARPANET and touted the architectural wonder of the Internet (the speech is posted on a new Web site the FCC launched today):

Historian John Naughton describes the Internet as an attempt to answer the following question: How do you design a network that is “future proof” -- that can support the applications that today’s inventors have not yet dreamed of? The solution was to devise a network of networks that would not be biased in favor of any particular application. The Internet’s creators didn’t want the network architecture -- or any single entity -- to pick winners and losers. Because it might pick the wrong ones. Instead, the Internet’s open architecture pushes decision-making and intelligence to the edge of the network -- to end users, to the cloud, to businesses of every size and in every sector of the economy, to creators and speakers across the country and around the globe. In the words of Tim Berners-Lee, the Internet is a “blank canvas” -- allowing anyone to contribute and to innovate without permission.

While this is pretty much standard Internet mythology, it's not accurate enough for regulatory work. Network engineers know that no single-service network, which is what the Internet has become post-BGP, can ever be application neutral. The Internet's best-effort delivery service is fine for generic content applications like the Web, and much less fine for real-time services and for high bandwidth content applications like P2P file sharing. There is no such thing as a truly neutral network, and we can only approach neutrality to the extent that the network can tailor delivery services to the needs of applications. That's why we have Quality of Service logic in IEEE 802 LANs, WLANs, WPANs, and WWANs. One-size-fits-all is a myth. A network with no QoS does pick winners and losers, make no mistake about it.
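To make that concrete, here is a minimal sketch of what "tailoring delivery to the application" looks like from the endpoint's side: a Python snippet (assuming a typical Linux host; the addresses are placeholders) that marks one socket's traffic for low-latency treatment and another's for ordinary bulk delivery by setting the DSCP bits. Whether the network honors those markings is exactly the kind of management practice the rulemaking will have to grapple with.

```python
import socket

# DSCP code points (RFC 2474 and friends): Expedited Forwarding for
# latency-sensitive traffic such as voice, AF11 for background bulk data.
DSCP_EF = 46
DSCP_AF11 = 10

def marked_udp_socket(dscp: int) -> socket.socket:
    """Return a UDP socket whose packets carry the given DSCP marking.

    The DSCP value occupies the upper six bits of the IP TOS/Traffic
    Class byte, hence the shift by two. IP_TOS is available on Linux;
    other platforms may need a different option.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

if __name__ == "__main__":
    voip = marked_udp_socket(DSCP_EF)     # real time: low delay and jitter matter
    bulk = marked_udp_socket(DSCP_AF11)   # file transfer: best effort is fine
    voip.sendto(b"rtp-ish payload", ("192.0.2.1", 5004))      # placeholder address
    bulk.sendto(b"chunk of a big file", ("192.0.2.1", 9000))
```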

The chairman's lack of precision is par for the course in political circles, but there's a significant danger to innovation from trying to apply these metaphoric descriptions too literally. When your network has a structural bias in favor of a particular class of applications, it needs to permit management practices to overcome it. It's not clear that the FCC has the digital chops to appreciate this.

So the shoe has finally dropped and the FCC is on the road to fulfilling President Obama's campaign promise to protect the open Internet. This could result in clarity, certainty, and a good environment for investment, or it could degenerate into a circus as the Comcast proceeding did in 2008. Chairman Genachowski is a bright and earnest public servant, and the odds are better than even money that the rulemaking will not do significant harm, but you never know how these things will turn out until the votes are counted. Those of us who do network engineering for a living need to keep a close watch on this proceeding.

Can You Trust Crowd Wisdom?

Can you trust crowd wisdom? An article this week on the MIT Technology Review website asks that question and answers it in the negative - or rather, says that new research indicates the answer is no: "Researchers say online recommendation systems can be distorted by a minority of users."

When searching online for a new gadget to buy or a movie to rent, many people pay close attention to the number of stars awarded by customer-reviewers on popular websites. But new research confirms what some may already suspect: those ratings can easily be swayed by a small group of highly active users.
Vassilis Kostakos, an assistant professor at the University of Madeira in Portugal and an adjunct assistant professor at Carnegie Mellon University (CMU), says that rating systems can tap into the "wisdom of the crowd" to offer useful insights, but they can also paint a distorted picture of a product if a small number of users do most of the voting. "It turns out people have very different voting patterns," he says, varying both among individuals and among communities of users.

What's the official informal fallacy name for bait-and-switch? This Tech Review article commits it. It wants you to think it's about recommendation systems, but it isn't. It wants you to think that there's a hidden problem of only a few people voting, when the research is really talking about the fact that a relatively small fraction of people are doing a large share of the total voting at places like IMDb.

That's not to say that there aren't problems with the voting at IMDb. Is Inglourious Basterds really the 43rd best movie ever made, better than The Departed (#57), Slumdog Millionaire (#75), Braveheart (#100), Unforgiven (#110), No Country For Old Men (#116), Million Dollar Baby (#150), or Crash (#232), each of which won the Academy Award for Best Picture in its respective year? Of course not. But the problem isn't a handful of voters influencing the vote - the fewest votes cast for any one of these titles is 85,000. The problem is 18-year-olds with no historical memory of cinema giving a movie a 10 the same night they see it, while those of us over 40 are carefully weighing whether Yojimbo gets an 8 or a 9.

Suppose for the sake of argument there's an 80/20 rule for IMDb voting - that is, 80 percent of all votes are cast by 20 percent of the people who vote. Is that a problem? What if it turns out there's an 80/20 rule for electoral voting in the United States? Does that invalidate the election process?

In other words, consider the entire aggregation of election votes cast by everyone alive who has ever voted. It might very well be the case that a handful of people turn out to every election, casting votes for every county supervisor and municipal judge election, while a large number of people turn out once every four years to vote for the U.S. President, while another large group votes even less frequently than that. It might well turn out that 20 percent of all citizens cast 80 percent of the votes. In fact, in the absence of Soviet-style mandatory voting, it would be surprising if something like that weren't the case.
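A toy simulation makes the point. In the Python sketch below the numbers are invented: 20 percent of voters are "heavy" voters, everyone draws ratings from the same distribution, and the question is whether the concentration of voting activity by itself skews the aggregate score. It doesn't; only a difference in tastes between heavy and light voters would.

```python
import random

random.seed(1)

N_VOTERS = 10_000
HEAVY_FRACTION = 0.20               # 20 percent of the voters...
HEAVY_VOTES, LIGHT_VOTES = 40, 2    # ...each cast far more votes than the rest

def rating() -> int:
    """Everyone, heavy or light, draws ratings from the same 1-10 distribution."""
    return min(10, max(1, round(random.gauss(7.0, 1.5))))

votes = []
for i in range(N_VOTERS):
    n = HEAVY_VOTES if i < N_VOTERS * HEAVY_FRACTION else LIGHT_VOTES
    votes.extend(rating() for _ in range(n))

heavy_share = (N_VOTERS * HEAVY_FRACTION * HEAVY_VOTES) / len(votes)
print(f"share of all votes cast by the top 20% of voters: {heavy_share:.0%}")   # ~83%
print(f"average rating across all votes: {sum(votes) / len(votes):.2f}")        # ~7, as designed

# Concentration of activity alone doesn't move the average; it would only
# be distorted if the heavy voters' tastes differed from everyone else's.
```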

As might be expected, the paper itself, which was presented at the 2009 IEEE International Conference on Social Computing and is available from Kostakos's website here [PDF], isn't about the unreliability of crowd wisdom at all. It looked at three different online voting systems with different barriers to entry for voting. (Its conclusion that experts can be encouraged to vote more often by lowering the barriers to voting seems to me rather circular and obvious, given that it defines experts simply as people who vote often.)

The paper takes for granted that if an item has been reviewed or voted on only a couple of times, the result is unreliable, and it doesn't seem to have anything particular to say about the reliability of a recommendation based on a large number of votes or reviews. It doesn't, by the way, even contain the word "distorted" - that seems to have come from a conversation or interview with Kostakos, not from the paper itself.

Nor does the paper have anything to say about "online recommendation systems" - when discussing Amazon, for example, it considers only the voting and reviewing on the Amazon site, and not the feature by which it recommends other products based on what other people looked at or bought. This reviewer's recommendations: One shaky thumb up for the research, two firm thumbs down for Tech Review's report on it.

IEEE Standards Board Member to Rejoin Iggy Pop and The Stooges


Many of James Williamson’s colleagues—at Sony, where, until a few months ago, he was Vice President of Technology Standards, and at IEEE, where he serves as a member of the Standards Association Board of Governors and the Association's Corporate Advisory Group—didn’t know about the years he spent as a punk guitarist and member of The Stooges. His calm manner and even temper at standards meetings belied his previous reputation as one of the loudest and raunchiest punk rockers in the business.

Williamson co-wrote the songs and played guitar on the 1973 album Raw Power, now considered a punk classic. He collaborated with Iggy Pop on the 1975 album Kill City, then turned to electrical engineering, getting his BSEE degree from California State Polytechnic University.

He did return to music briefly, contributing to Iggy Pop’s 1979 album New Values, then focused on his technical career.

But now, recently retired from Sony, he’s picking up the guitar again. Williamson, who hasn’t performed in front of a paying audience in 35 years, has reportedly started practicing for his musical comeback. The Stooges are currently booked to appear next year at the All Tomorrow’s Parties Festival in London, possibly the first stop on a tour.

No word yet as to whether IEEE members will be able to purchase concert tickets at a discount.

Apple Just Announced a Flip-killer, the iPod Nano Video Camera

I've been thinking about putting a Flip video camera high on my Christmas list; it would be so much more convenient than lugging around my old digital video cassette camera for family events. But Apple's intro today of its Flip-killer–a video camera that, oh, by the way, is built into an iPod Nano–just sank that idea. Not just because it's an iPod too (I'm thinking I wouldn't use it for music; I'd be saving the memory for movies), but because I have complete faith in Apple making the user interface easy, I won't need to load more software (Flip requires a special app), and it'll go right into iTunes without the conversion that Flip videos require. Plus it's thinner, boasts a five-hour battery life, and is about the same price ($149 for 8 GB). And oh yeah, I like the colors. Which could present a problem--do I want pink, or red, or blue...

Followup: I saw my first video Nano in the wild shortly after 7 p.m., just eight hours after the announcement--in the hands of a parent taking videos at a back-to-school event. It was a red one. It got away before I could check it out.

Tech Museum of Silicon Valley Announces 2009 Laureates

This week, the Tech Museum of Silicon Valley announced its 2009 laureates. Among the 15 honorees:
—Joseph Adelegan, whose project in Nigeria takes the waste stream from slaughterhouses and turns it into methane for electricity generation or cooking gas.
—Sean White, who is digitizing the plant collection of the Smithsonian to create an Electronic Field Guide that will identify species through object recognition.
—The Alternative Energy Development Corp. of South Africa, which is using zinc air fuel cells for household electricity.
—Solar Ear, a Brazilian company building inexpensive hearing aids that come with solar rechargers.
—Geogebra, an organization developing open-source software for teaching geometry, algebra, and calculus.

The Tech Awards annually honor efforts to use technology to improve the lives of people around the world. One laureate in each of five categories—environment, economic development, education, equality, and health—will receive a cash prize of $50,000, to be announced at a gala on November 19th. This year’s James C. Morgan Global Humanitarian Award recipient, Al Gore, will also be recognized at the gala.

The announcement came at the unveiling of a new Tech Museum gallery, “Technology Benefiting Humanity.” The exhibit includes interactive looks at the inventions of eleven previous laureates, including Solar Sailor, a company that combines wind, solar, and hybrid technology to power boats, and Adaptive Eyecare, a company that is developing glasses with lenses whose power can be adjusted by the wearer.

Ray Tracing, Parallel Computing, and a Bugatti Veyron

At last week's Hot Chips symposium, Nvidia founder and CEO Jen-Hsun Huang delivered the first keynote about the GPU computing revolution.

The keynote was definitely the highlight of the conference, but before I get all swoony over the incredible directional flame sprites and the finger-licking Bugatti Veyron their GPUs can render, I need to pick on Nvidia a little.

That’s because the company was selling $200 3-D glasses at their booth. Or, they were trying to. I didn’t see anyone buy them, and if anyone did, they didn’t tell me about it.

The glasses were supposed to augment a very engrossing 3-D Batman game Nvidia had nakedly set up to lure passers-by. Apparently they created a deeper z-space by giving each lens a different refresh rate. Something like that. I put on the glasses and played for a while. It says something either about my unsophistication with games or about how unimpressive these glasses were that I failed to notice you had to actually turn them on—when someone pointed out my mistake and I flipped the on switch, the only difference I noticed was a pretty blue LED light.

But enough: let’s make with the swooning.

First, Huang took the audience back to February of 1993, when he'd just finished his master’s in electrical engineering at Stanford, and Nvidia was just a gleam in a venture capitalist's eye. For perspective, 1993 is so long ago that there was no need to have a PC on your desktop even if you were trying to get people to invest in your computer company. “If we had told our investors at the time that we’d be using the same hardware to play games and try to cure cancer," he said, "I am sure we would not have been funded."

“The GPU will likely be the parallel processor of the future,” he told the crowd. Computers are being driven to parallel computing because people can do magical things with them.

Nvidia's teraflop-capable GPUs can, in fact, do some things that would quite literally have appeared to be magic to a person in 1993: the augmented-reality line of scrimmage in Monday Night Football, rendered onto the field in 3-D so that the players appear to stand on top of it. The flags rendered under the ice at Olympic hockey games; Ann Curry's set during the 2008 election coverage. But you know all this stuff.

The point is this: The GPU has evolved faster than any other tech component, its complexity increasing from a few million transistors in 1994 to billions in 2009. That's a thousand-fold increase in complexity in only 15 years.
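A quick back-of-the-envelope check on that growth rate, using the round numbers above (a few million transistors then, billions now):

```python
import math

growth = 1_000    # "a few million" transistors to "billions": roughly a thousand-fold
years = 15        # 1994 to 2009

doublings = math.log2(growth)
print(f"doublings over the period: {doublings:.1f}")        # about 10
print(f"one doubling every {years / doublings:.1f} years")  # about every year and a half
```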

What did they do with all that complexity? Shaders. Shaders and programmable pipelines made it possible for computer game designers to be artists. Let’s take an extreme example. Pacman and his attendant ghosts are lovable, clunkety and pixelated.


Let's leave aside the fact that these were animated with pixels instead of polygons and that GPUs barely existed when Pacman was born. With the obscene amount of processing power GPUs now command, a programmer can create a specific mood for his or her game by automatically shading all scenes and objects with a hypercolor style or a sepia tint, you name it. The result can be anything from the eye-poppingly surreal textures of Super Mario Galaxy to the otherworldly, overexposed dreamscape of Riven or Myst.
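For a feel of what a full-screen tint pass involves, here is a toy "shader" written in Python with NumPy rather than a real shading language: one small function applied independently to every pixel, which is exactly the embarrassingly parallel work a programmable pipeline is built for. The weights are a commonly used sepia matrix, and the 1080p frame is just random noise for illustration.

```python
import numpy as np

def sepia(image: np.ndarray) -> np.ndarray:
    """Apply a sepia tint to an (H, W, 3) uint8 RGB image.

    Each output channel is a fixed linear combination of the input
    channels -- the same per-pixel arithmetic a fragment shader would
    run in parallel across the whole frame on the GPU.
    """
    tint = np.array([[0.393, 0.769, 0.189],    # new R from old (R, G, B)
                     [0.349, 0.686, 0.168],    # new G
                     [0.272, 0.534, 0.131]])   # new B
    out = image.astype(np.float32) @ tint.T
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)  # stand-in frame
    print(sepia(frame).shape)   # whole-frame mood change in a single pass
```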


Shading is great, but Nvidia wanted to take it to the next level: articulate not just the surfaces but also the physics underlying what you see on the surface. Now you're getting into computational visualization.

This is where ray tracing comes in. With ray tracing, an image is generated by tracing a path through each pixel in a virtual screen, and calculating the color of the object visible through it. Huang showed us what exactly ray tracing can do by way of a Bugatti Veyron, rendered with 2 million polygons worth of luscious, mouth-watering detail.

[This image was from the 2008 SIGGRAPH conference-- the image from Hot Chips isn't online yet but it's even prettier!]

Because ray tracing constructs the entire image from the computed trajectories of rays of light bouncing from surface to surface, you can light the scene, place your object into it, and then do a “walk through,” panning inside the car, where it's possible to see details lit exclusively by ambient “light” rays diffracting and reflecting off the environment and streaming in through the windows; there is no independent lighting inside the car. The lighting was so complex and subtle that you begin to understand how the GPU could harness physics simulations as impossibly complex as molecular dynamics.
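Stripped to its essentials, the technique fits in a page. The sketch below is a bare-bones Python ray tracer over an invented scene (one sphere, one light, a character-cell "screen"): shoot a ray through each pixel, find what it hits, and shade by how squarely the surface faces the light. A renderer like the one Huang demoed layers reflections, refraction, and millions of polygons on top of this same per-pixel loop.

```python
import math

WIDTH, HEIGHT = 80, 40                          # the "virtual screen", in text cells
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
LIGHT = tuple(c / math.sqrt(3) for c in (-1.0, 1.0, -1.0))   # unit vector toward the light

def hit_sphere(direction):
    """Distance from the eye (at the origin) to the sphere along `direction`, or None."""
    oc = tuple(-c for c in SPHERE_CENTER)                     # origin minus sphere center
    b = 2 * sum(oc[k] * direction[k] for k in range(3))
    c = sum(v * v for v in oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4 * c                                      # direction is unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # One ray per pixel, shot from the eye through that pixel of the screen.
        x, y = (i / WIDTH - 0.5) * 2, (0.5 - j / HEIGHT) * 2
        norm = math.sqrt(x * x + y * y + 1)
        ray = (x / norm, y / norm, 1 / norm)
        t = hit_sphere(ray)
        if t is None:
            row += " "                                        # missed: background
        else:
            hit = tuple(t * r for r in ray)
            n = tuple((hit[k] - SPHERE_CENTER[k]) / SPHERE_RADIUS for k in range(3))
            lum = max(0.0, sum(n[k] * LIGHT[k] for k in range(3)))   # Lambertian shading
            row += ".:-=+*#%@"[min(8, int(lum * 9))]
    print(row)
```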

This animation was running on three GeForce GPUs, each with almost 1 Tflop of processing horsepower. That's about 2.7 Tflops to sustain animation that was very close to photorealistic. (1500-2000 instructions per component, all in HD; 100 shader instructions per component, 4 components per pixel [R, G, B, alpha], 1.5 flops per instruction on average, 60 frames per second, etc.—that adds up to 500 shader Gflops. And if this sentence makes you want to die, read "Data Monster," the tutorial on GPUs and graphics processing in the September issue of Spectrum.) But that only represents, Huang said, about 10 percent of the total math capability of a GPU.

Meanwhile, let's do a little side-by-side comparison. Intel's vaunted Nehalem CPU, trotted out earlier that day: 3 GHz, 4 cores, and a bunch of other stuff—theoretical peak performance of 96 Gflops. That's great for general-purpose computing, but nearly two orders of magnitude short of being able to run the Bugatti animation in real time, which requires 5 Tflops. Nehalem—and the CPU in general—is designed for general-purpose computing, not for graphics.
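That peak figure is easy to sanity-check, if you assume (as is commonly done for that generation of Intel core) eight single-precision floating-point operations per core per clock:

```python
clock_ghz = 3.0         # quoted clock speed
cores = 4
flops_per_cycle = 8     # assumption: 4-wide SSE multiply plus 4-wide SSE add, per core per clock

peak_gflops = clock_ghz * cores * flops_per_cycle
print(f"theoretical peak: {peak_gflops:.0f} Gflops")            # 96 Gflops

needed_tflops = 5.0     # what the Bugatti demo reportedly needs in real time
print(f"shortfall: {needed_tflops * 1000 / peak_gflops:.0f}x")  # ~52x, pushing two orders of magnitude
```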

Animators will be making increasingly photorealistic art for games: water, fire, clouds, smoke—anything that obeys the laws of physics can be rendered to look real, provided you have the right algorithms and a monster amount of GPU muscle.  To prove that point, he showed a nice video of water gently rippling in the sunlit breeze. It was more than photorealistic. But to do all that, you’re using a 3D fluid solver that renders in agonizing detail about 262,000 individual particles to generate fluid motion. Each particle has its own shadow and motion blur. Not to mention color, alpha, etc.

But ray tracing has a way to go, Huang said. While it's great for photorealism, it's not good for real-time rendering. The Bugatti, for example, was super-impressive in still frame, but when you moved around it, it got grainy and monochrome. Not for long—as soon as you stopped, the image filled in remarkably fast. If you're just making a movie, you can take as long as you like to pre-bake the animation. For games, that's obviously a nonstarter.

To illustrate the true power of ray tracing, Huang showed us the directional flames Industrial Light & Magic did for the Harry Potter movie, which are apparently just unthinkable without monster processing power. Fire is amazingly complex because it’s alive, dynamic, moving and turbulent, so normally, to do fire special effects, animators use and sculpt sprites of real flames. But you can’t animate flame sprites directionally. The ILM fire simulator runs on top of CUDA, and the realistic flames shooting out of Dumbledore's hands are as good as any real-life flame thrower.

In addition, there are some things you can't pre-animate because you don't know how they will play out at game time. For example, a really awful tackle in a football video game. Animators combine physics simulations and morph them with motion capture, because even though motion capture is convincing to a certain extent, a brutal tackle would be really painful to motion-capture.

When a program is written to take full advantage of the GPU, obscene improvements are the norm, and not just for graphics. A certain unnamed quantum chemistry program, for example, had a 130X speedup when it was run properly on a GPU. It's totally doable when an application is inherently parallelizable.

The point is this: Moore’s law as applied to Intel’s CPUs can reap performance improvements of, on average, 20 percent per year.

By contrast, over the next 6 years, Huang predicted, a co-processing architecture (ganging together a CPU and one or more GPUs) would enable a performance improvement of 570X. Understandably, later blog posts that referenced this figure had people's heads exploding. But keep in mind, this is for specialized applications: graphics, oil and gas exploration, seismic processing, molecular dynamics, quantum chemistry.

I assume ray tracing lends itself to parallel computing, and also that with a 570X performance improvement, this Bugatti will look photorealistic in real time by 2015.  But I think the real issue is whether that 570X speedup will help humanoid characters be truly photorealistic by 2015.
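For the curious, here is the arithmetic behind that comparison, using the figures quoted in the talk:

```python
cpu_growth_per_year = 1.20    # Moore's-law-style CPU gains: about 20 percent per year
years = 6

cpu_gain = cpu_growth_per_year ** years
print(f"CPU-only improvement over {years} years: ~{cpu_gain:.0f}x")    # about 3x

copro_gain = 570              # Huang's co-processing prediction over the same period
print(f"predicted co-processing improvement: {copro_gain}x")
print(f"gap between the two curves: ~{copro_gain / cpu_gain:.0f}x")    # nearly 200x

# The Bugatti check: ~2.7 Tflops sustained the demo; real time reportedly
# needs ~5 Tflops. That's less than a 2x gap, so a 570x jump clears it
# with years to spare.
print(f"Tflops gap for a real-time Bugatti: {5.0 / 2.7:.1f}x")
```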

Huang wrapped up the talk by wowing us with all manner of Star Trek daydreams—the real-time universal translator, the smartphone app that can tell you what you're looking at if you just snap a picture of it (WANT).

But even with all those goodies, I’m still stuck on the Uncanny Valley problem. I wonder how far we’ll have to go into physics simulations before we break humanoid characters out of the Uncanny Valley. Even the most advanced animations—Beowulf and Digital Emily—are convincing until they start talking. There’s something impossible to render accurately about teeth, I think. Digital Emily was perfect until she showed her teeth, and the sad thing is, when I mentioned this to Paul Debevec, he looked crestfallen and explained that they had modeled the teeth exactly.

The upshot is this: I don’t think we’re going to get out of the Uncanny Valley until we can do essentially molecular dynamics on every part of the human face, and that includes building the teeth from the ground up.

The good news is, if Huang's prediction proves true, and GPU performance increases by 570X over the next six years, that's not a crazy thing to aspire to do. Whether it's worthwhile, that's another story.



Becton: 8 cores, uncore and hardcore

On Monday at the Hot Chips conference, Intel shared the first details of the Beckton processor, which is now called Nehalem EX. (Gotta love those crazy Intel naming conventions—Lynnfield, Beckton, Clarksview. They sound like gated communities in hell.)

The 8-core, 2.3-billion-transistor Nehalem EX debuted in a pretty white dress and 18-button white gloves. Charlie Demerjian (formerly of the bright red Web site The Inquirer, who has recently started up his own site, SemiAccurate) has the roundup of what makes Becton interesting: “On the surface, Becton looks like a simple mashing together of two 4-core Nehalems. The specs are 8 cores, 16 threads, 4 DDR3 memory channels, 4 QPI links and 24MB of L3 cache all stuffed into a mere 2.3 billion transistors.”

“With 4 QPI links, 8 memory channels, 8 cores, 8 cache slices, 2 memory controllers, 2 cache agents, 2 home agents and a pony, this chip is getting quite complex… To make it all work, the center of the chip has a block called the router. It is a crossbar switch that connects all internal and external channels, up to eight at a time.

“With that many available inputs and outputs, you start to understand why the focus of Becton was on the uncore, and how things get moved around the die and the system in general. Without all the effort put in, just doubling up a Bloomfield or Lynnfield wouldn't scale at all, much less to the 2,000-plus cores Intel is claiming Becton will hit.”

Leaving aside for the moment the hair-raising projection of 2000 cores—let’s discuss the uncore.

Because, with the terms multicore and manycore shoved firmly down marketers' throats, it's time for a new buzzword: uncore. That's right, uncore: a term so new that its Wikipedia page doesn't even have references. (Disclaimer for anyone who is going to burst into tears of didactic rage telling me the term is not new: that was dry, sardonic humor.)

The uncore is just everything on a microprocessor that is not the core proper (the arithmetic logic unit, floating-point unit, and the core's own caches): think of the memory controllers, the QPI links, and that crossbar router.

For reasons that make me sleepy, the uncore must run at twice the memory frequency. But the rest of the Hot Chips conference was much more multicore than uncore.


Medical Alerts in 140 Characters or Less

"@trialx CT looking for diabetes trials in new york for 55 yr old male"

This message was posted today on the TrialX Twitter feed. TrialX, which started as an online matchup between clinical trial organizers and participants, has expedited this courtship even further by opening up its forum to Twitter, and it's making a nice profit while doing it (the company won an award at this year's New York City Entrepreneur Week business plan competition). Clinical trial investigators pay $99 per month to post experiments they're running. Patients, on the other hand, can search the feed or post their own medical details and wait for a response from the program.

In an article published yesterday in the journal Telemedicine and e-Health (available here), Mark Terry catalogues the ways that TrialX and others in the medical field are using Twitter.

Clinical trials are only one of the areas getting a boost from Twitter, according to Terry. Many doctors have begun using it in their private practices and are offering advice to Twitter novices. There are a few medical Twitter pioneers out there worth noting. Phil Baumann and Michael Lara have both blogged about how the community should be using Twitter to do things like update physicians on conference highlights and keep up to date with fluctuations in the blood glucose levels of diabetic patients.

Of course, the most beneficial aspect of Twitter is that it communicates in real time. This will lend itself very well to enhancing disaster alerts. The CDC has jumped on the bandwagon with three different feeds: one dedicated to emergency notifications, one specifically for information about the flu, and one that more generally redirects traffic to the CDC site.

Gadgets to fix problems I didn't know I had

I get a lot of press releases touting new gadgets. Most are minor improvements on existing technologies—Sonos has a new touch-screen controller, Griffin has new iPhone cases. After all, how many truly different household or handheld gadgets can there be?

Turns out there are at least two more than I thought. In the past week I’ve heard from two companies offering gadgets that solve horrible household problems I confess I’d never before worried about—flooding from toilet overflows and death from killer icicles.

A company called AquaOne Technologies would like to stop my toilet from overflowing, wasting water and ruining my floors in the process, with a gadget called the H2Orb, a $130 gizmo that installs between the water line and the toilet. The device uses a low-power microcontroller from Texas Instruments and two wireless sensors, one in the tank that detects a slow leak, the other in the bowl that detects imminent overflow.
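Out of curiosity, here is roughly what that control logic presumably boils down to, sketched in Python with entirely hypothetical sensor and valve hooks (AquaOne hasn't published its firmware; this is just the two-sensor idea from the press release, simulated):

```python
import random
import time

# Hypothetical stand-ins for the real sensor reads and valve control;
# the actual H2Orb firmware isn't public, so these just simulate inputs.
def tank_leak_detected() -> bool:
    return random.random() < 0.01      # a slow leak shows up now and then

def bowl_level_high() -> bool:
    return random.random() < 0.002     # imminent overflow is rarer

def close_supply_valve() -> None:
    print("closing the water supply to the toilet")

def alert(message: str) -> None:
    print("ALERT:", message)

def control_loop(poll_seconds: float = 1.0) -> None:
    """Poll both sensors; cut the water before the floor gets wet."""
    while True:
        if bowl_level_high():
            close_supply_valve()       # overflow is the emergency case
            alert("bowl near overflow, supply closed")
            break
        if tank_leak_detected():
            alert("slow leak in the tank, wasting water")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    control_loop(poll_seconds=0.01)    # fast polling just for the simulation
```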

And Gutterglove would like me to install its heated gutter guard, the Gutterglove IceBreaker, which melts ice and prevents icicles from forming; the company points out that falling icicles can kill people. The IceBreaker uses a single self-regulating cable to generate heat. Pricing is available from local dealers.

How have I lived so long without these?

Photos: top left: H2Orb; bottom right: Gutterglove IceBreaker

Will Cooler Heads Prevail on the Nanotoxicology Issue?

Earlier this month, in a blog entry in which I bemoaned the rather polemical tone of the debate over nanotechnology and its environmental, health, and safety (EHS) issues, I referenced an article whose author was getting a little fed up with all the talk about the threat nanotechnology poses to our health when there was not one example of anyone being harmed by nanotech.

Unfortunately, no sooner did he speak than the first example of that harm was chronicled in a new study. The European study links seven cases of lung disease in China to working with nanoparticles; two of the cases resulted in death.

Andrew Maynard on his 20/20 Science blog has done a thorough job of analyzing this report and what its ramifications may be for nanotech, so I haven’t much to add.

But I would like to highlight a point Maynard makes that may be missed by some alarmists and shouldn’t be:

At the end of the day, the study says little about the potential hazards of nanoparticles in general, and next to nothing about the possible dangers of nanotechnology.  If the sad deaths of the two workers and the lung disease of their five colleagues were used to press home a preordained nanotechnology agenda, it would amount to little more than a cynical misuse of the data—not a move that is likely to encourage evidence-based decisions on either workplace safety or safe nanotechnology.

As well as his balancing thoughts:

Yet to dismiss the study as flawed and irrelevant would be equally foolish.  The reality is that two workers died and nanoparticles were implicated, at a time when increasing numbers of nanoparticle-containing products are entering the market.  As the details of the study become known, people are going to want to know what the findings mean for them—whether there are risks associated with emerging nanotechnologies, and what government and industry are doing about it.  If nanotech-promoters downplay or even discredit the work, the move is more likely to engender suspicion than allay fears in many quarters.  And once again, evidence-based decision-making will be in danger of being sacrificed in favor of maintaining a set agenda.

I am not so sure that either side of the EHS/nanotechnology debate will be as balanced in its approach to this issue as Andrew Maynard, but I am hoping they will be.

