Tech Talk

Fuel 2.0: E-Fuel's MicroFueler Will Put An Ethanol Refinery in Your Driveway

I didn’t expect to see an ethanol production plant pitched as a consumer technology at DemoFall 2010. But after more than a day of social networking, online shopping, and cloud technologies I was happy to see a product that does something useful, is good for the environment, and doesn’t want to be my friend. Even if it was big, green, and kind of ugly.

The MicroFueler, from E-Fuel Corp., is definitely not for everyone. But if you’re really committed to energy independence, or, like some of my neighbors, run a little home wine-making operation that produces a lot of high-sugar organic waste, it may be for you. E-Fuel already sells a system that ferments organic waste and converts it into ethanol; start with waste that has a heavy sugar content, as a home wine-making operation does, and the process is particularly efficient. The system includes a $10,000 home auto-fueling pump that extracts gas-tank-ready ethanol from the fermentation tank, and a $6,000 generator that can power your home from this fuel supply. At DemoFall 2010, held in Santa Clara, Calif., the company added a new component to the system, what it calls a column reactor, which it says will cut the fermentation process from days to minutes. There’s no word yet on the price of this final piece, says CEO Thomas Quinn, who explains how it all fits together in the video above.

Mag-Stripe Cards Get Smart

Magnetic stripe cards are by no means perfect—they’re basically as dumb as digital technology gets. But they are supported by a vast infrastructure of magnetic stripe readers, and any discussion about making the cards better typically grinds to a halt when this infrastructure is considered.

Enter Dynamics Inc., a little company from Pittsburgh. Attending the DemoFall 2010 conference in Santa Clara, Calif., its CEO Jeff Mullen admitted it was his first visit ever to Silicon Valley.

Mullen may be a stranger to the valley, but he’s no stranger to creative ideas implemented in silicon. Instead of trying to replace the magnetic stripe on the card, Dynamics added a layer of electronics under it. Those electronics can reprogram that stripe on the fly. That means cards could have switchable numbers—use one for debit, one for credit, perhaps. They could also require that users tap in a security code before they’ll work, meaning a stolen card is essentially a dead card. Mullen’s plan for what he calls Card 2.0 is to sell the technology to the credit card issuers—and, he says, they are definitely interested. He tells me all about it in the video above.
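Dynamics hasn't published how its electronics work, but the behavior Mullen describes (a stripe that stays inert until the cardholder taps in a code, and that can switch between account numbers) maps onto a simple state machine. Here is a minimal Python sketch of that idea; the class, the unlock code, and the card numbers are all hypothetical, not Dynamics' design.

class ProgrammableCard:
    """Toy model of a reprogrammable mag-stripe card (hypothetical, not Dynamics' firmware)."""

    def __init__(self, unlock_code, accounts):
        self._unlock_code = unlock_code   # code the cardholder taps on the card itself
        self._accounts = accounts         # e.g. {"debit": "4000...", "credit": "5100..."}
        self._unlocked = False
        self._active = None

    def unlock(self, code):
        # A stolen card stays useless: without the code, the stripe is never written.
        self._unlocked = (code == self._unlock_code)
        return self._unlocked

    def select(self, account_name):
        if not self._unlocked:
            raise PermissionError("card locked: stripe not programmed")
        self._active = account_name

    def stripe_data(self):
        # What the electronics would write onto the magnetic stripe for the next swipe.
        if not self._unlocked or self._active is None:
            return None
        return self._accounts[self._active]


card = ProgrammableCard("4321", {"debit": "4000-0000-0000-0002",
                                 "credit": "5100-0000-0000-0008"})
card.unlock("4321")
card.select("credit")
print(card.stripe_data())   # the number an ordinary stripe reader would see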

(And if you think he’s exuberant here, you should have seen him after the DemoFall attendees voted Dynamics winner of the People’s Choice award.)

Fastest Helicopter Flies Even Faster

In a recent article on this site, a current Sikorsky Aircraft engineer, Thomas Lawrence, and a retired Sikorsky hotshot, David Jenney, detailed how they designed the legendary aircraft company's high-speed helicopter.

Sikorsky's flight test engineers were pushing their demonstrator vehicle, the X2, to progressively higher speeds, topping out in August at 435 km/h. The official speed record, maintained by the Fédération Aéronautique Internationale (FAI), is a mere 400 km/h and was set in 1986. Sikorsky was well past that when the September issue of IEEE Spectrum went to press, though the company has yet to jump the hurdles needed to have its new speed record recognized by the FAI.

Sikorsky had good reason to dawdle: On September 15, another test flight took the X2 to 463 km/h (287 mph). According to Lawrence, the speed was limited only by the amount of power available, rather than the helicopter's overall load or vibrations -- issues that have plagued previous attempts at building aircraft capable of both flying fast and performing vertical take-off and landing. To prove the point, the test pilot took the X2 into a shallow dive and reached 481 km/h.
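For readers who think in miles per hour or knots, the figures convert directly. A quick sketch using the standard conversion factors (the speeds themselves are the ones reported above):

def kmh_to_mph(v):
    return v * 0.621371

def kmh_to_knots(v):
    return v / 1.852

for v in (400, 435, 463, 481):
    print(f"{v} km/h = {kmh_to_mph(v):.1f} mph = {kmh_to_knots(v):.1f} knots")
# 463 km/h is about 287.7 mph, or almost exactly 250 knots.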

The testing isn't finished yet, Lawrence says, so the X2 may well be capable of even faster flight.

Being Mark Zuckerberg

It was late December 2005 in Palo Alto, California, and Silicon Valley’s newest wunderkind wasn’t feeling the holiday glee.  Upstairs in a busy loft decorated with Christmas lights and Family Guy posters, Facebook’s 21-year-old founder Mark Zuckerberg ambled from an elevator.  I had come to talk with him about his life and site, which was just then gaining steam.  He was followed by his business partner and former Harvard roommate, Dustin Moskovitz, on a Razor scooter.

“You want to ask us some questions about sexual harassment?” Zuckerberg asked me, half-jokingly, as he flopped into a chair and kicked off his worn Adidas sandals.  Moskovitz brandished a fistful of papers with an impish grin.  The two had just left a lengthy seminar on the subject, a requirement for doing business here and a small part of being the Internet’s next big thing. 

Today, nearly five years later, Facebook has become the biggest thing of all. In the coming weeks, a fictionalized version of Zuckerberg’s startup days – a film called The Social Network – will be coming to theaters. It got me thinking about my first meeting with him, and how many of the goals he mapped out then ultimately came to life. At heart, he struck me as a determined engineer – more interested in building something genuinely cool than selling out (as impossible as that was to believe at the time).

Though the site had just been dubbed by Fortune as “the most buzzed-about company in Silicon Valley this side of Google,” Zuckerberg repeatedly told me that he had no interest in cashing in. “We’re having too much fun,” he said. He was particularly proud of how Facebook was systematically changing the way college students communicated with each other. “We’ve effectively removed the need to get someone’s phone number,” Zuckerberg said proudly.

On his desk there were two boxes of business cards. One read, “CEO,” and the other, “I’m CEO…Bitch.” As he flipped through the dozens of new emails on his laptop, Zuckerberg told me he was phasing out the latter after inadvertently handing one to a reporter and seeing it wind up in print. “I’m a little more careful now about who I hand them to,” he said.

While Silicon Valley had seen its share of whiz kids, Zuckerberg was so young he seemed larval. But even then, with his ruddy just-played-in-the-snow cheeks, he was noticeably adjusting to his new multimillion-dollar role. When Moskovitz made a crack about Facebooking attractive Harvard girls, Zuckerberg awkwardly smacked him in the arm. “Dude, we just got out of a sexual harassment seminar,” he snapped.

Later that night, we went back to his apartment for a break. He lived just a few blocks away in a modest rental that suited a monkish coder his age. In the living room was just a mattress on the floor. The shower had no curtain. The only indication that a budding dot-com mogul lurked inside was the sweet Infiniti FX35, a gift from Facebook’s first investor, PayPal co-founder Peter Thiel, parked in the alley nearby.

Zuckerberg retreated to the tiny kitchen to boil up a pot of water for green tea.  Despite the fortune he knew he was facing, he was discovering himself to be a guy of simple needs.  His only splurge was an electric guitar, leaning up against the wall.  He had started meditating every day and, after getting off work at two or three a.m., unwound by driving aimlessly through the streets with his radio on. “I just want to build cool stuff,” he said, sipping his tea, “and not let anything get in the way.”

Back in the News: the Engineer as Terrorist

Two years ago, researchers Diego Gambetta and Steffen Hertog concluded that engineers were far more likely to be members of violent Islamist groups than average citizens of the same countries. (IEEE Spectrum reported on the initial research in 2008.) Since then, says an article in today’s New York Times Magazine, more data from terrorist attempts of all sorts supports the engineer/terrorist connection.

Gambetta and Hertog found the link only in right-wing groups, suggesting that something about the engineering mindset, and not just the usefulness of engineering skills to terrorist groups, is driving it. Others say the data set is simply too small to be meaningful in any way.

What do you think?  

Another TV Technology Bites the Dust

Liquid crystal displays vanquished the CRT and pummeled plasma. We love them because they are thin and relatively inexpensive in large sizes.

But nobody thinks they’re perfect. They don’t have a great contrast ratio—CRTs display much darker blacks and brighter whites. Fast motion creates a blur unless they’re stuffed with extra processing power to play tricks with the images. After five years, the fluorescent backlights dim and the colors change; even LED-lit LCDs fade after about ten years. They certainly won’t keep a great picture as long as my 20-some-year-old CRT, which is still going strong.

So a lot of us hoped they’d just be a temporary solution, and some new technology in the works would bring back the great colors and lifetime of the CRT in a big, flat format. Back in 2004, Canon and Toshiba announced that they had the answer—SED, or surface-conduction electron-emitter display. SEDs, like CRTs, create an image by hitting phosphors with electrons, but instead of a big electron gun that scans rows of phosphors, SEDs use an array of tiny electron emitters, each of which produces a single pixel. This truly looked like the technology that would capture the best of both worlds—the fast response time, great color, and high contrast of CRTs, and the lower power, bigger sizes, and thin screens of LCDs. The two companies set up a joint venture, SED Inc., to commercialize the technology. Toshiba predicted that, by 2010, at least a third of TVs over 40 inches sold to consumers would be SED models.

Manufacturing was supposed to be easy; SEDs can be printed with an inkjet-type of device. In 2006, however, Spectrum reported that the companies were having trouble with yields. In 2007, Canon announced that it was still working to bring down production costs, and commercialization would be delayed.

“It all came down to production costs,” Paul O’Donovan, a principal analyst with Gartner, told me. Back in 2007, he said, SEDs were rumored to cost five times as much as LCDs to produce. “They had no chance of being able to reduce those costs fast enough to compete with LCDs,” O’Donovan said.

So it was perhaps no surprise that earlier this year Canon announced that it was no longer going to try to make a consumer SED TV. At the time, the company said it was continuing to work on the technology for the commercial market. But last month it pulled the plug on little SED Inc. It’s over.

Says O’Donovan, “It’s a shame really, as SED was by far the best quality of all the flat panel displays. It just goes to show that the best quality is not always what the market wants if there is a cheaper ‘acceptable’ alternative solution.”

Photo: SEDtechnology.net 

Surveillance Software Catches "Demo Syndrome"

PHOTO CREDIT: Hustvedt, Wikimedia Commons

On Monday morning, Sadiye Guler, founder and president of the video analytics company intuVision, was telling a story to a small gathering of software engineers in a conference room at Boston University in Massachusetts. She was saying that her company had recently gotten its person-identifying software to work quite well. The software could estimate a videotaped person’s age, gender, and ethnicity with pretty good reliability. Then just last month, intuVision presented its products at the U.S. Joint Forces Command’s annual Empire Challenge—a kind of showcase for new surveillance and reconnaissance technologies. This year, the challenge was held in the desert state of Arizona.

“Everyone was wearing hats with big rims and sunglasses, and the lighting contrast was intense—everyone’s faces were showing up half light, half dark,” Guler said. Needless to say, intuVision’s engineers cut short the people-identifier portion of their software demonstration.

“Sounds like demo syndrome,” one of the engineers commented.

“Demo syndrome” is a term I’ve been hearing a lot here at the 7th IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS). Another example: On Tuesday morning, Mahesh Saptharishi, chief scientist for VideoIQ, Inc.—which claims to be “the inventor of the world’s first and only intelligent surveillance cameras and encoders with built-in video recording”—was showing me some of the things a client of VideoIQ could do with its video search software.

“Suppose you want to see all the clips that show a white car like this one,” Saptharishi said, pointing to a white station wagon entering a parking lot on a computer screen. The VideoIQ software had previously determined that the wagon was a moving vehicle, drawn a red tracking box around it, and archived the clip. Saptharishi clicked the “look-for-all-cars-like-this-one” button and waited. The search timed out. He tried again using a black SUV. The first result he called up showed a light blue minivan. The second showed a gray station wagon.

“That’s demo syndrome for you,” an observer said.
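Glitches aside, VideoIQ hasn't said exactly how its search is implemented, but query-by-example systems of this kind typically reduce each archived clip's tracked object to a compact feature vector (color, size, shape) and rank clips by distance to the query object. A minimal sketch of that idea, with made-up feature values and clip names (not VideoIQ's code):

import numpy as np

# Each archived clip is summarized by a feature vector for its tracked object; here
# the pretend features are (mean R, mean G, mean B, apparent width, apparent height).
archive = {
    "clip_017": np.array([0.92, 0.91, 0.90, 1.8, 1.5]),   # white station wagon
    "clip_042": np.array([0.10, 0.10, 0.12, 2.1, 1.9]),   # black SUV
    "clip_105": np.array([0.55, 0.60, 0.70, 1.9, 1.6]),   # light blue minivan
}

def find_similar(query, k=2):
    """Rank archived clips by Euclidean distance to the query object's features."""
    scored = sorted(archive.items(), key=lambda kv: np.linalg.norm(kv[1] - query))
    return [name for name, _ in scored[:k]]

white_car = np.array([0.90, 0.90, 0.89, 1.8, 1.5])
print(find_similar(white_car))   # a good result list starts with clip_017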

As a phenomenon, “demo syndrome” encompasses much more than the unexpected glitches that happen exactly when you’re trying to show off your technology to a journalist or a couple hundred colleagues. It represents the gap between what store owners, police departments, and Air Force intelligence analysts expect surveillance technology to do and what today’s computer vision programs actually offer. All too often, engineers have just a brief window of time to convince clients that automatic surveillance technologies are actually making their jobs easier. It doesn’t take much for a video analyst to throw up his hands and go back to the way he’s always analyzed video footage: with his own eyes.

“[Air Force intelligence analysts] will use systems put in front of them now, then turn them off because it just makes their job harder,” explained John Rush, chief of the Information Integration Data Engineering Division of the U.S. National Geospatial-Intelligence Agency, during a presentation Tuesday afternoon. “Getting them to accept the results [of automatic video search software] without going back and checking all the data—that’s a long time coming.”

Rush is leading the charge to get the U.S. Air Force and its funders to change the way they think about processing data. As I addressed in Wednesday’s post, the Air Force is upgrading its camera and sensor systems to the point where there’s just too much data coming in for analysts to sort through. For example, Rush mentioned that DARPA’s next-generation ARGUS-IS—a drone-mounted video sensor and processor—will be able to survey a 40-square-kilometer area at 1.8-gigapixel resolution. “We’re talking about being able to capture one and a half to two million vehicles in that area during one mission,” Rush said. “You’d need 16,000 analysts based on the projected data coming out of these systems.”
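To put those numbers in perspective, spread 1.8 gigapixels evenly over 40 square kilometers (a simplifying assumption of mine; the real sensor's coverage isn't uniform) and each square meter of ground gets about 45 pixels, a ground sample distance of roughly 15 centimeters. The arithmetic:

pixels = 1.8e9            # one ARGUS-IS frame, per Rush's figures
area_m2 = 40 * 1e6        # 40 square kilometers, in square meters

pixels_per_m2 = pixels / area_m2                 # ~45 pixels per square meter
ground_sample_m = (1 / pixels_per_m2) ** 0.5     # ~0.15 m per pixel, assuming uniform coverage

car_m2 = 2.0 * 4.5        # rough footprint of a passenger car (my assumption)
print(f"{pixels_per_m2:.0f} px/m^2, ~{ground_sample_m * 100:.0f} cm ground sample, "
      f"~{car_m2 * pixels_per_m2:.0f} px per car footprint")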

You’d be hard-pressed to find any user of surveillance hardware who thinks automatic detection and tracking software is a bad idea. Barbara Shaw, a project leader for the National Institute of Standards and Technology who participated in AVSS’s industry panel on Tuesday, recalled describing new advances in surveillance software to the vice president of security for a Las Vegas casino, which, to Shaw’s surprise, relied solely on human eyeballs to monitor its cameras. Shaw recalled the VP responding: “This is exactly what we need! When can we get the technology?”

The problem, as with most scientific pursuits, is that computer vision technology advances slowly. And it seems, at the moment anyway, that the software is rarely as good as users expect it to be. Perhaps that’s because what we have to compare it to is one of the best vision systems on Earth: our own.

“A two-year-old knows that’s a trash can over there,” said Guler of intuVision, pointing to a gray plastic bin against a gray wall. “People think it’s an easy thing to do, identifying a trash can. ‘You mean your program can’t even see that?’ they ask me. The truth is, a computer isn’t going to perform like a human next month or next year.”

Still, at this conference alone, I’ve seen lots of impressive algorithms—algorithms that can re-identify a person in infrared or detect a pedestrian crossing a freeway from two kilometers above the ground. The question is: can they do these things when it counts?

Just $99 for the New Apple TV

“I was right,” I just emailed my husband, while watching the live blogging of today’s Apple press event. “The new Apple TV does Netflix.” We’d had a debate about this a few days ago; he said Steve Jobs would want to do his own movie streaming, while I thought any TV peripheral these days was going to have to have Netflix support.

Apple TV also does $4.99 rentals of newer movies not available on Netflix streaming, and 99-cent TV show rentals. We’re not huge movie consumers in our family, but we do sometimes call for family movie nights on rather short notice and find we have nothing decent to watch. So these prices—and the fact that the Apple TV peripheral will retail for only $99—almost had me pre-ordering the gizmo.

Then I realized that it’s HDMI out. Only HDMI. This means a great digital signal for newer TVs. Trouble is, we’ve been hanging on to our much-loved older TV, which was manufactured before HDMI was a twinkle in anybody’s eye.

Surely, I thought, I could buy an inexpensive converter and solve this problem. Uh, no. Converter: $299. A bit much to spend for a kludge. Instead, this $99 gizmo could finally force me to buy a new TV. So that’s more like $599, plus the $99 for Apple TV, which brings us up to well over $700 including the sales tax. Re-upping the Netflix subscription is going to cost at least $108 a year. Then there are those movie rentals: four a month, say, comes to $240 a year. And all those episodes of Glee that air past my son’s bedtime would add another $20 or so a season.... Apple TV is suddenly looking very expensive, at least for my family. Darn it, Steve, enough with these great new products already!
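For the curious, here is the back-of-the-envelope tally. The sales tax rate, rental volume, and Netflix price are my assumptions, not published figures beyond the prices quoted above.

new_tv = 599.00
apple_tv = 99.00
sales_tax = 0.0925 * (new_tv + apple_tv)        # assuming ~9.25% California sales tax

first_year_hardware = new_tv + apple_tv + sales_tax   # a bit over $760

netflix = 9.0 * 12               # at least $108 a year to re-up the subscription
movie_rentals = 4 * 4.99 * 12    # four $4.99 rentals a month, roughly $240 a year
tv_shows = 20                    # a season of 99-cent episodes, about $20

first_year_total = first_year_hardware + netflix + movie_rentals + tv_shows
print(f"hardware: ${first_year_hardware:.2f}, first-year total: ${first_year_total:.2f}")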

Hacking Sarah Palin

It’s not easy being a politician’s son. Just ask David Kernell.  

The 22-year-old grew up as the golden-haired son of a golden boy of Tennessee politics: longtime Democratic legislator Mike Kernell, chairman of Tennessee's House Government Operations Committee.

In the fall of 2008, his father was embroiled in a fight for reelection against a formidable opponent: Tim Cook, a Homeland Security special agent and member of the FBI Gang Task Force. David watched the battle from the sidelines while he attended the University of Tennessee, an undeclared major living off campus with friends. In the wee hours of Sunday, September 21, however, David was having a party with friends when a team of FBI agents showed up. The party-goers assumed there had been a noise violation as the agents spent the next couple of hours photographing the people and the place. But one kid wasn't there to see the action unfold: David Kernell had slipped out the back door.

On March 10, 2009, Kernell had nowhere to run as he stood in a U.S. district court in Knoxville. That day, the Feds filed felony charges against him for the most notorious cybercrime in years: hacking into Sarah Palin's email at the height of the presidential race. Kernell was allegedly the hacker known as Rubico, who posted a long explanation of how he busted into Palin's Yahoo mail account.

"I really wanted to get something incriminating which I was sure there would be," he blogged.  The craziest thing of all was how easy it was for him to break in.  “It took seriously 45 mins on wikipedia and google to find the info,” he wrote.  But after successfully changing her password to Popcorn, the reality hit him hard. "It finally set in,” he continued, “THIS internet was serious business, yes I was behind a proxy, only one, if this shit ever got to the FBI I was fucked, I panicked, i still wanted the stuff out there but I didn't know how to rapid [share] all that stuff, so I posted the pass…and then promptly deleted everything, and unplugged my internet and just sat there in a comatose state."

In April, Kernell was convicted of obstructing justice and unauthorized access to a computer. He faces sentencing in October. Though the case has not been found to be politically motivated, politicians and pundits have seized on the story as just that. "The fact that a Democratic activist - and possibly an Obama supporter - could go to these lengths is deeply disturbing and criminal," said Republican National Committee spokesman Danny Diaz.

Earlier this year, hackers broke into the Facebook page of UK Conservative Bill Wiggin, Shadow Minister for Agriculture and Fisheries, and used it to invite his “friends” to view free webcam sex shows. Not long before that, a candidate in Edinburgh named Cammy Day claimed hackers had hijacked his email address to post hateful comments about his opponent online. "In the last two months I have had to change my password twice to my Hotmail account, so someone is obviously trying to hack into my account," he lamented. "I would suggest it is someone in the other parties who is trying to cause mischief."

Three Software Tricks for Sorting Through the Barrage of UAV Footage

PHOTO CREDIT: Lt. Col. Leslie Pratt/USAF. An MQ-9 Reaper drone flies a combat mission over southern Afghanistan.

In 2009 alone, the U.S. Air Force shot 24 years’ worth of video over Iraq and Afghanistan using spy drones (UAVs). With so many planes in the air, and more and more cameras being attached to each plane, the Air Force is generating more footage than analysts can sift through, the New York Times reported back in January. “We’re going to find ourselves in the not too distant future swimming in sensors and drowning in data,” Lt. Gen. David A. Deptula, the U.S. Air Force’s top intelligence official, told National Defense Magazine the same month.

Of course, the best way to reach the surface of such a large data pool is to get a computer to show you which way to swim. As one might guess, programming a system to automatically search video and pick out noteworthy information is not an easy problem. And so far, no one has developed software that can keep up with the Air Force’s high-tech hardware. But Mubarak Shah, who founded and now directs the Computer Vision Lab at the University of Central Florida, recently presented a few ideas about how to make improvements.

Shah, who spoke Monday at the 7th IEEE International Conference on Advanced Video and Signal-Based Surveillance in Boston, Massachusetts, is a graying, deep-eyed fellow. According to the computer engineer perched next to me, Shah is “the guy you’ve heard about if you know anything about anyone in video surveillance.”

Shah focused on three problems for surveillance software that are notoriously difficult, particularly when you’re trying to analyze video shot thousands of meters above the ground from a drone flying hundreds of kilometers per hour.

Follow the Dots

The first problem he addressed was how to track big swarms of objects, such as cars, traveling over a wide area, such as an expressway. The difficulty of this task lies in the fact that, when shot from above, cars traveling on an expressway are exceedingly small (no more than 30 pixels), and there are thousands of them. Plus, the plane that’s shooting footage is moving faster than the cars it’s capturing, so you’ve only got a few frames to work with for each car.

Shah’s solution to this problem depends on keeping track of all the possible paths a vehicle may have taken, then weeding out the poor choices based on common sense (Two vehicles probably didn’t cross paths at the same time…unless, of course, they crashed) and a bit of modern transportation theory (If one car is behind another car, it’s probably accelerating at a similar rate). It looked like he got pretty good results: about 80-90 percent accuracy.
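Shah's published work gives the full formulation, but the gist (enumerate every candidate track, then let physical common sense prune the implausible ones) can be sketched in a few lines. The toy example below is my own illustration, not Shah's algorithm, and uses only a simple maximum-speed constraint:

from itertools import product

# Detections per frame: (x, y) positions of car-sized blobs (toy data).
frames = [
    [(0, 0), (0, 5)],
    [(1, 0), (1, 5)],
    [(2, 1), (2, 6)],
]

def plausible(track, max_step=2.0):
    """Reject tracks that jump farther per frame than a car plausibly moves."""
    return all(abs(b[0] - a[0]) + abs(b[1] - a[1]) <= max_step
               for a, b in zip(track, track[1:]))

# Every way of picking one detection per frame is a candidate track...
candidates = list(product(*frames))
# ...and common-sense constraints weed out the implausible ones.
tracks = [t for t in candidates if plausible(t)]
print(len(candidates), "candidates ->", len(tracks), "plausible tracks")

The real system layers on more constraints of the kind Shah described, such as ruling out tracks that cross at the same instant or that imply wildly different accelerations for cars following one another.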

Wait... Is That a Pedestrian or a Palm Tree?

If you think antlike cars are hard for a computer to follow, you can imagine the difficulty in trying to program it to find a person. Some of the best people-detection systems rely on histograms—statistical distributions that determine whether an object is a person based on probabilities. But because people are so very tiny in aerial images, the histogram method mistakes quite a lot of things (trees, mailboxes, stoplights) for people. “There can be thousands of those that are completely wrong,” Shah says. He proposes using some basic 8th-grade geometry tricks to find a person based on the relationship between the height of an object and the length of its shadow. He admits that this strategy wouldn’t work so well on video shot on cloudy days or using infrared light.
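The geometry is the part a computer can do reliably: given the sun's elevation angle (known from the time, date, and location of the flight), an object's height follows from the length of its shadow, and candidates whose implied height isn't roughly human can be thrown out. A hedged sketch of such a filter; the thresholds and the example angle are my own guesses, not Shah's numbers:

import math

def implied_height(shadow_len_m, sun_elevation_deg):
    """Height of an object from its shadow length and the sun's elevation angle."""
    return shadow_len_m * math.tan(math.radians(sun_elevation_deg))

def looks_like_a_person(shadow_len_m, sun_elevation_deg, min_h=1.3, max_h=2.1):
    h = implied_height(shadow_len_m, sun_elevation_deg)
    return min_h <= h <= max_h

# With the sun 40 degrees above the horizon, a 2.1 m shadow implies a ~1.76 m object:
print(looks_like_a_person(2.1, 40))   # True
print(looks_like_a_person(6.0, 40))   # False: ~5 m tall, more likely a palm tree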

Mapping by Motion

The third—and in my opinion, coolest—tool Shah presented was a method for determining movement patterns. Say, for example, you have some aerial footage of an Afghan city and you want to automatically know how it’s laid out—where the roads are, the bridges, the intersections, where people regularly travel, the areas they avoid, where they gather. What a computer sees, however, is “very noisy optical flow,” Shah says—lots of motion but not much order. Using a mathematical noise-reducing tool known as a "mixture of Gaussians," Shah can find order in the static-like mess of optical data and get his software to draw a picture of the city in motion. “Using this, we can basically discover the road networks,” he says.
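In rough outline, that means pooling the flow vectors observed over time and fitting a mixture of Gaussians to them, so that strong, repeated motions (traffic along a road, pedestrians on a path) separate out from the noise. The sketch below runs scikit-learn's GaussianMixture on synthetic flow vectors; it illustrates the idea only and is not Shah's formulation:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic optical-flow vectors (dx, dy): eastbound road traffic, northbound foot
# traffic, and a large helping of noise, i.e., the "very noisy optical flow" Shah mentions.
road  = rng.normal([3.0, 0.0], 0.3, size=(300, 2))
path  = rng.normal([0.0, 2.0], 0.3, size=(150, 2))
noise = rng.normal([0.0, 0.0], 1.5, size=(400, 2))
flow = np.vstack([road, path, noise])

gmm = GaussianMixture(n_components=3, random_state=0).fit(flow)
for mean, weight in zip(gmm.means_, gmm.weights_):
    print(f"motion mode ~({mean[0]:+.1f}, {mean[1]:+.1f}) px/frame, weight {weight:.2f}")
# The strong, coherent modes correspond to recurring movement, which is what traces out the roads.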

Automating video search seems an ambitious project, but one that needs to be done. The newer Reaper drones now shoot video in 10 directions at once, and the Air Force plans to eventually upgrade that number to 65. That’s 65 video streams coming from one spy plane. The deluge of data isn’t stopping, and there just aren’t enough eyeballs to sort through it all.
