Tech Talk

Google Explains How It Forgets

Google can forget, but unlike the rest of us, it doesn't do so automatically.

Yesterday Google told a European government data protection working party how it handles requests for search result link removals. The removals began in June after a May European court ruling (see our coverage) upholding a Spanish man's right to be forgotten.

The working party had earlier sent Google a questionnaire on the practicalities of the removals and met with Google and two other unnamed U.S. search engines. Google's reply revealed that it is handling the requests on a case-by-case basis, with decisions resting on recently hired staff. Companies that help individuals request link removals have begun receiving rejections, The New York Times reported.


No Need for Reading Glasses With Vision-Correcting Display

These days, people reach for their smartphone after stumbling out of bed in the morning, but many see just a blurry mess instead of an alarm, messages, or pictures. A new vision-correcting display would pre-distort a digital screen so their imperfect vision renders it crystal clear—without glasses.

Fu-Chung Huang started working on vision-correcting displays in 2011 as a graduate student at the University of California, Berkeley (he's now at Microsoft). “Photoshop can deblur a photo,” he says, “so why can’t I correct the visual blur on a display?”

Earlier attempts to make a vision-corrected view led to quality issues: an image-processing algorithm on a normal 2-D screen or two screens layered on top of each other led to low image contrast, and a light-field display projected multiple images from different perspectives with low resolution. Huang and his collaborators at UC Berkeley and the Massachusetts Institute of Technology realized they’d have to fine-tune both a specialized display and an algorithm to make the system work.

Huang constructed a simple prototype: an ordinary iPod whose screen was covered with a clear film sandwiching a thin grid of pinholes. From any given point in space, only some of the pixels are visible through the pinholes, which lets the algorithm control which pixels reach different parts of the eye by adjusting their positions on the screen. This can compensate for a viewer's incorrect focus—for instance, by presenting pixels as if the screen were half the distance away for a nearsighted viewer, or by varying their apparent distance if the viewer's field of vision is irregular. Similar technology is used to create a 3-D effect on displays like the Nintendo 3DS, where each eye sees a slightly different view.
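The underlying geometry is just similar triangles: a ray from the eye through a pinhole continues to a predictable spot on the screen floating a small gap behind the mask. The 1-D sketch below is purely illustrative—the distances, pitch, and function are my own, not the researchers' actual algorithm or calibration:

```python
def visible_pixel(pinhole_x, eye_x, eye_dist, gap, pixel_pitch):
    """Index of the screen pixel seen through a pinhole at pinhole_x (mm)
    by an eye at lateral position eye_x (mm), eye_dist (mm) from the mask,
    with the mask floating `gap` mm above a screen of `pixel_pitch` mm pixels."""
    # Similar triangles: the ray's lateral offset behind the pinhole is
    # proportional to gap / eye_dist.
    screen_x = pinhole_x + (pinhole_x - eye_x) * gap / eye_dist
    return round(screen_x / pixel_pitch)
```

Given the eye's position, the algorithm can invert this mapping to decide which pixel to light so that the desired ray enters a chosen part of the pupil.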

Future incarnations of the display might use tiny lenses or a more sophisticated barrier to make the image brighter and sharper, but for now the researchers chose to keep it simple with an array that could easily be added as a screen cover to existing devices.

“The overall cost is less than $10,” says Huang. “I can build the thing in a few minutes.” (And he posted instructions here.)

In order to test the algorithm’s compensation for different eyesight problems, the researchers turned a DSLR camera (with lens similar in shape to the human eye) on the display. Focusing the camera too far away simulated farsightedness, and the researchers could tell whether the display was working by examining the pictures. To test other, more complicated visual problems, the researchers ran simulations and found that their algorithm was able to make a clear picture even for irregular eye shapes that current glasses cannot correct.
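Defocus of this kind is commonly modeled as convolution of the sharp image with a blur kernel, which is exactly what the pre-distortion step must counteract. A minimal 1-D toy model (my own illustration, not the team's simulation code) shows a point of light smearing into its neighbors:

```python
def blur_1d(signal, kernel):
    """Model defocus as convolution with a normalized blur kernel
    (a box kernel stands in for the eye's pillbox point-spread function)."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]
```

A display correction must effectively apply the inverse of this operation before the eye's optics re-blur it.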

Although the group didn't run a human study, Huang tested the algorithm out with his own nearsighted vision. “It requires precise calibration between the eye and the display,” says Huang, “and it took some time to find the sweet spot for my eye.” But with eye-tracking technology, like that on the new Amazon Fire Phone, the next version could compensate for the viewer’s movement and adjust the picture to stay in focus.

Gordon Wetzstein, one of the project's MIT collaborators, focuses his research on compression algorithms for 2-D and 3-D displays—which he believes are key to unlocking creative new uses for the technology.

“I think this is what people need to spend more effort on,” says Wetzstein. “Finding new applications like vision correction, new user interfaces, heads-up displays for augmented virtual reality—these kinds of things are very hot. Finding the right killer app is something nobody’s really solved yet.”

Explanatory video from MIT below.

Gaza Power Station Wrecked

The Gaza Strip's only power plant was hit by Israeli shelling and caught fire earlier this week, according to news reports. "The power plant is finished," its director, Mohammed al-Sharif told The Guardian. The plant and its engineers were the subject of a profile in the December 2009 issue of IEEE Spectrum.

Residents were only getting about four hours of power per day even when the plant was functioning. According to The New York Times, the plant was the main source of electricity for the territory, as eight of the 10 power lines coming from Israel had been damaged prior to the power plant's destruction.

An Israeli military spokesman, Lt. Col. Peter Lerner, told the Times the plant “was not a target.”


How to Catch a Memory Copycat

In 2008, allegedly, a technician left SanDisk with a particularly good gift for his new employer—proprietary details about memory chips made by SanDisk and its partner on the project, Toshiba. Last Monday Toshiba revealed that it was suing the alleged recipient of that gift, SK Hynix, for US $1.1 billion and demanding that the company remove from the market any chips that use the trade secrets.

The chips in question are NAND flash memory chips, the nonvolatile memory of smartphones, tablets, USB drives, you name it. In this case the lawsuit is clear-cut: An employee allegedly downloaded and passed on files, and if the companies can prove it, the case is closed. But many times stolen trade secrets or patent infringements have to be found the old-fashioned way—by reverse engineering.


Japanese Broadcaster Uses LEDs for Underwater TV Transmission

Japan’s public broadcaster Nippon Hoso Kyokai (NHK) wants to broadcast live TV from under the water, but it’s been tripped up by the pesky cable that transfers the camera’s data to the surface. So engineers there are developing an underwater wireless transmission system that uses visible light from LEDs to carry the signal.


Google Searches About Politics Predict the Stock Market

The number of Google searches related to business and politics can help predict falls in the stock market, researchers at the University of Warwick, in England, say.

Scientists have recently begun investigating what people look for on Google and Wikipedia to help forecast the future. For instance, prior research has shown that the rate at which people look up information about the flu helps predict the spread of the disease.

In recent work, "we found evidence that data on Google searches for financially related words and views of financially related pages on Wikipedia could have provided early warning signs of stock market moves," says Suzy Moat, a data scientist at the University of Warwick. "However, the financial markets constitute a large, complex system, which influences and is influenced by many different aspects of modern society. We therefore wondered if searches for other topics might also provide insight into subsequent stock market moves."
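Studies of this kind typically rest on a lagged correlation: does this week's search volume co-vary with next week's market move? The sketch below uses made-up toy numbers and plain Pearson correlation—an illustration of the idea, not the Warwick team's data or methodology:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy weekly series (fabricated for illustration): lag the searches by one
# week so each search value is compared with the *following* week's return.
searches = [3, 5, 4, 8, 6, 9]
returns = [0.2, -0.1, 0.3, -0.4, 0.1, -0.5]
r = pearson(searches[:-1], returns[1:])
```

A strongly negative `r` on real data would be the kind of "early warning sign" the researchers describe, though correlation alone says nothing about a tradable signal.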


RadioShack to Sell Kits for IoT Connectivity

New York City startup littleBits, which makes snap-together electronic modules for budding tinkerers, is wading into the ever-deepening sea of hardware configured for the Internet of Things. Those who want to investigate this hardware firsthand should have no trouble making an impulse purchase, because the company’s $99 kit of modules for assembling Internet-connected gizmos will soon go on sale at RadioShack stores.

The heart of this kit is what littleBits calls the “cloud bit” module, which snaps together with the company’s other modules using special magnetic connectors. So, you can quickly add a cloud-bit module to something you've created with the company's other input and output modules—buttons, lights, motors, and so forth. The point of the cloud-bit module is to connect what you have assembled to the Internet in a way that allows you to control your creation using a littleBits cloud account.

How exactly does the cloud-bit module work? And how is it different from, say, the Electric Imp, a similar device that’s allowed tinkerers to connect hardware to the Internet since its introduction in 2012?

A little digging on the littleBits website reveals a few more pertinent details. The cloud-bit module is a diminutive Linux computer with a separate Wi-Fi adapter plugged into it. But it’s not a general-purpose Linux computer like the Raspberry Pi. Rather, it has just one mission: to connect to littleBits’s servers. Accomplishing that mission requires that you first connect the cloud-bit module to your local Wi-Fi network.

The digital generation is well enough acquainted with connecting to Wi-Fi networks that this should be no big deal, even for a child. The challenge here is that the cloud-bit module has no user interface—no touch screen or keyboard. Actually, it does have a very minimal interface: a setup button and a colored LED indicator light. But that’s enough to do the job. You merely press the setup button, and the cloud-bit module configures its on-board Wi-Fi adapter to become an access point, meaning that when you scan the airwaves with your computer or phone, you’ll see a new wireless network created by the cloud-bit module. You can now connect to the cloud-bit module and, using just a browser, give it the SSID and password it needs to connect to your usual Wi-Fi network as a client device.
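This bootstrap pattern—the headless device becomes a temporary access point, and a browser hands it credentials over HTTP—appears in many Wi-Fi gadgets. As a hypothetical sketch of the final step (littleBits hasn't published its firmware, so the form field names and wpa_supplicant-style config format here are assumptions of mine):

```python
from urllib.parse import parse_qs

def credentials_to_config(query: str) -> str:
    """Turn a browser's form submission (e.g. 'ssid=Home&psk=secret')
    into a wpa_supplicant-style network block the module could apply
    before rejoining your home network as a client."""
    fields = parse_qs(query)
    ssid = fields["ssid"][0]
    psk = fields["psk"][0]
    return 'network={\n  ssid="%s"\n  psk="%s"\n}\n' % (ssid, psk)
```

Once the module writes such a config and restarts its Wi-Fi interface as a client, the temporary access point disappears and setup is done.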

It’s a clever solution for a common problem—configuring a wireless device to connect to a Wi-Fi network when that device has no real user interface. The Electric Imp makes use of a different, and in my view more clumsy, strategy, requiring a special phone app to flash the screen of your phone while you hold it against the Imp to convey the needed setup information.

Each of the littleBits cloud-bit modules has its own unique code, which you no doubt have to provide when you sign on for a cloud account with the company. This allows the company’s servers to associate you with the hardware you have purchased, and you can start to issue it commands over the Internet.

Although this capability in itself would add an additional level of enjoyment to a littleBits project, more serious fun, I would think, could be had by taking advantage of the partnership that littleBits has forged with IFTTT, a Web service for connecting other Web services. Properly set up, you could have, say, your collection of littleBits modules play your favorite team’s fight song every time ESPN posts breaking news for your team.
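Webhook-style triggers like the fight-song example typically boil down to an HTTP POST against a per-user endpoint. Purely as an illustration: the URL scheme below follows IFTTT's Maker Webhooks convention, and the event name and key are placeholders—the actual littleBits integration runs through the company's cloud account rather than raw webhooks:

```python
import json
import urllib.request

def ifttt_trigger(event: str, key: str, value1: str = "") -> urllib.request.Request:
    """Build the HTTP POST that fires a webhook event.
    `key` stands in for the secret tied to your account."""
    url = f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"
    body = json.dumps({"value1": value1}).encode()
    return urllib.request.Request(url, data=body,
                                  headers={"Content-Type": "application/json"})

# To actually fire it (requires a real key and network access):
# urllib.request.urlopen(ifttt_trigger("team_news", "MY_KEY", "Touchdown!"))
```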

San Diego Comic-Con: Where Tech Goes Pop

The San Diego Comic-Con took place last weekend. The convention is the highlight of the pop culture calendar: about 130,000 attendees flock to southern California to check out science fiction and fantasy productions ranging from big upcoming studio blockbusters to handmade comic books from independent artists.

I was there to help promote IEEE Spectrum’s upcoming science fiction e-book anthology, Coming Soon Enough (it will be available in the first week of August, but you can get a sneak peek now at a story from award-winning author Nancy Kress), and to moderate a panel of Hollywood writers, producers, and science advisors on some of the issues involved in portraying science and technology in science fiction.

The panelists were Jessica Cail, a neuroscientist at Pepperdine University and a science consultant; Kevin Grazier*, a former JPL mission scientist and consultant to productions such as Battlestar Galactica, Defiance, and Gravity; Andrea Letamendi, a clinical psychologist and creator of Under The Mask, a website devoted to providing insights into superheroes, villains, and their fans; Jaime Paglia, the co-creator of Eureka and currently a producer and writer for upcoming superhero TV show The Flash; Nicole Perlman, co-writer of the movie Guardians of the Galaxy; Phil Plait, the science communicator behind the Bad Astronomy blog; and the writing and producing team of Ashley Miller and Zack Stentz, whose credits include Fringe, X-Men: First Class, and a new, as-yet-untitled, TV series that’s part of the Terminator franchise.

The panel, which drew a standing-room-only crowd of over 500, focused on how the representation of scientists and engineers has evolved in recent years. While the stereotype of the scientist as a white male awkward nerd, Einstein-esque saint, or supervillain is still around (and alienating to female and minority viewers possibly interested in pursuing STEM careers), newer characters such as Stargate’s Samantha Carter, Eureka’s Allison Blake, and even Fringe’s resident “mad scientist” Walter Bishop are complex, humanized figures. This evolution is due, in part, to the larger presence of science advisors in TV and movie productions, facilitated by programs such as the National Academy of Sciences’ Science and Entertainment Exchange, which connect scientists with writers looking for answers to technical questions.

Another reason, offered by the writers and producers on the panel, is that an increasing number of writers are becoming aware of the dramatic possibilities inherent in a character struggling with a scientific or engineering challenge. Screenplays that mine this drama well can offer something fresh to audiences weary of stereotyped characters. That translates to a competitive advantage during a period that has been dubbed “the golden age of television,” thanks to the advent of high-quality, highly serialized shows such as Mad Men, Game of Thrones, Breaking Bad, The Walking Dead, and Battlestar Galactica.

Other panels and the exhibition floor at the San Diego Comic-Con also provided a chance to see how new technologies are beginning to filter into popular entertainment. A number of media companies, including Fox Studios and Warner Brothers, offered attendees the chance to wear prototype Oculus Rift virtual reality headsets that immersed them in environments such as Professor X’s Cerebro device from the X-Men franchise or a dangerous storm from the yet-to-be-released movie Into The Storm. At the booth of special effects house Weta, which has already begun using 3-D printing to create movie props, 3-D printing company 3D Systems was promoting its technology, including its customized Star Trek figure service (which will allow a customer to put his or her own face on an action figure) that it announced at the Consumer Electronics Show earlier this year. Meanwhile, the impact of new production technologies in comics was an object of existential debate: if a creator starts using digital enhancements to create moving or animated elements, at what point does a comic book stop being a comic book and start being a jerkily animated cartoon?

But perhaps my favorite thing at Comic-Con was an example of how deeply a certain Serbian inventor has wormed his way into popular culture: a comic book from Red Giant Entertainment devoted to the fictionalized adventures of international superhero Nikola Tesla.

Follow Stephen Cass on Twitter: @stephencass

*Disclosure: Kevin Grazier and I are co-authors on an upcoming book called Hollyweird Science

Photos: Stephen Cass

Red Planet Seeks a Better Data Plan

In a move likely to both incite critics and excite supporters of the agency, NASA last week issued a request for proposals for a possible commercial communications network around Mars.

The request comes as the space agency mulls its options for future unmanned—and ultimately manned—Mars missions. NASA currently operates two orbiters around the red planet that also serve as relay stations for other Mars missions, most notably the celebrated Mars rover program. On 21 September 2014, the Mars Atmosphere and Volatile Evolution (MAVEN) orbiter will add one more node to the communications network, making three NASA-operated Mars relay satellites.


Can Computing Keep up With the Neuroscience Data Deluge?

Today's neuroscientists have some magnificent tools at their disposal. They can, for example, examine the entire brain of a live zebrafish larva and record the activation patterns of nearly all of its 100,000 neurons in a process that takes only 1.5 seconds. The only problem: One such imaging run yields about 1 terabyte of data, making analysis the real bottleneck as researchers seek to understand the brain.

To address this issue, scientists at Janelia Farm Research Campus have come up with a set of analytical tools designed for neuroscience and built on a distributed computing platform called Apache Spark. In their paper in Nature Methods, they demonstrate their system's capabilities by making sense of several enormous data sets. (The image above shows the whole-brain neural activity of a zebrafish larva when it was exposed to a moving visual stimulus; the different colors indicate which neurons activated in response to a movement to the left or right.)

The researchers argue that the Apache Spark platform offers an improvement over a more popular distributed computing model known as Hadoop MapReduce, which was originally based on Google's search engine technology. Here's how Spectrum described these conventional systems in an article on "DNA and the Data Deluge":

While Hadoop and MapReduce are simple by design, their ability to coordinate the activity of many computers makes them powerful. Essentially, they divide a large computational task into small pieces that are distributed to many computers across the network. Those computers perform their jobs (the “map” step), and then communicate with each other to aggregate the results (the “reduce” step). This process can be repeated many times over, and the repetition of computation and aggregation steps quickly produces results.
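A word count is the canonical toy example of the pattern described above: each worker maps its chunk of text to (word, 1) pairs, and the reduce step aggregates them by key. This single-process sketch mimics the two phases (in a real cluster, the chunks and the aggregation would be spread across machines):

```python
from collections import defaultdict

def map_step(chunk):
    # Each worker emits a (word, 1) pair for every word in its chunk.
    return [(word, 1) for word in chunk.split()]

def reduce_step(pairs):
    # The results from all workers are aggregated by key.
    totals = defaultdict(int)
    for word, n in pairs:
        totals[word] += n
    return dict(totals)

chunks = ["to be or", "not to be"]          # two workers' shares of the input
pairs = [p for c in chunks for p in map_step(c)]
counts = reduce_step(pairs)
```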

But the Janelia Farm researchers note that with MapReduce, data has to be loaded from disk for each operation. The Apache Spark advantage lies in its ability to cache data sets and intermediate results in the memory of many computers across the network, allowing for much faster iterative computations. This caching is particularly useful for neural data, which can be analyzed in many different ways, each offering a new view into the brain's structure and function.
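The performance difference is easy to see in miniature. In this toy sketch (illustrative only, not Spark code), the MapReduce style re-reads the data for every analysis pass, while the Spark style loads once and iterates on the in-memory copy:

```python
loads = 0

def load_dataset():
    # Stand-in for reading a large neural recording off disk.
    global loads
    loads += 1
    return list(range(5))

# MapReduce-style: every analysis pass re-reads the data from disk.
for _ in range(3):
    data = load_dataset()
    sum(data)
assert loads == 3

# Spark-style: cache the data in memory once, then iterate on it.
loads = 0
cached = load_dataset()
for _ in range(3):
    sum(cached)
assert loads == 1
```

With terabyte-scale imaging runs, eliminating those repeated disk reads is what makes trying many different analyses on the same data set practical.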

The researchers have made their library of analytic tools, which they call Thunder, available to the neuroscience community at large. With U.S. government money pouring into neuroscience research for the new BRAIN Initiative, which emphasizes recording from the brain in unprecedented detail, this computing advance comes just in the nick of time.


