Tech Talk

How to Catch a Memory Copycat

In 2008, a technician allegedly left SanDisk with a particularly good gift for his new employer: proprietary details about memory chips made by SanDisk and its partner on the project, Toshiba. Last Monday Toshiba revealed that it was suing the alleged recipient of that gift, SK Hynix, for US $1.1 billion and demanding that the company remove from the market any chips that use the trade secrets.

The chips in question are NAND flash memory chips, the nonvolatile memory of smartphones, tablets, USB drives, you name it. In this case the lawsuit is clear-cut: An employee allegedly downloaded and passed on files, and if the companies can prove it, the case is closed. But many times, stolen trade secrets or patent infringements have to be found the old-fashioned way: by reverse engineering.


Japanese Broadcaster Uses LEDs for Underwater TV Transmission

Japan’s public broadcaster Nippon Hoso Kyokai (NHK) wants to broadcast live TV from under the water, but it’s been tripped up by the pesky cable that carries the camera’s data to the surface. So engineers there are developing an underwater wireless transmission system that sends the signal using visible light from LEDs, with the goal of enabling live underwater TV broadcasts without a tether.


Google Searches About Politics Predict the Stock Market

The number of Google searches related to business and politics can help predict falls in the stock market, researchers at the University of Warwick, in England, say.

Scientists have recently begun investigating what people look for on Google and Wikipedia to help forecast the future. For instance, prior research has shown that the rate at which people look up information about the flu helps predict the spread of the disease.

In recent work, "we found evidence that data on Google searches for financially related words and views of financially related pages on Wikipedia could have provided early warning signs of stock market moves," says Suzy Moat, a data scientist at the University of Warwick. "However, the financial markets constitute a large, complex system, which influences and is influenced by many different aspects of modern society. We therefore wondered if searches for other topics might also provide insight into subsequent stock market moves."


RadioShack to Sell Kits for IoT Connectivity

New York City startup littleBits, which makes snap-together electronic modules for budding tinkerers, is wading into the ever-deepening sea of hardware configured for the Internet of Things. Those who want to investigate this hardware firsthand should have no trouble making an impulse purchase, because the company’s $99 kit of modules for assembling Internet-connected gizmos will soon start selling at RadioShack stores.

The heart of this kit is what littleBits calls the “cloud bit” module, which snaps together with the company’s other modules using special magnetic connectors. So, you can quickly add a cloud-bit module to something you've created with the company's other input and output modules—buttons, lights, motors, and so forth. The point of the cloud-bit module is to connect what you have assembled to the Internet in a way that allows you to control your creation using a littleBits cloud account.

How exactly does the cloud-bit module work? And how is it different from, say, the Electric Imp, a similar device that’s allowed tinkerers to connect hardware to the Internet since its introduction in 2012?

A little digging on the littleBits website reveals a few more pertinent details. The cloud-bit module is a diminutive Linux computer with a non-integral Wi-Fi adapter plugged into it. But it’s not a general-purpose Linux computer like the Raspberry Pi. Rather, it has just one mission: to connect to littleBits’s servers. Accomplishing that mission requires that you first connect the cloud-bit module to your local Wi-Fi network.

The digital generation is well enough acquainted with connecting to Wi-Fi networks that this should be no big deal, even for a child. The challenge is that the cloud-bit module has only a minimal user interface: no touch screen or keyboard, just a setup button and a colored LED indicator. But that’s enough to do the job. Press the setup button, and the cloud-bit module configures its onboard Wi-Fi adapter to become an access point, meaning that when you scan the airwaves with your computer or phone, you’ll see a new wireless network created by the module. You can then connect to it and, using just a browser, give it the SSID and password it needs to join your usual Wi-Fi network as a client device.

It’s a clever solution for a common problem—configuring a wireless device to connect to a Wi-Fi network when that device has no real user interface. The Electric Imp makes use of a different, and in my view more clumsy, strategy, requiring a special phone app to flash the screen of your phone while you hold it against the Imp to convey the needed setup information.

Each of the littleBits cloud-bit modules has its own unique code, which you no doubt have to provide when you sign on for a cloud account with the company. This allows the company’s servers to associate you with the hardware you have purchased, and you can start to issue it commands over the Internet.

Although this capability in itself would add to the enjoyment of a littleBits project, more serious fun, I would think, could be had by taking advantage of the partnership that littleBits has forged with IFTTT, a Web service for connecting other Web services. Properly set up, you could have, say, your collection of littleBits modules play your favorite team’s fight song every time ESPN posts breaking news about your team.

San Diego Comic-Con: Where Tech Goes Pop

The San Diego Comic-Con was held last weekend. The convention is the highlight of the pop culture calendar: about 130,000 attendees flock to Southern California to check out science fiction and fantasy productions ranging from big upcoming studio blockbusters to handmade comic books from independent artists.

I was there to help promote IEEE Spectrum’s upcoming science fiction e-book anthology, Coming Soon Enough (it will be available in the first week of August, but you can get a sneak peek now at a story from award-winning author Nancy Kress), and to moderate a panel of Hollywood writers, producers, and science advisors about some of the issues involved in portraying science and technology in science fiction.

The panelists were Jessica Cail, a neuroscientist at Pepperdine University and a science consultant; Kevin Grazier*, a former JPL mission scientist and consultant to productions such as Battlestar Galactica, Defiance, and Gravity; Andrea Letamendi, a clinical psychologist and creator of Under The Mask, a website devoted to providing insights into superheroes, villains, and their fans; Jaime Paglia, the co-creator of Eureka and currently a producer and writer for upcoming superhero TV show The Flash; Nicole Perlman, co-writer of the movie Guardians of the Galaxy; Phil Plait, the science communicator behind the Bad Astronomy blog; and the writing and producing team of Ashley Miller and Zack Stentz, whose credits include Fringe, X-Men: First Class, and a new, as-yet-untitled, TV series that’s part of the Terminator franchise.

Speaking to a standing-room-only crowd of more than 500, the panel focused on how the representation of scientists and engineers has evolved in recent years. While the stereotype of the scientist as an awkward white male nerd, Einstein-esque saint, or supervillain is still around (and alienating to female and minority viewers possibly interested in pursuing STEM careers), newer characters such as Stargate’s Samantha Carter, Eureka’s Allison Blake, and even Fringe’s resident “mad scientist” Walter Bishop are complex, humanized figures. This evolution is due in part to the larger presence of science advisors in TV and movie productions, facilitated by programs such as the National Academy of Sciences’ Science and Entertainment Exchange, which connects scientists with writers looking for answers to technical questions.

Another reason given by writers and producers on the panel is that an increasing number of writers are becoming aware of the dramatic possibilities inherent in a character struggling with a scientific or engineering challenge. Screenplays that mine this drama well can offer something fresh to audiences weary of stereotyped characters. That translates to a competitive advantage during a period dubbed “the golden age of television,” a label earned by the advent of high-quality, highly serialized shows such as Mad Men, Game of Thrones, Breaking Bad, The Walking Dead, and Battlestar Galactica.

Other panels and the exhibition floor at the San Diego Comic-Con also provided a chance to see how new technologies are beginning to filter into popular entertainment. A number of media companies, including Fox Studios and Warner Brothers, offered attendees the chance to wear prototype Oculus Rift virtual reality headsets that immersed them in environments such as Professor X’s Cerebro device from the X-Men franchise or a dangerous storm from the yet-to-be-released movie Into The Storm. At the booth of special effects house Weta, which has already begun using 3-D printing to create movie props, 3-D printing company 3D Systems was promoting its technology, including the customized Star Trek figure service (which will allow a customer to put his or her own face on an action figure) that it announced at the Consumer Electronics Show earlier this year. Meanwhile, the impact of new production technologies in comics was an object of existential debate: if a creator starts using digital enhancements to create moving or animated elements, at what point does a comic book stop being a comic book and start being a jerkily animated cartoon?

But perhaps my favorite thing at Comic-Con was an example of how deeply a certain Serbian inventor has wormed his way into popular culture: a comic book from Red Giant Entertainment devoted to the fictionalized adventures of international superhero Nikola Tesla.

Follow Stephen Cass on Twitter: @stephencass

*Disclosure: Kevin Grazier and I are co-authors on an upcoming book called Hollyweird Science

Photos: Stephen Cass

Red Planet Seeks a Better Data Plan

In a move likely both to incite critics and to excite supporters of the agency, last week NASA issued a request for proposals for a possible commercial communications network around Mars.

The request comes as the space agency mulls its options for future unmanned—and ultimately manned—Mars missions. NASA currently operates two orbiters around the red planet that also serve as relay stations for other Mars missions, most notably the celebrated Mars rover program. On 21 September 2014, the Mars Atmosphere and Volatile Evolution (MAVEN) orbiter will add one more node to the communications network, making three NASA-operated Mars relay satellites.


Can Computing Keep up With the Neuroscience Data Deluge?

Today's neuroscientists have some magnificent tools at their disposal. They can, for example, examine the entire brain of a live zebrafish larva and record the activation patterns of nearly all of its 100,000 neurons in a process that takes only 1.5 seconds. The only problem: One such imaging run yields about 1 terabyte of data, making analysis the real bottleneck as researchers seek to understand the brain.

To address this issue, scientists at Janelia Farm Research Campus have come up with a set of analytical tools designed for neuroscience and built on a distributed computing platform called Apache Spark. In their paper in Nature Methods, they demonstrate their system's capabilities by making sense of several enormous data sets. (The image above shows the whole-brain neural activity of a zebrafish larva when it was exposed to a moving visual stimulus; the different colors indicate which neurons activated in response to a movement to the left or right.)

The researchers argue that the Apache Spark platform offers an improvement over a more popular distributed computing model known as Hadoop MapReduce, which was originally based on Google's search engine technology. Here's how Spectrum described these conventional systems in an article on "DNA and the Data Deluge":

While Hadoop and MapReduce are simple by design, their ability to coordinate the activity of many computers makes them powerful. Essentially, they divide a large computational task into small pieces that are distributed to many computers across the network. Those computers perform their jobs (the “map” step), and then communicate with each other to aggregate the results (the “reduce” step). This process can be repeated many times over, and the repetition of computation and aggregation steps quickly produces results.
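The map and reduce steps described in that excerpt can be sketched in a few lines of single-machine Python. The word-count example below is purely illustrative (it is not the Janelia Farm code, and real frameworks distribute the two phases across many machines):

```python
from collections import defaultdict

def map_step(record):
    # Emit (key, value) pairs for one input record;
    # here, count word occurrences in a line of text.
    for word in record.split():
        yield word, 1

def reduce_step(key, values):
    # Aggregate all values that share a key.
    return key, sum(values)

def map_reduce(records):
    groups = defaultdict(list)
    for record in records:                 # "map" phase, parallel in practice
        for key, value in map_step(record):
            groups[key].append(value)      # shuffle: group values by key
    return dict(reduce_step(k, v) for k, v in groups.items())  # "reduce" phase

print(map_reduce(["to be or not to be"]))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In a disk-based MapReduce system, each such pass would reread its input from disk; Spark's contribution, as the researchers note, is keeping the intermediate data in memory between passes.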

But the Janelia Farm researchers note that with MapReduce, data has to be loaded from disk for each operation. The Apache Spark advantage lies in its ability to cache data sets and intermediate results in the memory of many computers across the network, allowing for much faster iterative computations. This caching is particularly useful for neural data, which can be analyzed in many different ways, each offering a new view into the brain's structure and function.

The researchers have made their library of analytic tools, which they call Thunder, available to the neuroscience community at large. With U.S. government money pouring into neuroscience research for the new BRAIN Initiative, which emphasizes recording from the brain in unprecedented detail, this computing advance comes just in the nick of time.

Madrid Begins Electric Bike Sharing

For something that took years to arrive, Madrid’s public bicycles sure get off to a fast start. Pedal once and the 36-volt, 10-ampere electric motor will give you a sudden boost. Going up one of Madrid’s many hills, it is a welcome aid. Downhill, the burst jars. But riders can disable the boost by not pedaling, and moderate it with electric controls on the handlebars. With a little practice, the bikes begin to feel like underpowered motor scooters. “Our major goal is to move journeys that are now done by car to the bicycles,” says Elisa Barahona, Madrid’s director of sustainability and environment.

During the program’s first week, however, almost nobody could sign up through the touchscreen UNIX computers at the bike docking stations. Though first approved years ago in an effort to combat air pollution, the program was delayed by economic concerns and then logistical problems, building up attention and demand. After it finally launched, online attacks on the payment system blocked registration. In the first two weeks, the company’s information technology engineers racked up 20 gigabytes of logs from Internet attacks, says Miguel Vital, director of Bonopark, the contractor operating the system on behalf of the city of Madrid. Other attacks were less sophisticated: Bonopark left some of the docking station computers’ screens set to the wrong resolution, allowing at least one naughty user to access a Web browser and leave pornography visible in place of the user registration screen, according to The Local.


World War I: The War of the Inventors

Illustration: Alamy

Big Gun: Germany's “Paris Gun” had an astounding range of 130 kilometers, but it wasn’t terribly accurate, so its effect was mainly psychological.

One hundred years ago, as the international conflict that became known as World War I began, most Europeans were predicting a quick victory. Within a few months, it became clear their optimism was unrealistic. As the fighting spread and grew more deadly, the role of engineering and invention took on new urgency. Eventually, the Great War became known in certain circles as an “inventor’s war.” To be sure, many of the inventions people now associate with World War I—submarines, torpedoes, fighter and bomber aircraft—had actually been conceived earlier. However, the pressures of war pushed their advancement. Here are four such technologies that still influence our world today.


SONAR: Making the Sea Safe for Democracy

Illustration: Imperial War Museum
U-Boat Casualty: On 5 September 1914, a German torpedo sank the British cruiser HMS Pathfinder. Technology to detect U-boats eventually led to the development of SONAR.


Photo: NOAA
Sea Sounds: Canadian radio pioneer Reginald Fessenden conceived his electric oscillator in response to the sinking of the Titanic. The acoustic device received echoes from the ocean’s bottom as well as from any obstructions in the water.

In the years leading up to the war, navies that had submarines used them mainly for coastal defense. Germany changed that by developing its U-boats into long-range offensive weapons. That shift in military strategy compelled the Allies to 1) also begin using submarines offensively and 2) develop countermeasures to protect cross-Atlantic shipping.

The work of Reginald Fessenden proved crucial. After an iceberg sank the RMS Titanic in 1912, the Canadian radio pioneer began conducting underwater acoustic experiments in search of a way to protect ships from submerged obstacles. This led him to invent an electro-mechanical oscillator, a device carried aboard a ship that would transmit sound through the water at a specified frequency and then listen for reflections from any objects in the vicinity. He developed the technology first as a means of communicating with (friendly) submarines and later as a warning device that could be attached to navigation buoys to alert approaching ships of shoals and other hazards. In October 1914, the British Navy purchased Fessenden oscillator sets for underwater signaling, and in November 1915 decided to equip all of its submarines with them.
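The principle behind Fessenden's device reduces to simple arithmetic: the transmitted sound travels out, reflects off an obstacle, and returns, so the range is half the round-trip time multiplied by the speed of sound in water. A minimal sketch, using an assumed nominal sound speed (the true value varies with temperature and salinity):

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, a typical nominal value

def echo_range(round_trip_seconds):
    # Sound travels to the obstacle and back, so halve the round trip.
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2.0

print(echo_range(2.0))  # an echo returning after 2 s puts the obstacle 1500 m away
```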

French physicist Paul Langevin designed an electronic version of Fessenden’s device that was much better at detecting moving objects. It included a quartz transmitter and receiver, which greatly improved the range and clarity of the signal. In February of 1918, he achieved a transmission range of 8 kilometers and clear echoes from a submarine.

Fessenden oscillators continued to be used as late as World War II for detecting stationary objects such as mines. And Fessenden’s and Langevin’s inventions laid the foundations for what would become SONAR (Sound Navigation and Ranging). [For more on Fessenden’s oscillator, see the IEEE Global History Network’s “Inventors’ Responses to the Sinking of the RMS Titanic.”]


The Superheterodyne Receiver: Better Tuning for Radio

Image: RCA
Peacetime Receiver: RCA’s Radiola AR-812 radio, the first commercially produced superheterodyne radio receiver, was introduced in 1924. The invention of the superheterodyne during World War I made it much easier to tune a radio and to pick up distant signals.

Radio technology existed before the war, but two wartime inventors greatly improved it. In 1917 and 1918, respectively, a French officer named Lucien Lévy and an American officer named Edwin H. Armstrong independently came up with what would become known as the superheterodyne receiver: a way to make radios easier to tune and able to pick up distant signals. The receiver superimposed one radio wave on another, then greatly amplified and filtered the resulting intermediate frequency, which was demodulated to generate an audio signal, which was in turn amplified for output to loudspeakers or earphones.
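The mixing step at the heart of the superheterodyne can be illustrated numerically: multiplying two sinusoids yields components at their sum and difference frequencies, and the difference is the fixed intermediate frequency that the later stages amplify and filter. The station and local-oscillator frequencies below are hypothetical, chosen only to make the arithmetic visible:

```python
import math

def mix(f_rf, f_lo, t):
    # Heterodyning: multiply the incoming signal by a local oscillator.
    # By the product-to-sum identity, sin(a)*sin(b) = 0.5*cos(a-b) - 0.5*cos(a+b),
    # so the output contains the difference and sum frequencies.
    return math.sin(2 * math.pi * f_rf * t) * math.sin(2 * math.pi * f_lo * t)

f_rf, f_lo = 1_000_000.0, 955_000.0  # hypothetical station and oscillator, Hz
f_if = abs(f_rf - f_lo)              # fixed intermediate frequency
print(f_if)  # 45000.0
```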

Initially, Lévy sought a way to increase the secrecy of radio transmissions. He had been working at the Eiffel Tower—which the French military began using for radio experiments when the war broke out. Lévy had the idea that a supersonic wave could be superimposed upon a radio frequency carrier wave, which would itself be modulated by an acoustic wave. He refined that idea, producing the supersonic wave in the receiver and then heterodyning the received signal against a local oscillator. He applied for a French patent on 4 August 1917.

Armstrong was made a captain in the U.S. Army Signal Corps shortly before he was sent to France in 1917 to work on Allied radio communications. By then, he was already famous in the radio world for his regenerative feedback circuit (a device that greatly amplified a signal), for which he received the first Medal of Honor from the Institute of Radio Engineers. While in Paris in early 1918, Armstrong witnessed a German bombing raid. He thought that the accuracy of the antiaircraft guns could be improved if there were a way of detecting the extremely short electrical wavelengths emitted by the ignition systems of the aircraft engines. That led him to invent his superheterodyne receiver, for which he filed a French patent application on 30 December 1918.

After the war, Armstrong’s and Lévy’s competing claims on the superheterodyne receiver did not prevent it from being used widely, helping transform the radio into a hugely popular consumer product. [For more on Lévy, Armstrong, and the controversy surrounding their inventions, see Alan Douglas’s “Who Invented the Superheterodyne?”]


Air-to-Ground Communication: Radiotelephony Takes to the Skies

Photo: AT&T Archives and History Center

Voices on High: AT&T employees (some of whom had joined the U.S. Army Signal Corps during World War I) listen in on an early trial of air-to-ground voice communication.

As early as 1910, experimenters demonstrated wireless transmissions between aircraft and the ground. These trials all involved the pilot tapping out Morse code on a transmitter held in his lap. There were a few problems, however. Engine noise tended to drown out any received messages. And pilots were usually far too busy to be operating a code key.

Clearly, voice radio would be necessary for wireless communication to become practical in the air. But voice transmissions required higher frequencies than did Morse code, and the radios and their power sources were too big and heavy to fit into the aircraft of the time.

Engineers on both sides of the conflict succeeded in overcoming those obstacles. In 1916 the French successfully tested air-to-ground voice communication during the battle of Verdun; one year later, they demonstrated air-to-air voice communication at Villacoublay. Transmitters became standard aboard German aircraft in 1916 and, by the end of that year, so were receivers. On 17 May 1918 a U.S. airplane squadron was successfully commanded by voice from the air for the first time. [For more on early airborne radio, see George Larson’s “Moments and Milestones: Can You Hear Me Now?”]


Analog Fire-Control Calculators: Precursors to Digital Computing

As the range of large-caliber guns increased, aiming them became more difficult. The World War I naval engagements of Coronel (off the coast of Chile) and Dogger Bank and Jutland (both in the North Sea) saw gunnery ranges of 13,000 to 15,000 meters. To hit another ship from those distances required precise calculations of the target ship’s range, course, and speed, as well as the wind’s speed and direction, which in turn were used to determine the gun’s elevation and direction, the wind’s effect on the shell in flight, and any corrections for the motion of the ship doing the firing.

Illustration: Admiralty Library
Point and Shoot: The British Navy’s Dreyer Tables were mechanical calculators used to determine the range and deflection of artillery guns. Such analog machines gave rise to the first electronic computers, like ENIAC.

In 1912, the British Royal Navy pioneered a system in which all the guns on a ship were directed from a single position (usually the highest part of the ship). The fire-control officer and rangetakers used a T-shaped optical rangefinder containing prisms to ascertain the distance, bearing, and change-of-bearing to the target by means of triangulation. The fire-control officer then communicated—usually via telephone, but with voice tubes as backup—this information to the sailors in the control center deep in the ship. They in turn moved cranks and levers to input the information into large mechanical calculators (some the size of three or four refrigerators), which used this constantly changing data to plot firing solutions for the guns. The guns would then be fired in salvoes, with a slightly different trajectory from each gun, thereby increasing the chance of hitting the target.
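One piece of the firing-solution arithmetic can be made concrete with a highly simplified Python sketch: dead-reckoning where the target will be when the shell arrives, given its range, bearing, course, and speed. The function and numbers below are illustrative only; the Dreyer Tables solved a richer, continuously updated version of this problem mechanically:

```python
import math

def predicted_position(range_m, bearing_rad, target_speed_mps,
                       target_course_rad, time_of_flight_s):
    # Convert range and bearing to x (east) and y (north) coordinates.
    x = range_m * math.sin(bearing_rad)
    y = range_m * math.cos(bearing_rad)
    # Advance the target along its course for the shell's time of flight.
    x += target_speed_mps * math.sin(target_course_rad) * time_of_flight_s
    y += target_speed_mps * math.cos(target_course_rad) * time_of_flight_s
    # Convert back to the range and bearing at which to aim.
    return math.hypot(x, y), math.atan2(x, y)

# A target 14,000 m due north, steaming due east at 10 m/s, 30 s shell flight:
rng, brg = predicted_position(14_000, 0.0, 10.0, math.pi / 2, 30.0)
print(round(rng), round(math.degrees(brg), 1))  # 14003 1.2
```

The guns must be aimed roughly 14,003 m out at a bearing about 1.2 degrees east of north, rather than at the target's current position.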

During the course of the war, the navies on both sides made significant improvements to these fire-control calculators, and there is still scholarly debate as to which navy had the most advanced system. The British Navy’s Dreyer Tables were probably the best documented of these devices, while the German battlecruiser SMS Derfflinger was widely regarded for the accuracy of its gunnery at sea. Derfflinger was scuttled at Scapa Flow in 1919, and what is known about its fire-control system emerged mostly through Allied intelligence interviews with its gunnery officers.

The range of land artillery also increased significantly during World War I. By the end of the war, for instance, the Germans were bombarding Paris with a massive gun mounted on a railroad car. Known as the Paris Gun or the Kaiser Wilhelm Geschütz, it had a range of 130 kilometers. Although it was not very accurate, it could hit something the size of a city, and so its effect was primarily psychological.

The analog mechanical calculators used to target artillery guns led directly to electronic computers. In fact, one of the most famous of the early electronic computers, ENIAC, did essentially the same tasks during World War II as the analog fire-control calculators of World War I.

About the author: Robert Colburn is the Research Coordinator at the IEEE History Center.

Mimicking the Super Hearing of a Cricket-Hunting Fly

Ormia ochracea is a little, yellow fly of the American south whose breeding strategy has an outsize ick factor. It deposits its larvae on the bodies of male crickets. The larvae then eat their way into their unwilling hosts, and devour them from the inside.

What is most remarkable, though, is that the female fly locates the crickets by sound, homing in on the he-cricket’s stridulations (the chirping that results from the wings rubbing together) with uncanny accuracy. The cricket’s chirp is a smear of sound across the scale from the 5 kilohertz carrier frequency to around 20 kHz. And, as anybody who has tried to evict a passionate cricket from a tent or cabin knows, the sound is maddeningly hard to pinpoint.

With an auditory apparatus—let’s call them ears—only 1.5 millimeters across, O. ochracea pulls off a major feat of acoustic location; a number of engineering groups are working on devices to duplicate the fly’s sensitivity.

Now, a team at the University of Texas at Austin has built a prototype replica of O. ochracea’s ear. Michael L. Kuntzman and Neal A. Hall, researchers in the school’s electrical and computer engineering department, describe the device and its performance in Applied Physics Letters.

The fly’s ears are very different from those of humans. Human ears are typically separated by about 21 to 22 centimeters—about 625 microseconds apart at the speed of sound (though of course it varies with temperature and humidity). We judge sound direction by assessing ear-to-ear differences in phase and volume. We can distinguish time differences of as little as 10 microseconds, and the phase-difference calculation is useful mainly for lower-frequency sounds—those with wavelengths longer than 21 cm. In this range, we can locate a sound source to within about 1 degree if it is dead ahead, or 15 degrees if it is off to the side. As frequencies rise above about 1600 Hz (a very sharp G above high C), the wavelength is shorter than the ear-to-ear separation, and we fall back on using the volume difference alone to approximate the source’s position.
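The ear-to-ear timing argument can be made concrete with the standard far-field approximation, in which the extra path to the far ear is d·sin(θ), so the arrival-time difference is d·sin(θ)/c. The constants below are typical textbook values, not measurements from the paper:

```python
import math

HEAD_WIDTH_M = 0.215    # assumed ear-to-ear separation, within the 21-22 cm range
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def interaural_time_difference(azimuth_deg):
    # Far-field approximation: extra path to the far ear is d*sin(theta),
    # so the arrival-time difference is d*sin(theta)/c.
    return HEAD_WIDTH_M * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# A source directly to one side gives the maximum delay, consistent
# with the roughly 625 microseconds cited above:
print(round(interaural_time_difference(90) * 1e6))  # 627 (microseconds)
```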

The fly’s ear, on the other hand, is 4.3 microseconds wide at the speed of sound, and it can distinguish phase delays much smaller than that in sounds coming in from nearly ahead or behind.

The secret is the physiology of the ochracean ear, whose centerpiece is an elastic plate that pivots on a central support. This structure responds to incoming sound waves by resonating in two distinct modes. You can picture them, albeit on an obviously much larger scale, by standing with your arms outstretched to either side. First, raise one arm while lowering the other, like the ends of a see-saw; that’s the first mode. Now move your hands up and down together, flapping like a bird; that’s the second mode.

In the fly’s ear—and in the Kuntzman-Hall device—each mode responds to a different parameter of the incoming wave. The see-saw action responds only to the x-component of the incoming pressure gradient, indicating, say, how a high-pressure compression crest at the tips of your fingers shades into a low-pressure trough at your elbow. It tracks the changes only along the one dimension of your arm, though, and reveals nothing about the omnidirectional strength and structure of the wave. The flapping mode, on the other hand, responds only to omnidirectional pressure—the sound volume, for example, or the overall pressure on your body—and reveals nothing about the wave's direction. In the fly's ear, both modes superpose to create a composite displacement of the membrane, so the trick is to break this signal down into its see-saw and flapping components.

The fly can individually quantify the displacements of the right and left sides of its pivoting-beam auditory membrane. Then the fly's neural network subtracts the displacement of the left-side channel from the displacement of the right channel to extract the first-mode see-saw signal; this shows the incident angle of the incoming sound. At the same time, the fly's brain adds the left- and right-channel signals to yield the second-mode flapping displacement. This reveals the omnidirectional sound pressure (a clue to distance).
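The sum-and-difference processing described above is simple enough to state in code. This sketch is a schematic of the idea only, not the researchers' analysis pipeline; the displacement values are made up:

```python
def decompose(left, right):
    # Split the membrane's composite motion into its two modes:
    # the see-saw (difference) mode encodes sound direction,
    # the flapping (sum) mode encodes overall sound pressure.
    seesaw = right - left
    flapping = right + left
    return seesaw, flapping

# A sound arriving from the right deflects the right side slightly more:
print(decompose(left=9, right=11))  # (2, 20)
```

A positive see-saw term means the source is to the right, a negative one to the left, and its magnitude grows with the incident angle; the flapping term is the same whichever side the sound comes from.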

The UT researchers etched a spring-loaded, 1.5-mm-by-2.5-mm pivoting beam into silicon, with a lead-zirconate-titanate piezoelectric film painted on the supporting springs to sense displacement. In experiments, Kuntzman and Hall have read and analyzed the output just as the fly’s brain does. The prototype can resolve the direction of high-frequency sound sources to within 0.35 degrees for sounds in its directional “sweet spots,” and to within about 6 degrees in its less sensitive zones. (The imprecision, their paper says, is mainly due to some imperfections or asymmetries in the prototype.)

“Synthesizing the special mechanism with piezoelectric readout is a big step forward towards commercialization of the technology," said Hall, an assistant professor. There are sure to be defense applications—after all, the research is funded by the U.S. Defense Advanced Research Projects Agency (DARPA)—as well as potential for commercial products like hearing aids.

