Tech Talk

Norwegian Army Drives Tanks Using Oculus Rift VR Goggles

Virtual-reality goggles and camera systems are giving Norwegian soldiers the ability to "see through" their armored vehicles with a 360-degree view. That means drivers of trucks and tanks will be able to see all around their vehicles on future battlefields without having to poke their heads out. It's something Odin, the one-eyed Norse god, could appreciate.


“Sparse Arrays” Cut Costs for Terahertz Imaging

There are some stumbling blocks along the path to workable terahertz wave imaging. Building antenna arrays is one. Handling the information the arrays generate is another.

While some researchers have turned to metamaterials to cut down the bulk of terahertz antennas, a team at MIT is working on ways of doing more with less—fewer antennas and less computation—to smooth the road to low-cost mobile radars and sensitive detectors for explosives and firearms.

James Krieger, Yuval Kochman (now at the Hebrew University of Jerusalem), and Gregory Wornell report on strategies for faster, less expensive antenna design, signal analysis, and error correction in IEEE Transactions on Antennas and Propagation.

Terahertz radiation falls into the range between microwaves and visible light, at frequencies of 300 billion hertz to 10 trillion hertz and wavelengths of 1000 down to 30 micrometers. In a traditional phased array, antenna elements must be no farther than half a wavelength apart. A real-world application (such as the automotive collision-avoidance system that Krieger and his colleagues use as an example) might require an aperture of roughly 2 meters to properly resolve moving, vehicle-size objects. So a conventional array built to image signals at 100 GHz (with a 3 mm wavelength) would require on the order of 1000 antennas, while a 1 THz array would need about 10 000. Building such arrays is costly and complex in its own right. And the computational power needed to resolve the phased array’s signals into a 2-D image increases with the number of antennas—a demand that “quickly becomes impracticably large,” according to the MIT group.
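As a quick sanity check on those element counts, the half-wavelength rule reduces to two lines of arithmetic. Here is a minimal Python sketch, assuming the 2-meter aperture cited above (the exact totals depend on how the aperture edges are counted, which is presumably why the simulation discussed below pins the 100 GHz case at 987 antennas):

```python
C = 3e8  # speed of light, in meters per second

def elements_needed(freq_hz, aperture_m=2.0):
    """Antenna count for half-wavelength element spacing across the aperture."""
    wavelength = C / freq_hz
    return round(aperture_m / (wavelength / 2))

print(elements_needed(100e9))  # 100 GHz (3 mm wavelength): about 1300 elements
print(elements_needed(1e12))   # 1 THz (0.3 mm wavelength): about 13 000 elements
```

Both results land on the orders of magnitude the researchers quote.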

But one-antenna-every-half-wavelength resolution is only truly necessary if all objects of interest stand shoulder to shoulder at the same distance from the array. In the real world—a parking lot, say—targets are “sparse,” generally few and far between at any range or azimuth.

"Think about a range around you, like five feet," said  Wornell, the team leader. "There's actually not that much at five feet around you. Or at 10 feet. Different parts of the scene are occupied at those different ranges, but at any given range, it's pretty sparse. Roughly speaking, the theory goes like this: If, say, 10 percent of the scene at a given range is occupied with objects, then you need only 10 percent of the full array to still be able to achieve full resolution."

The trick is to come up with a method for deciding which half, quarter, fifth, or tenth of the possible antenna locations to populate. The MIT group breaks the array down into a number of “periods,” each with the same number of lattice points one-half wavelength apart. (Periods with prime numbers of lattice points make for the easiest calculations.) The key is to select positions so that the set of distances between pairs of antennas covers the range of possible separations as evenly as possible. This is fairly easy to calculate directly with periods of 7 or 11 lattice points; it’s trickier for periods of 37, 47, or 57 nodes, but an iterative tactic (like the Markov chain Monte Carlo method) produces workable positioning patterns.
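That selection criterion is easy to play with computationally. The snippet below is a toy sketch of the idea, not the MIT group’s actual method: within a single period of P lattice points, it scores a candidate layout by how many distinct pairwise separations it covers, then refines a random starting layout by simple hill climbing (a crude stand-in for the Markov chain Monte Carlo search mentioned above).

```python
import itertools
import random

def coverage(positions, period):
    """Number of distinct pairwise separations (mod period) the layout covers.
    Full coverage (period - 1 separations) lets a sparse layout stand in
    for a dense one."""
    return len({(a - b) % period for a, b in itertools.permutations(positions, 2)})

def hill_climb(period=11, n_antennas=5, iters=2000, seed=0):
    """Move one antenna at a time, keeping any change that doesn't hurt."""
    rng = random.Random(seed)
    layout = set(rng.sample(range(period), n_antennas))
    best = coverage(layout, period)
    for _ in range(iters):
        out = rng.choice(sorted(layout))
        free = [p for p in range(period) if p not in layout]
        candidate = (layout - {out}) | {rng.choice(free)}
        score = coverage(candidate, period)
        if score >= best:
            layout, best = candidate, score
    return sorted(layout), best

layout, covered = hill_climb()
print(layout, "covers", covered, "of 10 possible separations")
```

For a period of 11 points, five well-placed antennas can cover all ten possible separations; for the longer periods the article mentions, the same score-and-perturb loop applies, it just needs a smarter search.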

The periodic approach cuts down computing overhead by breaking the antennas down into “cosets” within each period. Coset data are collected, compared, and analyzed together, so the computational demand goes up with the number of antennas in a period, not with the total number of antennas.

In the 100 GHz parking-lot simulation, a conventional phased array would require 987 individual antennas to attain the necessary 2-meter aperture. With the addition of algorithms for detecting and filtering out errors, the Wornell group’s multi-coset sparse array built usable images with as few as 105 antennas. (Remember, these are linear arrays producing a 2-D image. A two-dimensional array generating 3-D images would square the number of antennas needed—to about a million for a conventional array versus about 10 000 for a multi-coset sparse array.)

Magnetic Fields in Electric Cars Won't Kill You

When hybrid and electric cars first hit the road, a small group of skeptics worried about the unknown dangers of electromagnetic fields that might affect drivers and passengers. Now a seven-country study has found that the magnetic fields in electric vehicles pose no danger whatsoever, welcome news at a time when electric car ownership continues to rise.


When the Evidence is on the Cell Phone

On Tuesday, the United States Supreme Court heard arguments in two cases in which information found on cell phones, obtained by searching those phones without a warrant, led to convictions: United States v. Wurie and Riley v. California. At issue is whether the Fourth Amendment’s rules on unreasonable searches and seizures apply to cell phones.

Civil liberty groups and others argue that because phones can contain virtually every detail of a person’s life, including photos, videos, bank account information, and medical information, they are far different from the paper address books or notepads that previously might have been found on a suspect and legally searched without a warrant. Law enforcement agencies argue that if they don’t search the devices immediately, they could be wiped remotely.

Can they? Or can law enforcement officials prevent them from being wiped? This was a matter of debate during the Supreme Court arguments, with Deputy Solicitor General Michael Dreeben, representing the United States, listing a number of technical obstacles to delaying a search of a cell phone.

First on that list: encryption, which, Dreeben said, “kicks in automatically…and then…you need the password to open it….And law enforcement’s forensic labs aren’t going to be able to get around it except with extraordinary efforts and extraordinary time.”

“My experience,” he said, “from the people that I had spoken with is that a lot of phones are arriving at the lab in a locked and encrypted state and it’s very tough to deal with that.”

I turned to Richard Mislan, a visiting assistant professor in the Rochester Institute of Technology’s computer security department, for his thoughts about just how real those technical obstacles, including encryption, are. Mislan wrote about the use of cell-phone-based evidence in “Cellphone Crime Solvers,” published in the July 2010 issue of IEEE Spectrum.

Mislan agreed that more and more phones are password-protected these days, and new security tools like fingerprint access add a layer of difficulty. But, he says, “there are tools that are well known to the [law enforcement] community to deal with passwords.”

Dreeben also expressed concern over the fact that cell phones can be wiped remotely. “Even if an officer has a cell phone in his hand, he cannot guarantee, unless it's disconnected from the network or somehow protected from the network, that there won't be a remote wipe signal sent to the phone that will wipe its data,” Dreeben told the Court.

Indeed, Dreeben is correct that wiping a phone remotely isn’t hard. For example, Apple’s iOS devices include a simple tool, “Find My iPhone,” and Android 2.2+ users can set up the “Device Policy” app to allow them to remotely erase all the data on their phones.

But law enforcement officials need to preserve the evidence on phones even when they are searched on the scene, not just later with a warrant, and tools exist to allow them to do so. The basic solution is a Faraday bag, a simple sleeve with a layer of conductive material, either a solid or a mesh, that prevents wireless signals from getting in or out. Such sleeves are widely available and cheap, at around $30. More sophisticated versions specifically designed to preserve evidence cost in the hundreds of dollars, hold multiple phones, and keep them charged while they are being transported.

The Justices quizzed Dreeben about the practicality of using such bags. Dreeben said, “If you throw a phone into a Faraday bag, which is supposedly going to be able to block network signals, when you open it up, it has to be similarly shielded or it will pick up a signal from a cell tower, and that will wipe the phone.” This, he pointed out, doesn’t always work: “The F.B.I. tried to build a Faraday room in a building that they later discovered Verizon had put up a cell tower on it, and that cell tower put out a strong enough signal to go right through the Faraday room.”

Mislan indicated that most police departments understand the need to open phones in a protected enclosure and know where they can do so. They also know where the cell towers are.

Justice Sonia Sotomayor suggested that arresting officers simply put phones into airplane mode, preventing them from receiving calls or data. Dreeben’s counter-argument, essentially, was that phones are just too complicated. “It is not always possible to find airplane mode on all the 500, 600 models of phones that are out there,” he said. And, he said, consider that “The officer has a lot of things to do when he arrests suspects. Say he arrests five suspects in a car and they each have three cell phones. Trying to find and put each one of them into airplane mode and go the further step and...” (Sotomayor cut Dreeben off at this point.)

Five suspects might indeed have three cell phones each, in order to hide their identities when making certain calls. But, points out Mislan, these days putting a phone into airplane mode is not rocket science. In recent years the wild world of phone operating systems has settled down to a dominant four—Apple iOS, Android, Windows Mobile, and BlackBerry—and these are well understood. An arresting officer should have little trouble finding airplane mode.

The court is expected to issue its rulings on these cases in July.

Follow me on Twitter @TeklaPerry.

The Golden Age of Basic

IEEE Spectrum isn’t the only thing celebrating its 50th anniversary this year. On this day in 1964, the first software written in Basic was successfully run on a GE-225 mainframe at Dartmouth College. As critical a moment as that was to the history of computing, I want to skip ahead twenty years and talk about what Basic meant to a generation of neophyte coders in the 1980s, of which I was one.

Today, programmers can begin their journeys into the world of code in quite a few ways. Kids ages 8 to 12 can use MIT’s Scratch, manipulating colorful blocks on screen to build programs. Older kids can tinker with writing HTML and JavaScript, have a go at writing Python scripts on a Raspberry Pi, or go straight to downloading free compilers and development environments for languages like C or C++. Visual and musical artists can try their hand at Processing. There’s even the option of learning how to build insanely elaborate devices in the virtual world of Minecraft. Online tutorials and courses, written and video-based, abound.

But in the 1980s, most kids didn’t have access to the Internet, integrated development environments, rich graphics, or even a choice of languages. What we had were 8-bit home computers, a blinking cursor, and Basic.

And it was wonderful. God, it really was.

Well, at least to those of us who persisted. Let’s not kid ourselves in a haze of nostalgia—there are very good reasons why things like Scratch and Processing were created, the same reasons why many, if not most, of those 8-bit machines wound up being used solely to play games. Tapping out Basic programs often meant a lot of effort with nothing to show for it other than that Great Sphinx of computer messages: “SYNTAX ERROR.”

But for those of us who did stick with it, Basic opened the door to something that had never happened before—a generation of children and teens programming general-purpose computers. We may not have been online (much) but we were the first digital generation, absorbing the Tao of pixels, bytes, and FOR… NEXT loops alongside other important lessons, such as the names of various national capitals and how to minimize zits.

I remember my first program, by which I mean one that I cobbled together myself, not simply typing in a complete listing from the manual. I was twelve, the year was 1985, and the computer a Texas Instruments TI-99/4A (a machine that was actually the first 16-bit home computer). My program was a very simple text adventure game, created by chaining together as many IF… THEN GOTO statements as I had patience for.

Even to my own taste, the game was terrible, with instant death lurking beyond almost every wrong choice. What changed the world for me was a bit at the end, where I rewarded a winning player with a rising scale of musical notes, generated by a FOR… NEXT loop wrapped around a SOUND statement. Despite after-school classes, I had failed to master even the simplest of musical instruments due to my lack of interest in practicing or, before long, attending the classes. But now I could create “music” to order with three lines of code. I didn’t even have to program in individual notes—I could tell the computer to work out the notes for me. I was sold.
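For the curious, the spirit of that three-line Basic coda translates directly into modern Python. This is a loose reconstruction, not the original program; the equal-temperament formula and the 440 Hz reference pitch are my assumptions.

```python
# A loose modern reconstruction of that victory fanfare: let the machine
# compute the pitches rather than hard-coding them. (Equal temperament and
# the 440 Hz reference are assumptions; the original TI Basic program may
# well have used a simpler rule.)
for step in range(13):                 # one rising octave, semitone by semitone
    freq = 440 * 2 ** (step / 12)      # equal-temperament frequency, in hertz
    print(f"play {freq:.1f} Hz")       # stand-in for the Basic SOUND call
```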

A notable feature of this era was the extent to which programming was in the air. Everyone, from politicians and celebrities on the TV right down to your own parents, was declaring that microcomputers were the future. It was practically a moral imperative that families should furnish their children with a home computer for educational purposes. Computer labs (and computer clubs) appeared in schools for the first time.

To meet the demand fostered by millions of anxious parents, an enormous menagerie of companies sprang up around the world in the early 1980s, resulting in a sort of Cambrian explosion of home computers. The Apple II made the first big splash of course, at least in America, but it was rapidly joined by such fondly remembered competitors as the TRS-80, the Commodore 64, the Sinclair ZX Spectrum, and the Dragon 32, not to mention some of the more obscure also-rans, such as the Sord M-5, the Oric-1, and the Coleco Adam.

They all came with Basic, usually on a built-in ROM chip.

And with this lingua franca to hand, television programs stepped viewers through simple programs. Shelves were stocked with magazines bulging with programming tips and tricks along with listings of Basic programs for readers to type in. There was even a line of young-adult adventure books that included type-in Basic programs as part of the narrative.

However, there was a flaw in this 1980s Basic paradise. Even though most home computers were built around one of two CPUs (either a Zilog Z80 or a MOS Technology 6502) and all ran Basic, the maddening fact was that each machine had its own particular dialect of the language. Amendments and alterations were made to the syntax to address machine-specific details related to memory, graphics, external storage, and the mental state of the developers as release dates loomed.

The result was that two computers often had different commands for, say, plotting a pixel on screen or ways of handling floating-point numbers. Magazine publishers trying to cater to the largest possible market would often include “conversion boxes” at the end of each printed Basic program, listing the alterations required to make a program run on several of the more popular computers. If the manufacturer of your computer happened to be a loser in the brutally Darwinian environment of the mid-1980s computer market, you were left to figure it out on your own, something that was mostly deeply frustrating but occasionally deeply enlightening.

The quality of these manufacturers' computers, and their implementations of Basic, varied enormously. I was lucky enough to wind up doing the bulk of my teenage programming on a machine that, while not well known outside the United Kingdom and Ireland, was noted as being among the best on both counts: Acorn’s BBC Micro.

The BBC Micro was released in 1981 as part of a national computer-literacy campaign in Britain. Between the tight standards insisted upon by the British Broadcasting Corporation and the immense skills of the developers (many of whom would go on to create the ARM mobile processor architecture now running in 95 percent of the world’s smartphones), the result was a sturdy workhorse that could still be found usefully employed in odd corners well into the 1990s. The BBC Micro’s Basic interpreter offered a number of advanced features for structuring programs and accessing the computer’s hardware. Acorn’s exceptionally well-written user manual and programming guide became something of a Bible for my teen years, alongside the diaries of Adrian Mole.

But best of all, the BBC Micro’s Basic supported an inline assembler, allowing you to mix and match blocks of machine code with Basic statements. It was a tremendously easy way to get started with programming on the bare metal. I found this out when, returning to my inclination to reduce musical effort, I entered into an annual computer fair a pure Basic program that read the keyboard as if it were a piano. The program would emit the corresponding note and plot it on a blank onscreen musical staff. The user could then play back their composition while looking at their score. It was a nice idea, but it suffered from the fatal flaw of being incredibly slow. Press a key, and it would take several seconds to update the screen.

Despite a kind reception, I was somewhat embarrassed by its performance. So I dived into the user manual and returned the next year with all the critical loops converted to assembly. Blam. Real-time responsiveness (as long as you didn’t play too fast).

That was the beginning of my growth beyond Basic. A few months later I would get hooked up with a machine running the Forth language, and be faced with the icy cold shock of abandoning line numbers and GOTO commands altogether. Around the same time, many manufacturers were going out of business, as general-purpose home computers were replaced in the late 1980s by game consoles for those interested only in entertainment, or by PCs and Macs for those looking for more serious applications.

None of these replacements were easily programmable. And while cheap and easy ways to code would eventually become generally available again for children and teens, learning how to program would never regain the place in popular culture it held in the 1980s. The Golden Age of Basic was over.

But it will be fondly remembered by a generation of geeks. What are some of your 8-bit Basic war stories? Let us know in the comments below!

Hand-Held Spectroscopy Tool Lets You Examine the Molecular Composition of Your Food

Up on Mars, the Curiosity Rover uses its spectrometers to examine the chemical composition of the rocks and dirt on the red planet. Now the Israeli startup Consumer Physics has developed a tool that lets you conduct similar experiments in your own backyard or at your dinner table.

The company launched a Kickstarter campaign today with the goal of raising US $200 000 to fund the first production run of its hand-held spectrometer, which is about the size of a flash drive. Using near-infrared spectroscopy, the device, called SCiO, shines light on a sample and measures the light-absorption patterns of the molecules therein. It then identifies the object by its optical signature and gives the user information about the object’s molecular composition. It uses Bluetooth to link to the user’s smartphone, where an app manages the data.
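Consumer Physics hasn’t published how its matching works, but the core idea of identifying a material from its optical signature can be sketched in a few lines: compare the measured spectrum against a library of reference spectra and return the closest match. In the Python toy below, the library values and the cosine-similarity measure are invented for illustration.

```python
import math

# Toy reference library: absorption values at a handful of near-infrared
# wavelengths. The numbers are invented; a real library would hold finely
# sampled spectra for thousands of materials.
LIBRARY = {
    "cheese":  [0.82, 0.41, 0.65, 0.30],
    "apple":   [0.15, 0.72, 0.33, 0.58],
    "aspirin": [0.44, 0.12, 0.90, 0.21],
}

def cosine_similarity(a, b):
    """Compare two spectra by shape, ignoring overall brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def identify(sample):
    """Return the library material whose spectrum best matches the sample."""
    return max(LIBRARY, key=lambda name: cosine_similarity(sample, LIBRARY[name]))

print(identify([0.80, 0.44, 0.60, 0.33]))  # -> cheese
```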

In a product demo for IEEE Spectrum last week, company CEO Dror Shalon held up a prototype SCiO to a block of Gouda he had just bought at Walgreens. After a brief flash of light and a few seconds of analysis, Shalon’s smartphone reported that the item was: cheese. While that might not seem like the most useful announcement for a functional human being with eyes, the SCiO can provide more info. After following a prompt to enter the portion size (which was on the cheese’s packaging), the app displayed the cheese’s calories per serving, as well as its fats, carbohydrates, and proteins. Of course, all that info was also on the cheese’s packaging. But Shalon says the SCiO’s abilities will shine forth at restaurants and anywhere people are grabbing food on the go.

Shalon thinks that consumer spectroscopy is an idea whose time has come. Today’s quantified-self gadgets let us track our bodies with unprecedented precision—we can count our steps taken, calories burned, and much more. But people who want that kind of information about the materials around them have been out of luck so far. "When we came up with this vision, I went to Amazon to search, because I was sure that it must exist already," says Shalon. When his search came up empty, he and his team started working.

Kickstarter backers who donate $149 will get the SCiO in December, along with free downloads of all the SCiO-specific apps that Consumer Physics will develop over the next two years. The company has already developed apps for identifying and authenticating pharmaceuticals, for checking the hydration of plants, and for measuring the sugar content of fruit. Shalon is hoping that third-party developers will find many other things to do with SCiO. "That's the major reason we’re doing the Kickstarter campaign, to get the device into the hands of the early adopters," he says.

Liquid Metal Reconnects Severed Nerves in Frogs

Severed nerves in the body can lead to the loss of muscle function and muscle atrophy, unless the severed nerve endings get reconnected. One possible new solution devised by Chinese researchers uses liquid metal to create an electrical conduit capable of transmitting signals between the severed nerve ends.


New 3-D Printer Makes Soft Interactive Objects

A new type of 3-D printer can turn yarn into soft, cuddly objects. 3-D printing typically uses metals and plastics, although researchers are also using the technique to make foodstuffs, tissue, and body parts. Researchers at Carnegie Mellon University who designed the new printer say they wanted to extend 3-D printing to a new class of materials for making objects that people would interact with.

The device is a kind of printer-sewing machine hybrid. It takes designs from a computer and converts them into 3-D objects using a loose felt material. The resulting objects are similar to hand-knitted versions, said CMU computer science professor Scott Hudson in a press release. Hudson developed the new printer with Disney Research support, and presented it at the ACM CHI Conference on Human Factors in Computing Systems.

Most low-end 3-D printers are based on fused deposition modeling, a process in which melted plastic is extruded to create objects layer by layer. The felting printer works in a similar fashion, but emits yarn instead of melted plastic. A barbed felting needle at the printer head repeatedly pierces the yarn, entangling new fibers into the yarn layers below and bonding the layers together.

A key difference, of course, is that because yarn is thick, the printer doesn’t achieve the same dimensional accuracy as conventional 3-D printers. So the printed objects don’t look exactly like the computerized design.

The printer could be used to make clothes, scarves, and plush toys. It could also be used to make parts for soft robots that are designed to work near or with people. What’s more, Hudson said it should be possible to design a printer that could produce both fabric and plastic elements in a single fabrication.

Quantum Cryptography Done Over Shared Data Line

Researchers have sent quantum keys over a "lit" fiber-optic network, a step towards using quantum cryptography on the networks businesses and institutions use every day.

The U.K.-based researchers said last week that the demonstration opens the door to further work that will make the technology more commercially viable. The team drew members from Toshiba Research Europe, BT, ADVA Optical Networking, and the U.K.’s National Physical Laboratory (NPL).

In quantum cryptography, the keys to unlock the contents of communications are represented with photons. It starts with a laser that sends a pair of photons over a fiber-optic network. The polarization of photons—whether they’re oscillating horizontally or vertically, for example—can be detected by a receiver and read as bits, which are used to generate the same encryption key at both ends of the network connection. If an interloper attempts to intercept the keys to decrypt a message, the receiver will be able to detect a change, according to the laws of quantum mechanics. If that happens, the receiver can reject the keys and the message stays encrypted.
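The exchange described above is essentially quantum key distribution in the style of BB84. No classical program can reproduce the physics, but the bookkeeping is easy to simulate: keep only the bits where sender and receiver happened to choose the same basis, then check the error rate, which an eavesdropper unavoidably inflates. Here is a minimal sketch, with the interception model simplified to its statistical effect:

```python
import random

def bb84_sift(n_photons=2000, eavesdropper=False, seed=1):
    """Toy bookkeeping of a BB84-style exchange (no real quantum states).
    Returns the sifted key length and the observed bit-error rate."""
    rng = random.Random(seed)
    sifted = errors = 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)
        basis_a = rng.choice("+x")          # sender encodes in a random basis
        # An interceptor who measures in the wrong basis scrambles the photon.
        disturbed = eavesdropper and rng.choice("+x") != basis_a
        basis_b = rng.choice("+x")          # receiver also picks a basis at random
        if basis_a == basis_b:              # bases compared publicly; keep matches
            received = rng.randint(0, 1) if disturbed else bit
            sifted += 1
            errors += received != bit
    return sifted, errors / sifted

for tapped in (False, True):
    n, err = bb84_sift(eavesdropper=tapped)
    print(f"eavesdropper={tapped}: kept {n} of 2000 bits, error rate {err:.1%}")
```

Without a tap, the sifted bits agree; with one, roughly a quarter of them disagree, and that inflated error rate is the tripwire the receiver watches for.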

Until now, quantum key distribution (QKD) has been done over dark fiber, or unused optical fiber lines, which means that a separate fiber optic line is needed for transmitting other data. But dark fiber networks are not always available and are expensive. Being able to transmit quantum keys over a lit fiber network means that institutions and businesses will be able to run quantum cryptography over their existing networking infrastructure, the researchers said.

"Using techniques to filter out noise from the very weak quantum signals, we've shown that QKD can be operated on optical fibers installed in the ground and carrying conventional data signals," said Andrew Shields from Toshiba Research Europe in a statement

The National Physical Laboratory developed a series of measurements for identifying individual particles of light in the stream of photons sent over a fiber-optic line. That will allow the system to detect attempts to intercept the transmission of keys, which should improve customer confidence in quantum cryptography, said NPL’s Alastair Sinclair in a statement.

The test was conducted over a live BT fiber link between its research campus in Suffolk and another BT site in Ipswich, U.K. In an interview with Nature, Toshiba's Shields said the quantum key distribution was done alongside data transmitted at 40 gigabits per second, the fastest multiplexing of regular data with quantum keys to date. But he noted that implementing QKD in the "real world" is more challenging than in a laboratory environment because environmental fluctuations can cause data loss in fiber lines.

Another technical challenge facing widespread use of QKD is the distance over which keys can be sent. Light pulses sent over a fiber-optic line fade, which means that key distribution can be done only over a distance of about 100 kilometers. (See "Long-Distance Quantum Cryptography.") But as governments and companies seek out the most secure ways to send data, quantum cryptography could become an appealing option.
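That 100-kilometer ceiling follows from fiber loss. Quantum keys ride on single photons, which cannot be amplified en route without destroying their quantum state, so the key rate falls with the fraction of photons that survive the run. A worked example, assuming a typical telecom-fiber attenuation of about 0.2 decibels per kilometer (a standard industry figure, not one given by the researchers):

```python
# Fraction of photons surviving a fiber run, assuming a typical telecom
# attenuation of 0.2 dB/km (an assumed figure, not one from the article).
LOSS_DB_PER_KM = 0.2

def surviving_fraction(km):
    return 10 ** (-LOSS_DB_PER_KM * km / 10)

for km in (10, 50, 100, 200):
    print(f"{km:>3} km: {surviving_fraction(km):.2%} of photons arrive")
```

At 100 kilometers only about 1 percent of photons arrive; double the distance and that falls to one in ten thousand, which is why repeater-free QKD stalls at roughly the range mentioned above.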

Christopher Yoo Thinks Net Neutrality's End Might Not Be So Bad

Net neutrality's principle of treating all Internet traffic equally may no longer hold under a new U.S. regulatory proposal for broadband providers. But one law and technology expert doesn't think that the sky is falling just because Comcast or Verizon could charge Internet content providers extra for faster delivery of Internet services.
