A new, stretchable transparent touch pad can be used to write words and play electronic games, and it may even one day be implanted inside the body, its inventors say.
Touch pads and touch screens are on nearly every smart device these days, but they can’t go on anything flexible, such as the human body. Scientists have explored stretchable touch panels based on carbon nanotubes, metal nanowires, and other advanced materials, but the performance of these panels fell off sharply when they were stretched. Just as bad, they also fell apart over time under repeated stretching.
To overcome these problems, scientists at Seoul National University created a touch pad made of the same kind of soft and very stretchable hydrogel used to make soft contact lenses. The hydrogel involved contains lithium chloride salts, which are electrically conductive and help the hydrogel hold onto the water it needs to stay soft.
Istvan’s popularity is likely due to a combination of his quirky campaign style (he drives around in a bus painted to resemble a coffin with “Science vs. The Coffin” written above the bumper) and an unconventional platform that pushes for gene editing, human life extension, and morphological freedom (the right to do anything to your body so long as it doesn’t harm others). As a broader movement, transhumanism focuses on leveraging science and technology toward the ultimate goal of overcoming death, largely through as-yet-unproven methods such as mind uploading, in which a person’s entire consciousness would be transferred to a digital system or machine.
After being called out by Georgia Power, the utility that delivers electricity to its Atlanta hub, Delta Airlines finally came clean. It admitted that the crash of its computer network at 2:30 a.m. EDT on Monday, 8 August, had nothing whatsoever to do with the power company. Georgia Power had come forward and confirmed that none of its other customers in the area had experienced a power outage.
It took 70 years before the existence of Bose-Einstein condensates (BECs), so-called superatoms, could be proven in the lab. BECs are collections of particles called bosons (the photon is one example) that have integer spin and can occupy the same space in the same quantum state, forming a condensate that behaves like a single atom. At first, these condensates, first predicted by Satyendra Nath Bose and Albert Einstein in the mid-1920s, could be obtained only by cooling collections of suitable particles to temperatures close to absolute zero. Room-temperature BECs appeared in the lab only in the past couple of years. More recently, researchers have been looking to turn these superatoms from a laboratory curiosity into lasers and other practical devices, with mixed results.
Among these attempts at a practical BEC are those composed of exciton polaritons. These quasiparticles are amalgams of photons and electron-hole pairs (excitons), and they can carry information both in the form of optical polarization, a property of the photon, and in the form of spin, associated with the exciton. However, what has made polaritons impractical is that their spin could be controlled only by light or by strong magnetic fields.
In yesterday’s issue of Nature Materials, a team of researchers led by physicist Jeremy Baumberg at the University of Cambridge reports that it could use low-energy voltage pulses to read and write data in a BEC formed by polaritons trapped in several thin layers of semiconductor material.
If you had plans to travel on Delta Airlines today or tomorrow, you had plans. At this point, you might want to consider another carrier, a rental car, a bus, or even a bicycle with a basket on the handlebars. That’s because, according to Delta, a power outage that wreaked havoc on its hub in Atlanta brought down the airline’s computer system. And one expert says it’s just the latest data point in a string of airline industry IT problems.
The mission: to detect and patch as many software flaws as possible. The competitors: seven dueling supercomputers about the size of large vending machines, each emblazoned with a name like Jima or Crspy, and programmed by expert hacker teams to autonomously find and fix malicious bugs.
These seven “Cyber Reasoning Systems” took the stage on Thursday for DARPA’s Cyber Grand Challenge at the Paris Hotel and Conference Center in Las Vegas, Nev. They were competing for a $2 million grand prize in the world’s first fully autonomous “Capture the Flag” tournament. After eight hours of grueling bot-on-bot competition, DARPA declared a system named Mayhem, built by Pittsburgh, Pa.-based ForAllSecure and led by David Brumley, the unofficial winner. Xandra, produced by TECHx from GrammaTech and the University of Virginia, placed second to earn a $1 million prize; and Mechanical Phish, by Shellphish, a student-led team from Santa Barbara, Calif., took third place, worth $750,000.
DARPA is verifying the results and will announce the official positions on Friday. The triumphant bot will then compete against human hackers in a “Capture the Flag” tournament at the annual DEF CON security conference. Though no one expects one of these reasoning systems to win that challenge, it could solve some types of bugs more quickly than human teams.
DARPA hopes the competition will pay off by bringing researchers closer to developing software repair bots that could constantly scan systems for flaws and patch them much faster and more effectively than human teams can. DARPA says quickly fixing such flaws across billions of lines of code is critically important: it could help harden infrastructure such as power lines and water treatment plants against cyberattacks, and protect privacy as more personal devices come online.
But no such system has ever been available on the market. Instead, teams of security specialists constantly scan code for potential problems. On average, it takes specialists 312 days to discover a software vulnerability, and often months or years to actually fix it, according to DARPA CGC host Hakeem Oluseyi.
“A final goal of all this is scalability,” says Michael Stevenson, Mission Manager for the Deep Red team from Raytheon. “If [the bots] discover something in one part of the network, these are the technologies that can quickly reach out and patch that vulnerability throughout that network.” The original 2005 DARPA Grand Challenge jumpstarted corporate and academic interest in autonomous cars.
The teams were not told what types of defects their systems would encounter in the finale, so their bots had to reverse engineer DARPA’s challenge software, identify potential bugs, run tests to verify those bugs, and then apply patches that wouldn’t cause the software to run slowly or shut down altogether.
To test the limits of these Cyber Reasoning Systems, DARPA planted software bugs that were simplified versions of famous malware such as the Morris worm and the Heartbleed bug. Scores were based on how quickly and effectively the bots deployed patches and verified competitors’ patches, and bots lost points if their patches slowed down the software. “If you fix the bug but it takes 10 hours to run something that should have taken 5 minutes, that's not really useful,” explains Corbin Souffrant, a Raytheon cyber engineer.
Members of the Deep Red team described how their system accomplished this in five basic steps: First, their machine (named Rubeus) used a technique called fuzzing to overload the program with data and cause it to crash. Then, it scanned the crash results to identify potential flaws in the program’s code. Next, it verified these flaws and looked for potential patches in a database of known bugs and appropriate fixes. It chose a patch from this repository and applied it, and then analyzed the results to see whether the fix worked. For each patch, the system used artificial intelligence to compare its solution with the results and determine how it should patch similar bugs in the future.
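The fuzz-crash-triage-patch loop described above can be sketched in a few lines of Python. This is a hypothetical toy, assuming a stand-in target program and an invented database of known fixes; it is not Deep Red’s actual code, which operated on compiled binaries rather than Python functions.

```python
import random
import string

# Toy illustration of the fuzz -> triage -> patch loop. The target
# program and the "known fixes" database are invented for this sketch.
KNOWN_FIXES = {"oversized input": "truncate input to buffer size"}

def target(data: str) -> None:
    """Stand-in for the program under test: crashes on long input."""
    if len(data) > 64:
        raise ValueError("oversized input")

def fuzz(trials: int = 1000):
    """Step 1: throw random data at the target; step 2: record crashes."""
    crashes = []
    for _ in range(trials):
        data = "".join(random.choices(string.printable,
                                      k=random.randint(1, 128)))
        try:
            target(data)
        except ValueError as exc:
            crashes.append((data, str(exc)))
    return crashes

def triage_and_patch(crashes):
    """Steps 3-4: match each crash against known bugs and pick fixes."""
    patches = set()
    for _, reason in crashes:
        if reason in KNOWN_FIXES:
            patches.add(KNOWN_FIXES[reason])
    return patches

patches = triage_and_patch(fuzz())
```

A real cyber reasoning system adds the fifth step, feeding the outcome of each applied patch back into the selection model, and must also verify that the patched program still runs at full speed, per the scoring rules described below.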
During the live competition, some bugs proved more difficult for the machines to handle than others. Several machines found and patched an SQL Slammer-like vulnerability within 5 minutes, garnering applause. But only two teams managed to repair an imitation crackaddr bug in SendMail. And one bot, Xandra by the TECHx team, found a bug that the organizers hadn’t even intended to create.
Whether the competitors are humans or machines, it’s always nice to see the vanquished exhibit good sportsmanship in the face of a loss. As the night wound down, Mechanical Phish politely congratulated Mayhem on its first-place finish over the bots’ Twitter accounts.
This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.
Light detection and ranging, or lidar, is a sensing technology based on laser light. It’s similar to radar but can have a much higher resolution, since the wavelength of light is about 100,000 times smaller than radio wavelengths. For robots, this matters: Because radar cannot accurately image small features, a robot equipped with only a radar module would have a hard time grasping a complex object. At the moment, the primary applications of lidar are autonomous vehicles and robotics, but it is also used for terrain and ocean mapping and on UAVs. Lidar systems are integral to almost all autonomous vehicles and many other robots that operate autonomously in commercial or industrial environments.
A lidar system measures both the distance and the direction to each point it illuminates, which allows it to build a full 3D model of the world around the sensor. The basic method of operation is to transmit a beam of light and then measure the returning signal when the light reflects off an object. The time the reflected signal takes to come back to the lidar module provides a direct measurement of the distance to the object. Additional information about the object, such as its velocity or material composition, can be determined by measuring certain properties of the reflected signal, such as the induced Doppler shift. Finally, by steering the transmitted beam, the system can measure many different points in the environment and assemble them into a 3D model.
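The time-of-flight and Doppler relationships above reduce to two one-line formulas, sketched here in Python. The numerical inputs (a 667 ns round trip, a 1550 nm laser, a 12.9 MHz shift) are illustrative values chosen for this example, not figures from any particular lidar product.

```python
# Back-of-the-envelope lidar math: distance from round-trip time of
# flight, and radial velocity from the Doppler shift of the return.

C = 299_792_458.0  # speed of light, m/s

def distance_from_tof(round_trip_s: float) -> float:
    """Light travels to the target and back, hence the divide-by-2."""
    return C * round_trip_s / 2.0

def velocity_from_doppler(shift_hz: float, wavelength_m: float) -> float:
    """Radial velocity of the target; the factor of 2 again reflects
    the round trip (the shift accumulates on the way out and back)."""
    return shift_hz * wavelength_m / 2.0

# A return arriving 667 ns after emission puts the object about 100 m away.
d = distance_from_tof(667e-9)

# A 1550 nm lidar seeing a 12.9 MHz shift implies roughly 10 m/s of
# closing speed.
v = velocity_from_doppler(12.9e6, 1550e-9)
```

The same divide-by-2 convention appears in both formulas because the beam traverses the sensor-to-target path twice; forgetting it is a classic source of doubled range estimates.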
Most lidar systems—like the ones commonly seen on autonomous vehicles—use discrete free-space optical components like lasers, lenses, and external receivers. In order to have a useful field of view, this laser/receiver module is mechanically spun around, often while being oscillated up and down. This mechanical apparatus limits the scan rate of the lidar system while increasing both size and complexity, leading to concerns about long-term reliability, especially in harsh environments. Today, commercially available high-end lidar systems range from $1,000 to upwards of $70,000, which limits their use in cost-sensitive applications.
Applications such as autonomous vehicles and robotics heavily depend on lidar, and an expensive lidar module is a major obstacle to their use in commercial products. Our work at MIT’s Photonic Microsystems Group is trying to take these large, expensive, mechanical lidar systems and integrate them on a microchip that can be mass produced in commercial CMOS foundries.
Whether or not you understand the recent drive to fill the world around you with obnoxious animated characters that you can only see as long as you hold your phone up in front of your face at all times, augmented reality does have the potential to enhance our world in ways that are occasionally useful. However, the AR experience is currently a sterile one, with augmentations overlaid on top of, but not really a part of, the underlying reality.
A pair of security researchers recently uncovered a Nigerian scammer ring that they say operates a new kind of attack called “wire-wire” after a few of its members accidentally infected themselves with their own malware. Over the past several months, they’ve watched from a virtual front row seat as members used this technique to steal hundreds of thousands of dollars from small and medium-size businesses worldwide.
“We've gotten unprecedented insight into the very nitty-gritty mechanics of their entire operation,” says James Bettke, a researcher at SecureWorks, a subsidiary of Dell focused on cybersecurity. Bettke and Joe Stewart, who directs malware research for SecureWorks, are presenting the details of their findings this week at the annual Black Hat security conference in Las Vegas.
Yesterday afternoon, BitFinex, a Bitcoin exchange in Hong Kong, disabled its customer deposit and withdrawal functions and replaced the trading engine on its website with a notification of a major security breach. Later in the day, Zane Tackett, the company’s director of community and product development, took to Reddit (under the username “zanetackett”) to confirm that an attack had occurred and that nearly 120,000 bitcoins had been stolen from individual customer accounts.
This latest hack, which amounts to a loss of around US $72 million, is the biggest plundering of a Bitcoin exchange since 2014 when 850,000 bitcoins disappeared from the books during Mark Karpeles’s tenure as CEO of Mt. Gox. As was the case in 2014, the value of the currency is now crashing. The market price of bitcoin, which had begun to steadily increase at the beginning of the summer, fell 15 percent on news of the BitFinex hack.