Bitcoin is heading into a wholly avoidable crisis, according to one camp of developers. It is being forced to evolve, according to the other. How it’s all happening is as much at issue as whether it will all work out.
Perhaps you’ve heard that Bitcoin is forking. In fact, a fork is only one possible outcome of the current situation: A faction of the core development team has splintered off, proposed a new and controversial version of Bitcoin, and is now standing back to see whether people will adopt it. That’s dangerous because if these developers have their way, the Bitcoin blockchain would bifurcate into two competing, incompatible chains, and thus two distinct currencies. And if a split like this does not happen in a clean, organized fashion, it could cause chaos for every participant in the Bitcoin network.
(To get up to speed on how the Bitcoin blockchain works, watch this video. We’ll wait right here until you’re done.)
Optogenetics is a marvel of our age, enabling neuroscientists to turn brain cells on and off with pulses of light. But until now there’s been an obvious difficulty: How do you deliver that light to brain cells that are tucked inside an animal’s skull?
Today we get the best answer yet, from the Stanford lab of Ada Poon. She and her colleagues have invented a tiny, wireless LED device that can be fully implanted beneath the skin of a mouse. The device lets researchers turn on the light and stimulate neurons when the mouse is scampering around, behaving more or less normally. This system, described today in the journal Nature Methods, seems a big improvement over previous technology, which used wires or bulky head-mounted devices to activate the light switch.
Here’s a quick optogenetics primer, in case you need it. The technique makes use of neurons that have been genetically altered to respond to light, often with the introduction of genes from a strain of green algae. Researchers can control which part of a mouse brain contains these light-sensitive neurons, and they can then study the function of that brain region by activating the neurons—essentially turning them on and off—while watching the animal’s behavior. Using this method, scientists can learn about basic brain anatomy or study dysfunctions seen in human diseases.
The first optogenetics systems used fiber optic cables to deliver the light, which meant the mice had wires coming out of their heads and couldn’t move around much. Over the past five years, researchers have worked on wireless systems, in which a head-mounted device receives the signal to stimulate and triggers an implanted LED. However, some of these receivers are heavier than the mouse’s actual head, according to Poon’s paper, and they interfere with the animal’s freedom of movement and interactions with other mice.
The new device, consisting of a power receiving coil, IC, and LED, weighs in at 20 to 50 milligrams (a mouse’s head is approximately 2 grams). Its tiny dimensions mean it can be implanted not only in the brain, but also in the spine or the limbs, allowing researchers to experiment with optogenetic stimulation of the spinal cord and peripheral nerves. The researchers note that the implant can be built “with readily available components and tools,” and they express hope that it will be quickly adopted by the scientific community.
In Poon’s system, the mouse is placed in a small chamber called a “resonant cavity.” The implant has a power receiving coil that extracts RF energy as it resonates with the electromagnetic field inside the chamber. Because the mouse receives the same amount of power at all locations within that space, there’s no need for tracking systems that accompany directional antennae.
Once the researchers built this system, they performed three proof-of-concept experiments to show off its ability to stimulate the brain, spinal cord, or peripheral nerves. In the brain, they stimulated the right premotor cortex, causing the mice to walk in circles around the chamber. In the spine, they demonstrated that stimulating neurons toward the top of the spinal cord affected the activity of neurons farther down the line.
Finally, for the peripheral nervous system, the researchers implanted the LED devices in the hind limbs of mice to stimulate pain-sensing neurons in the limbs. They placed the mice in a two-chamber setup, where the mice could freely move between the resonant chamber where the stimulator could be turned on, and an adjoining “safe” chamber. When the light was on, the mice showed a marked preference for the room where their legs wouldn’t hurt.
The point of these experiments was not to hurt mice, of course, but to demonstrate that neuroscience has a cool new tool, and a new way to illuminate the mysteries of the brain.
When you start talking about big splashy space exploration plans—say sending the first humans to Mars on a private mission supported by a reality TV deal, eager (theoretical) billionaires, and burials in space—things can get surreal pretty fast.
So it was last week, when Bas Lansdorp, CEO of Mars One, set out to debate two MIT aerospace engineers on what should have been a simple question: Is the company’s plan to put humans on Mars feasible? By the end, it was not at all clear how Mars One defines the word “plan,” or why, after publicly admitting it won’t stick to the schedule outlined on its website, the company has been so specific about timelines and budgets.
In case you haven’t been following the saga, Mars One is a private campaign to send volunteers on one-way trips to the Red Planet, where they will live out the rest of their lives in a permanent settlement and send video dispatches of their activities back to Earth. Announced in 2012, the company has very publicly hunted for volunteers for this mission and recently whittled down that list to 100 candidates.
The Mars One plan begins with robotic missions to help set up the habitat and deliver supplies to the Red Planet. The first crew of four would arrive on Mars in 2027 (originally pegged for 2023). Additional four-person crews would follow every two years after that. To sustain this growing settlement, the mission would likely rely on a mix of supply missions and in situ resource utilization, baking Martian soil to extract water and oxygen and pulling nitrogen from the Martian atmosphere to add to the settlement’s supply.
Skepticism has been a running theme since Mars One was announced. In 2012, Wired gave the company’s plan a plausibility rating of 2 out of 10. And last year, researchers from MIT’s department of aeronautics and astronautics performed an independent technical analysis of the Mars One plan. They found multiple problems. For one thing, spare parts would take up an increasingly large fraction of the available launch mass. And if the astronauts grew all their own food, the plants could create unsafe oxygen levels and rapidly deplete the habitat’s nitrogen gas supplies within a couple of months. News stories led with the suffocation angle. Mars One didn’t react well to the criticism; one team member called the findings “made up and fake.”
Last week’s event took place at an annual meeting of The Mars Society in Washington, DC. The debate pitted two of the MIT study authors, Sydney Do and Andrew Owens, against Lansdorp and aerospace consultant Barry Finger of Paragon Space Development Corporation, which recently conducted a study (pdf) of Mars One’s life support needs.
The company has said it will need $6 billion to get the first humans to Mars by 2027. Do and Owens focused their analysis on those numbers, asking whether the company’s plan could be accomplished by that date with that amount of money, in adherence to the “iron triangle” of project management: scope, schedule, and cost.
Mars One claims the major technology needed to accomplish the company’s plan already exists. But Do and Owens laid out a rather daunting list of things that still need to be developed for the mission to succeed. The company will have to land masses at least twice as heavy as NASA’s Curiosity rover, the heaviest object yet landed on the surface of the planet. The crew’s habitat must have life support that can survive the 26 months between resupply missions, a level of endurance they say is 23 months beyond that of the International Space Station’s systems. And the robotic spacecraft that will arrive before humans must have an unprecedented level of capability: Mars One aims to use an intelligent rover to set up the habitat for the humans to follow. “Right now we can’t do this on Earth, and this is expected to be done on Mars,” Do said.
Here’s the slide showing the team’s comparison between the Mars One plan and the first 8 years of the Apollo program; their full presentation can be found here.
In response to these criticisms, Lansdorp said that “Mars One’s goal is not to send humans to Mars in 2027 with a $6 billion budget and 14 launches. Our goal is to send humans to Mars, period.” Then, more cryptically, he added, “For that reason I actually consider the study that Andrew and Sydney did a confirmation of Mars One’s plan.”
Lansdorp’s presentation contained a single slide, showing how the concept for Apollo launch vehicles changed over time.
“We’re not going to do, I think, the current design of the mission,” Lansdorp said. He noted that the organization’s plans were based on preliminary work and would change with additional study findings. As an example, Lansdorp cited the recent Paragon study, which found that the mass of the life support system would be higher than expected.
The company is currently seeking $15 million to finance the buildup of its team and commission additional studies—in particular one by Lockheed Martin on the entry, descent, and landing stages of the mission. He added that it’s not impossible that a billionaire might call up and offer to finance the whole endeavor, which would speed the work along.
It was hard not to come away from this debate thinking the two sides were talking at right angles to one another. Owens and Do took Mars One’s numbers seriously in their analysis. Lansdorp seems to consider the company’s cost estimates and launch dates as notional, or aspirational, figures.
In discussions with others at the meeting, I’d wondered aloud what repeated delays might do to the image of the company. But Do voiced an even bigger concern after the debate: if Mars One deflates, what will happen when the next plan to go to Mars comes along? Even if the new effort is deemed technically sound and eminently accomplishable, will anyone pay it any mind?
Apple’s Siri and Microsoft’s Xbox video game consoles still sometimes struggle to hear their owners in a noisy room. A 3-D printed sensor prototype could solve that problem by giving electronic devices the sensitivity to pick out a single voice or sound.
Ever since our first experience with a prototype of the Oculus Rift, we’ve been getting more and more excited about high quality consumer virtual reality hardware. The first production version of the Rift is almost here, and when it arrives (probably in early 2016), you might even be able to justify its rumored $1,500 cost.
Good as the Rift is (and it’s very, very good), it’s taken this long for Oculus to get the hardware ready because fooling your eyes and brain to the extent that the Rift (or any other piece of VR hardware) does is a very tricky thing to pull off. The vast majority of us have an entire lifetime of experience of looking at the world in 3-D, and we notice immediately when things aren’t quite right. This can lead to headaches, nausea, and a general desire never to try VR ever again. A big part of what makes VR so difficult is that it’s continually trying to convince your eyes that they should be focusing on a scene in the distance, when really, they’re looking at a screen just a few inches away.
The current generation of VR displays use a few different techniques to artificially generate images that appear to have depth despite being displayed on flat screens. But there’s one that they're missing out on—one that could make VR displays much more comfortable to use. The same sort of 4-D light field technology that allows the Lytro camera to work its magic could solve this problem for VR as well.
Interconnections in powerful computers and linking "blades" in data centers will increasingly rely on optical communication links. Currently, this still requires an individual laser with individual control circuitry for each channel. Now researchers at Purdue University have developed a new technology that allows a single laser to transmit data over a number of individually controlled channels, at different frequencies, simultaneously. They published this research online in the 10 August edition of the journal Nature Photonics.
The key component of this technology is a tiny microresonator: a 100-micrometer-wide optical waveguide loop, or microring, made from silicon nitride. Because it is as thin as a sheet of paper, it can easily be integrated on silicon chips. The microresonator replaces an entire tabletop studded with the full complement of optical components and resonators now required to create a mode-locked laser.
In the experimental setup, a pump laser is connected to the resonator. The researchers pump the resonator with a continuous-wave laser at one frequency, explains Minghao Qi, an associate professor of electrical and computer engineering at Purdue. The resonator, though small, can hold a huge amount of power, which leads to non-linear interaction. “Normally, if we pump anything into the resonator, and the interaction is linear, the input and output frequencies are the same,” says Qi. “When the interaction is non-linear, it basically generates higher-order harmonics—new frequencies.”
Qi adds that, because the spacing between the different frequency peaks is the same, the resonator is called a frequency comb. The frequencies can be tuned by changing the resonance frequency of the resonator. This is achieved with an electric heater, a tiny gold wire overlaying the resonator. Changing the temperature changes the resonator’s refractive index, which in turn changes the resonance frequency.
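To make the comb picture concrete, here is a back-of-the-envelope sketch: comb lines sit at the pump frequency plus integer multiples of the line spacing, and heating shifts every line together. The pump wavelength, spacing, and thermal shift below are assumed round numbers for illustration, not figures from the Purdue paper.

```python
# Illustrative comb-line arithmetic; all numbers here are assumptions.
C = 299_792_458            # speed of light, m/s
pump_wavelength = 1.55e-6  # telecom-band pump, meters (assumed)
fsr = 200e9                # comb line spacing, Hz (assumed)

pump_freq = C / pump_wavelength
comb = [pump_freq + n * fsr for n in range(-3, 4)]  # seven lines around the pump

# Thermal tuning: shifting the resonance shifts every comb line together.
thermal_shift = 1e9  # 1 GHz shift from the on-chip heater (assumed)
tuned = [f + thermal_shift for f in comb]

spacing = comb[1] - comb[0]
print(f"line spacing: {spacing / 1e9:.0f} GHz")
```

The point of the sketch is the structure, not the numbers: equally spaced lines, all riding on a single resonance that one heater can move.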
While the experimental setup works well with discrete light pulses, the researchers also noted the presence of “dark pulses,” or very short intervals where no light is transmitted. These intervals can occur every one or two picoseconds, which is a hundred times faster than the switching speed of the most advanced microprocessors. “The advantage of a dark pulse is that this can be repeatedly generated and that means it is very reliable and we can control it. If you want a bright pulse, then it is a very tricky process,” says Qi.
The Purdue researchers also showed that dark pulses can be converted into bright pulses. “So by creating a dark pulse first, you have a process that is robust and controllable,” says Qi.
Besides facilitating high-volume optical communications in computers, microresonators could also be used in optical sensors and in spectroscopy. If you want to probe a compound at many different wavelengths, you can use a tunable laser to excite the molecule at each of them. With conventional lasers, you have to tune the laser to a different frequency for every measurement, which takes time. What’s more, tunable lasers are expensive, explains Qi. But with the Purdue team’s improved laser, “If your probe light itself has many, many frequencies, you are basically doing a spectral scanning, with all the frequencies in one shot,” Qi adds.
For the moment, the scientists have yet to put the microresonator on a chip with all the other components. “This will be our next step,” says Qi.
National Football League (NFL) playbooks are the size of telephone books. They’re filled with dozens and dozens of plays, each designed so that a team can play to its strengths while taking advantage of its opponents’ weaknesses. Despite the endless variations, they all basically boil down to two options for the offense: pass or run. No matter how intricately designed an offensive play is, if the defense can sniff out whether the ball will be tossed down field or toted along the ground, it gains a tremendous advantage. (Yes, we know that teams punt and kick field goals and extra points after touchdowns. But we’re not talking about that right now.)
William Burton, an undergraduate majoring in industrial engineering and minoring in statistics, and Michael Dickey, who graduated in May with a degree in statistics, used a listing of actual NFL offensive plays from the 2000 through 2014 seasons, compiled by a company called Armchair Analysis, to figure out the ratio of passes to runs. They showed empirically what fans already understood anecdotally: the aerial attack is being used ever more frequently. Pass plays were called 56.7 percent of the time in 2014, compared with 54.4 percent in 2000.
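The ratio itself is straightforward to compute from any play-by-play listing. Here is a minimal sketch on a toy dataset; the field names are invented for illustration, and the real Armchair Analysis data uses its own schema.

```python
# Toy play-by-play records; "season" and "type" are invented field names.
plays = [
    {"season": 2014, "type": "pass"},
    {"season": 2014, "type": "pass"},
    {"season": 2014, "type": "run"},
    {"season": 2000, "type": "run"},
    {"season": 2000, "type": "pass"},
]

def pass_rate(plays, season):
    """Fraction of a season's offensive plays that were passes."""
    season_plays = [p for p in plays if p["season"] == season]
    passes = sum(p["type"] == "pass" for p in season_plays)
    return passes / len(season_plays)

print(f"2014 pass rate: {pass_rate(plays, 2014):.1%}")
```

On the full fifteen-season dataset, the same per-season tally yields the 54.4-to-56.7 percent trend the authors report.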
But what makes a team decide whether to run or throw? Burton and Dickey looked at a host of factors that affect a team's play selection. Among these are: the distance to the first-down marker, whether it’s first, second, third or fourth down, how much time is left on the game clock, the team’s score in relation to its opponent’s, and field position. For example, there’s a high probability that the coach will opt for a passing play if the other team is leading by three points, there’s a minute left in the fourth quarter, the offense is facing third down at its own 30-yard line, and needs to advance 7 yards to pick up a fresh set of downs. On the other hand, a team that’s leading by 7 points, facing the same down and distance at the same point in the game, might very likely run the ball (to avoid an interception and to take time off the clock so the other team can’t mount a score-tying drive before time runs out).
For their system, Burton and Dickey developed logistic regression models—methods used, for example, to predict whether someone will default on a mortgage—and random forest models, a machine-learning method. But they quickly realized that teams’ strategies differ significantly in each of a game’s quarters. To account for that, they produced six separate logistic regression models: one each for the first, second, and third quarters, plus one for the fourth quarter if the offensive team is winning, another if it is losing, and a third for when the score is tied. They tested their models on 20 randomly selected games. Overall, the models accurately predicted pass or run on 75 percent of downs. The models’ best performance came in a 2014 game between the Jacksonville Jaguars and Dallas Cowboys: their predictions proved correct on 109 of 119 offensive plays, a 91.6 percent accuracy rate.
Burton and Dickey say that anyone, including NFL coaches and fans rooting for their teams at home, can use the tool to make educated guesses about what will happen each time the ball is snapped.
The complicated mess of code in image, voice, video, and even electrocardiogram data provides the perfect carrier for hidden messages. At the Network Security Group at Warsaw University of Technology, in Poland, Wojciech Mazurczyk disguises data the same way cybercriminals do in order to beat them at their own game.
The Space Shuttle was originally intended to make getting to space easy, inexpensive, and routine, with an initial goal of a launch nearly every week. It didn't quite play out that way, and we’re now back to tossing things into orbit on top of massively expensive rockets that are good for only a single one-way trip. It’s a system that works (most of the time), but it's not a system that’s efficient.
IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.