Tech Talk

BAE Systems, UK Gov't Invest $120 Million in Skylon Space Plane Engine Prototype

Part of the reason that getting to space is so stupendously expensive is that we go about it very inefficiently. We use rockets, which spend the vast majority of their thrust to lift their own fuel and oxidizer—neither of which we care very much about, at least not as far as the end product of getting mass into orbit is concerned. Airplanes, on the other hand, are very efficient, because they take advantage of air, which helpfully provides both lift and as much oxidizer as an engine can suck down.

For the last quarter century or so, a British company called Reaction Engines has been making slow, steady progress toward a hybrid system that has the potential to bind aircraft and spacecraft together. Reaction’s Synergetic Air-Breathing Rocket Engine (SABRE) could power a safe, efficient, and very cool-looking single-stage-to-orbit vehicle. It’s an enormous technical challenge, but one that may now be realistically solvable thanks to massive new investment from BAE Systems and the British government. BAE just ponied up £20.6 million for a 20-percent stake in the company. And with £60 million in grants coming from the UK government, the company thinks it should have the resources it needs to stage a full-scale ground test of SABRE by 2020 and unmanned test flights around 2025.


The Mt. Gox Bitcoin Debacle: An Update

This story was corrected on 4 November. Due to an editing error, Kraken’s response to our request for an interview was omitted.

More than 18 months after the MtGox bitcoin exchange filed for bankruptcy in February 2014, little is still known about what happened to the 850,000 missing bitcoins. The now-defunct Tokyo-based company claimed hacker malleability attacks—illicit alterations of transaction ID numbers—were responsible for the disappearance. MtGox users who traded the virtual currency for fiat money suspected fraud. Whatever the reason, the fallout appears to have been a financial calamity for Bitcoin investors, with the value of a bitcoin dropping from a peak of over $1,000 before the exchange’s collapse to around $232 today.

Although investigators remain tight lipped about their findings, Tokyo Metropolitan police took Mark Karpeles, the CEO of MtGox, into custody in August on charges of manipulating company accounts and stealing from exchange users. Then on 11 September prosecutors issued a warrant for his arrest, accusing him of embezzling US $2.7 million of clients’ money. Karpeles, 30, a French national, has reportedly denied wrongdoing.

Yet these charges cover only a tiny fraction of the 850,000 missing bitcoins, worth around $200 million at today’s exchange rate, or about half a billion dollars at the time of the MtGox collapse. So the wait to hear what really occurred continues.

“It is only natural for law enforcement, trustee and the forensics team not to give reports when there is an ongoing criminal investigation,” says Pauline Reich, director of Asia-Pacific Cyberlaw, Cybercrime and Internet Security Research Institute in Tokyo. “It will take time. Patience is needed.”

Investors had hopes raised for a quicker explanation when Kraken Bitcoin Exchange, a leading San Francisco-based exchange, was selected last November by the trustee to help the investigation and aid in the distribution of MtGox’s remaining assets to creditors. So far, though, Kraken has remained silent and refused to comment for this story. In September Kraken said it would not release any additional information.

One entity not happy to wait for answers is WizSec, a bitcoin security firm established last year in Tokyo by three former MtGox bitcoin investors. The company began its own independent investigation in spring 2014, drawing on leaked MtGox transaction data published online by hackers, other non-public leaked sources, and interviews with former MtGox staff and others connected with the company.

Kim Nilsson, head of WizSec, spoke to the foreign press in Tokyo on 14 September and shed some light on the difficulties the authorities are facing, though he pointed out that because a substantial portion of his sources are unverifiable leaked data, he could not claim them to be one hundred percent reliable. However, he believes they likely give a good indication of the state of MtGox customer accounts at the time.

“MtGox had very bad accounting to the point where it might have been non-existent,” said Nilsson. “This has left the case full of holes, which the police will have to extrapolate to fill.”

A major problem, he said, was that clients’ bank accounts and company accounts had been commingled, at least early on after the company’s launch in 2010. “So company funds and clients’ deposits were stored in a single account and used for company expenses.”

WizSec has published two reports on its findings, the latest this February. According to the report’s executive summary:

 Most or all of the missing bitcoins were stolen straight out of the MtGox hot wallet over time, beginning in late 2011. As a result, MtGox was technically insolvent for years (knowingly or not) and was practically depleted of bitcoins by 2013.

Christian Decker of the Swiss Federal Institute of Technology Zurich, co-author with colleague Roger Wattenhofer of the study Bitcoin Transaction Malleability and MtGox [pdf], disagrees.

“While it’s possible that at the change of ownership [when Karpeles purchased the exchange around March 2011], MtGox was not completely covering its liabilities, it is very unlikely that it was missing a major part of its funds,” Decker told Spectrum. “This is backed by the fact that some of the bitcoins sold on the platform did not enter the Bitcoin economy until later, i.e., they had not been mined then and couldn’t have been stolen then.”

The malleability study also discounts MtGox’s claim that malleability attacks were responsible for the loss of 850,000 bitcoins. The study concludes “…barely 386 bitcoins could have been stolen using malleability attacks from MtGox or from other businesses.”

But there are areas where the experts are in full agreement. “The main problem with MtGox was not with the bitcoin technology, but with how the company was run,” said Nilsson. “It doesn’t matter if you use the strongest bank vault in the world if you leave the keys out.”

Reich concurs. “This is about the bookkeeping at MtGox and not about the technology.”

“The alleged theft is likely due to insecure handling of funds by MtGox in their internal systems,” says Decker. “This would have been the case even if their allegation that transaction malleability was to blame were true, since they were using faulty network nodes internally.”

As for future expectations, “I believe the technology that powers bitcoin is strong and solid and will definitely make it into the financial industry before the (bitcoin) currency itself does,” said Nilsson. And Decker notes that while Bitcoin technology is still new and experiencing growing pains, “Academia and the industry are continuously working on improving the security of systems built on top of it.”

Why Every GPS Overestimates Distance Traveled

Runners, mariners, airmen, and wilderness trekkers beware: Your global positioning system (GPS) is flattering you, telling you that you have run, sailed, flown, or walked significantly farther than you actually have. And it’s not the GPS’s fault, or yours.

Blame the statistics of measurement. Researchers at the University of Salzburg (UoS), Salzburg Forschungsgesellschaft (SFG), and the Delft University of Technology have done the math to prove that the distance measured by GPS over a straight line will, on average, exceed the actual distance traveled. They also derive a formula for predicting how big the error will be. The open-access paper was published in the International Journal of Geographical Information Science; an earlier version is available on arXiv.

GPS course calculations are subject to both interpolation error (a function of the sampling interval) and measurement error (the everyday orneriness of real-world physical systems). The Salzburg team—including first author Peter Ranacher of UoS and senior author Siegfried Reich of SFG—discovered a systematic bias in distance measurement errors.

Measurement errors have many causes. The paper specifically cites: propagation delay (atmospheric fluctuations affect the speed of the GPS signal); ephemeris error (uncertainty in the precise position of the GPS satellite); satellite clock drift; hardware error (the shortcomings of the terrestrial GPS unit); signal reflections (which can increase the length of the signal path); and unfavorable satellite geometry (available GPS satellites are too low in the sky or too close together or too few, for example).

Put them together and you have readings that scatter around the true position. The Salzburg researchers found that distances derived from position measurements with randomly distributed errors will, on average, come up longer than the actual separation between two points. There are three components to their calculation:

  • The reference distance (d₀): the actual Euclidean distance between two points
  • The variance (Var_gps): the “mean of the square minus the square of the mean” of the position error, an index of how accurate the position measurement is. Variance is the square of the standard deviation, σ²
  • The autocorrelation (C, perhaps more properly the autocovariance) of the measurement error. This can vary from a maximum of Var_gps (if the errors are closely covariant) to 0 (if they are random) to −Var_gps (if there is an inverse correlation).

The Salzburg formula for the average Overestimation of Distance (OED) is then,

OED = √(d₀² + Var_gps − C) − d₀

The variance is always positive, so whenever the autocorrelation is lower than the variance, the overestimation of distance will be positive. And the autocorrelation is generally lower than the variance.
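A quick Monte Carlo check makes the bias concrete. This is a sketch, not code from the paper: it assumes independent, zero-mean Gaussian position errors at each endpoint (so the autocorrelation C is zero) and takes Var_gps to be the total variance of the relative position error between the two fixes. The mean squared measured distance then lands on d₀² + Var_gps, and the mean measured distance comes out longer than the true separation even though the individual errors are unbiased:

```python
import math
import random

def measured_distance(d0, sigma, rng):
    """One GPS-style distance measurement between two points that are
    truly d0 apart, with independent zero-mean Gaussian position error
    (standard deviation sigma on each axis) at both endpoints."""
    ax, ay = rng.gauss(0, sigma), rng.gauss(0, sigma)
    bx, by = d0 + rng.gauss(0, sigma), rng.gauss(0, sigma)
    return math.hypot(bx - ax, by - ay)

rng = random.Random(42)
d0, sigma, trials = 5.0, 1.0, 200_000
samples = [measured_distance(d0, sigma, rng) for _ in range(trials)]

mean_d = sum(samples) / trials
mean_sq = sum(s * s for s in samples) / trials

# Independent errors mean C = 0; the relative error between the two
# fixes has variance 2*sigma**2 on each of the two axes.
var_gps = 4 * sigma**2
print(f"mean measured distance: {mean_d:.3f} m (true distance {d0} m)")
print(f"mean squared distance:  {mean_sq:.2f} (d0^2 + Var_gps = {d0**2 + var_gps:.2f})")
```

The positive skew comes from the geometry: sideways error can only lengthen the measured segment, while along-track error lengthens and shortens it symmetrically.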

The problem becomes particularly acute when the user (or the GPS) calculates the total distance traveled by adding together the lengths of multiple segments. The differences between the true and measured distance will fluctuate—sometimes short, but more often long. Because the GPS-measured distance skews long, though, the total GPS distance error will tend to grow with each added segment.
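The accumulation can be sketched under the same illustrative assumptions (independent noise on every fix, a perfectly straight path): splitting a course into more noisy segments multiplies the per-segment bias, so the total error grows roughly in proportion to the number of segments.

```python
import math
import random

def gps_path_length(n_segments, seg_len, sigma, rng):
    """Sum of measured segment lengths along a straight path whose true
    waypoints sit seg_len apart on the x-axis; every fix gets
    independent Gaussian noise with standard deviation sigma per axis."""
    fixes = [(i * seg_len + rng.gauss(0, sigma), rng.gauss(0, sigma))
             for i in range(n_segments + 1)]
    return sum(math.hypot(bx - ax, by - ay)
               for (ax, ay), (bx, by) in zip(fixes, fixes[1:]))

rng = random.Random(7)
trials = 2000
mean_len = {}
for n in (10, 100):
    mean_len[n] = sum(gps_path_length(n, 1.0, 0.3, rng)
                      for _ in range(trials)) / trials
    print(f"{n} segments of 1 m: true {n} m, mean measured {mean_len[n]:.1f} m")
```

Note that averaging more trials does not help: the per-segment errors skew long, so the bias survives averaging and simply scales with segment count.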

Not content with mere calculation, Ranacher, Reich, and their colleagues went on to test their findings experimentally. In an empty parking lot, they staked out a square course 10 m on a side, reference-marked each side at precise 1-m intervals, and set a GPS-equipped pedestrian (a volunteer, one hopes) to walk the perimeter 25 times, taking a position reading at each reference mark.

The researchers analyzed the data for segment lengths of 1 meter and 5 meters. They found that the mean GPS measurement for the 1-m reference distance was 1.2 m (σ² = 0.3) and the mean GPS measurement for the 5-m reference distance was 5.6 m (σ² = 2.0). They also ran a similar experiment with automobiles on a longer course, with similar results.

Now, that pedestrian-course error of 10 to 20 percent is exaggerated because of the low-cost GPS receiver used and the short reference distances. But it is big enough that your GPS watch could tell you you’re crossing the finish line of a 42,195-meter marathon while the real terminus is more than 400 meters ahead.

That’s not a hypothetical example. For years, runners have complained that their GPS watches and other devices mismeasured the distances they ran over supposedly verified courses, or have been surprised to set personal-record times the first time they used a GPS to measure a course. There have been a number of confident explanations. Most involved either interpolation error (measuring the distance between successive plots as a straight line, which will likely report a shorter-than-actual distance over a twisty course) or the runner’s non-optimal choice of routes (adding to the verified distance on each leg, so that the actual distance traveled is longer). Maybe they’ll like this explanation better.

The Ranacher team’s results do not mean that measuring the lengths of complex courses by GPS is futile. They point out that moment-by-moment GPS velocity measurement is not subject to the same sources of error, so that calculating distance traveled by integrating velocity should yield reasonably accurate results.
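A toy comparison illustrates why. The noise levels here are illustrative assumptions, not values from the paper: position fixes get a couple of meters of scatter, while the velocity signal gets small zero-mean noise. Summing segment lengths between noisy fixes skews long; integrating velocity does not, because zero-mean velocity errors average out rather than pushing one way.

```python
import math
import random

rng = random.Random(3)
v, dt, n, trials = 1.5, 1.0, 100, 500    # walker: 1.5 m/s, sampled for 100 s
pos_sigma, vel_sigma = 2.0, 0.1          # assumed noise levels (illustrative)
true_dist = v * dt * n                   # 150 m

pos_total = vel_total = 0.0
for _ in range(trials):
    # Distance from positions: sum segment lengths between noisy fixes.
    fixes = [(i * v * dt + rng.gauss(0, pos_sigma), rng.gauss(0, pos_sigma))
             for i in range(n + 1)]
    pos_total += sum(math.hypot(bx - ax, by - ay)
                     for (ax, ay), (bx, by) in zip(fixes, fixes[1:]))
    # Distance from velocity: integrate a noisy but zero-mean speed signal.
    vel_total += sum((v + rng.gauss(0, vel_sigma)) * dt for _ in range(n))

pos_mean = pos_total / trials
vel_mean = vel_total / trials
print(f"true distance:         {true_dist:.0f} m")
print(f"summed position fixes: {pos_mean:.0f} m (skews long)")
print(f"integrated velocity:   {vel_mean:.1f} m")
```

The exaggerated position noise relative to the 1.5-m steps is deliberate: it mimics a short sampling interval, where segment lengths are dominated by scatter.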

This post was edited on 6 November 2015 to correct the GPS-measured distances reported for the 10-meter-square course. The correct mean measurements were 1.2 m on a 1-m reference distance and 5.6 m on a 5-m reference distance. (The incorrect values in the original post were 1.02 and 5.06.)

China Plans Enormous Particle Collider

What comes after the Large Hadron Collider?

The main successor concept is the International Linear Collider (ILC), which would smash together electrons with a “center of mass energy” of up to 1 teraelectronvolt. It is currently in an advanced state of discussion among scientists, mainly from American, European, and Japanese particle physics institutes. Though the collision energies would be but a fraction of those induced by the LHC, the proposed machine would be a “Higgs factory,” performing experiments with large numbers of Higgs bosons and allowing a better understanding of the still-enigmatic particle.

But China may build its own successor system. Scientists there have reportedly completed the initial conceptual design for a much larger circular collider that would smash together protons and be housed in a tunnel twice the size of the LHC’s. Particles would ultimately collide with energies of up to 70 TeV—five times as great as those that produced Higgs particles in the LHC. They hope to complete the full conceptual design by the end of 2016.


Australians Invent Architecture for a Full-Scale Silicon Quantum Computer

It’s looking more and more like future super-powerful quantum computers will be made of the same stuff as today’s classical computers: silicon. A new study lays out the architecture for how silicon quantum computers could scale up in size and enable error correction—crucial steps toward making practical quantum computing a reality.


Flexible Sensors Measure Blood Flow Under the Skin

Today’s best medical devices for measuring blood flow require patients to first show up at a clinic or hospital, then stay very still during the imaging procedure. But an experimental sensor that clings to skin like a temporary tattoo could enable 24-hour monitoring of blood flow wherever a patient goes.


Mind-Reading with Infrared Light

An optical sensor attached to the forehead could do the work of both an EEG monitor and an MRI, allowing portable monitoring of brain activity in patients and better control of hands-free devices for the physically disabled.

That’s the hope, anyway, of Ehsan Kamrani, a research fellow at Harvard Medical School who presented the idea at the recent 2015 IEEE Photonics Conference in Virginia.

“So far there is no single device for doing brain imaging in a portable device for continuous monitoring,” he says. Instead of a brief set of readings taken in a hospital, a stroke victim or epilepsy patient could get a set of readings over hours or days as she goes about her normal life. The readings could be transferred to her smartphone, then sent to her doctor, or even alert her if another problem was imminent.


NASA Offering Patents to Startups, No Money Down

This month NASA launched a special patent deal for startups, offering licenses on any of its patents with no money down and no royalty payments for the first three years.

The program waives upfront fees and provides a streamlined process for applying for a license. It is an outgrowth of NASA’s compliance with a memorandum issued by President Obama in October 2011 to accelerate and improve the transfer of technology from government agencies to the commercial sector, according to Dan Lockney, NASA’s technology transfer program executive.

According to the license agreement, the offer is only for companies formed within the last year with the express intent of commercializing the licensed NASA technology. However, Lockney says there’s wiggle room. “This is a template, a starting point,” he says. “We are flexible.”

The program is attracting more interest than NASA expected, says Lockney, noting that in the first three days there were three million downloads of the application form. He declined to specify how many applications have been received so far. NASA will take its time evaluating each application, he says. “We want to know that [the startup] has the technical chops, and that it’s not just tying up the IP,” he says. “We also want to ensure [the startup] has the market savvy and business skills to bring the technology to market.”

By year three of the license, a startup will have to start paying royalties even if it has not yet brought the technology to market. The royalties will be “lower than industry standard, lower than government standards, and lower than NASA’s usual standard,” Lockney says.  If the company still has no product after five years, NASA will terminate the license.

The licenses are non-exclusive, and NASA may license similar rights to other companies, but that is also flexible. “We want to make sure we don’t saturate the market, so we won’t allow such a run on technology that it is a disservice to the current licensee,” explains Lockney. NASA will also consider exclusivity if the startup wants to negotiate.

In the four years since Obama’s memorandum, NASA has centralized its tech transfer program, creating a one-stop shop for all of its patents online, grouping them into 15 categories with clear, concise descriptions of each, and adding metadata to enable keyword searches. In the electrical and electronics category, for example, there is a patent for a “Graphene-Based Reversible Nano-Switch/Sensor Schottky Diode Device,” described as a microsensor that detects toxic gases or explosives.

Last year, the agency created an online software catalog of over 1,000 design tools that are available for free. “Some of it is esoteric space-specific stuff but others are very practical, like scheduling tools for large projects and design tools that have been used on everything from automobiles to musical instruments,” says Lockney.

NASA is the only federal agency that has its entire IP portfolio, including software, easily accessible in one place, he adds.

The agency hopes that the startup program will produce at least a few “home runs,” says Lockney. After all, NASA has invented technologies that, once commercialized, became part of everyday American life, he notes. Eric Fossum, a NASA scientist at the Jet Propulsion Lab, invented the CMOS image sensor that is now in every smartphone. And it was a NASA researcher who found a way to produce omega-3 and omega-6 fatty acids by growing algae, which is why our yogurt and milk now come with “added Omega 3 and 6.”

Leap Second Heads Into Fierce Debate

When Earth’s rotation gets far enough out of sync with the drumbeat of atomic time, a leap second is added to Coordinated Universal Time (UTC) and the world’s clocks count off 59, then 60, then 00 seconds.

The fix is intended to pair two very different ways of keeping time, one grounded in the unchanging world of atomic physics and the other pinned to Earth’s spin, which is slowing due to tidal friction with the Moon.

Some say the leap second is a good compromise. It’s a way to link atomic clocks to the position of the sun in the sky. Others argue it’s an inconvenience and a potential danger to modern systems. The leap second has been called “Y2K’s distant cousin” and “a crude hack to paper over the fact that planets make lousy clocks compared with quantum mechanical phenomena.”

Whatever the leap second is, it will not be ignored. Next week, its fate will come up for debate before the International Telecommunication Union’s World Radiocommunication Conference (WRC), which will run nearly an entire month, from 2 to 27 November in Geneva.

Many countries are strongly split over what to do: some favor keeping the leap second while others want it dropped from the definition of UTC. “I’m expecting difficult discussion,” says Vincent Meens of France’s National Center for Space Studies, who chairs the study group in the telecommunications union that’s responsible for the topic.

In the first week of the conference, a group will be spun off to focus on the leap second, says Brian Patten of the U.S. National Telecommunications and Information Administration. He anticipates the question won’t be resolved quickly: “I am predicting this will go all the way through the conference. There won’t be a conclusion until the last week.” 

Patten will represent a regional group of countries in the Americas that includes Canada, Mexico, and the United States. Together they’re advocating the elimination of the leap second, with a grace period to allow time for legacy hardware and software to be updated. “The most fundamental thing that we’re proposing is to stop using the leap second in UTC, as the most economically viable and simplest method to implement,” he says.

“The world has changed a lot since 1972,” when the leap second was first introduced, Patten says. At the time, the addition helped celestial navigation. Now, satellite navigation systems offer far better accuracy, Patten says. At the same time, new vulnerabilities have emerged: “There is a huge underlying infrastructure of computer networks and telecommunication systems and all these other machines all talking to each other all over the world all the time.”

Since the rotation rate of the Earth doesn’t slow at a steady, entirely predictable rate, leap seconds aren’t scheduled at regular intervals. Each time a leap second is announced, system administrators must plan ahead to ensure there is no problem. Sometimes there is: past leap seconds have caused hiccups in web services and an outage in 2012 of an airline reservation system used by Qantas. “There hasn’t been a huge disaster but there could be,” Patten says, “and we’re being proactive in trying to prevent a future problem.”
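The bookkeeping headache behind those hiccups is easy to see in code. A minimal sketch (the epoch values are the standard POSIX timestamps for those instants): POSIX time has no representation for 23:59:60, so a naive difference across the leap second inserted on 30 June 2015 comes up one second short.

```python
# POSIX epoch values for the instants bracketing the leap second
# inserted at the end of 30 June 2015 (the minute ran 59, 60, 00).
posix_before = 1435708799   # 2015-06-30 23:59:59 UTC
posix_after = 1435708800    # 2015-07-01 00:00:00 UTC

# Naive subtraction sees one second; 23:59:60 is invisible to POSIX.
naive_elapsed = posix_after - posix_before

# Two actual SI seconds elapsed between those labels: 59 -> 60 -> 00.
actual_elapsed = naive_elapsed + 1

print(f"naive: {naive_elapsed} s, actual: {actual_elapsed} s")
```

Systems that cannot ignore the discrepancy need a table of announced leap seconds and must update it every time a new one is scheduled, which is exactly the kind of manual planning administrators worry about.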

The Americas aren’t going it alone. Other regional groups have weighed in. The Asia-Pacific Telecommunity, which includes Australia, China, and Japan, advocates dropping the leap second. Two other groups advocate preserving it: one that includes Russia and a number of other formerly Soviet countries, and the Arab Spectrum Management Group.

Notably missing from the regional proposals is Europe, which could not get the needed support for a proposal to drop the leap second. “We had intense debate,” says Alexander Kühn, who chaired the conference preparatory group for the region. At the last meeting in Norway in September, 20 countries voted in favor of dropping the leap second, he says, but the U.K., Russia, and six other countries opposed the proposal, which was enough to quash it.

A key concern for Russia seems to be the impact on the country’s GLONASS satellites. Kühn says he’s consulted with an engineer who says there is logic to Russia’s argument, but that the problem could be overcome by a software fix.

The U.K., home of the meridian where the sun is at its highest at noon UTC, strongly advocates keeping the leap second. “It is our view that the technical problems associated with the insertion of leap seconds have been overstated and do not justify this radical change to the world’s time-scale,” the U.K. and several other countries state in their break-out proposal to the WRC.

Decoupling civic time from the Earth’s rotation might eventually mean—absent other changes—that the sun will reach its noon-time peak at 8 p.m. But this isn’t something that will happen any time soon: over the next hundred years, the drift between UTC and Earth-tracking time UT1 is expected to be on the order of a minute.

Still, the U.K. and its proposal co-signers advocate an approach that would keep UTC the same but make it clear that International Atomic Time, the leap-second-less time that UTC is based on, could be used when someone needs a continuous time scale.

That wouldn’t make implementing leap seconds in UTC any easier, of course, and it could complicate matters. “UTC was formulated to be the real-time and distributable reference clock for the world,” Patten says, while TAI is synthesized from many atomic clocks around the world and “is not readily available for distribution like UTC.” “It would be another information distribution problem, which turns out to be very difficult in the timekeeping area,” says Ron Beard, who chairs Working Party 7A, a group that has studied the technical issues associated with eliminating the leap second.

This isn’t the first time the question of the leap second has come before the WRC. In 2012, a decision on the leap second was postponed to allow for more study.

It’s an open question whether an agreement will be reached this time around. “Theoretically you need a majority,” Kühn, of the European group, says. “In practice it’s a common habit of the WRC that they try to reach a consensus where everyone is so-called ‘equally unhappy’.”

If those in favor of eliminating the leap second are successful, it could be the last time questions about UTC come before the diplomats of the International Telecommunication Union (ITU).

The ITU has been responsible for UTC because, early on, the time signals were primarily transmitted by radio. As metrologist Terry Quinn has noted, this is no longer the case: “These days, time is disseminated by many other means, notably by satellite navigation systems and the internet but also by optical fibres, coaxial cable as well as by many systems related to satellite communications.”

In the future, control over defining UTC could go to the International Bureau of Weights and Measures just outside Paris, which is already responsible for maintaining UTC and International Atomic Time. 

Stretchable Antenna Boosts Range for Wearable Devices

Imagine a flexible antenna attached to a sports shirt wirelessly sending health and fitness data from sensors on the body to a smartphone hundreds of meters away. Such a vision for wearable devices has proven impractical—that is, until now. A new antenna design has demonstrated that it can withstand the bending and stretching that garments endure while steadily communicating via Wi-Fi.


Tech Talk

IEEE Spectrum’s general technology blog, featuring news, analysis, and opinions about engineering, consumer electronics, and technology and society, from the editorial staff and freelance contributors.
