Tech Talk

Bell Labs Looks at Claude Shannon’s Legacy and the Future of the Information Age

As the doors of Hamming Hall swung open, a parade of tech-world notables headed inside. Internet pioneers and Silicon Valley executives crammed into the front rows, along with 16 IEEE Fellows, of which four were also IEEE Medal of Honor recipients. A Nobel laureate walked in and squeezed himself into a seat between them.

Those brilliant minds gathered at Bell Labs in Murray Hill, N.J., last Thursday to celebrate a man who also stood out for his extraordinary genius: Claude Shannon, the mathematician and electrical engineer who founded the field of information theory and whose groundbreaking ideas on computing, cryptography, and communications have had a lasting impact on the world of technology. Shannon, who worked at Bell Labs from 1941 until 1956, when he joined the MIT faculty, died in 2001 at age 84. This year marks the 100th anniversary of his birth.


IBM Puts a Quantum Processor in the Cloud

IBM announced today that it is making one of its superconducting quantum processors accessible over the Internet. Those itching to try out such hardware will be able to get hands-on experience through a new quantum computing platform—at least, the experience will be as hands-on as it can be with hardware sealed inside a remote dilution refrigerator and cooled to a fraction of a degree above absolute zero. 

With just five qubits, the chip won’t let you rapidly factor large numbers in order to break encryption. In fact, a classical simulation of this system takes less time to run, says Jay Gambetta, manager of the Theory of Quantum Computing and Information Group at IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. 
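That claim is easy to appreciate: a five-qubit register is just a vector of 2⁵ = 32 complex amplitudes, which any laptop can store and update directly. A minimal sketch of that bookkeeping (this is an illustration, not IBM’s software) puts all five qubits into superposition with Hadamard gates:

```python
import math

# A 5-qubit register has 2**5 = 32 complex amplitudes. Simulating it
# classically just means storing and updating this small vector.
N_QUBITS = 5
DIM = 2 ** N_QUBITS

def apply_hadamard(state, target):
    """Apply a Hadamard gate to one qubit of the state vector."""
    h = 1 / math.sqrt(2)
    new_state = state[:]
    for i in range(DIM):
        if not (i >> target) & 1:        # pair basis states |...0...> and |...1...>
            j = i | (1 << target)
            a, b = state[i], state[j]
            new_state[i] = h * (a + b)
            new_state[j] = h * (a - b)
    return new_state

# Start in |00000> and put every qubit into superposition.
state = [0.0] * DIM
state[0] = 1.0
for q in range(N_QUBITS):
    state = apply_hadamard(state, q)

# All 32 basis states are now equally likely: probability 1/32 each.
print(all(abs(abs(a) ** 2 - 1 / DIM) < 1e-9 for a in state))  # True
```

The cost of this approach doubles with every added qubit, which is why classical simulation stops being easy somewhere in the tens of qubits.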

But the goal of this tool, says Gambetta, “is to get people to start thinking quantum, to start thinking in terms of how a quantum computer works. Most people think quantum is hard or it’s spooky or it’s different. And yes it’s different, but it’s actually not hard.”

The portal, dubbed the IBM Quantum Experience, includes tutorials and a visual programming interface. Users can learn about algorithms and perform real quantum operations on the five-qubit chip, which is housed at the Watson facility. The aim is for the chip to be available 24-7, although it will go offline intermittently so that automated calibrations can be performed.

The chip is part of a family of superconducting quantum processors created at IBM that have their qubits arranged in lattices instead of in a line. Gambetta says this arrangement offers better connectivity between qubits for error correction, a key ingredient in the effort to scale quantum systems up to create a “universal quantum computer” that can solve certain problems far faster than conventional, classical computers.

Access to the chip will be free but not frictionless. “We don’t want a billion bots launching things and stopping real people from wanting to use it,” Gambetta says. He says that the group is developing a system that can assign coins to different types of users to grant access. 

This is not the first time that a team has offered public access to quantum computing hardware. In 2013, a group at the University of Bristol launched a site that lets users practice programming and connect to a small optical system with two qubits. The ability to command that real-world hardware is currently offline, but the Bristol team says it aims to relaunch soon with a new chip and an upgraded user interface.

An astronaut floats inside a space station with a sophisticated camera.

Review: A Beautiful Planet Takes Full Advantage of IMAX’s Switch to Digital

“There’s no way you can match what you see with your own eyes—but this is close,” says Barry “Butch” Wilmore. As part of the crew of the International Space Station, Wilmore was one of the astronauts-turned-cinematographers who captured footage for the latest IMAX documentary, A Beautiful Planet, which opens today, 29 April.

A Beautiful Planet takes full advantage of the vantage point of the ISS and its multi-windowed viewing cupola. The mesmerizing glowing veils of the aurorae, the night-time splatter of cities across the continents, and swirling storm systems fill the giant IMAX screen in 3-D, leaving a lasting impression of wonder.

The movie also includes many scenes of life on board the ISS, giving some of the best views to date of what a working day is really like in a cluttered collection of floating boxes. The moviemakers’ goal in juxtaposing the interior shots with those aimed outward toward the blue ball we call home is to reinforce A Beautiful Planet’s environmental theme. The point: Just as considerable effort is required to keep a space station habitable, effort also needs to be put into limiting ecological damage on spaceship Earth so as to keep it habitable too.

Of course, A Beautiful Planet is not the first IMAX movie shot in space. Indeed, Toni Myers, the movie’s director and producer, also helmed 2002’s Space Station 3D and 2010’s Hubble 3D. But what’s different is that A Beautiful Planet was shot entirely using digital cameras, dramatically changing the nature of the filmmaking process.

Myers explained that the earlier IMAX cameras and their film packs were so bulky that, “You got seven minutes of footage on a mission, in 30-second takes. And every take was the first and only take.” In contrast, the digital cameras allowed as many takes as were needed; footage was stored on small external hard drives sent back to Earth with returning astronauts or Dragon cargo ships. In addition, relatively low-resolution versions could be downlinked directly to the producers on the ground, allowing collaboration and consultation about how a shot might be reframed or its exposure adjusted, as well as suggestions for previously unplanned shots.

However, the moviemaking secret of A Beautiful Planet is that the astronaut filmmakers actually shot very little video of the Earth. The filmmakers realized that the digital video cameras currently available don’t have the resolution needed for the enormous IMAX screen. Instead, says James Neihouse, the film’s director of photography, digital still cameras such as the Canon EOS C500 EF were used. These captured 4K images at four frames per second in daylight, and two frames per second in the Earth’s shadow. About 250,000 still frames—totaling about 11 terabytes of raw data—were captured this way. Back on Earth, software interpolated additional in-between frames to create video files with smooth motion. The result was roughly 18 terabytes of footage that had to be edited down for the finished movie.
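The interpolation step can be pictured with a toy example. Production tools use motion-compensated interpolation, which tracks how pixels move between frames; a plain linear cross-fade is the simplest stand-in for the idea (the code below is illustrative, not the filmmakers’ pipeline):

```python
def interpolate_frames(frame_a, frame_b, n_between):
    """Generate n_between evenly spaced frames by linear blending.

    frame_a, frame_b: flat lists of pixel values (0-255).
    Real tools use motion-compensated interpolation; a plain
    cross-fade just shows the in-betweening idea.
    """
    frames = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)
        frames.append([(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)])
    return frames

# Stills shot at 4 fps need 5 extra frames between each pair to play
# back at 24 fps: 4 originals + 4*5 in-betweens = 24 frames per second.
a = [0, 100, 200]
b = [60, 40, 80]
mids = interpolate_frames(a, b, 5)
print(len(mids))   # 5
print(mids[2])     # the halfway blend: [30.0, 70.0, 140.0]
```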

Opting for digital cameras over film also allowed images to be captured with very high dynamic ranges, so that dimmer features like stars or the aurorae were not washed out by bright objects in the foreground. The result is spectacular scenes of the sort that would have been previously impossible to see on a screen without resorting to post-production editing wizardry like compositing separately shot footage together or using computer generated graphics.

The astronauts who captured the footage worked in their spare time outside their official scientific and engineering jobs. Neihouse led the team that got the astronauts up to speed on handling the cameras. He notes that the training involved breaking the astronauts of some of the videography habits that NASA had trained into them, such as ensuring that everyone in a shot was oriented with their feet pointing the right direction. This “1-g mode” is good for a press conference beamed from space, but doesn’t convey the true topsy-turvy nature of life onboard the station. Neihouse laughs that he’s “the only [director of photography] in the world who has to train their first unit how to shoot.” 

Movidius Puts Neural Network on a USB Stick

Neural networks are artificial intelligence systems that excel at interpreting images. This makes them promising for helping drones and robots navigate, or for analyzing surveillance footage. But they are typically power hungry, which has limited their use so far. Vision processor company Movidius of San Mateo, Calif., hopes to change that with a low-power chip designed to run neural networks. The neural net accelerator, called Fathom, comes on a USB stick, uses only 1 watt of power, and can run most visual neural nets.

Making a low-power system that can run computationally intensive neural networks is a challenge. Neural networks make sense of images in a way that’s analogous to the human brain. They’re built during a training period that uses millions of labeled images to train an algorithm to recognize human faces, dogs, or trees, for example. These systems analyze images in several steps or layers, first finding objects, then identifying them as, say, trees or people, then identifying a known person, then figuring out, say, if that person is angry or distressed. This typically requires a lot of power-draining data transfers.
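At bottom, each of those layers is a huge stack of multiply-accumulate operations over stored weights. A toy fully connected layer in plain Python (real vision networks use convolutional layers and millions of weights; this just shows the arithmetic, and the memory traffic, that every layer repeats):

```python
def dense_layer(inputs, weights, biases):
    """One fully connected layer: multiply-accumulate, then a ReLU.
    Every weight must be fetched from memory for each inference,
    which is why data movement dominates the power budget."""
    out = []
    for w_row, b in zip(weights, biases):
        acc = sum(x * w for x, w in zip(inputs, w_row)) + b
        out.append(max(0.0, acc))   # ReLU nonlinearity
    return out

# Two stacked layers stand in for the "several steps" a vision
# network applies: pixels -> features -> objects -> labels.
x = [0.5, -1.0, 2.0]
layer1 = dense_layer(x, [[1, 0, 1], [0, 1, 0]], [0.0, 0.5])
layer2 = dense_layer(layer1, [[1, 1]], [-1.0])
print(layer2)   # [1.5]
```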

The Fathom, which holds Movidius’s Myriad 2 chip, does things differently. The Myriad 2 uses twelve parallel processors, each with a dedicated memory bank. “We have a mindset of not pushing the clock speed,” says Cormac Brick, head of machine learning at Movidius. Instead of running one or a handful of processors harder to do calculations faster, the designers chose to use twelve running in parallel. The chip also saves power by minimizing data transfers. “We keep the data really close to where it’s being processed—sometimes moving the data can use more power than processing the data,” says Brick. This strategy is similar to those used by research groups designing mobile processors of this type—but it will be the first of its kind to market, says Brick.

The accelerator can run neural networks like GoogLeNet at 1 watt. NVIDIA’s TX1 runs on a minimum of 4 watts, by comparison, and draws 1 W even when it’s idling. The Myriad 2 idles at 0.12 W. What’s more, the Fathom doesn’t require a heat sink or any other cooling system, which is part of why it’s small enough to fit on a USB stick.

Brick says this mobile-friendly system should make it practical to run neural networks in more places. A Fathom stick loaded with the right neural net could help individual surveillance cameras flag problems—alerting a home user that his father has fallen and can’t get up, or alerting airport security staff that someone is acting aggressively—something it usually takes a human watching the footage to notice.

Other companies like Nervana Systems want to put deep learning in the cloud. For people concerned about privacy, running neural nets in a mobile system will be preferable to uploading video footage of their home into the cloud for analysis, says Brick. The speed of on-board neural nets will also help robots and drones navigate. The Fathom system could help them more quickly respond when an obstacle moves into their path, without any of the inescapable latency resulting from data being sent back and forth.

In a Movidius press release, Yann LeCun, director of AI research at Facebook, praised the device, saying, “every robot, big and small, can now have state-of-the-art vision capabilities.”

The Fathom is not aimed at the consumer who wants a less clumsy drone or a smarter home security system, but at the companies and researchers developing such products. Users will have to know something about building embedded systems. Brick says it’s compatible with the Raspberry Pi, drone kits with open APIs, and kits for security cameras and robots. It can run neural nets based on the open-source software libraries TensorFlow and Caffe. Users developing new neural nets can also use the Fathom with their personal computers to more quickly test their prototypes.

Movidius will send about 1000 of the Fathom sticks to researchers at universities and companies, and it encourages university teams who want to try one out to get in touch through its website. After distributing the neural-net-on-a-stick on a case-by-case basis for a while, the company will launch the product more broadly. It says the USB stick will sell for under $100 at launch.

Indium Tin Oxide Might Be the Material Photonics Has Been Waiting For

There are plenty of reasons why it’s useful to transfer information through photons or use light particles to carry out tasks within a system or device, speed chief among them. But in order to use photons with even greater dexterity in the future, researchers will need to control the way light behaves as it passes through a material.

One way to do this is by adjusting the material’s refractive index to cause light to travel faster or slower through it. This is a particularly good option for materials that naturally alter their refractive index according to the intensity of light to which they are exposed.

Such materials behave differently depending on whether the light passing through comes from a low-power source or a high-powered laser. These materials are known as optically nonlinear. In the world of photonics, having a higher degree of optical nonlinearity is considered an attractive trait.

Now a team led by Robert Boyd, a physicist at the University of Ottawa and the University of Rochester, has found that a transparent metal called indium tin oxide (ITO), which is often used in touchscreens and on airplane windows, can achieve a particularly high degree of optical nonlinearity—making it a good candidate for future photonics applications.


Alarming Security Defects in SS7, the Global Cellular Network—and How to Fix Them

The global network that transfers calls between mobile phone carriers has security defects that permit hackers and governments to monitor users’ locations and eavesdrop on conversations. As more reports of these activities surface, carriers are scrambling to protect customers from a few specific types of attacks.

The network, called Signaling System 7, or SS7, is a digital signaling protocol that mobile phone carriers including AT&T, T-Mobile, and Sprint use to send messages to each other about who is a subscriber, where subscribers are located, and how calls should be routed to reach them.

SS7 began as a closed network shared among a few major mobile phone carriers, but grew porous as more carriers joined. Hackers and governments can now gain access by purchasing rights from a carrier (which many are willing to provide for the right price) or infiltrating computers that already have permission.

Once they’re in, hackers and government intelligence agencies have found ways to exploit security defects to monitor users or record calls. Experts who study SS7 have found that some individuals are tracked by as many as nine entities at once. While the average citizen isn’t likely to be a target, it’s impossible for consumers to know whether or not they’re being watched.

The problem

The sheer scale of SS7 means that these flaws present a massive cybersecurity problem that could theoretically affect any mobile phone user in the world. “Technically speaking, more people use the SS7 than use the Internet,” says Cathal McDaid, chief intelligence officer at network security firm AdaptiveMobile. “It’s the majority of the world’s population.”

To inspire a solution, Karsten Nohl, a computer scientist at Security Research Labs in Berlin, has exposed several methods through which governments and hackers could conduct surveillance and monitor calls using SS7. He recently appeared on 60 Minutes to show that he could hack a cellphone provided to U.S. congressman Ted Lieu using only Lieu’s phone number (Lieu agreed to participate in the demonstration). It’s a stunt Nohl had executed before, once hacking a German senator’s phone.

In an interview with IEEE Spectrum, Nohl describes a few ways that hackers and governments that have gained access to SS7 can manipulate the network to listen to calls or track users:

1. Impersonate a network

When a customer places a call, the phone company sends digital packets of information along dedicated channels within SS7 to find the recipient. Along the way, the company receives information from other carriers about where the recipient is located and which cell tower the call should be routed through.

To make sure incoming calls can find them, phones periodically send messages to nearby towers identifying a user’s location.

Hackers can hijack this process by flooding the system with their own messages pretending to be a network that contains a specific phone. This can cause some confusion, since the original phone will continue to transmit its actual location, but hackers can usually drown out the true signals.

“Your phone only says ‘Hi’ once every six hours where we can say ‘Hi’ every minute so we can dominate that ping pong game,” Nohl says. 
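The “ping pong game” is essentially a last-writer-wins race: the network trusts whichever location update arrived most recently, so the side that sends more often dominates. A toy simulation (the message format here is invented purely for illustration):

```python
# Toy last-writer-wins location register: the network believes
# whichever update arrived most recently, so the faster sender wins.
def latest_location(updates):
    """updates: list of (timestamp_minutes, sender, claimed_location)."""
    return max(updates, key=lambda u: u[0])

# Over one day: the real phone checks in every 6 hours (360 minutes),
# while the attacker sends a forged update every minute.
phone = [(t, "phone", "real-tower") for t in range(0, 24 * 60, 360)]
attacker = [(t, "attacker", "fake-tower") for t in range(0, 24 * 60)]

# At almost any instant, the most recent update is the attacker's.
_, winner, where = latest_location(phone + attacker)
print(winner, where)   # attacker fake-tower
```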

In this way, hackers can intercept all calls destined for a certain number and send the calls through their computers first. Then, they can instruct their system to connect the call to the number the caller originally dialed. A hacker can listen in while the caller talks with the recipient, oblivious to the third party on the line.  

2. Intercept a forwarded call

Each mobile phone carrier also operates a Home Location Register, which is the primary database of information about its subscribers. Hackers can use this register to re-route requests or instructions placed by a particular phone.

For example, when a customer sets up call forwarding to send calls directly to voicemail, to a secretary, or to another phone, that transfer is coordinated through the register. The customer’s phone sends out digital packets to their carrier’s register that effectively say, “Mary would like her calls to go to this new number.”

A hacker can divert this message and insert instructions, called supplementary service codes, to again route the call to their own computers. Then, they can connect the call to the number that the caller intended to reach and record the conversation, unbeknownst to anyone else on the call.

3. Fake out CAMEL

Mobile carriers rely on a protocol called CAMEL to make sure the people using their network are real subscribers who have paid their bills. The protocol essentially manages permissions for each registered phone number, but comes with some built-in capabilities that are extremely convenient for hackers.

One such function is that when a user dials a phone number, their phone sends out a request, asking, “Is Mary permitted to call this number?” Normally, a carrier might respond via the CAMEL protocol with a simple “Yes” or “No,” (or perhaps “Yes, but only for three minutes” if a user is running low on prepaid credits).

However, CAMEL also allows carriers to basically say, “Yes, but the number Mary really wants to call is XXX-XXX-XXXX.” Such a function could come in handy if, for example, a caller forgot to dial a country code.

But it also allows hackers to pose as a carrier by sending out their own message that routes every phone call originating from a specific number through their system first. Or, as Nohl says, “We can make it so that every number you dial is us.”
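The abuse hinges on that one rewrite field in the authorization reply. A toy model of the exchange makes the contrast plain (the field names and numbers are hypothetical, for illustration only, not the actual CAMEL message format):

```python
# Toy model of the CAMEL call-authorization exchange. A legitimate
# response approves (or helpfully rewrites) the dialed number; a
# hostile one rewrites every call to pass through the attacker first.
def carrier_response(dialed):
    """Honest carrier: fix a missing country code, else approve as-is."""
    if not dialed.startswith("+"):
        return {"allow": True, "connect_to": "+1" + dialed}
    return {"allow": True, "connect_to": dialed}

def attacker_response(dialed):
    """Hostile responder: route the call through an intercept number,
    then connect it onward so the caller notices nothing."""
    return {"allow": True, "connect_to": "+00-INTERCEPT", "then_dial": dialed}

print(carrier_response("5551234")["connect_to"])     # +15551234
print(attacker_response("+15551234")["connect_to"])  # +00-INTERCEPT
```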

The solution

The growing number of attacks has captured the attention of mobile carriers and governments around the world. McDaid of AdaptiveMobile estimates that each day, an average-sized carrier that serves 1 to 5 million customers might be subject to thousands of simple attacks, and a few dozen sophisticated ones.

So what can carriers do to protect customers?

Many have already begun to install protections. AdaptiveMobile has developed firewalls and software for 70 or 80 carriers since 2013. Nohl compares this shift in awareness to the early days of the Internet, when companies and consumers first realized they needed to protect computers from viruses. 

McDaid says carriers don’t have any other choice. “The network, it’s really not going to be going anywhere. It’s a multi-billion dollar system that allows mobile carriers to be mobile carriers, basically,” he says. “There really is no alternative to protecting it.”

In some countries, regulators have compelled companies to install certain protections, saying communications is as essential to public infrastructure as water and power. After the 60 Minutes episode, the U.S. Federal Communications Commission said it would study SS7 design flaws and Lieu also asked the House Oversight Committee to examine the network.  

In addition to describing the hacks, Nohl and McDaid spoke to IEEE Spectrum about a couple of the most popular protections implemented by mobile carriers today:

1. Checking the plausibility of requests

One way to fend off would-be hackers is to deny requests that don’t make sense based on what a carrier knows about a particular user. This is similar to automatic denials that many credit card companies have in place. Requests or messages that claim a user is in Europe, for example, can be thrown out if the user was detected just five minutes ago in the U.S. Nohl estimates that about 39 percent of SS7 hacks could be prevented if carriers instituted so-called plausibility checks.
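A plausibility check of this kind reduces to a speed test: if two location reports imply the user traveled faster than any airliner, reject the newer one. A sketch of the idea (the speed threshold and report format are illustrative assumptions, not any carrier’s actual rules):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_plausible(prev, curr, max_kmh=1000.0):
    """Reject a location report implying impossible travel speed.
    prev/curr: (lat, lon, timestamp_hours)."""
    dist = haversine_km(prev[0], prev[1], curr[0], curr[1])
    dt = curr[2] - prev[2]
    return dt > 0 and dist / dt <= max_kmh

# Seen in New York, then reported "in Berlin" five minutes later:
ny = (40.71, -74.01, 0.0)
berlin = (52.52, 13.40, 5 / 60)
print(is_plausible(ny, berlin))   # False
```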

2. Blocking “anytime interrogation”

Carriers can also weed out illicit requests known as “anytime interrogations,” which Nohl admits is a “very creepy name” for a “very creepy functionality.” Carriers send these requests to inquire about a user’s whereabouts, but the requests are also frequently exploited for government surveillance.

Nohl says the ability to conduct an anytime interrogation was only supposed to permit carriers to locate their customers, and never meant to be shared. Therefore, blocking all such requests that originate outside of a carrier’s network is an easy way to prevent outside monitoring. He says installing a firewall that denies anytime interrogations as well as a range of other suspicious messages could prevent another 60 percent of SS7 attacks.

In Privacy Versus Security, End-to-End Encryption Is Definitely Winning

While the U.S. Federal Bureau of Investigation publicly feuds with Apple over access to the iPhones of criminals, a quiet but monumental shift in mobile security could upend the agency’s plans to keep private lines of communication pried open. Mobile messaging companies are embracing end-to-end encryption, which puts conversations permanently out of reach of both law enforcement and the companies themselves.

This month, Viber and WhatsApp announced end-to-end encryption as a default setting, protecting the communications of 1.7 billion combined users worldwide. End-to-end encryption is a security mechanism that fully encrypts a message from the moment it is composed through its final delivery.

With this method, the key required to decrypt messages is only shared between sender and receiver. It is not known or stored by the company that shuttles messages between two parties. That means there’s no way for law enforcement to force a company to decrypt messages, because the company itself does not hold and cannot access the key to decode them.  
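The reason the relaying company never holds the key is key agreement: each side derives the same secret from values that are useless on their own. The classic Diffie-Hellman exchange below uses a toy-sized prime for readability (real apps such as WhatsApp use the Signal protocol with elliptic-curve keys and far larger parameters), but it shows why the messages crossing the network reveal nothing:

```python
import secrets

# Toy Diffie-Hellman key agreement. Demo-sized parameters only.
P = 4294967291          # a small prime, for illustration
G = 2

alice_secret = secrets.randbelow(P - 2) + 1   # never leaves Alice's device
bob_secret = secrets.randbelow(P - 2) + 1     # never leaves Bob's device

# Only these two public values ever cross the carrier's network:
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value
# and arrives at the same shared key, which was never transmitted.
alice_key = pow(bob_public, alice_secret, P)
bob_key = pow(alice_public, bob_secret, P)

print(alice_key == bob_key)   # True
```

An eavesdropper who captures both public values would have to solve the discrete-logarithm problem to recover the key, which is infeasible at real-world parameter sizes.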

The widespread use of this protection on popular messaging apps propels the privacy versus security debate into new terrain. In the United States, the FBI claimed earlier this year that it needed Apple to provide access to an iPhone owned by a man who committed a mass shooting in San Bernardino, Calif., so that the agency could recover information for its investigation.

But iPhone access does not unlock the data held within apps, especially if that data was protected by another passcode or exchanged using end-to-end encryption. Even if law enforcement gains access to iPhones in future investigations, they will likely run up against these barriers. Though WhatsApp and Viber do not have built-in passcode protection, users can download third-party apps to add a password to any app on their phones.

To fight back, several countries including the U.K. and U.S. are weighing legislation and proposals to prohibit companies from using end-to-end encryption. Security experts have argued that these measures are nearsighted, since companies elsewhere could easily build apps that use end-to-end encryption and offer them to users anywhere in the world.

Recent developments reflect a prediction shared with IEEE Spectrum by Matthew Green, a cryptography expert at Johns Hopkins University, in Baltimore: that instant messaging services would be first to roll out end-to-end encryption, even ahead of email providers. Both Google and Yahoo have invested resources into developing end-to-end encryption for email, but the technical challenges are greater than for instant messaging. Implementing this protection would also clash with business priorities such as Google services that automatically schedule flights or meetings by perusing users’ emails.

Ultrasonic Signals Transmit Data Through Meat at HD Video Quality

Modern medical implants are hobbled by slow download speeds. Most medical devices transmit data over radio frequencies at the relatively glacial pace of fewer than 50 kilobytes per second. At that rate, it can take ages for physicians to transfer data or reprogram devices.

New evidence suggests ultrasonic signals could speed up that process. Researchers who achieved higher data rates in one early test say that these signals may eventually enable doctors to live stream high-quality video from inside of a patient. 

A team led by Andrew Singer, an electrical engineer at the University of Illinois at Urbana-Champaign, found that they could use ultrasonic signals to transfer data through meat at speeds up to 30 megabits per second. That’s the highest ever recorded data rate for signals transmitted through animal tissue, Singer says. For comparison, streaming Netflix in Ultra HD requires 25 megabits per second.
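The gap between the two rates is easy to quantify: 50 kilobytes per second is 400 kilobits per second, so 30 megabits per second is a 75-fold speedup. A quick check of the arithmetic (the 100-megabyte payload is a made-up example):

```python
# Compare transfer times for a hypothetical 100-megabyte data dump.
payload_bits = 100 * 8 * 10**6          # 100 MB expressed in bits

rf_rate = 50 * 8 * 10**3                # 50 kB/s in bits per second
ultrasound_rate = 30 * 10**6            # 30 Mb/s

rf_seconds = payload_bits / rf_rate
us_seconds = payload_bits / ultrasound_rate

print(round(rf_seconds))                # 2000 seconds, over half an hour
print(round(us_seconds, 1))             # 26.7 seconds
print(round(rf_seconds / us_seconds))   # 75x speedup
```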

Most implants installed today broadcast on radio frequencies. But those signals must remain below a maximum bandwidth of 300 kilohertz. More powerful signals might cause the resulting electromagnetic waves to interfere with nearby devices. They could also harm patients, since the body absorbs some waves as the signals pass through. Or as Singer puts it, “We are a bag of salt water, and electromagnetic waves heat up salt water.”  

In addition to being dangerous, this absorption also makes data transfer quite inefficient. Singer wanted to use ultrasonic signals to transmit data at higher rates and keep more of it intact along the way.

Before he could begin his experiment, Singer had to pick up some pork loin and beef liver from County Market in Champaign, Ill. He threw some pig knuckles in the shopping bag, too, because he thought it might be neat to try to transmit wireless signals through bone and gristle.

Back at his lab, colleagues rejected the pig knuckles. But the remaining slabs were carefully strung, one at a time, between two 5-megahertz transducers in a tank filled with water. One transducer converted binary digital data to ultrasonic signals that were transmitted through the meat. Its twin measured the resulting data rates.

Singer was happy to see that, in addition to the signal traveling fast, most of the original data arrived intact, with little lost during its journey through the meat.  “We were a little bit surprised that the meat really did not provide much in terms of additional dispersion or attenuation,” Singer says. “The signals coupled extremely well and we had almost the entire bandwidth available to us for data transmission.”

Jeremy Dahl, a radiologist at Stanford University who focuses on ultrasonic devices, points out that the transducers in Singer’s study were positioned only 5.86 centimeters apart, with the meat in the middle. Therefore, it’s not clear that medical devices embedded deep in the body could achieve similar rates while transmitting to, for example, a physician’s kiosk across the room.

And since they didn’t test the pig knuckles, Singer’s group still doesn’t know how ultrasonic signals will stack up as a method of data transmission when signals have to travel through bone or skin.

Another potential issue is that ultrasonic signals travel in a beam instead of broadcasting in all directions at once. A physician would have to know how a device is oriented within a patient’s body in order to catch the signal it emits.

“If you’re trying to receive from a different location from where that beam is directed, you’re not going to receive any signal,” Dahl says. Radio frequency is omnidirectional, so an external device can capture it from any direction.

Next, Singer hopes to pursue animal studies. In the future, he thinks higher data rates achieved through ultrasonic signals could allow physicians to ask a patient to swallow a camera and watch a live stream as it passes through the digestive tract.

Or, it might enable practical functions that have so far eluded the medical device industry, such as making software upgrades to devices that are already in place. “At today's rates for communicating with implantable devices, you'd never even consider the possibility to change the software on these things,” Singer says.  

They’re Alive! Vintage Computer Fans Keep the Great Machines of the Past Running

A monochrome glow spilled out into the room, produced in the old-fashioned way: by hurling electrons at a phosphorescent screen. The high-pitched rasp of a dot-matrix printer pierced the air. For a second I was back in the 1980s, the 8-bit age when computers stopped being things that people only saw in movies and magazines and started cluttering up their homes. Then someone jostled against me and I returned to the present and the crowded exhibition hall of the Vintage Computer Festival East (VCF East).

The festival took place 15-17 April at the InfoAge Science Center in Wall, New Jersey. The center itself has an interesting place in technological history, stretching back to its origins as part of Marconi’s radio empire and including decades as a top secret communications research facility for the military. An 18-meter radio dish that was used as the ground station for the pioneering Tiros weather satellite, launched in 1960, is being restored to full operation at the site. 

The InfoAge center is home to a permanent collection of vintage computers, covering the years from 1945 to 1986, but it’s also home to the annual festival where enthusiasts gather to exhibit their personal collections of vintage computers and related items. Most of the machines still function, sometimes only thanks to heroic restoration efforts.

On display at this year’s festival was a working Apple 1, a rarity easily worth hundreds of thousands of dollars. It had been painstakingly restored for the owner by exhibitor Corey Cohen, who is now often employed by auction houses looking to verify the authenticity of such machines. My favorite moment was when he loaded a computer program into the Apple via the original cassette tape interface—with a sound file on his iPhone standing in for the cassette player.

Going back further in time, Brian Stuart demoed his emulator of the fabled and immensely influential World War II–era ENIAC computer. Stuart’s emulator not only reproduces most of the internal workings of the behemoth machine on a PC, but he’s also taken the time to recreate the panel displays from old photographs so that they light up exactly as they would have when the real machine was running. When I arrived, Bill Mauchly, son of ENIAC co-creator John Mauchly, was looking over the emulator with obvious delight. Mauchly pointed out that one of the original programmers seen tending to the giant machine in a photograph was his mother, Kathleen Kay McNulty, whom Mauchly senior had married in 1948. “ENIAC is sort of like my step-brother,” he joked.

Other displays included things like a collection of Apple II clones from around the world, including a fascinating Bulgarian machine that also housed a Z80 processor in addition to the Apple’s standard 6502 CPU. The user can switch between processors, allowing them to run a much wider range of software than either CPU alone. Another switch lets the machine’s display alternate between the Roman and Cyrillic alphabets (all programming had to be done using the Roman alphabet).

Speakers at the conference included John Blankenbaker, creator of the Kenbak-1, a little known non-microprocessor-based educational machine that has a good claim to being the first commercial personal computer. Ted Nelson, the man who coined the words hypertext and hypermedia (among other contributions to our modern digital lexicon), walked attendees through some of his alternative vision for what computing could be. Nelson’s original system design for hypertext, called Xanadu, included both “jump links”—now known as the hyperlinks that glue the Web together—and a system for visually presenting relationships between documents. Said Nelson:  “The World Wide Web is a fork of Xanadu,” one that kept the jump links but left out what Nelson considers the most important part: being able to visualize the connections between documents. He’s still working on a prototype of the full system, but as he nears his 80th birthday he ruefully admits, “all my plans involve being younger.” (Look out for the video of IEEE Spectrum’s interview with Nelson soon).

Evan Koblentz, the author of Abacus to Smartphone: The Evolution of Mobile Computers and president of the Vintage Computer Federation, a non-profit umbrella organization for a number of festivals, explains that one of his goals is to build bridges between historians of computer science and the enthusiasts and collectors who keep and tend early machines. “I think that [academic] researchers need to get their hands dirty, and hobbyists need to understand that research isn’t just looking things up on Wikipedia.”

If you missed this year’s show in New Jersey, you still have several chances to revisit the vintage world of computing. VCF Europa takes place in Munich from 30 April to 1 May, and VCF West will take place at the Computer History Museum in Silicon Valley from 6-7 August.  

Video produced by Kristen Clark.

Mobile Forensics CEO Proposes Controversial Access Tech for Smartphones

The FBI may have unlocked the iPhone 5C held by a San Bernardino shooter without Apple’s help, but the agency and the world’s largest tech company are still at odds over whether law enforcement should be granted access into the smartphones of suspects and criminals.

On Tuesday, a U.S. House of Representatives subcommittee will hear arguments from Apple and the FBI on how best to weigh the privacy and security of citizens in such cases. Amid the dispute, the CEO of a mobile forensics company has proposed a controversial “backdoor” solution based in public key cryptography that he says represents the best possible compromise between the two.

However, several cybersecurity and computer science experts interviewed by IEEE Spectrum disagree, saying that this type of access creates vulnerabilities and is of limited value to law enforcement.

