Tech Talk

A man in a helmet lowers a yellow torpedo-shaped submersible over the side of a boat

NATO Unveils JANUS, First Standardized Acoustic Protocol for Undersea Systems

Aquatic robots are busier than ever. They have seabeds to mine, cable pathways to plough, and marine data to gather. But they and their aquatic brethren—including submarines and scuba divers—still struggle to communicate.

For decades, global standards defining Wi-Fi and cellular networks have allowed people to exchange data over the air. But those technologies are worthless below the waves, and no such standards have existed for underwater communications.

Aquatic systems have instead used a mishmash of acoustic and optical signals to send and receive messages. However, manufacturers sell acoustic modems that operate at many different frequencies, which means those systems often can’t speak to each other.

“We live in a time of wild west communications underwater,” says João Alves, a principal scientist for NATO.

Now, Alves and other NATO researchers have established the first international standard for underwater communications. Named JANUS, after the Roman god of gateways, it creates a common protocol for an acoustic signal with which underwater systems can connect.

Sound has long been a popular medium for underwater communication. Generally, optical signals can deliver high data rates underwater at distances up to 100 meters, while sound waves cover much greater distances at lower data rates.

The main role of JANUS is to bring today’s acoustic systems into sync with one another. It does this in part by defining a common frequency—11.5 kilohertz—over which all systems can announce their presence. Once two systems make contact through JANUS, they may decide to switch to a different frequency or protocol that could deliver higher data rates or travel further.

In this way, Alves compares JANUS to the English language—two visitors to a foreign country may speak English to one another before realizing they are both native Spanish speakers, and switch to their native tongue.

Chiara Petrioli, a specialist in underwater sensors and embedded systems at Sapienza University of Rome, says JANUS could be the first step toward an “Internet of Underwater Things”—a submerged digital network of sensors and vessels.

In addition to designating a frequency, JANUS specifies how data should be encoded onto a sound wave, using a modulation scheme known as frequency-hopped binary frequency-shift keying (FH-BFSK). It also spells out which redundancies should be added to the data stream to minimize transmission errors.
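
For a feel of what FH-BFSK means in practice, here is a minimal sketch of binary frequency-shift keying with a hopping carrier: each bit selects one of two tones around a carrier that changes from symbol to symbol. The sample rate, hop pattern, tone spacing, and symbol length below are illustrative placeholders, not values from the JANUS specification.

```python
import numpy as np

FS = 96_000            # sample rate in hertz (illustrative)
SYMBOL_T = 0.016       # seconds per bit (illustrative)
HOP_CARRIERS = [10_000, 10_800, 11_600, 12_400, 13_200]  # Hz; an assumed hop pattern near 11.5 kHz
TONE_OFFSET = 200      # Hz between the "0" tone and the "1" tone (assumed)

def fh_bfsk_modulate(bits):
    """Map each bit to one of two tones around a carrier that hops every symbol."""
    t = np.arange(int(FS * SYMBOL_T)) / FS
    chunks = []
    for k, bit in enumerate(bits):
        carrier = HOP_CARRIERS[k % len(HOP_CARRIERS)]            # cycle through the hop pattern
        freq = carrier + (TONE_OFFSET if bit else -TONE_OFFSET)  # bit value picks upper or lower tone
        chunks.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)

# Modulate an 8-bit pattern into a passband waveform ready to drive a transducer.
waveform = fh_bfsk_modulate([1, 0, 1, 1, 0, 0, 1, 0])
```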

In order to use JANUS, a system would first emit three optional tones to indicate that it intends to broadcast a JANUS data packet hitched to a sound wave. Then, the system would pause for about 400 milliseconds to allow other devices in its vicinity to “wake up.” Next, the system would broadcast a fixed series of tones to ensure both systems were properly synchronized to the JANUS protocol. Finally, the system would send the JANUS packet, consisting of 56 bits followed by a redundancy check, which tests for transmission errors.
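
Read as pseudocode, that sequence might look roughly like the sketch below. Only the roughly 400-millisecond pause and the 56-bit payload come from the description above; the wake-up tone labels, preamble length, and checksum are stand-ins for illustration, and the real standard defines its own error-detection coding.

```python
import time

WAKE_UP_TONES = ["toneA", "toneB", "toneC"]   # three optional wake-up tones (labels are placeholders)
SYNC_PREAMBLE = ["sync"] * 32                 # fixed synchronization pattern; length is assumed
PAUSE_SECONDS = 0.4                           # the roughly 400 ms quiet period described above

def simple_checksum(payload_bits):
    """Placeholder redundancy check: XOR-fold the payload into 8 bits."""
    folded = 0
    for i, bit in enumerate(payload_bits):
        folded ^= bit << (i % 8)
    return [(folded >> k) & 1 for k in range(8)]

def transmit_janus_like_packet(payload_bits, send_tone, send_bits):
    """send_tone and send_bits stand in for whatever drives the acoustic modem."""
    assert len(payload_bits) == 56, "a JANUS packet carries 56 bits"
    for tone in WAKE_UP_TONES:                # 1. announce that a packet is coming
        send_tone(tone)
    time.sleep(PAUSE_SECONDS)                 # 2. give nearby devices time to wake up
    for tone in SYNC_PREAMBLE:                # 3. let receivers lock onto the signal
        send_tone(tone)
    send_bits(payload_bits)                   # 4. the 56-bit packet...
    send_bits(simple_checksum(payload_bits))  # ...followed by a redundancy check
```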

The JANUS standard was developed by Alves’ team at NATO’s Centre for Maritime Research and Experimentation in La Spezia, Italy, and sponsored by NATO’s Allied Command Transformation. It is the first underwater communications standard to be defined by an international body.

Milica Stojanovic, an expert in oceanic engineering at Northeastern University, expects other standards will soon follow. She says the 11.5-kHz frequency used by JANUS works well for transmitting data over distances of 1 to 10 kilometers, but a lower frequency, perhaps 1 kHz, would be better suited to longer links of 10 to 100 kilometers.
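
That trade-off follows from how strongly seawater absorbs sound as the frequency rises. As a back-of-the-envelope illustration (not a link budget), Thorp’s widely cited empirical formula estimates the absorption in decibels per kilometer from the frequency in kilohertz:

```python
def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical attenuation formula for seawater (frequency in kHz)."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

for f in (1.0, 11.5):
    a = thorp_absorption_db_per_km(f)
    print(f"{f:>4.1f} kHz: {a:.2f} dB/km -> {a * 100:.0f} dB over 100 km")
# Roughly 0.07 dB/km at 1 kHz versus about 1.5 dB/km at 11.5 kHz, which is why
# lower frequencies suit links of tens of kilometers.
```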

Even with JANUS and other standards, any future underwater Internet will probably be cursed by far lower data rates than modern Wi-Fi or cellular networks. Underwater acoustic signals use much lower frequencies, and much longer wavelengths, than the radio signals used by consumer electronics, which limits how much data they can carry. And though sound travels faster in water than in air, it still moves far more slowly through water than radio waves do through air, so delays are long.
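
To put the speed gap in numbers: sound moves through seawater at roughly 1,500 meters per second, while radio waves cross the air at essentially the speed of light. A quick comparison of one-way propagation delays:

```python
SOUND_IN_SEAWATER_M_S = 1_500   # approximate; varies with temperature, salinity, and depth
RADIO_IN_AIR_M_S = 3.0e8        # effectively the speed of light

for distance_m in (1_000, 10_000):
    acoustic_s = distance_m / SOUND_IN_SEAWATER_M_S
    radio_s = distance_m / RADIO_IN_AIR_M_S
    print(f"{distance_m // 1000} km: acoustic {acoustic_s:.2f} s, radio {radio_s * 1e6:.1f} microseconds")
# A 10-kilometer acoustic link carries nearly 7 seconds of one-way delay.
```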

To develop JANUS, Alves’ team relied on the Littoral Ocean Observatory Network, a collection of tripods that NATO researchers have placed on the seafloor in the harbour of La Spezia, Italy. Each tripod emits acoustic signals to other tripods, which send performance reports to researchers through undersea cables. Those reports helped the team understand how fluctuations in water temperature, and other environmental changes, will affect JANUS signals.

The tripods also allowed researchers to build a JANUS receiver, advanced versions of which could minimize decoding errors and account for the Doppler effect. The Doppler effect describes the shift in a sound wave’s frequency caused by relative motion, such as the change in pitch of an ambulance siren as it drives past.

In another series of tests, researchers aboard the research vessel Alliance, a NATO ship operated by the Italian Navy, measured the performance of JANUS signals along the surface of the ocean.

Once deployed, aquatic systems could use JANUS to send data directly to each other, or to “gateway buoys” bobbing on the water’s surface. The buoys could then use radio waves to relay that data to nearby control centers.

In one demonstration, Alves’ group helped the Portuguese Navy set up a buoy that converted data about the positions and speeds of nearby ships to JANUS. The buoy rebroadcast this information to Portuguese submarines lurking below.

Alves says submarines could also use JANUS to issue calls for help to ships and rescue crews. “Using an open scheme like JANUS to issue distress calls would increase incredibly the chances of those being picked up,” he says.

Now that JANUS is available, manufacturers of aquatic systems must decide whether or not to adopt it. Alves is confident they will, and Petrioli, who contributed feedback to the development of JANUS, agrees that adoption is essential to the industry’s future.

But Stojanovic is not so sure. “If there starts to develop a serious market, then everybody will have to play to the same tune,” she says. “If not, and everybody finds their own niche market with their own protocols, then they will do that.”

An abridged version of this post appeared in the September 2017 print issue as “A Language for the Internet of Underwater Things.”

two laptops facing each other with blue conversation bubbles above them

In FutureLearn's MOOCs, Conversation Powers Learning at Massive Scale

“Personalized learning” is one of the hottest trends in education these days. The idea is to create software that tracks the progress of each student and then adapts the content, pace of instruction, and assessment to the individual’s performance. These systems succeed by providing immediate feedback that addresses the student’s misunderstandings and offers additional instruction and materials.

The Bill & Melinda Gates Foundation has reportedly spent more than US $300 million on personalized learning R&D, while the Chan Zuckerberg Initiative—the investment and philanthropic company created by Facebook CEO Mark Zuckerberg and his wife, Priscilla Chan—has also signalled its commitment to personalized learning (which Zuckerberg announced on Facebook, of course). Just last month, the two groups teamed up for the first time to jointly fund a $12 million program to promote personalized classroom instruction.

But personalized learning is hard to do. It requires breaking down a topic into its component parts in order to create different pathways through the material. It can be done, with difficulty, for well-structured and well-established topics, such as algebra and computer programming. But it really can’t be done for subjects that don’t form neat chunks, such as economics or psychology, nor for still-evolving areas, such as cybersecurity.

What’s more, this latest wave of personalized learning may have the unintended consequence of isolating students because it ignores the biggest advance in education of the past 50 years: learning through cooperation and conversation. It’s ironic that the inventor of the world’s leading social media platform is promoting education that’s the opposite of social.

Interestingly, one early proponent of personalized learning had a far more expansive view. In the 1960s, Gordon Pask, a deeply eccentric British scientist who pioneered the application of cybernetics to entertainment, architecture, and education, co-invented the first commercial adaptive teaching machine, which trained typists in keyboard skills and adjusted the training to their personal characteristics. A decade later, Pask extended personalized learning into a grand unified theory of learning as conversation.

For the layperson and even for a lot of experts, Pask’s Conversation Theory is impenetrable. But for those who manage to grasp it, it’s quite exciting. In essence, it explains how language-using systems, including people and artificial intelligences, can come to know things through well-structured conversation. He proposed that all human learning involves conversation. We converse with ourselves when we relate new experience to what we already know. We converse with teachers when we respond to their questions and they correct our misunderstandings. We converse with other learners to reach agreement.

This is more than an abstract theory of learning. It is a blueprint for designing educational technology. Pask himself developed teaching machines that conversed with students in a formalized language, represented as dynamic maps of interconnected concepts. He also introduced conversational teaching methods, such as Teachback, where the student explains to the teacher what has just been taught.

Pask’s theory still has relevance today. I know, because for the past four years, I’ve helped develop a new MOOC (Massive Open Online Course) platform based on his ideas. The platform is operated by FutureLearn, a company owned by The Open University, the UK’s 48-year-old public distance learning and research university.

As Academic Lead for FutureLearn, I was determined not to copy existing MOOC platforms, which primarily focus on delivering lectures at a distance. Instead, we designed FutureLearn for learning as conversation, and in such a way that learning would improve with scale, so that the more people who signed up, the better the learning experience would be.

Every course involves conversation as a core element. Each teaching step, whether video, text, or interactive exercise, has a flow of comments, questions, and replies from learners running alongside it. The steps make careful use of questions to prompt responses: What was the most important thing you learned from the video? Can you give an example from your own experience?

There are also dedicated discussions, in which learners reflect on the week’s activity, describe how they performed on assessments, or answer an open-ended question about the course. And online study groups allow learners to work together on a task and discuss their learning goals.

Even student assessment has a conversational component. Learners write short structured reviews of other students’ assignments, and in return they receive reviews of their assignments from their peers. Quizzes and tests are marked by computer, but the results come with pre-written responses from the educator.

When we began designing FutureLearn, previous research suggested that students don’t like to collaborate and converse online. Other online learning platforms that provide forums to discuss a course find these features are generally not well used. But that may be because these features are peripheral, whereas we put conversation at the heart of learning.

From the start, the conversations took off. In June 2015, the British Council ran the largest MOOC ever run, on preparing for the IELTS English language proficiency exam. Some 271,000 people joined the FutureLearn course, including many based in the Middle East and Asia. Just one video on that course attracted over 60,000 comments from learners. By then, we had realized that the scale of conversation needed to be tamed by using the social media techniques of liking and following. We also encouraged course facilitators to reply to the most-liked comments so that learners who were following the facilitators would see them.

We had expected to deal with abusive comments on courses like “Muslims in Britain” and “Climate Change.” That hasn’t happened, and we aren’t entirely sure why. The initial testers of FutureLearn were Open University alumni, so perhaps they modelled good practice. Comments are moderated to remove the occasional abusive remark, but most of the conversation streams are so overwhelmingly positive that dissenters get constructive responses rather than triggering flame wars.

To be clear, students aren’t required to take part in a discussion to complete a FutureLearn course, but the learning is definitely enriched when students read the responses of other learners and join in. On average, a third of learners on a FutureLearn course contribute comments and replies.

FutureLearn is now a worldwide MOOC platform, with more than six million total registrations. We’re continuing to consider new conversational features, such as reflective conversations where learners write and discuss annotations on the teaching material, and experiential learning where learners share their personal insights and experiences.

FutureLearn has taken the path of social learning and proven that it can work at scale. Going forward, the big challenge for FutureLearn and for educational technology in general will be to find ways of combining the individual pathways and adaptive content of personalized learning with the benefits of learning through conversation and collaboration.

About the Author

Mike Sharples is Professor of Educational Technology at The Open University and Academic Lead at FutureLearn. He is Associate Editor in Chief of IEEE Transactions on Learning Technologies and a Senior Member of IEEE.

A close-up of a University of Washington researcher holding a prototype of a battery-free phone made from a printed circuit board.

Building a Battery-Free Cellphone

Batteries can be a real drag. They’re expensive and must be constantly recharged. Though some battery-free sensors can passively transmit small amounts of data, most consumer electronics today still rely on bulky batteries to store power.

A team from the University of Washington has built a battery-free cellphone that can harness power from radiofrequency (RF) waves sent to it from a nearby base station. The phone not only harnesses the power it needs to operate from those waves, but can also place a voice call by modifying and reflecting the same waves back to the base station, through a technique known as backscattering.
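
In backscatter communication, the device never generates its own radio signal; it switches its antenna between reflecting and absorbing states so that the base station’s carrier comes back modulated. The toy simulation below shows that on/off idea for digital bits. It is only a sketch of the principle (the UW phone actually backscatters analog voice), and every number in it is illustrative.

```python
import numpy as np

FS = 1_000_000        # simulation sample rate in Hz (illustrative)
CARRIER_HZ = 900_000  # unmodulated carrier arriving from the base station (assumed value)
BIT_RATE = 1_000      # backscattered bits per second (illustrative)

def backscatter(bits):
    """Reflect (1) or absorb (0) the incoming carrier for each bit period."""
    samples_per_bit = FS // BIT_RATE
    t = np.arange(len(bits) * samples_per_bit) / FS
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)                 # wave arriving at the phone
    reflect = np.repeat(np.asarray(bits, dtype=float), samples_per_bit)
    return carrier * reflect                                     # what travels back to the base station

echo = backscatter([1, 0, 1, 1, 0])   # five bit periods of modulated reflection
```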

The UW team has shown their device (built from off-the-shelf components) can use harvested power to place a call from a distance of 9.4 meters away from a customized base station. They also built a version outfitted with photodiodes that collect ambient light to passively power the device, allowing them to place a call from a distance of 15.2 meters.

stylized computer-drawn chat bubble shown over 1s and 0s

How Bots Win Friends and Influence People

Every now and then sociologist Phil Howard writes messages to social media accounts accusing them of being bots. It’s like a Turing test of the state of online political propaganda. “Once in a while a human will come out and say, ‘I’m not a bot,’ and then we have a conversation,” he said at the European Conference for Science Journalists in Copenhagen on June 29.

In his academic writing, Howard calls bots “highly automated accounts.” By default, the accounts publish messages on Twitter, Facebook, or other social media sites at rates even a teenager couldn’t match. Human puppet-masters manage them, just like the Wizard of Oz, but with a wide variety of commercial aims and political repercussions. Howard and colleagues at the Oxford Internet Institute in England published a working paper [PDF] last month examining the influence of these social media bots on politics in nine countries.

An illustration of a handshake with patterns from a circuit board projected onto the hands.

The Corporate Blockchain

Hundreds of financiers, Wall Street analysts, and C-suite executives gathered in New York City this week to peer into the future of finance at CB Insights’ Future of Fintech conference. And on Wednesday afternoon, they took a moment to ponder one of the greatest existential threats to their industry—and how they might turn it to their advantage.

Attendees crammed into a standing-room-only session to hear about the role that blockchains would play in existing businesses. To many in finance, it’s a perplexing topic. After all, the Bitcoin blockchain was long ago predicted to render modern finance—and financial firms—obsolete.

Instead, many financial firms have embraced blockchain technology, and even become rather bullish about it in the process. But companies have also found that preparing a blockchain to go live, and integrating it with existing systems, can be a daunting process.

Up on stage, and tasked with guiding the crowd through its mixed bag of emotions, were Marley Gray, principal program manager for Microsoft’s Azure Blockchain Engineering; Joe Lubin, founder of the blockchain consulting firm ConsenSys; and Rumi Morales, executive director of CME Ventures, the investment arm of CME Group, which operates the Chicago Mercantile Exchange.

Gray set the tone for the discussion from his vantage point at Microsoft, which offers a platform that it calls blockchain-as-a-service (BaaS) to help companies build their own blockchain-based networks and applications. As a result, Gray has seen how early experiments have fared across many industries.

“One of our goals was to make it ridiculously easy to roll [blockchains] out,” he said. “Now we’re at the next phase of—now I’ve got this blockchain, what do I do with it? So we’re kind of stuck on that piece right now.”   

Many banks and stock exchanges are on the cusp of moving from pilots and proofs of concept to actual blockchain implementations. Morales, who has overseen her firm’s investments in Ripple and Digital Currency Group (which owns the cryptocurrency news site CoinDesk and has funded Coinbase, a trading service), suggested the industry is facing a moment of truth.

“Last year, we saw a number of companies announcing that they would be building things, or had a use case, for [the blockchain],” she said. “This is the year they need to prove that.”  

There has been some progress on that front. In May, Nasdaq, Citi, and Chain revealed a blockchain-based payments system for private equity, and earlier this week IBM announced that it was building a system with seven European banks to manage trade finance, due to go live by the end of the year.

But there’s a significant back-office bottleneck for people looking to deploy systems. Developers have a limited set of software tools at their disposal, and there is fierce competition for their talent. Consortiums, startups, and incumbents such as IBM and Microsoft are developing dozens of different ways to build blockchain-based networks and applications, without any reference architecture or standards to lean on.

This process can be frustrating, to say the least, said Morales. “For many people I know, they’ve moved on to pulling out their eyelashes because they’ve finished pulling out their hair,” she said. “It can be very painful.”

Even so, Morales and her fellow panelists were not keen on the idea of establishing comprehensive standards anytime soon. “I really think we’re going to have to be very, very specific about the definition of blockchain if we’re going to talk about standards,” she said.

Gray from Microsoft put it more bluntly. “It’s way too early for standards,” he said.

In the end, of course, the agony of blockchain development could very well result in big payoffs. For many, the thrill of the technology is its potential to overturn so many aspects of how business is done today. Throughout the week, I heard attendees and speakers batting around dozens of possible uses for blockchains in sessions and hallway meetings.

On stage, Lubin described one of his favorite projects at ConsenSys—a solar power system in which batteries automatically sell or buy extra juice through a blockchain, thereby improving the efficiency of the entire grid. “It prevents the need to spin up billion-dollar petrol plants to handle peak load in hot days in the summer,” he said.  
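
The appeal of such a scheme is that every trade lands in a shared, tamper-evident record rather than in a single utility’s database. The sketch below is a toy append-only ledger of battery energy trades, intended only to convey that flavor; it is not ConsenSys’s system, and it omits everything that makes a real blockchain work, such as consensus, signatures, and a peer-to-peer network.

```python
import hashlib
import json
import time

def make_block(prev_hash, trade):
    """Record one energy trade; hashing over the contents chains each block to the last."""
    block = {"prev": prev_hash, "time": time.time(), "trade": trade}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check the prev links; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("prev", "time", "trade")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected or (i > 0 and block["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = [make_block("0" * 64, {"seller": "battery-17", "buyer": "grid", "kwh": 2.5, "price": 0.12})]
chain.append(make_block(chain[-1]["hash"], {"seller": "battery-03", "buyer": "grid", "kwh": 1.0, "price": 0.10}))
print(verify(chain))   # True until any recorded trade is altered
```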

And for every discussion of a practical use that has already been identified, there were countless mentions of the technology’s unexplored possibility. “It’s like trying to predict Facebook back in 1995,” Gray said. “Who would have known?”

While everyone else is dreaming about blockchain’s killer app, Gray believes the highest value of the technology will be to bridge industries and simplify all kinds of interactions across companies, individuals, public entities, and real-world events. “The true promise is ultimately getting to a place where we can have business contracts that weave together across verticals,” he said.

This also means that Gray expects the current industry-wide preference for permissioned blockchains—those which are cordoned off from public access—will eventually erode. Instead, he thinks society will gradually embrace the power and functionality of decentralized, public chains, such as the one that underlies Bitcoin.

First, though, public blockchains must prove that they can scale up to handle millions upon millions of transactions every day. Today, no public blockchain can do this, said Lubin.

Looking ahead, Lubin expects both public and private blockchains to evolve over a long development period that has only just begun. “Blockchains in two, five, and 10 years from now are going to look completely different,” he said.

For all the work ahead, many speakers and attendees at the conference remained optimistic—and at times, positively upbeat—about the future of blockchain technology. For the finance industry, the promise of reducing costs, settling trades, and streamlining transactions is particularly intoxicating. “That gain is hopefully going to be worth the pain,” Morales said.

Editor’s note: This story was updated on 07/10/17 to clarify the roles of CoinDesk, the cryptocurrency news site, and Coinbase, the trading service.

An abridged version of this post appeared in the September 2017 print issue as “Is It Time to Become a Blockchain Developer?”

Three computer screens glow red, showing computer code and a skull and cross bones in white text

‘NotPetya’: Latest Ransomware is a Warning Note From the Future

[Correction: An earlier version of this post inaccurately implied that the NSA did not inform Microsoft about the EternalBlue exploit. The agency did so once its own systems had been compromised. The post also stated that Windows systems updated with the latest security patches would not be susceptible to the NotPetya ransomware. In fact, even patched systems could still be exploited through other means, such as NotPetya’s infection route via phony “updates” to the accounting program MeDoc. We apologize for the error.]

First it “slammed” the Internet and “swept” Europe, then it was “something much worse,” and now it’s a “distraction.” This week’s “NotPetya” malware attack on Windows systems has, depending on who you believe, either spread like a devastating cyber-pandemic or amounted to an over-hyped flash-in-the-pan. 

In Ukraine, which took the brunt of the attack, NotPetya certainly disrupted government and business operations, affecting hundreds of companies and offices. The Russian government has been suspected as a possible origin of NotPetya, and on Friday NATO said it strongly suspected a “state actor” or a private entity with close ties to a state. Yet, amid speculation about the outbreak’s source, another part of the NotPetya story could be important down the line too: How might it inspire future malware outbreaks?

“It’s very disturbing that ransomware has started to move laterally,” says Mounir Hahad, senior director and head of Cyphort Labs. “You could do a lot of damage this way.” By lateral movement, Hahad means that NotPetya is designed to spread within local networks from computer to computer, devastating organizations.

Hahad and colleagues have been studying samples of NotPetya in their sandboxed network and posted their findings on Cyphort’s blog earlier this week. Hahad says that NotPetya is a kind of mashup piece of malware that takes WannaCry’s ransomware approach and combines it with a 2016 piece of ransomware called Petya. NotPetya’s creators also threw three modules into the mix (one of which was hacked from the NSA) that effectively create a virulent spreading mechanism for the malware.

It’s this last part that Hahad says could be further mutated to make more dangerous attacks still.

WannaCry, he says, encrypted a user’s files in affected computers and on mounted disks attached to those computers. Then it flashed the now famous warning screen that demanded payment in Bitcoin to decrypt the files.

NotPetya does all this too, upon infecting a system through a hacked “update” to accounting software from a Ukrainian software company. And if NotPetya were pure ransomware—designed to maximize the number of ransom payments—it might have stopped there. But depending on the level of access it gains, NotPetya can also mount a more devastating attack: rewriting a hard drive’s so-called master boot record, which tells the computer what operating system to run and where to find it.

A hacked computer running this second encryption routine will display a misleading boot screen telling the user it’s trying to “repair” the hard drive’s file system. It says, “WARNING: DO NOT TURN OFF YOUR PC! IF YOU ABORT THIS PROCESS, YOU COULD DESTROY ALL OF YOUR DATA! PLEASE ENSURE THAT YOUR POWER CABLE IS PLUGGED IN!”

Users who, understandably, heed the dire warning are unfortunately allowing the computer time to both encrypt the disk and search for ways to infect other systems within the computer’s local area network.

Ultimately the process completes and puts up a text-only screen that tells the user to send $300 in Bitcoin to a fixed address and then to send an email to an address where one can allegedly receive the decryption key.

Hahad says he’s not aware of anyone actually being able to decrypt their systems. In any event, the email address (which reportedly is the same address on every infected computer) was disabled by the ISP soon after NotPetya began spreading.

“The only communication with these threat actors was going to be through that one email account that got terminated pretty quickly by the ISP,” he says. “The second mistake was the fact that there’s a single Bitcoin wallet. That’s the way of tracking who’s making the payments and who isn’t. So if somebody posted a payment to that wallet, anybody else could say, ‘Hey, I’m the one who posted that payment, give me my key.’ There are multiple flaws with the payment method, which clearly indicates that those guys may not have been interested in generating revenue.”

The most original part of NotPetya, Hahad says, is its method of propagating itself within a local network that could infect many other computers within an organization.

“Previous ransomware was mostly targeting the computers they hit via phishing campaigns, and when they got really sophisticated, they started looking for mounted drives on your laptop and encrypted those as well,” he says. “This one goes well beyond that. It’s jumping the gap between your computer and other computers in your organization. So that’s a level above the typical ransomware that we’ve been seeing. So it definitely requires more attention.”

The gap-jumping mechanism, Hahad says, involved three known Windows exploits, including the EternalBlue hack that the NSA allegedly developed but kept from Microsoft until a hack of NSA systems threatened to compromise its secrecy.

Hahad says that if a system has been patched and updated with the latest Windows updates, it won’t be susceptible to the “lateral spreading” he described, via the EternalBlue exploit. And users who do not perform regular backups of their systems will simply lose their files with no recourse to recovering them, he says.

Scientists have built a microchip that can generate two entangled qudits each with 10 states, for 100 dimensions total, more than what six entangled qubits could generate.

Qudits: The Real Future of Quantum Computing?

Instead of creating quantum computers based on qubits, which can each adopt only two possible states, scientists have now developed a microchip that can generate “qudits” that can each assume 10 or more states, potentially opening up a new way to create incredibly powerful quantum computers, a new study finds.
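
The comparison rests on simple exponent arithmetic: n entangled d-level systems span a state space of d^n dimensions, so a pair of 10-level qudits already outstrips six qubits.

```python
import math

qudit_pair_dim = 10 ** 2        # two entangled 10-level qudits span a 100-dimensional space
six_qubit_dim = 2 ** 6          # six entangled qubits span only 64 dimensions
qubits_to_match = math.ceil(math.log2(qudit_pair_dim))
print(qudit_pair_dim, six_qubit_dim, qubits_to_match)   # 100, 64, 7 -> it takes 7 qubits to catch up
```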

Two employees of Quantopian sketch algorithms onto a glass surface in the company's headquarters.

Hedge Funds Look to Machine Learning, Crowdsourcing for Competitive Advantage

Every day, financial markets and global economies produce a flood of data. As a result, stock traders now have more information about more industries and sectors than ever before. That deluge, combined with the rise of cloud technology, has inspired hedge funds to develop new quantitative strategies that they hope can generate greater returns than the experience and judgement of their own staff.

At the Future of Fintech conference hosted by research company CB Insights in New York City, three hedge fund insiders discussed the latest developments in quantitative trading. A session on Tuesday featured Christina Qi, the co-founder of a high-frequency trading firm called Domeyard LP; Jonathan Larkin, an executive from Quantopian, a hedge fund taking a data-driven systematic approach; and Andy Weissman of Union Square Ventures, a venture capital firm that has invested in an autonomous hedge fund.

A view of Rigetti Computing's Fab-1 lab designed to rapidly create quantum computing chips.

Rigetti Launches Full-Stack Quantum Computing Service and Quantum IC Fab

Much of the ongoing quantum computing battle among tech giants such as Google and IBM has focused on developing the hardware necessary to solve problems that are impossible for classical computers. A Berkeley-based startup looks to beat those larger rivals with a one-two combo: a fab lab designed for speedy creation of better quantum circuits and a quantum computing cloud service that provides early hands-on experience with writing and testing software.

A human hand reaches out to a robotic hand

In the General AI Challenge, Teams Compete for $5 Million

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

We owe the success of numerous state-of-the-art artificial intelligence applications to artificial neural networks. First designed decades ago, they rocketed the AI field to success quite recently, when researchers were able to run them on much more powerful hardware and feed them with huge amounts of data. Since then, the field of deep learning has been flourishing.

The effect seemed miraculous and promising. While it was hard to interpret exactly what was happening inside the networks, they started reaching human performance on a number of tasks, such as image recognition, natural language processing, and data classification in general. The promise was that we would elegantly cross the border between data processing and intelligence by pure brute force of deep artificial neural networks: Just give it all the data in the world!

However, this is easier said than done. There are limits to state-of-the-art AI that separate it from human-like intelligence:

● We humans can learn a new skill without forgetting what we have already learned.

● We can build upon what we know already. For example, if we learn language skills in one context we can reuse them to communicate any of our experiences, dreams, or completely new ideas.

● We can improve ourselves and gradually become better learners. For instance, after you learn one foreign language, learning another is usually easier, because you already possess a number of heuristics and tricks for language-learning. You can keep discovering and improving these heuristics and use them to solve new tasks. This is how we’re able to work through completely new problems.

Some of these things may sound trivial, but today’s AI algorithms are very limited in how much previous knowledge they are able to keep through each new training phase, how much they can reuse, and whether they are able to devise any universal learning strategies at all.

In practice, this means that you need to build and fine-tune a new algorithm for each new specific task—which is a form of very sophisticated data processing, rather than real intelligence.
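
A toy experiment makes the limitation concrete: train a tiny classifier on one task, keep training the same weights on a second task, and performance on the first collapses. The numpy sketch below is a generic illustration of this “catastrophic forgetting,” not GoodAI’s setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(axis):
    """Task A labels points by the sign of x; task B by the sign of y."""
    X = rng.normal(size=(500, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def train(w, X, y, steps=2000, lr=0.1):
    """Plain gradient descent on logistic (cross-entropy) loss."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

task_a, task_b = make_task(0), make_task(1)
w = train(np.zeros(2), *task_a)
print("task A after training on A:", accuracy(w, *task_a))   # close to 1.0
w = train(w, *task_b)                                         # keep training the same weights
print("task A after training on B:", accuracy(w, *task_a))   # collapses toward chance
print("task B after training on B:", accuracy(w, *task_b))   # close to 1.0
```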

Headshot of Marek Rosa, founder of the General AI Challenge

Building a true general intelligence has been a lifelong dream of Marek Rosa, from his days as a teenage programmer to his current career as a successful entrepreneur. Rosa therefore invested the wealth he made in the video game business into his own general AI R&D company in Prague: GoodAI.

Rosa recently took steps to scale up the research on general AI by founding the AI Roadmap Institute and launching the General AI Challenge. The AI Roadmap Institute is an independent entity that promotes big-picture thinking by studying and comparing R&D roadmaps towards general intelligence. It also focuses on AI safety and considers roadmaps that represent possible futures that we either want to create or want to prevent from happening.

The General AI Challenge is a citizen-science project with a US $5 million prize fund provided by Rosa. His motivation is to incentivize talent to tackle crucial research problems in human-level AI development and to speed up the search for safe and beneficial general artificial intelligence.

The $5 million will be given out as prizes in various rounds of the multi-year competition. Each round will tackle an important milestone on the way to general AI. In some rounds, participants will be tasked with designing algorithms and programming AI agents. In other rounds, they will work on theoretical problems such as AI safety or societal impacts of AI. The Challenge will address general AI as a complex phenomenon.

The Challenge kicked off on 15 February with a six-month “warm-up” round dedicated to building gradually learning AI agents. Rosa and the GoodAI team believe that the ability to learn gradually lies at the core of our intelligence. It’s what enables us to efficiently learn new skills on top of existing knowledge without forgetting what we already know and to reapply our knowledge in various situations across multiple domains. Essentially, we learn how to learn better, enabling us to readily react to new problems.

At GoodAI’s R&D lab, AI agents will learn via a carefully designed curriculum in a gradual manner. We call it “school for AI,” since the progression is similar to human schooling, from nursery till graduation. We believe this approach will provide more control over what kind of behaviors and skills the AI acquires, which is of great importance for AI safety. Essentially, the goal is to bias the AI towards behaviors and abilities that we humans find useful and that are aligned with our understanding of the world and morality.

Nailing gradual learning is not an easy task, and so the Challenge breaks the problem into phases. The first round strips the problem down to a set of simplistic tasks in a textual environment. The tasks were specifically designed to test gradual learning potential, so they can serve as guidance for the developers.

Blue and white logo for the General AI Challenge shows a brain composed of puzzle pieces and circuits.

The Challenge competitors are designing AI agents that can engage in a dialog within a textual environment. The environment will be teaching the agents to react to text patterns in a certain way. As an agent progresses through the set of roughly 40 tasks, the tasks become harder. The final tasks are impossible to solve in a reasonable amount of time unless the agent has figured out the environment’s logic and can reuse some of the skills it acquired on previous tasks.
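
In outline, that interaction loop resembles the sketch below. The agent and environment interfaces here are hypothetical stand-ins, not the Challenge’s actual API; the point is that an agent which merely memorizes reactions can ace tasks it has already seen yet never extracts the underlying rule, which is exactly what the hardest tasks are designed to demand.

```python
class EchoPatternAgent:
    """A deliberately naive agent: it memorizes which reply the environment rewards for each prompt."""

    def __init__(self):
        self.memory = {}          # prompt -> reply that earned a reward
        self.last_prompt = None

    def act(self, prompt):
        self.last_prompt = prompt
        return self.memory.get(prompt, "")     # guess blank until corrected

    def learn(self, correct_reply, reward):
        if reward > 0:
            self.memory[self.last_prompt] = correct_reply

def run_episode(agent, tasks):
    """tasks: (prompt, expected_reply) pairs, with later tasks building on earlier ones."""
    score = 0
    for prompt, expected in tasks:
        score += agent.act(prompt) == expected
        agent.learn(expected, reward=1)        # the environment reveals the right reaction
    return score

agent = EchoPatternAgent()
tasks = [("say hi", "hi"), ("reverse ab", "ba"), ("reverse abc", "cba")]
print(run_episode(agent, tasks))   # 0: the agent starts out knowing nothing
print(run_episode(agent, tasks))   # 3: it has memorized each reaction, but it never learned
                                   # the rule, so unseen variations would still defeat it
```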

More than 390 individuals and teams from around the world have already signed up to solve gradual learning in the first round of the General AI Challenge. (And enrollment is still open!) All participants must submit their solutions for evaluation by August 15 of this year. Then the submitted AI agents will be tested on a set of tasks which are similar, but not identical, to those provided as part of the first-round training tasks. That’s where the AI agents’ ability to solve previously unseen problems will really be tested.

We don’t yet know whether a successful solution to the Challenge’s first phase will be able to scale up to much more complex tasks and environments, where rich visual input and extra dimensions will be added. But the GoodAI team hopes that this first step will ignite new R&D efforts, spread new ideas in the community, and advance the search for more human-like algorithms.

Olga Afanasjeva is the director of the General AI Challenge and COO of GoodAI.
