
Humans take note: Artificial intelligence just got a lot smarter

By Los Angeles Times (TNS) - Dec 13, 2015 - Last updated at Dec 13, 2015


Today’s artificial intelligence may not be that clever, but it just got much quicker on the uptake. A learning programme designed by a trio of researchers can now recognise and draw handwritten characters after seeing them only a few times, just as a human can. And it does so well that people can’t tell the difference.

The findings, published in the journal Science, represent a major step forward in developing more powerful computer programmes that learn in the ways that humans do.

“For the first time, we think we have a machine system that can learn a large class of visual concepts in ways that are hard to distinguish from human learners,” study coauthor Joshua Tenenbaum from the Massachusetts Institute of Technology said in a news briefing.

Although computers are excellent at storing and processing data (your calculator is probably much faster than you at, say, finding the square root of a large number), they’re less-than-stellar students. Your average 3-year-old could pick up basic concepts faster than the most sophisticated programme.

“You show even a young child a horse or a school bus or a skateboard, and they get it from one example,” Tenenbaum said. “If you forget what it’s like to be a child, think about the first time you saw, say, a Segway, one of those personal transportation devices, or a smartphone or a laptop. You just needed to see one example and you could then recognise those things from different angles under different lighting conditions, often barely visible in complex scenes with many other objects.”

In short, he said, “you can generalise.”

And while current “deep learning” approaches have resulted in major advances in technologies like facial and speech recognition, they often still require hundreds, even thousands of examples before they can recognise the shared qualities that allow for such a generalisation.

But there’s something else humans can do with just a little exposure — they can break an object down into its key components and dream up something new. For example, if you saw a motorcycle (two-wheeled motorised vehicle) and a unicycle (one-wheeled human-powered vehicle), you might be able to imagine a one-wheeled motorcycle, a combination of each vehicle’s conceptual “parts”.

“To scientists like me who study the mind, the gap between machine-learning and human-learning capacities remains vast,” Tenenbaum said. “We want to close that gap, and that’s our long-term goal.”

Now, Tenenbaum and his colleagues have managed to build a different kind of machine learning algorithm — one that, like humans, can learn a simple concept from very few examples and can even apply it in novel ways. The researchers tested the model on human handwriting, which can vary sharply from person to person, even when each produces the exact same character.

“Handwritten characters are well suited for comparing human and machine learning on a relatively even footing: They are both cognitively natural and often used as a benchmark for comparing learning algorithms,” the authors wrote.

The scientists built the algorithm with an approach called Bayesian programme learning, or BPL, a probability-based method that is resilient to a certain amount of deviation. The algorithm builds concepts as it goes, essentially writing bits of its own programming along the way.
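The probabilistic idea behind this kind of classification can be sketched in miniature. The toy below is not the paper's BPL model (which composes whole stroke programmes); it is a hypothetical one-feature version in which each character concept is a Gaussian fitted from a single example, and a new drawing is assigned to whichever concept best explains it:

```python
import math

# Toy sketch of Bayesian one-shot classification (illustrative only;
# the real BPL model composes probabilistic stroke programmes).
# Each concept is a Gaussian over one handwriting feature, fitted
# from a single example; the variance is an assumed tolerance.

PRIOR_SIGMA = 0.5  # assumed tolerance to handwriting variation

def log_likelihood(x, mu, sigma=PRIOR_SIGMA):
    """Log-probability of feature value x under a Gaussian concept."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(x, concepts):
    """Pick the concept whose single training example best explains x."""
    return max(concepts, key=lambda name: log_likelihood(x, concepts[name]))

# One training example per character (feature = mean stroke length, say)
concepts = {"alpha": 1.0, "beta": 2.1, "gamma": 3.4}
print(classify(2.0, concepts))  # prints "beta": 2.0 is closest to beta's example
```

Because the score is a probability rather than an exact match, the model tolerates the deviation between one writer's characters and another's.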

In a set of experiments, the scientists tested the programme using many examples of 1,623 handwritten characters from 50 different writing systems from around the world, including Sanskrit, Tibetan, Gujarati and Glagolitic (an ancient Slavic alphabet).

In a one-shot classification challenge, the scientists showed humans and the computer programme a single image of a new character, and then asked them to pick another example of that same character in a set of 20. People were quite good at this, with an average error rate of 4.5 per cent. But BPL slightly edged them out, with an error rate of 3.3 per cent. Two different programmes based on deep-learning methods fared far worse than either people or BPL, with error rates of 13.5 per cent and 34.8 per cent.
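The scoring in that challenge is simple to state: over many 20-way trials, the error rate is the fraction of trials where the picked candidate was wrong. A minimal sketch, with illustrative data:

```python
# Sketch of the 20-way one-shot evaluation scoring described above:
# each trial yields one pick, and the error rate is the fraction of
# wrong picks. The trial data below is made up for illustration.

def error_rate(predictions, answers):
    """Fraction of trials where the picked candidate was wrong."""
    wrong = sum(p != a for p, a in zip(predictions, answers))
    return wrong / len(answers)

# e.g. 20 trials, model wrong on exactly one -> 5.0 per cent error
preds = [3] * 19 + [7]
truth = [3] * 20
print(round(100 * error_rate(preds, truth), 1))  # prints 5.0
```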

The scientists also challenged the programme and some human participants to draw new versions of various characters they presented. They then had human judges take a look at the human-produced and algorithm-produced sets, and try to determine which ones were made by man and which were made by machine. This was a kind of visual Turing test — a method devised by renowned 20th century computer scientist Alan Turing to test a machine’s ability to demonstrate intelligent behaviour that can’t be distinguished from a human’s.

As it turned out, the judges did little better than chance at figuring out which set of characters was machine-produced and which was created by humans. And in another experiment, in which the scientists asked both human and machine subjects to create a brand-new character for a given writing system, the judges again found it difficult to distinguish the two. In short, the algorithm appears to have passed a basic Turing test.
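The benchmark for "passing" here is chance: a judge who cannot tell human from machine output will be right about half the time. A small simulation of a guessing judge, with entirely made-up labels, shows what chance-level accuracy looks like:

```python
import random

# Illustrative scoring for the "visual Turing test": judges label each
# character set as human- or machine-made, and accuracy near 50 per
# cent (chance) means the two sources are indistinguishable.
# The data below is simulated, not from the study.

random.seed(0)
truth = [random.choice(["human", "machine"]) for _ in range(200)]
guesses = [random.choice(["human", "machine"]) for _ in range(200)]  # a guessing judge

accuracy = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
print(f"judge accuracy: {accuracy:.0%}")  # hovers around 50%, i.e. chance
```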

The findings could be used to improve a variety of technologies in the near term, including recognition of other symbol-based systems such as gestures, dance moves and spoken and signed language. But the research could also shed fresh light on how learning happens in young humans, the scientists pointed out.

“Although our work focused on adult learners, it raises natural developmental questions,” the study authors wrote. “If children learning to write acquire an inductive bias similar to what BPL constructs, the model could help explain why children find some characters difficult and which teaching procedures are most effective.”


