Researchers developed algorithms that mimic the human brain (and the results don...

Source: https://www.tuicool.com/articles/hit/BZvMriA

A pair of researchers recently developed a method for successfully conducting unsupervised machine learning that closely mimics how scientists believe the human brain works. These biologically-feasible algorithms could provide an alternate path forward for the field of AI.

IBM researcher Dmitry Krotov and John J. Hopfield, inventor of the Hopfield network (an associative neural network), developed a set of algorithms that teach machines in the same loose, unfettered way humans learn. Their algorithms allow machines to learn in an unsupervised manner – without using the biologically-infeasible shortcuts that modern deep learning relies on.


A lot of early AI research – conducted in the 1980s and 1990s – focused on figuring out how the human brain's neural network functions, and how that could be translated for machines. The big idea involved discerning the simplest way to represent how neurons function using math, and then scaling that up for machines. Unfortunately, this line of inquiry never quite panned out, and it was largely abandoned until the deep learning resurgence of the 2000s.

Krotov and Hopfield’s work maintains the simplicity of the old-school studies, but represents a novel step forward in brain-emulating neural networks. TNW spoke with Krotov, who told us:

If we talk about real neurobiology, there are many important details of how it works: complicated biophysical mechanisms of neurotransmitter dynamics at synaptic junctions, existence of more than one type of cells, details of spiking activities of those cells, etc. In our work, we ignore most of these details. Instead, we adopt one principle that is known to exist in the biological neural networks: the idea of locality. Neurons interact with each other only in pairs.
In other words, our model is not an implementation of real biology, and in fact it is very far from the real biology, but rather it is a mathematical abstraction of biology to a single mathematical concept – locality.
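To make the locality idea concrete, here is a minimal sketch of a local, Hebbian-style weight update (using Oja's classic rule as a stand-in – this is not the exact rule from Krotov and Hopfield's paper, and the data and dimensions are made up for illustration). The key property is that each synapse changes using only its own pre-synaptic input and post-synaptic output – a pairwise interaction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 input patterns of dimension 20 (hypothetical, for illustration)
X = rng.standard_normal((100, 20))

# One layer of 10 hidden units with small random initial synapses
W = rng.standard_normal((10, 20)) * 0.1
lr = 0.01

for x in X:
    y = W @ x  # post-synaptic activity
    # Local Hebbian-style update: each synapse W[i, j] changes using only
    # its own pre-synaptic input x[j] and post-synaptic output y[i].
    # (Oja's rule, which also keeps the weight vectors bounded.)
    W += lr * (np.outer(y, x) - (y**2)[:, None] * W)
```

No error signal from any other layer is needed – the update at each synapse is computable from the two neurons it connects, which is what makes rules of this kind biologically plausible.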

Modern deep learning methods often rely on a training technique called backpropagation, something that simply wouldn’t work in the human brain because it relies on non-local data. Our brain, for example, can process images without any formal training. We can see and process things we’ve never seen before.
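The non-locality is easy to see in a tiny two-layer network (the numbers here are illustrative): computing the gradient for the first layer's weights requires knowing the second layer's weights – information a biological synapse in the first layer would have no access to.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4)          # input
t = np.array([1.0])                 # target

W1 = rng.standard_normal((3, 4))    # first-layer weights
W2 = rng.standard_normal((1, 3))    # second-layer weights

h = np.tanh(W1 @ x)                 # hidden activity
y = W2 @ h                          # output
err = y - t                         # output error

# Backprop: the update for W1 needs W2 -- a quantity that lives at a
# *different* layer. This is the non-local dependence described above.
grad_h = (W2.T @ err) * (1 - h**2)  # error signal routed back through W2
grad_W1 = np.outer(grad_h, x)       # gradient of 0.5*err^2 w.r.t. W1
```

Contrast this with a local rule, where each synapse's update depends only on the activity of the two neurons it connects.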

Teaching a machine to learn like a human is difficult, like teaching someone to read by describing the letters of the alphabet but never showing them. Machines don’t have our direct sensory link to the universe. Krotov and Hopfield appear to have avoided this problem by creating algorithms that don’t rely on information carried by other layers of a neural network to figure things out. According to Krotov:

When we train a deep neural network we often (if the task is supervised) tell the algorithm upfront what it should do – for example, classify handwritten digits. Then the algorithm finds an embedding of the data into a latent space, which depends on this task. In our case, the weights of the first layer of the neural network do not need to know what this task is – you just train them on data itself. Then, when the training is complete, we can specify the task. In this sense the weights of the first layer are agnostic about the task.
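The two-stage recipe Krotov describes can be sketched as follows. This is a hedged toy version, not the paper's implementation: the "first layer" is trained with a simple Oja-style local rule on the data alone, and only afterwards is a task specified by fitting a least-squares readout on the frozen features. The dataset (two Gaussian blobs) is a made-up stand-in for something like handwritten digits.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: two well-separated Gaussian blobs (hypothetical stand-in)
X0 = rng.standard_normal((50, 10)) + 2.0
X1 = rng.standard_normal((50, 10)) - 2.0
X = np.vstack([X0, X1])
labels = np.array([0] * 50 + [1] * 50)

# Stage 1: task-agnostic first layer -- weights trained on the data alone,
# with no knowledge of the classification task (Oja-style local rule).
W = rng.standard_normal((5, 10)) * 0.1
for x in X:
    y = W @ x
    W += 0.001 * (np.outer(y, x) - (y**2)[:, None] * W)

# Stage 2: only now is the task specified -- fit a simple supervised
# readout (least-squares classifier) on the frozen first-layer features.
H = np.tanh(X @ W.T)
targets = np.where(labels == 0, -1.0, 1.0)
readout, *_ = np.linalg.lstsq(H, targets, rcond=None)
preds = np.sign(H @ readout)
accuracy = (preds == targets).mean()
```

The point of the design is that the first layer's weights would be identical no matter which downstream task was later chosen – they are, in Krotov's words, agnostic about the task.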

This research is a beacon of progress for the field of artificial intelligence from an often-forgotten splinter. Modern deep learning techniques may be the soup of the day, but biologically-feasible algorithms appear to be making a comeback.

As to the implications of this old-is-new-again approach to AI, the researchers say it’s too early to tell. Krotov told TNW the work in the paper was “more like a proof of concept that a good performance can be achieved without supervision and in a biologically plausible setting,” but wouldn’t speculate beyond that.

The simple fact that a biologically-feasible algorithm can operate within the same realm of accuracy and usability as today’s popular techniques is worth getting excited over, especially if you’re not convinced deep learning is the future of AI.

Want to learn more about artificial intelligence from some of the best minds in tech? Come see our Machine:Learners track speakers at TNW2019!
