People have always imagined machines that could think, and perhaps feel, like human beings. Art and literature had given us ideas about what these artefacts would look like and how they would behave, long before the founders of "Artificial Intelligence" named their new field with such an ambitious combination of terms at the historic conference at Dartmouth College in 1956.
Since then, Artificial Intelligence researchers have tried to understand how the human brain works and how to use computers to emulate it. This unique combination of computer science, psychology and neuroscience has produced a number of impressive systems: machines that can perform medical diagnosis better than doctors, beat human players at chess, navigate uncharted terrain, and understand human language.
The advent of cloud computing has provided the computational power and scalability that Artificial Intelligence needed in order to evolve in leaps and bounds. The technology has now reached such a level of maturity and reliability that many economists are forecasting a new, "fourth" industrial revolution with Artificial Intelligence at its heart, disrupting every aspect of human organisation and introducing new challenges as well as new, remarkable opportunities.
Deep machine learning is inspired by the way biological neural networks in the human brain extract information from our senses.
Take an image, for instance: when we see something new, multiple layers of neurons analyse the visual input and try to extract features from it. Does it have four legs? Is it made of wood? Is it large enough for someone to sit on? These features are passed higher and higher through the neural organisation of our brain until they reach our consciousness, whereupon we recognise a "chair". Machine learning does a similar job.
Instead of handcrafting code that accounts for every detail of everything, which is an impossible task, researchers build algorithms that can extract the features of things and then combine them to reach a conclusion. Just as in the human brain, these algorithms are "layered": feature extraction takes place in stages, and often in parallel, with the output of one layer of processing becoming the input of the next. Also like the human brain, these algorithms can "learn": the more data the algorithm processes, the more features it extracts, and the higher the probability it can assign to the features that occur most often.
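The layered processing described above can be sketched in a few lines of code. The example below is a minimal illustration, not any particular production system: it passes a small input vector through two hidden layers and an output layer, with each layer's output becoming the next layer's input, and the final layer assigning probabilities to a handful of hypothetical classes ("chair", "table", "other"). The weights are random, standing in for what a real system would learn from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0.0, x)

def softmax(x):
    # Turn raw scores into probabilities that sum to 1
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: a 4-element input (crude image measurements),
# two hidden layers of 5 units, and 3 output classes.
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)  # layer 1: simple features
W2, b2 = rng.normal(size=(5, 5)), np.zeros(5)  # layer 2: combinations of features
W3, b3 = rng.normal(size=(3, 5)), np.zeros(3)  # output layer: class scores

def forward(x):
    h1 = relu(W1 @ x + b1)        # first layer extracts low-level features
    h2 = relu(W2 @ h1 + b2)       # its output is the next layer's input
    return softmax(W3 @ h2 + b3)  # final layer assigns class probabilities

probs = forward(np.array([1.0, 0.0, 0.5, 0.2]))
```

In a trained network the weights would be adjusted so that, over many examples, the highest probability lands on the correct class; here they merely show how information flows upward through the stack of layers.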
Deep machine learning has a wide range of applications, particularly where it is necessary to analyse, understand, and discover patterns and knowledge hidden in vast amounts of data. In the era of big data and the forthcoming Internet of Things, deep machine learning is the technology that will enable businesses to extract value from their data, meet the challenges of the fourth industrial revolution, and propel themselves to success.
Visii already has 8 patents pending at the European Patent Office containing 138 claims of novelty.