Artificial Intelligence (AI) could add £654 billion to the UK economy by 2035, but as it marches into the mainstream, the terminology that describes it is causing confusion.
The buzzwords of AI, machine learning and deep learning are often used interchangeably, despite each meaning something different.
Stanford computer scientist John McCarthy is credited with coining the term "artificial intelligence" in his proposal for a 1956 conference on the subject at Dartmouth College. He defined it as: "The science and engineering of making intelligent machines, especially intelligent computer programs."
The ambiguity of the word "intelligent" allows AI to cover a range of applications, but most researchers agree that it broadly refers to something that replicates human thought.
Machine learning is a subset of AI that grants computers a degree of independent thought. This is achieved by feeding an algorithm large volumes of data that it can process and learn from in order to make predictions and decisions for which it hasn't been explicitly programmed. The machine is effectively learning to solve new problems from existing examples.
Deep learning, meanwhile, is a type of machine learning inspired by the connections between neurons in the human brain. Researchers developed a man-made imitation of this biological connectivity known as artificial neural networks (commonly known as neural nets).
Deep learning in practice
In human neural networks, billions of interconnected neurons communicate with each other by sending electrical signals that develop into thoughts and actions. In artificial neural networks, nodes take the role of neurons, and collaborate in an organised structure to solve problems through their combined analyses.
For example, deep learning software could be used to understand a complex photograph made up of overlapping items, such as a full laundry basket.
The nodes are arranged in separate layers, and each reviews an individual element of the picture, making computations about that specific element in order to fully understand it. These computations result in signals being passed on to the other nodes.
All the signals in the layers are then assessed in combination to make a final prediction as to what exactly is in the picture.
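The flow of signals through layers of nodes can be sketched in a few lines of code. This is a minimal illustration, not a trained model: the weights below are hand-picked placeholders standing in for values a real network would learn from data.

```python
import math

def sigmoid(x):
    # Squash a node's weighted input into a 0-1 "signal"
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node computes a weighted sum of the signals from the
    # previous layer, then passes the result through an activation.
    return [sigmoid(sum(w * i for w, i in zip(node_w, inputs)) + b)
            for node_w, b in zip(weights, biases)]

# Illustrative weights: 3 input signals -> 2 hidden nodes -> 1 output node
hidden = layer([0.9, 0.1, 0.4],
               weights=[[0.5, -0.6, 0.1], [0.3, 0.8, -0.5]],
               biases=[0.0, 0.1])
# The final node combines all the hidden layer's signals into one prediction
prediction = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])[0]
```

The output is a single number between 0 and 1, which is how a final layer typically expresses a prediction such as "how likely is this picture to contain a shirt".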
The advantage that deep learning has over alternative forms of machine learning is that while the others need to analyse a predefined set of features on which they base their predictions, deep learning can identify the individual features itself.
For example, if a system wanted to identify human faces in a photo, it would not first need to be fed the individual features, such as noses and eyeballs. It could instead be fed an entire image, which it can scan to understand the different features in order to make an independent prediction about the content of the image.
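One way to build intuition for a "learned feature" is a small image filter. Deep networks learn thousands of filters like the one below during training; here it is hand-written purely for illustration, and the 4x4 "image" is a made-up patch of pixel values.

```python
def convolve(image, kernel):
    # Slide a small filter over the image; a strong response marks the
    # presence of the feature the filter encodes (here, a vertical edge).
    k = len(kernel)
    out = []
    for r in range(len(image) - k + 1):
        row = []
        for c in range(len(image[0]) - k + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(k) for j in range(k)))
        out.append(row)
    return out

# A 4x4 patch with a vertical edge between dark (0) and bright (1) columns
patch = [[0, 0, 1, 1]] * 4
# In a trained network this kernel would be learned, not hand-written
vertical_edge = [[-1, 1], [-1, 1]]
response = convolve(patch, vertical_edge)
```

The response is strongest exactly where the edge sits, which is the sense in which the network "identifies the individual features itself": it discovers which filters are useful rather than being handed them.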
Deep learning can be used to predict earthquakes or steer self-driving cars. It can colourise black and white videos, translate text with a phone camera, mimic human voices, compose music, write computer code, and beat humans at board games, as Google DeepMind famously did last year against the South Korean 'Go' champion Lee Sedol.
It also has countless potential applications for businesses, from security systems to sentiment analysis to optimising manufacturing. Deep learning is particularly proficient at understanding images and audio, and could automate many common professional tasks such as analysing x-rays or scanning legal documents.
History of deep learning
"Deep learning is not a new idea," says Sean Owen, the director of data science at software company Cloudera. "It's the rebirth of another idea that people have finally gotten to work well."
The origins of deep learning go back to the 1950s, and an early attempt to mimic the interconnectivity of neurons in biological brains known as the "perceptron". The machine learning algorithm was developed by American psychologist Frank Rosenblatt in 1957 with funding from the United States Office of Naval Research.
His invention was dramatically described by the New York Times as "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
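Rosenblatt's perceptron is simple enough to sketch in full: a single artificial neuron that nudges its weights whenever it misclassifies an example. The training task below, learning the logical AND function, is a toy chosen for illustration.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    # Weights and bias start at zero; the perceptron learning rule
    # adjusts them only when a prediction is wrong.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - predicted
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Toy task: learn logical AND, which is linearly separable
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

A single perceptron can only separate data with a straight line, a limitation that contributed to the technique falling out of favour until multi-layer networks and backpropagation revived the idea.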
The complexities of the technology meant it soon fell out of favour, but it reemerged in 1986 with the publication of a paper entitled "Learning representations by back-propagating errors" that offered a more efficient way for neural networks to learn.
In the nineties, the spotlight shifted to a new class of machine learning called the 'support vector machine', which offered high-performing algorithms that were comparatively straightforward.
Only in the last decade have researchers truly learned to harness the vast computing power, now available in the cloud, required to make deep learning work at scale.
In 2011, deep learning pioneer Andrew Ng founded Google Brain. The Stanford University professor had already helped develop autonomous helicopters and multi-purpose household robots, but it was Google's mammoth neural networks research project that made him an icon of AI.
His creation earned a New York Times headline when a cluster of 16,000 computer processors simulating the human brain scanned 10 million images taken from YouTube videos, learned to recognise the cats within them, and independently discovered what it was that made something a 'cat'.
The neural networks developed at Google Brain were used again later that year, albeit to far less fanfare, in the speech recognition software used in Android phones.
Google Brain brought mainstream attention to deep learning and proved that the human brain could provide a model for machine learning at a time when many engineers favoured simple automation masquerading as intelligence.
"Some of the acceleration and uptake of deep learning wasn't due to those research breakthroughs but to the ready availability of software that lets you do this stuff," says Owen, a former senior engineer at Google.
"For example, about two years ago Google released a deep learning package called TensorFlow, and that sort of thing is really what has pushed adoption and usage of deep learning forward in the mainstream by leaps and bounds.
"That's really what’s driven the explosion in the last five years. It's the translation of those ideas into free software."
Deep learning often requires special hardware, but this has also become more accessible. More challenging is the knowledge and experience needed to use the various tools and techniques.
Deep learning remains largely uncharted territory, and even experienced machine learning scientists have to learn on the job when they arrive in the field. This has led to a talent war breaking out amongst the biggest tech companies on the planet.
Deep learning limitations
AI has received mixed press coverage of late, and the recent controversy over DeepMind Health's access to NHS patient records has raised privacy concerns. Deep learning raises unique challenges because as its models grow in complexity the results become harder to interpret.
"They're very complicated models that have a huge number of numbers, and it's not clear what they mean, so it's hard to understand why a result is connected to a certain input," says Owen.
"That can become a problem if we need that kind of transparency in order to spot that the logic of the model is not one we wish to accept. I think the problem is these tools may let us all too easily take latent biases hidden in our data and further enshrine them by building predictive models that suggest future action."
A team of researchers at MIT may have found a solution. By analysing the activity of different neurons in a network, they could understand which individual neurons were responsible for making certain decisions. The discovery could provide a method to uncover algorithmic bias and explain specific actions derived from deep learning algorithms.
Although deep learning began as an attempt to statistically model how neurons work, Owen is keen to emphasise that it still doesn't reproduce the thinking and learning of the human brain.
"I do caution people not to take from all this that somehow we've figured out how to make machines think. It's a powerful class of techniques, but it's more statistical models; there's no actual fundamental breakthrough in understanding the human brain here."
Neither does the growth in deep learning render other machine learning algorithms obsolete. Deep learning needs huge datasets and computing power to function effectively, and in many cases, simpler algorithms such as support vector machines will suffice.
The future of deep learning
Deep learning can use both the common supervised learning technique and its more complex, cutting-edge alternative, unsupervised learning.
In supervised learning, both the input and output variables are provided and classified. An algorithm only needs to follow an established process to generate new results when further input data is added. This is used in numerous current applications, such as making Amazon recommendations.
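Supervised learning can be illustrated with a deliberately tiny example. The labelled data and the browsing-behaviour scenario below are hypothetical, and a one-nearest-neighbour rule stands in for whatever model a production recommendation system would actually use.

```python
def nearest_label(labelled, query):
    # Supervised learning in miniature: every training example pairs an
    # input with a known output (label); a new input is classified by
    # looking up its closest labelled example.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda pair: dist(pair[0], query))[1]

# Hypothetical labelled data: (hours browsed, pages viewed) -> did they buy?
training = [((0.5, 2), "no"), ((0.7, 3), "no"),
            ((3.0, 20), "yes"), ((4.0, 25), "yes")]
print(nearest_label(training, (3.5, 22)))  # -> "yes"
```

The key point is that both sides of each example are provided up front: the algorithm only has to generalise an established input-to-output mapping to new inputs.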
In unsupervised learning, the output data is unknown, so there are no examples upon which a system can base its conclusions. It can only use the input data to solve the problem. It does this by extracting information from the data to uncover correlations and understand the underlying structure in order to draw its own conclusions. It is the autodidactic alternative to the teacher-in-a-classroom model used in supervised learning.
An example of unsupervised learning would be a system independently classifying animals in a picture without being told what they are. It would do this through a process of description that involves dividing data into categories based on the differences and similarities. It would therefore label dogs and cats as different based on the distinctive features and correlations it finds in their pixels.
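The grouping process described above can be sketched with k-means clustering, a classic unsupervised algorithm. The two-dimensional "measurements" below are invented for illustration, and real systems would cluster far richer features than two numbers.

```python
def kmeans(points, k=2, iters=10):
    # Unsupervised: no labels are given; the algorithm groups points
    # purely by similarity, starting from the first k points as centres.
    centres = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        # Move each centre to the mean of its cluster
        centres = [tuple(sum(vals) / len(vals) for vals in zip(*cl))
                   if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Two unlabelled groups of measurements (e.g. "small" vs "large" animals)
data = [(1, 1), (1.2, 0.8), (0.9, 1.1), (8, 8), (8.2, 7.9), (7.8, 8.1)]
groups = kmeans(data)
```

No one tells the algorithm what the two groups mean; it simply discovers that the data falls into two distinct clusters, which is the "dividing data into categories based on differences and similarities" described above.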
Deep learning can be used to transform smartphone photos into paintings that mimic the style and brushstrokes of the great masters, a technique that made Russian mobile application Prisma the leading app in its home country.
Such powerful technology could also have ominous consequences. It could be used to generate fake videos that look extremely convincing, for example.
Its potential for media manipulation and disinformation was demonstrated last year when researchers from Stanford University and the University of Erlangen-Nuremberg in Germany unveiled a project called Face2Face.
The programme uses deep learning algorithms and a commercial webcam to reanimate the facial expressions of people talking in YouTube videos in real-time. Putting words into the mouths of politicians has never been easier.
Deep learning in the enterprise
For enterprises, any fears over deep learning will be allayed by the potential business benefits. Many companies, such as online travel agency Expedia, are already exploiting it in wide-ranging applications.
When customers of the booking website review its hotel listings, their attention is first drawn to images of the accommodation. Displaying the most attractive photos first would improve the chances of a hotel being chosen, but the company has more than 10 million images from 295,000 hotels. Going through all of them manually would be an interminable undertaking.
Instead, the data science team is using deep learning to automatically rank the images. A crowd-sourcing product developed by Amazon called Mechanical Turk was used to provide ratings from 1 to 10 on 100,000 hotel images. Each image was rated twice and classified by traveller type.
The model was then trained on this dataset to classify the images independently. Expedia estimates the model could rank ten million images in a single day.
Tech companies are experimenting with diverse deep learning applications. Tesla uses it to help its autonomous cars learn to identify road hazards, DeepMind to detect sight-threatening diseases by analysing digital scans of the eye, and Facebook to feed users content that is tailored to their interests.
Digital-first organisations may remain the leading exponents of deep learning, but the technology is growing in sophistication and becoming more affordable and accessible. The possibilities are growing for deep learning to transform enterprises in every sector.