Going beyond machine learning
Essentially, deep learning is an advanced, more powerful form of machine learning - something that has been a big part of many businesses' plans for a while now. Machine learning, a term often used interchangeably (and incorrectly) with AI, refers to systems that build up a broader knowledge base as they go about their operations, gathering data and learning what will happen in specific situations.
This means that instead of a human operator having to program in every parameter in order to teach an AI how to perform a task, the AI figures out the best solution on its own by assessing what works and what doesn't, in much the same way children pick up tasks through repetition, trial and error.
For example, machine learning might be able to forecast future retail sales based on a few key parameters. By looking at historic data and comparing it to external factors, it can predict which items will be popular and when. At a very basic level, it might spot that sales of barbecue items go up when the weather is hot and sunny, and deduce the connection on its own, without being fed the rule by a programmer.
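To make this concrete, here is a minimal sketch of the idea in Python: a model "learns" the link between temperature and barbecue-item sales from historic examples, then makes a forecast. The data and numbers are invented for illustration, and a simple least-squares line stands in for whatever model a real system would use:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Historic data (hypothetical): temperature in °C vs. barbecue items sold.
temps = [12, 16, 20, 24, 28]
sales = [30, 55, 80, 105, 130]

a, b = fit_line(temps, sales)

# Forecast sales for a 26 °C day using the learned relationship.
predicted = a * 26 + b
```

The point is that no one told the program "hot weather means more barbecues" - the relationship was extracted from the data itself.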
The issue with this type of machine learning is that in the real world, there are usually far more parameters to consider than these systems can handle. In a retail system, for instance, there are hundreds of factors to consider when trying to forecast sales - time of day, the impact of any promotions or discounts, the type of customer firms are looking to attract, how close it is to payday for buyers, and many, many more.
Traditional, simple machine learning will struggle to take all these factors into account, correctly assess what weighting they should be given in its calculations, and accurately predict the impact of any changes. And this is where deep learning becomes useful.
How deep learning neural networks deliver better insight
As the name suggests, the key difference between normal machine learning and deep learning is the amount of depth the system is able to go into. Deep learning achieves this through the use of a neural network, which is a structured set of operations loosely based on the way the human brain makes decisions.
At heart, this boils down to a large number of steps, or layers, in any operation, each consisting of a series of nodes. This allows different parts of any calculation to be fed into different parts of the network, each assessing one specific feature of the input, before coming to a conclusion.
However, although a neural network is an integral part of deep learning, not every neural network qualifies as deep learning. What separates a true deep learning solution is that many of these layers are hidden within the system, with a complex hierarchy between them that can send data back and forth to be analyzed in different ways, by different nodes, to reach a conclusion.
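To make the idea of layers and nodes concrete, here is a minimal sketch of a forward pass through a tiny layered network in Python. Every weight, bias and input here is an invented placeholder; in a real deep learning system these values are learned from training data rather than written by hand:

```python
import math

def relu(x):
    """Common activation: pass positive signals, zero out negative ones."""
    return max(0.0, x)

def sigmoid(x):
    """Squash a score into the 0..1 range for the final output."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases, activation):
    """One layer: each node weighs every input, adds a bias, applies an activation."""
    return [activation(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two hidden layers feed a single output node.
x = [0.5, 0.2]                                            # raw input features
h1 = layer(x,  [[0.4, -0.6], [0.3, 0.8]], [0.1, -0.2], relu)
h2 = layer(h1, [[0.7, -0.5], [0.2, 0.9]], [0.0,  0.1], relu)
out = layer(h2, [[1.2, -0.4]], [0.05], sigmoid)           # single 0..1 score
```

Each hidden layer transforms the previous layer's outputs before the final node produces a conclusion - stacking many such layers, with learned weights, is what puts the "deep" in deep learning.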
A good application for neural networks is image recognition. This is an especially complex task because even the simplest-seeming image has an almost infinite number of potential variations that we humans don't even notice, but that can quickly confuse a more basic algorithm.
For example, even if an operator has taught an algorithm what a banana looks like - feeding it basic parameters including color, size, texture and shape - no two bananas are alike. Therefore, unless you have a clear, sharp photo that closely resembles the original, it can be easy for a computer to misidentify potential matches.
But with a deep learning neural network, each potential element can be examined by a different node to see how closely it matches. Is the color off? Is it too straight? Has it been peeled? With deep learning, all of these potential factors can be assessed in an instant, before then providing an outcome determining how likely it is that the object is a banana. And the more images it looks at, the better it will get at supplying the right answer faster.
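The combining step above can be sketched in a few lines of Python. Imagine each question (Is the colour right? Is the curvature right? Is it unpeeled?) has already been answered by a dedicated node with a score between 0 and 1; a final node weighs those scores into an overall likelihood. The feature scores and weights here are invented for illustration - a trained network would learn them from thousands of labelled images:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical per-feature scores (0..1), as if each came from a dedicated node.
features = {"colour_match": 0.9, "curvature": 0.8, "unpeeled": 1.0}

# Invented weights and bias standing in for values a network would learn.
weights = {"colour_match": 2.0, "curvature": 1.5, "unpeeled": 1.0}
bias = -2.5

score = sum(weights[k] * features[k] for k in features) + bias
p_banana = sigmoid(score)   # likelihood that the object is a banana
```

With strong matches on all three features, the combined score comes out well above 0.5 - the network "votes" banana, and further training on more images would sharpen both the feature scores and the weights.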
The applications of effective deep learning
Deep learning is useful for much more than correctly identifying pictures of fruit. In fact, the complex, in-depth analysis of huge numbers of parameters all at once, with data being fired between nodes in much the same way our neurons snap data around our brain, opens up a world of possibility that will underpin technologies we may well come to take for granted in the coming years.
Here are just a few of the key real-world applications that could make the potential of AI a reality sooner rather than later:
Image and video recognition

As discussed above, accurately identifying images in real time is a key use case for deep learning. But this applies to video as well, which is of particular use in applications such as self-driving cars. Being able to tell whether the moving object crossing the car's path is a plastic bag or a cyclist is an obvious first step, but self-driving sensors also need to look for road signs, tell the difference between painted road markings and potholes, and much more.
Plus, image recognition is just the first step. The car then needs to decide in an instant what to do about any scenario it encounters. Should it make a slight adjustment to the steering or slam on the brakes? It also needs to strike a balance between being too cautious and too aggressive to ensure traffic can flow smoothly and safely.
Healthcare

Another major application for deep learning is in healthcare, where it can help in several areas. Diagnostics, for instance, could be greatly improved by advanced image recognition, making the review of scans faster and more accurate. One study found that deep learning was able to identify melanomas in dermoscopic images 10% more accurately than human clinicians.
Elsewhere, it can also assist in the development of new drugs, determine which treatment would be most effective for individual patients, and help predict a patient's prognosis more accurately.
Language translation

Automatic translation of text from one language to another has been around for a while, but it has never exactly impressed with its accuracy. With deep learning, many of the kinks are being ironed out, as the system can understand the context of the input much more clearly and therefore deliver far more accurate results.
This is not limited to text-based translations. Tools that can listen to audio speech and offer real-time translations are now moving beyond proof-of-concept stages and into the real world, making Star Trek-style universal translators closer than ever.
Voice assistants and chatbots
Holding a real-world conversation between a human and a machine has long been seen as the holy grail of AI, and doing it convincingly requires two parts. First, the machine must understand exactly what is being asked of it, based not only on the words spoken but also on the tone of voice and the general context of the conversation. Then it must formulate the most appropriate response and deliver it in a way that is both accurate and sounds natural to the listener.
Deep learning greatly assists in this, and is already being used in a variety of virtual personal assistants, such as Amazon Alexa and Google Assistant. As the amount of available processing power increases, such tools will only get more accurate and more lifelike, enabling the development of 'true' AIs that can simulate a full conversation that feels as natural as talking to the person next to you.
Work like tomorrow
Kofax intelligent automation solutions help organizations transform information-intensive business processes, reduce manual work and errors, minimize cost, and improve customer engagement. We combine RPA, cognitive capture, mobility & engagement, process orchestration, analytics capabilities and professional services in one solution. This makes it easy to implement and scale for dramatic, immediate results that mitigate compliance risk and increase competitiveness, growth and profitability.