AI is taking over the world. Perhaps not literally just yet, but while we wait for Skynet to gain sentience, we will be using artificial intelligence in a huge number of ways over the coming years.
However, the term 'AI' can be rather vague and all-encompassing, and there are a wide range of interconnected technologies that fall under this banner.
Therefore, when you hear someone talking about 'AI', it's not always clear exactly what technology they are referring to. And as each one can have different capabilities and limitations, it's important to know exactly what you're dealing with.
Weak vs Strong AI
When discussing AI, you may hear systems described as either 'weak' or 'strong'; alternatively, 'narrow' is sometimes used to refer to weak AIs, and 'general' to describe strong AI systems. These are among the most basic ways in which an AI is classified, and they are essential when determining how intelligent a system actually is.
A weak AI is one that has only been coded to perform a particular task. For example, virtual assistants such as Siri and Alexa are classed as weak, as they only respond to set commands and have no ability to operate outside those parameters. While they may appear to 'think' from an external perspective, the reality is they are operating within very strict guide rails.
Strong AIs, on the other hand, have more advanced cognitive abilities and are able to make decisions based on their own determination of what is most appropriate. These can operate without supervision and, when presented with a new and unfamiliar problem, can come up with their own solutions. In this sense, they are much closer to what people traditionally think of as intelligence.
However, regardless of their capabilities, there are a few common categories of AI that will become increasingly familiar to businesses of all sizes in the coming years. Here are a few terms you should know.
1. Expert systems
This is actually one of the oldest types of what we would consider today as AI, having been around in some form for decades. Expert systems attempt to replicate the decision-making ability of a human being in a certain topic.
They work by gathering facts and information about the subject in question and converting them into a 'knowledge base' that the system can draw from in order to reach conclusions. For example, an expert system in a healthcare environment may be programmed with signs and symptoms of various illnesses in order to help facilitate diagnoses - this is actually something that's been in use for decades.
Today's expert systems, however, can be exponentially more complex than those of years gone by. They can take in much more source data and process it more rapidly, so when combined with the likes of machine learning and natural language processing, they could be more important than ever.
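The healthcare example above can be sketched as a simple rule-based program. This is a minimal illustration only: the conditions, symptoms and scoring approach are invented assumptions, not a real diagnostic system.

```python
# A toy expert system: a hand-built knowledge base of conditions and their
# symptoms (illustrative assumptions, not medical advice), plus a rule that
# ranks conditions by how many of their known symptoms were observed.

KNOWLEDGE_BASE = {
    "common cold": {"runny nose", "sneezing", "sore throat"},
    "flu": {"fever", "aches", "fatigue", "sore throat"},
    "allergies": {"sneezing", "itchy eyes", "runny nose"},
}

def diagnose(observed_symptoms):
    """Rank conditions by the fraction of their known symptoms observed."""
    observed = set(observed_symptoms)
    scores = {
        condition: len(symptoms & observed) / len(symptoms)
        for condition, symptoms in KNOWLEDGE_BASE.items()
    }
    # Best match first; drop conditions with no matching symptoms at all
    return [c for c, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

print(diagnose({"sneezing", "runny nose", "itchy eyes"}))  # allergies ranks first
```

Real expert systems encode far richer rules (certainty factors, chained inferences), but the core idea is the same: conclusions are drawn mechanically from a curated knowledge base rather than learned from data.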
2. Machine learning
One of the most-discussed types of AI today, machine learning distinguishes itself from other forms of the technology by being able to build its understanding and take action without human intervention. In other words, where other forms of AI still require a programmer to tell them what to do in a specific circumstance, a machine learning program can make its own decisions and learn from data without any instruction.
In a basic form, this type of technology powers things like recommendations in your Netflix account. The algorithm studies what you watch, compares it to what other people with similar tastes watch and builds an understanding of which shows are likely to appeal to you.
This is the simplest form and has its limitations - as anyone who's ever bought a vacuum cleaner on Amazon and seen their recommendations filled with similar models for months afterward can tell you - but at a higher level, machine learning is set to be the cornerstone of many AI applications.
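The Netflix-style recommendation described above boils down to comparing viewers with similar tastes. Here is a deliberately tiny sketch of that idea; the viewers, shows and overlap-based similarity measure are all invented for illustration, and real recommenders are vastly more sophisticated.

```python
# A toy collaborative-filtering sketch: find the viewer whose watch history
# overlaps most with yours, then suggest their shows you haven't seen.
# All names and titles here are made up for the example.

WATCH_HISTORY = {
    "alice": {"Stranger Things", "Dark", "Black Mirror"},
    "bob": {"Dark", "Black Mirror", "The OA"},
    "carol": {"The Crown", "Bridgerton"},
}

def recommend(user):
    """Suggest unseen shows from the most similar other viewer."""
    mine = WATCH_HISTORY[user]
    # Similarity = number of shows both viewers have watched
    best_match = max(
        (u for u in WATCH_HISTORY if u != user),
        key=lambda u: len(WATCH_HISTORY[u] & mine),
    )
    return sorted(WATCH_HISTORY[best_match] - mine)

print(recommend("alice"))  # bob overlaps most, so his unseen shows surface
```

Note that this sketch also exhibits the vacuum-cleaner problem: it can only recommend more of what similar people already watched, which is exactly the limitation described above.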
The ability to review data, interpret it and act immediately without human approval could transform how business is done, greatly reducing the effort involved and improving the accuracy of the outcome. Some examples of where machine learning is used today include detecting fraud in financial services, improving marketing campaigns and finding and stopping malware.
3. Natural language processing
One of the challenges for many firms today when it comes to AI is ensuring it understands the initial conditions and requirements of a task, and it often takes programmers with specialist knowledge to set up an activity. To avoid this, the goal for many is to be able to simply ask the algorithm a question and have it understand what is needed. This is where natural language processing (NLP) is required.
Again, this is something that can be seen in everyday examples such as Alexa and Siri. Being able to use casual speech to ask them a question seems obvious to us, but without NLP, we would have to stick to a carefully worded and limited syntax and be much more restricted in what we could ask.
While the responses seem instant to us, it takes a huge amount of work behind the scenes to turn our everyday speech into a format that's recognized by the binary-brained AI. It first needs to break down each word and sound into its component parts in order to translate sounds into data, then recognize and correct for any non-standard grammar or phrasing, ignore fillers like 'er', and finally understand the context to determine what it is being asked - all so you can find out what the weather will be doing tomorrow without picking up your phone.
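The normalization steps described above can be sketched in a few lines. This is a drastic simplification: real pipelines handle acoustics, grammar correction and context, and the filler-word list here is an illustrative assumption.

```python
# A highly simplified sketch of text normalization: lowercase the utterance,
# split it into word tokens, and discard meaningless fillers like 'er'.

import re

FILLERS = {"er", "um", "uh"}  # illustrative filler words to strip out

def normalize(utterance):
    """Break an utterance into cleaned tokens ready for interpretation."""
    # Lowercase and extract word tokens, dropping punctuation
    tokens = re.findall(r"[a-z']+", utterance.lower())
    # Remove filler sounds that carry no meaning
    return [t for t in tokens if t not in FILLERS]

print(normalize("Er, what's the weather, um, going to be like tomorrow?"))
```

Everything after this stage, actually working out that the cleaned tokens amount to a weather query, is where the genuinely hard NLP problems begin.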
Aside from voice-activated queries, NLP is fundamental to translation tools, chatbots and speech-to-text transcription. When combined with other forms of AI like machine learning, it can also quickly scan vast quantities of text to pick out relevant details - something that's useful in hiring, for example, to save HR teams the need to trawl through hundreds or thousands of resumes.
4. Neural networks
One of the most-hyped types of AI, neural networks are a category of machine learning modeled on the human brain. This structure enables 'deep learning', which uses multiple layers of processing to interpret data and determine which action to take.
For example, if a neural network is tasked with identifying an image, the first layer might analyze the brightness of the image, while another looks for familiar shapes, and another identifies any textures. As the number of layers grows, the network builds a much more complete picture of what the image is, so it can assign it the correct label or take whatever action has been deemed appropriate.
This method of learning, which is similar to how children pick up knowledge about the world, is particularly useful when dealing with unstructured data such as images.
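The idea of data flowing through successive layers can be sketched as a bare-bones forward pass. The weights below are fixed, invented numbers purely for illustration; in a real network they would be learned from training data.

```python
# A minimal forward pass through a two-layer network. Each neuron takes a
# weighted sum of its inputs and squashes the result with a sigmoid.
# All weights are arbitrary illustrative values, not trained ones.

import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    """One layer: every neuron combines all inputs via its own weights."""
    return [sigmoid(sum(w * i for w, i in zip(neuron, inputs)))
            for neuron in weights]

def forward(pixels):
    # One layer might respond to brightness, the next to simple shapes, etc.
    hidden = layer(pixels, [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]])
    output = layer(hidden, [[1.2, -0.7]])
    return output[0]  # e.g. a score for how well the image matches a label

print(round(forward([0.9, 0.1, 0.4]), 3))
```

Training consists of nudging those weights, layer by layer, until the final score reliably matches the correct labels, which is where the 'learning' in deep learning happens.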
Another use for this type of AI is in handwriting recognition. This is something that has traditionally been very difficult for machines to do because of the almost infinite variety of human handwriting. While we use our intuition to identify letters, machines will usually spot even the slightest variations and treat them as unique.
But with neural networks, a computer can be trained to identify letters by their common traits (so, for example, any vaguely circular mark made with a single sweep is an O, regardless of how wobbly it is). The more data it analyzes, the more accurate it becomes. The same principles can be applied to a wide range of use cases, from predicting stock prices to facial recognition.
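The O-recognition idea above can be sketched with a single trained neuron (a perceptron), the simplest building block of a neural network. The features and examples are invented assumptions: each mark is summarized as a (roundness, stroke count) pair with a label saying whether it is an O.

```python
# A minimal perceptron sketch: learn from labeled examples which traits
# (roundness, number of strokes) make a mark an 'O'. Data is invented.

def train(examples, epochs=20, lr=0.1):
    """Nudge weights toward the correct answer on every mistake."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (roundness, strokes), label in examples:
            pred = 1 if w[0] * roundness + w[1] * strokes + b > 0 else 0
            err = label - pred
            w[0] += lr * err * roundness
            w[1] += lr * err * strokes
            b += lr * err
    return w, b

def is_o(mark, w, b):
    return w[0] * mark[0] + w[1] * mark[1] + b > 0

examples = [((0.9, 1), 1), ((0.8, 1), 1),   # wobbly but round single sweeps
            ((0.2, 3), 0), ((0.1, 2), 0)]   # angular, multi-stroke marks
w, b = train(examples)
print(is_o((0.85, 1), w, b))  # a round, single-stroke mark is classed as an O
```

Feeding it more labeled marks refines the weights further, which is exactly the "more data, more accuracy" behavior described above; a full handwriting system stacks many such neurons into the layered networks covered earlier.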
There are few applications where any of the above types of AI will be used in isolation - the key to developing effective, powerful machines that can genuinely appear to think for themselves will be in how developers integrate them and enable them to work together.
For example, using NLP to interpret a question, then machine learning and neural networks to go beyond the parameters of their coding to find the best answer can go a long way towards developing AI systems that can converse with and think like humans. Get these all working together, and the next step is passing the Turing test, then it's straight to world domination.