About AI

By Dinesh

What is artificial intelligence (AI)?

It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a machine that would have previously been considered to require human intelligence.

That's obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

Modern definitions of what it means to create intelligence are more specific. Francois Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system's ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.

"Intelligence is the efficiency with which you acquire new skills at tasks you didn't previously prepare for," he said.

"Intelligence is not skill itself; it's not what you can do; it's how well and how efficiently you can learn new things."

It's a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated 'narrow AI', the ability to generalise their training when carrying out a limited set of tasks, such as speech recognition or computer vision.

Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion and manipulation, and, to a lesser extent, social intelligence and creativity.

What are the uses for AI?

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, and to detect credit card fraud.

What are the different types of AI?

At a very high level, artificial intelligence can be split into two broad types:

Narrow AI

Narrow AI is what we see all around us in computers today -- intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.

General AI

General AI is very different and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets or reasoning about a wide variety of topics based on its accumulated experience.

This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today – and AI experts are fiercely divided over how soon it will become a reality.

What can Narrow AI do?

There are a vast number of emerging applications for narrow AI:

Interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines.

Organizing personal and business calendars.

Responding to simple customer-service queries.

Coordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location.

Helping radiologists to spot potential tumors in X-rays.

Flagging inappropriate content online.

Detecting wear and tear in elevators from data gathered by IoT devices.

Generating a 3D model of the world from satellite imagery... the list goes on and on.

New applications of these learning systems are emerging all the time. Graphics card designer Nvidia recently revealed an AI-based system called Maxine, which allows people to make good-quality video calls almost regardless of the speed of their internet connection. The system reduces the bandwidth needed for such calls by a factor of 10 by not transmitting the full video stream over the internet, instead animating a small number of static images of the caller in a manner designed to reproduce the caller's facial expressions and movements in real time and to be indistinguishable from the video.

However, as much untapped potential as these systems have, ambitions for the technology sometimes outstrip reality. A case in point is self-driving cars, which are themselves underpinned by AI-powered systems such as computer vision. Electric car company Tesla is lagging some way behind CEO Elon Musk's original timeline for the car's Autopilot system being upgraded to "full self-driving" from the system's more limited assisted-driving capabilities, with the Full Self-Driving option only recently rolled out to a select group of expert drivers as part of a beta-testing program.

What can General AI do?

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The survey went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- would follow some 30 years after the achievement of AGI.

However, recent assessments by AI experts are more cautious. Pioneers in the field of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere near developing AGI. Given the scepticism of leading lights in the field of modern AI and the very different nature of modern narrow AI systems to AGI, there is perhaps little basis to fears that a general artificial intelligence will disrupt society in the near future.

Some AI experts go further still, believing such projections to be wildly optimistic given our limited understanding of the human brain, and that AGI remains centuries away.

What are recent landmarks in the development of AI?


While modern narrow AI may be limited to performing specific tasks, within their specialisms, these systems are sometimes capable of superhuman performance, in some instances even demonstrating superior creativity, a trait often held up as intrinsically human.

There have been too many breakthroughs to put together a definitive list, but some highlights include:

In 2009 Google showed its self-driving Toyota Prius could complete more than 10 journeys of 100 miles each, setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win, Watson applied natural language processing and analytics to vast repositories of data, answering human-posed questions, often in a fraction of a second.

In 2012, another breakthrough heralded AI's potential to tackle a multitude of new tasks previously thought of as too complex for any machine. That year, the AlexNet system decisively triumphed in the ImageNet Large Scale Visual Recognition Challenge. AlexNet's accuracy was such that it halved the error rate compared to rival systems in the image-recognition contest.

AlexNet's performance demonstrated the power of learning systems based on neural networks, a model for machine learning that had existed for decades but that was finally realising its potential due to refinements to architecture and leaps in parallel processing power made possible by Moore's Law. The prowess of machine-learning systems at carrying out computer vision also hit the headlines that year, with Google training a system to recognise an internet favorite: pictures of cats.

The next demonstration of the efficacy of machine-learning systems to catch the public's attention was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 possible moves per turn, compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.
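To get a feel for those numbers, the rough back-of-the-envelope sketch below compares the size of the two game trees in Python, using the per-turn branching factors quoted above; the game lengths are typical values assumed purely for illustration:

```python
# Rough estimate of game-tree size: (moves per turn) ** (turns per game).
# Branching factors are the figures quoted above; the game lengths of
# 80 moves (chess) and 150 moves (Go) are assumed typical values.
chess_tree = 20 ** 80
go_tree = 200 ** 150

# Express each as a power of ten for readability
print(f"Chess: ~10^{len(str(chess_tree)) - 1} positions")
print(f"Go:    ~10^{len(str(go_tree)) - 1} positions")
```

Even the smaller of the two trees is hopelessly large to search exhaustively, and Go's is bigger by hundreds of orders of magnitude, which is why AlphaGo learns from example games rather than brute-force search.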

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently, Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself and then learned from the results. Google DeepMind CEO Demis Hassabis has also unveiled AlphaZero, a generalised version of the system that has mastered the games of chess and shogi as well as Go.

And AI continues to sprint past new milestones: a system trained by OpenAI has defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

Around the same time, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, followed by Facebook training agents to negotiate and lie.

2020 was the year in which an AI system seemingly gained the ability to write and talk like a human about almost any topic you could think of.

The system in question, known as Generative Pre-trained Transformer 3 or GPT-3 for short, is a neural network trained on billions of English language articles available on the open web.

Soon after it was made available for testing by the not-for-profit organisation OpenAI, the internet was abuzz with GPT-3's ability to generate articles on almost any topic fed to it, articles that at first glance were often hard to distinguish from those written by a human. Similarly impressive results followed in other areas, with the model able to convincingly answer questions on a broad range of topics and even pass for a novice JavaScript coder.

But while many GPT-3 generated articles had an air of verisimilitude, further testing found the sentences generated often didn't pass muster, offering up superficially plausible but confused statements, as well as sometimes outright nonsense.

There's still considerable interest in using the model's natural language understanding as the basis of future services. It is available to select developers to build into software via OpenAI's beta API, and it will also be incorporated into future services available via Microsoft's Azure cloud platform.

Perhaps the most striking example of AI's potential came late in 2020, when Google DeepMind's attention-based neural network AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for Chemistry.

The system's ability to look at a protein's building blocks, known as amino acids, and derive that protein's 3D structure could profoundly impact the rate at which diseases are understood, and medicines are developed. In the Critical Assessment of protein Structure Prediction contest, AlphaFold 2 determined the 3D structure of a protein with an accuracy rivaling crystallography, the gold standard for convincingly modelling proteins.

Unlike crystallography, which takes months to return results, AlphaFold 2 can model proteins in hours. With the 3D structure of proteins playing such an important role in human biology and disease, such a speed-up has been heralded as a landmark breakthrough for medical science, not to mention potential applications in other areas where enzymes are used in biotech.

What is machine learning?

Practically all of the achievements mentioned so far stemmed from machine learning, a subset of AI that accounts for the vast majority of achievements in the field in recent years. When people talk about AI today, they are generally talking about machine learning.

Currently enjoying something of a resurgence, machine learning is, in simple terms, where a computer system learns how to perform a task rather than being programmed how to do so. This description of machine learning dates all the way back to 1959, when it was coined by Arthur Samuel, a pioneer of the field who developed one of the world's first self-learning systems, the Samuel Checkers-playing Program.

To learn, these systems are fed huge amounts of data, which they then use to work out how to carry out a specific task, such as understanding speech or captioning a photograph. The quality and size of this dataset are important for building a system able to carry out its designated task accurately. For example, if you were building a machine-learning system to predict house prices, the training data should include more than just the property size; other salient factors, such as the number of bedrooms or the size of the garden, matter too.
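As a minimal sketch of that idea, the snippet below fits a linear regression to a handful of made-up house records using scikit-learn; the data, feature choices, and prices are purely illustrative:

```python
from sklearn.linear_model import LinearRegression

# Made-up training data: [floor area in sqm, bedrooms, garden size in sqm]
X = [[70, 2, 0], [90, 3, 20], [120, 4, 50], [60, 1, 0], [150, 5, 100]]
y = [200_000, 260_000, 340_000, 170_000, 450_000]  # sale prices

# The model learns weights from the examples rather than being
# programmed with explicit pricing rules
model = LinearRegression().fit(X, y)

# Estimate the price of an unseen 100 sqm, 3-bed house with a 30 sqm garden
print(model.predict([[100, 3, 30]]))
```

With only five examples the estimate will be crude; in practice the same code would be fed thousands of sales records, which is exactly why the size and quality of the dataset matter.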

What are neural networks?

The key to machine-learning success is neural networks. These mathematical models are able to tweak internal parameters to change what they output. During training, a neural network is fed datasets that teach it what it should output when presented with certain inputs. In concrete terms, the network might be fed greyscale images of the numbers between zero and 9, alongside a string of binary digits -- zeroes and ones -- that indicate which number is shown in each greyscale image. The network would then be trained, adjusting its internal parameters until it classifies the number shown in each image with a high degree of accuracy. This trained neural network could then be used to classify other greyscale images of numbers between zero and 9. Such a network was used in a seminal 1989 paper by Yann LeCun demonstrating the application of neural networks, and has been used by the US Postal Service to recognise handwritten zip codes.
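A compact version of that digit-classification setup can be written in a few lines using Keras, the library mentioned earlier; this is a minimal sketch trained on the public MNIST digits, not LeCun's original 1989 architecture:

```python
from tensorflow import keras

# 28x28 greyscale images of handwritten digits 0-9, with their labels
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small fully connected network: flatten the pixels, pass them through
# one hidden layer, and output a probability for each of the ten digits
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training adjusts the internal parameters until predictions match labels
model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))  # accuracy on unseen images
```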

The structure and functioning of neural networks are very loosely based on the connections between neurons in the brain. Neural networks are made up of interconnected layers of algorithms that feed data into each other. They can be trained to carry out specific tasks by modifying the importance attributed to data as it passes between these layers. During the training of these neural networks, the weights attached to data as it passes between layers will continue to be varied until the output from the neural network is very close to what is desired. At that point, the network will have 'learned' how to carry out a particular task. The desired output could be anything from correctly labelling fruit in an image to predicting when an elevator might fail based on its sensor data.
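To make that weight-adjustment loop concrete, here is a toy example in plain NumPy: a single weight is nudged by gradient descent until the output is very close to the desired rule y = 2x (the data, learning rate, and step count are arbitrary illustrative choices):

```python
import numpy as np

# Toy data: the desired output is simply twice the input
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0    # a single weight, initialised arbitrarily
lr = 0.01  # learning rate: how large each adjustment is

for step in range(200):
    pred = w * x                   # current output of the 'network'
    error = pred - y               # distance from the desired output
    grad = 2 * np.mean(error * x)  # gradient of mean squared error w.r.t. w
    w -= lr * grad                 # vary the weight to reduce the error

print(w)  # close to 2.0: the weight has 'learned' the task
```

Real networks do essentially this, just with millions of weights across many layers adjusted simultaneously via backpropagation.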

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a large number of sizeable layers that are trained using massive amounts of data. These deep neural networks have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks with different strengths and weaknesses. Recurrent Neural Networks (RNNs) are a type of neural net particularly well suited to Natural Language Processing (NLP) -- understanding the meaning of text -- and speech recognition, while convolutional neural networks have their roots in image recognition and have uses as diverse as recommender systems and NLP. The design of neural networks is also evolving, with researchers refining a more effective form of deep neural network called long short-term memory, or LSTM -- a type of RNN architecture used for tasks such as NLP and for stock market predictions -- allowing it to operate fast enough to be used in on-demand systems like Google Translate.
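As an illustration of the recurrent family described above, the sketch below defines a small LSTM-based text classifier in Keras; the vocabulary size, layer widths, and binary-label task are placeholder choices, not a production design:

```python
from tensorflow import keras

# A small LSTM network for classifying sequences of word IDs,
# e.g. labelling a sentence as positive or negative
model = keras.Sequential([
    # Map each of 10,000 possible word IDs to a 64-dimensional vector
    keras.layers.Embedding(input_dim=10_000, output_dim=64),
    # Read the sequence step by step, carrying a memory state forward
    keras.layers.LSTM(64),
    # Output a single probability for the binary label
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```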
