Problems posed by the computer are really no different than the problems we have with other products of technology. It's going to take a great deal of wisdom on our part to manage them. But if we do, we're going to make a much better world.
Hi, welcome to another Vocal Story. I'm Zeeshan Mushtaq. Today, our topic is AI. Artificial intelligence, or AI, has the potential to revolutionize our world: the way we do things and how we live.
And you could say that it's already starting to do that. AI will be one of those big tools that propels us into a new future, like computers and the internet did decades ago. Recently, we've seen many examples of neural nets in particular, from speeding up video game production and making graphics more realistic to solving age-old physics problems like the three-body orbit problem.
So that's all well and interesting, but we have to recognize that today, in the field of AI, we're standing on the shoulders of giants. And that raises the questions that must be asked: Who were those original giants? How did AI come to be? Who were the people who first dreamed that computers could think for themselves? Who are the pioneers of AI?
As soon as computers came into existence, scientists began fantasizing about how they could revolutionize our world. Even in the 1960s, they theorized that one day computers would be able to think for themselves. Many pioneers laid the foundations of AI, going as far back as Aristotle introducing associationism around 300 BC, the beginning of our attempt to understand the human brain. But in this episode, we're going to focus on the more recent notable contributions: the so-called fathers of AI.
The first attempt, and the beginning of AI, starts with psychologist Frank Rosenblatt in 1957. At that time he developed what was called the perceptron, a digital neural network designed to mimic a few brain neurons. Frank's first task for the network was to classify images into two categories.
He scanned in images of men and women, and he hypothesized that over time the network would learn the differences between men and women, or at least see the patterns that made men look like men and women look like women. Just a year later, the media caught onto the idea, and the hype was strong in 1958. The New York Times reported that the perceptron was to be, quote,
"the embryo of an electronic computer that will be able to walk, talk, see, write, reproduce itself and be conscious of its existence," end quote. Unfortunately for Frank, despite the hype, his neural network didn't work very well at all. This was because he used only a single layer of artificial neurons, making it extremely limited in what it could do. And even worse, there wasn't much that could be done about it at the time: the computers of that day could only handle this simple setup. These problems were never solved, and by 1969 the computer science community had abandoned the idea. And with that, AI was dead. Everyone may have given up on the idea, but decades later, a keen computer scientist by the name of Geoffrey Hinton thought that everyone else was just plain wrong.
He theorized that the human brain was itself a neural network, and the human brain evidently made for an incredibly powerful system. To him, this was all the proof he needed: artificial neural networks had to work somehow; maybe they just needed some tweaking. Hinton saw the genius in the idea that everyone else missed.
"It seems to me there's no other way the brain could work. It has to work by learning the strengths of connections. And if you want to make a device do something intelligent, you've got two options: you can program it, or it can learn. Right? And we certainly weren't programmed, so we had to learn. So this had to be the right way to go. So you have relatively simple processing elements that are very loosely modeled on neurons. They have connections coming in; each connection has a weight on it." --- Hinton
Just for clarification: a node is an artificial neuron, and a weight represents the strength of a connection between neurons. That weight can be changed to do learning.
"And what a neuron does is it takes the activities on the connections times the weights, adds them all up, and then decides whether to send an output. And if it gets a big enough sum, it sends an output. If the sum is negative, it doesn't send anything. And all you have to do is just wire up a gazillion of those and figure out how to change the weights, and it'll do anything. It's just a question of how you change the weights." --- Hinton
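To make Hinton's description concrete, here is a minimal sketch of that idea in Python: one artificial neuron that multiplies each incoming activity by its connection weight, adds everything up, and only sends an output if the sum is big enough. The numbers below are invented purely for illustration.

```python
# A minimal artificial neuron, as Hinton describes it: multiply each
# input activity by its connection weight, add them all up, and send
# an output only if the sum is big enough. All values are made up.

def neuron(inputs, weights, threshold=0.0):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0  # fire, or stay silent

inputs = [1.0, 0.5, 0.2]    # activities on the incoming connections
weights = [0.4, -0.1, 0.9]  # strength of each connection (learnable)

print(neuron(inputs, weights))  # -> 1, since 0.4 - 0.05 + 0.18 > 0
```

Learning, in this picture, is nothing more than nudging those weights until the neuron fires on the right inputs and stays silent on the wrong ones.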
Geoffrey Hinton is a superstar in the AI world, having authored some 200 peer-reviewed publications. Hinton was instrumental in the fundamental research that brought about the AI revolution.
After studying psychology, Hinton moved into computer science and pursued his lifelong quest of modeling the brain. Originally from the UK, he moved to the University of Toronto. In Toronto, he would go on to develop multi-layered neural networks. He and his team quickly realized that the problem with Frank Rosenblatt's single-layer approach was exactly that: more layers were needed in the network to allow for much greater capabilities. This multi-layer approach solved the problem that Frank Rosenblatt had; the neural networks were much more capable. Today, we call this multi-layered approach a deep neural network. In 1985, Hinton coauthored a paper which introduced the Boltzmann machine. Boltzmann machines were the fundamental building blocks of early deep neural networks.
You can think of them like the Ford Model T of neural networks. Without getting into the details, the concept is to have groups, or layers, of neurons communicate in such a way that each artificial neuron learns a very basic feature from the data. For example, each neuron could represent a pixel in an image that the network is trying to learn.
Long story short, the result is a program that can make accurate guesses and predictions about data it's never seen before.
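To give a rough picture of what "layers of neurons" looks like in code, here is a sketch of a tiny two-layer network's forward pass. To be clear, this is a generic layered network, not a Boltzmann machine, and the layer sizes and random weights are arbitrary assumptions.

```python
import numpy as np

# A toy two-layer network: each layer is many neurons like the one
# above, computed together as a matrix multiplication. The layer
# sizes and random weights are arbitrary, chosen for illustration.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))  # layer 1: 4 inputs -> 8 hidden neurons
W2 = rng.normal(size=(8, 2))  # layer 2: 8 hidden -> 2 output scores

def forward(x):
    hidden = np.maximum(0, x @ W1)  # weighted sums, then a nonlinearity
    return hidden @ W2              # combine the learned hidden features

x = np.array([0.2, 0.7, 0.1, 0.9])  # a made-up four-value input
print(forward(x))                    # two scores, one per category
```

Stacking layers like this is exactly what Rosenblatt's single-layer perceptron could not do, and it is what lets deep networks learn features richer than any single neuron could capture.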
Soon, others began building innovations on top of deep neural networks. A self-driving car was built on neural networks in the late eighties, and later, in the nineties, a man by the name of Yann LeCun would build a program which recognized handwritten digits. This program would go on to be used widely, but Yann LeCun would also go on to be an AI pioneer in his own right. LeCun would study under Geoffrey Hinton and would lead the research that made Hinton's theory of backpropagation a reality. Backpropagation, in simple terms, is the process of computers learning from their mistakes and hence becoming better at a given task, much the same way humans learn from trial and error.
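In code, "learning from mistakes" comes down to measuring the error of a guess and nudging each weight in the direction that shrinks it. Here is a bare-bones sketch of that loop for one linear neuron, with made-up data and learning rate; full backpropagation applies the same idea through every layer of a deep network.

```python
# Gradient-descent learning on one linear neuron: guess, measure the
# error, and adjust each weight to shrink it. The training data,
# learning rate, and iteration count below are all invented.

examples = [((0.5, 1.0), 1.5), ((0.9, 0.2), 1.1), ((0.1, 0.8), 0.9)]
w = [0.0, 0.0]  # start knowing nothing
lr = 0.1        # how big each corrective nudge is

for _ in range(200):
    for (x1, x2), target in examples:
        guess = w[0] * x1 + w[1] * x2  # forward pass
        error = guess - target          # how wrong we were
        w[0] -= lr * error * x1         # nudge each weight against its
        w[1] -= lr * error * x2         # contribution to the error

print(w)  # the weights now roughly fit the made-up examples
```

The "back" in backpropagation refers to sending that error signal backwards through many layers, so every weight in the network receives its share of the blame.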
However, the idea of AI being used for much more was short-lived. The field was stifled by two problems: one, slow and inadequate computing power, and two, a lack of data. A burst of investor confidence was eventually met with disappointment, and the research money began drying up. Geoffrey would become ridiculed and forced to the sidelines of the computer science community. He was seen as a fool for his longstanding faith in a failed idea. Undeterred by the opinions of his colleagues, Hinton pursued his dream with an unfazed obsession.
By 2006, the world had finally caught up to him. Computer processing speed had grown significantly since the nineties. Moore's law, observed by Intel's co-founder Gordon Moore, stated that the number of transistors per square inch doubles about every two years. This meant that computers were growing in processing power exponentially.
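As a quick back-of-the-envelope check on what doubling every two years implies: growth goes as 2 raised to (years / 2), so a decade buys roughly a 32-fold increase. The starting transistor count below is a placeholder, not a real chip's figure.

```python
# Moore's law as arithmetic: count(t) = count(0) * 2 ** (years / 2).
# The starting figure is a placeholder, not a real chip's count.

start = 1_000_000  # placeholder transistor count in a base year
for years in (2, 10, 16):
    print(years, "years:", int(start * 2 ** (years / 2)))
# 2 years -> 2x, 10 years -> ~32x, 16 years -> 256x
```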
That was the first problem solved. Meanwhile, thanks to the advent of the internet some 15 years earlier, a wealth of data had been acquired, and this solved the second problem. Once, an interviewer asked Bill Gates:
[Interviewer] What about this internet thing? Do you know anything about that? What, what the hell is that?
[Bill Gates replied] It's become a place where people are publishing information, so everybody can have their own home page. Companies are there with the latest information. It's wild what's going on. You can send electronic mail to people. It is the big new thing.
The ingredients of AI were now there: the computers were powerful enough, and there was enough data to play with. By 2012, the ridiculed Geoffrey Hinton was 64 years of age. Continuing the work wasn't an easy task. Hinton was forced to permanently stand due to a back injury that would cause a disc to slip out whenever he sat down.
The Birth of Modern AI
The birth of the modern AI movement can be traced back to a single date: September 30th, 2012. On this day, Geoffrey and his team entered the first artificial deep neural network in a widely known benchmark image recognition test called ImageNet. Hinton's program was called AlexNet, and when it was unleashed on this date, it performed like nothing anyone had ever seen. AlexNet destroyed the competition, scoring an over-75% success rate, 41% better than the best previous attempt.
This one event showed the world that artificial neural networks were indeed something special. It sent an earthquake through the science community. A wave of neural net innovations began, and soon the world took notice. After this point, everyone began using neural networks in the ImageNet benchmark challenge, and the accuracy of identifying objects rose from Hinton's 75% to 97% in just seven years. For context, 97% accuracy surpasses the human ability to recognize objects; computers recognizing objects better than humans had never happened before in history. Soon, the floodgates of research and general interest in neural nets would change the world.
By the late 2010s, image recognition was commonplace, even recognizing disease in medical imaging. And images were just the beginning, as soon neural net AI was tackling video, speech, science, and even games. Today, we see AI everywhere. Tesla, among many companies, has created a sophisticated self-driving AI which is already sharing the road with humans.
It is predicted that self-driving cars will reduce accidents by up to 90%, while smart traffic lights could reduce travel time by 26%. Netflix and YouTube even use AI to learn what shows you watch and recommend new ones. Uber uses machine learning to determine surge pricing, your ride's estimated time of arrival, and how to optimize its services to avoid detours.
There's also an interesting hide-and-seek AI, as shown here by the YouTube channel Two Minute Papers. In this scenario, two AI teams battle against each other, one outsmarting the other as each round of the game goes on. After a given time, one of the teams figured out how to break the game's physics engine in order to win.
This was something that the researchers never anticipated. It's a potent demonstration of AI's problem-solving abilities. The popular app TikTok is completely AI-driven, which has fueled its popularity, as we've covered previously. So now AI is everywhere; it's in our daily lives even if we're not aware of it. Of course, there are many examples of AI being used, but perhaps the most interesting uses will come after we reach the singularity.
The singularity is the concept of AI surpassing human intelligence. After this point, what happens is a bit of an open-ended question. By default, computers would be able to reinvent better versions of themselves. They could progress fields such as medicine and science without human direction. AlphaGo Zero is a graphic illustration of the possible rate of this progress.
In 2016, experts thought that it would take an AI around 12 years to beat a human at the ancient game of Go, a game with virtually infinite possibilities and one that relies on human intuition to master. But the experts were very wrong. The 12-year prediction in reality was actually zero: an AI did in fact beat the grandmaster of Go in that very same year, 2016.
The next version of the AI, AlphaGo Zero, learned to play the game from scratch and beat the previous version a hundred games to zero after just three days. AlphaGo Zero was so good that it could be applied to other things it wasn't trained for, like lowering the power usage of Google's data centers.
These new breeds of AI could even begin to invent new tools that humans would never be able to fathom. Dr. Richard Sutton of the University of Alberta says that the singularity is widely estimated to happen around 2040. By 2030, we should have the hardware capability to achieve this, allowing another decade for people like Sutton to write the code that achieves the singularity.
It's a rather unnerving thought that in about a decade, we may have computers that are smarter than us.

[Interviewer] How far away do you think we are from a neural network being able to do anything that a brain can do?

[Hinton] I don't think it'll happen in the next five years. Beyond that, it's all a kind of fog, so I'd be very cautious about making a prediction.
[Interviewer] Is there anything about this that makes you nervous?

[Hinton] In the very long run, yes. I mean, obviously, having super-intelligent beings who are more intelligent than us is something to be nervous about. It's not going to happen for a long time, but it is something to be nervous about in the long run.

[Interviewer] What aspect of it makes you nervous?
[Hinton] Well, will they be nice to us? Also, the movies always portray it as an individual intelligence. I think it may be that it goes in a different direction, where we sort of develop jointly with these things. So the things aren't fully autonomous; they're developed to help us. They're like personal assistants, and we'll develop with them, and it will be more of a symbiosis than a rivalry.
So we've seen the future. But where are the pioneers now? Currently, Geoffrey Hinton divides his time between his roles as a professor at the University of Toronto and vice president at Google. LeCun is vice president at Facebook. Both of these pioneers won the 2018 Turing Award for their contributions to AI, named after the father of computer science, Alan Turing, who created a machine to decipher German codes, helping bring World War II to an end. The Turing Award is considered the Nobel Prize of computing. Artificial intelligence has rapidly grown, in the span of less than two decades, from the fringes of science to the centerpiece of the world. Without the work of these pioneers who refused to give up, our future might be very different.

Perhaps we don't fully understand the potential of AI, but nonetheless it should be obvious that this work marks a significant point in human history. Much like the invention of fire, the wheel, electricity, computers, and the internet, artificial intelligence will be one of humanity's greatest tools. Due to his back condition, Geoffrey Hinton, now 71, hasn't sat down for the last 12 years.
We hope Hinton will keep standing for many more years to come. While AI is helping many people today, we can only hope that it will continue to be used for good in the future. So thanks for watching; that's our look at the people who created AI and the history of artificial intelligence itself. Next week, we'll be taking a look at the very strange story of the first man to visit space.
About the Creator
Zeeshan Mushtaq Lone
I'm a student, and I have also conducted a marketing survey with ITC Limited, a multinational conglomerate.

