Over the past century and a half, we have seen more technological advancement than in all other periods of human history combined. There are really no surprises there: from the height of the First Industrial Revolution in the mid-19th century to the mobile revolution of the early 21st century, it has been an incredible period in human history, and our world has changed forever.
The real surprise is the claim by scientists that, despite the massive progress we have made over the past century, we will see more technological advancement in the next decade than in the previous hundred years.
This is surprising, and one could be excused for doubting the validity of the claim. After all, a century’s worth of advancement surpassed in just a decade? And not just any century, but the 20th century, the century of peak human accomplishment. Is it possible?
The answer is yes, it is possible, and for one primary reason: while the technological advancements of the previous century were driven by human beings, those of the next decade will be driven by machines.
Machines will be solving the world’s most complex problems, whether those problems are of a business, scientific or social nature.
Computers are now smarter than ever: they have the ability to sense the world around them, to think, to identify patterns, to make decisions and to learn. This is thanks to a field of artificial intelligence known as “machine learning”.
Machine learning is where machines are not programmed by humans to perform specific tasks, as they traditionally were, but are instead taught how to learn and to continuously improve. All we do is provide them with basic machine learning algorithms and lots of data, and they figure things out by themselves, just like little kids exploring the world around them. It is a fascinating yet frightening thought that our creations are now able to evolve and improve themselves beyond anything we might have imagined.
To understand how machine learning works, consider a scenario where we need a computer to sort through pictures of cats and dogs and place them into the appropriate “cat” or “dog” categories.
One way to do this is to “teach” the computer the difference between cats and dogs by feeding it thousands of pictures of each animal, tagging every one as either “cat” or “dog”. By scanning, studying and analysing the pictures, and then associating them with their tags, the computer will, over time, be able to identify the specific facial and body traits that differentiate a cat from a dog. In other words, the computer will learn to tell a cat from a dog.
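To make this concrete, here is a minimal sketch of supervised learning in Python, using scikit-learn’s nearest-neighbour classifier. The feature values, tags and choice of model are all assumptions made for illustration; a real system would extract far richer features from the pictures themselves.

```python
# A minimal sketch of supervised learning. The features below are
# hypothetical stand-ins for real image data: imagine each picture
# reduced to two numbers, e.g. "ear pointiness" and "snout length".
from sklearn.neighbors import KNeighborsClassifier

features = [
    [0.9, 0.2],  # pointy ears, short snout -- tagged "cat"
    [0.8, 0.3],  # tagged "cat"
    [0.3, 0.9],  # floppy ears, long snout -- tagged "dog"
    [0.2, 0.8],  # tagged "dog"
]
tags = ["cat", "cat", "dog", "dog"]

# "Teach" the computer by showing it the tagged examples.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(features, tags)

# A new, untagged picture: the model predicts its category by
# comparing it with the examples it has already seen.
print(model.predict([[0.85, 0.25]]))  # -> ['cat']
```

With thousands of real examples instead of four made-up ones, the same teach-then-predict loop is what lets a computer sort new pictures it has never seen before.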
This kind of machine learning, called “supervised machine learning”, is very common. In fact, most of us have been actively teaching computers to recognise certain images, without even realising it.
For example, have you ever used one of those rather annoying “Captcha” features on websites that require you to identify text or images to prove you are a human and not a robot? Did you know that by answering the questions correctly, you are actually “teaching” the computer what is in the image? The computer remembers those responses and will later use that image to identify similar objects. For example, if you identify a picture of a traffic light, the computer can use that picture to identify traffic lights in other pictures. It is a simple yet powerful way to make computers smarter.
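To illustrate the idea (and only the idea: this is a hypothetical sketch, not how any real Captcha service is actually built), the crowdsourced labelling could be as simple as collecting answers per image and adopting the majority answer once enough people agree:

```python
# Hypothetical sketch: turning human Captcha answers into image tags.
from collections import Counter

answers: dict[str, list[str]] = {}  # image id -> answers given by humans

def record_answer(image_id: str, answer: str) -> None:
    """Store one human's answer for an image."""
    answers.setdefault(image_id, []).append(answer)

def tag_for(image_id: str, min_votes: int = 3) -> str | None:
    """Adopt the majority answer as the tag once enough humans agree."""
    votes = answers.get(image_id, [])
    if len(votes) < min_votes:
        return None  # not enough evidence yet
    (winner, _), = Counter(votes).most_common(1)
    return winner

for answer in ["traffic light", "traffic light", "traffic light", "car"]:
    record_answer("img_042", answer)
print(tag_for("img_042"))  # -> traffic light
```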
Have you ever wondered how social networks are able to identify people in pictures? You guessed it: we teach them. Whenever you post and tag a picture of yourself or anyone else, the system remembers who is in the picture. Then, whenever that person appears in a new image, it recognises them by comparing the new face with the images you have already tagged.
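One plausible way to picture the comparison step (again a sketch with made-up numbers, not any social network’s actual algorithm) is to represent each face as a vector of numbers, an “embedding”, and match a new face to whichever tagged face it most closely resembles:

```python
# Hypothetical sketch: recognising a face by comparing it with
# faces that have already been tagged. Each face is represented
# as an embedding vector (the values here are invented).
import numpy as np

tagged_faces = {
    "Alice": np.array([0.9, 0.1, 0.4]),
    "Bob":   np.array([0.2, 0.8, 0.5]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """How alike two embeddings are (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(new_face: np.ndarray) -> str:
    """Return the tag of the stored face most similar to the new one."""
    return max(tagged_faces,
               key=lambda name: cosine_similarity(tagged_faces[name], new_face))

print(identify(np.array([0.85, 0.15, 0.45])))  # -> Alice
```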
Computers use a similar method to learn to recognise just about anything: pictures, handwriting, voice commands. And they are constantly learning: the machines around us, from social media systems to home automation systems, cellphones and even smart watches, are constantly observing us, learning and improving. It is only a matter of time before they become as proficient and natural as we are at looking, listening and making sense of the world around us.
The key difference between computers and us is that they are much better and faster at processing vast amounts of data, whether it is in the form of pictures, audio, text or numbers. By combining the ability to sense and recognise the world around them with their immense processing abilities, computers will be able to identify and solve problems that are too complex for humans to tackle.
In fact, complex artificial intelligence algorithms are already busy solving a number of problems in the business and scientific worlds. It is these super-intelligent algorithms that will drive the innovation of the future. As for us? Well, we’ll just have to play catch-up.