Paddy Lawton

THE SURPRISING TRUTH ABOUT HUMANS AND ARTIFICIAL INTELLIGENCE

Artificial intelligence is not new, but suddenly everyone seems to be talking about it. As I explained in my last article, we have hit an inflection point with computing power and data that is finally allowing for commercial applications of this technology, and that’s what all the excitement is about. It’s only going to get faster and better from here on out.

Along with talk about the new possibilities, there is also a lot of fear about people losing their jobs to a robot, or even becoming irrelevant. Should humans worry? Artificial intelligence expert Andrew Ng, founder of the Google Brain deep learning project and former chief scientist at Baidu, says yes.

How much? About as much as we should worry about overpopulation on Mars, says Ng. In other words, any such scenario is unimaginably far in the future.

Narrow problems

For one thing, the problems that AI is solving now are very narrow. Despite the wow factor of shouting a command at Siri or Alexa and having a task performed, when you get right down to it, the tasks they are performing are rudimentary.

But the bigger reason is that the robots need us. What makes technology good is the fact that people are involved in it. You only need look at the evolution of software up until now to see that humans are essential for AI even to exist, and that our relationship will always be a symbiotic one.

As little as ten years ago, a lot of our approach to technology was, "Here you go, that's your interface, get on with it." And you think, "Crikey, how do I use that? I have no idea." And then you give up and move on.

Look at the software we had in procurement back in the late '90s and early 2000s, for example: hardly anyone could use it, because it took five years to get the system up and running, and then you were faced with a whole bunch of screens and needed a lot of training to do anything at all.

It never really worked, because we’re a bit belligerent as a species. We’re not going to use something just because it’s forced upon us. What we have now is much simpler and easier to use, so people use it.

How to train your software

When you get people using your software, the system gets more feedback, which it uses to make things even simpler. Then more people use it, and it gets even better. That's what's happened with Salesforce, the first cloud-based software to be adopted on a mass scale. Humans taught it how they wanted it to behave, and they continue to do so.

The same has happened in other areas. Back in 2000, before there was Facebook, the UK had a social network called Friends Reunited. It was not much different from what Facebook does now. It got up to about 15 million users before dying a slow death.

What Facebook did a better job of was learning from humans and evolving. You may think of Facebook as social in that everyone can share and comment on pictures of performing cats. What I think is social about it is that you have a billion people intimately involved in the software development process, not because they’re part of a formalized user group, but simply because their every interaction feeds data back into the process.

Fueled by people

With the cloud, no one develops software in isolation any more. If you look at all the disruptive technologies that have taken hold, they've been fueled by an ever-growing amount of data from an ever-growing number of people using them. They’re not using them because they've bought them, but because they want to.

There’s a very predictable trajectory to getting to that place where people want to interact with the software. You have an early version with a small number of adopters who accept and then ultimately reject it. The next iteration solves some of the problems of earlier attempts, so it gets more adopters and more feedback, and it gets better, and so on. There’s a chain reaction that happens.

Eventually you get to mass adoption, and these technologies become part and parcel of people’s everyday lives, like Facebook and cell phones and Google. There’s a hell of a lot of work and failure that goes into getting to that point, and then you make that leap. But it’s all based on feedback from humans. Without people, it wouldn’t happen.

That's why I don't think humans will ever be out of the picture: no matter how good artificial intelligence is as a technology, it can’t exist in a vacuum. If people aren't engaged with it, you don't get that feedback loop.

Not standing still

People aren’t going to stand still either. We've been at this innovation thing for tens of thousands of years. When agriculture was invented, people no longer had to hunt and forage for their food and they turned their attention to perfecting farming instead, and that’s worked out rather well.

More recently, when automation came to the coal mines in England, people didn’t sit on their backsides doing nothing. They became mining engineers or machine engineers.

To those who are worried about the threat of machines taking over: it's just not going to happen. For AI to evolve, and for a business to evolve with it, the people trained on the machines and using them will have to evolve too. People will still have massive influence over the technology.

Our activities and skill sets will change. When machines take on some of life’s more mundane, repetitive tasks, human behavior shifts and quality of life goes up. Work life probably won't change for a long time, because people still need to talk to people, buy things, and pay people.

We’re not going to run out of problems to solve any time soon, which is all the more reason we need to free up the creative energy of humans: to work on really big problems such as global warming, disease, and people not having enough food or clean water, and eventually, hundreds or thousands of years from now, to figure out how to live on Mars.
