Steve’s article on the future of AI was published in City AM on 9th January 2018 – read the full copy below:
Artificial intelligence (AI) has emerged as one of the hottest areas of technology in recent years. It’s in our phones, cars, and living rooms. It’s part of our banking decisions, love lives, and even our weekly grocery shop.
By 2035, Accenture estimates that AI could add £654bn to the national economy. Given its rapid adoption across different sectors, you’d be hard pushed to think of anything that will be left completely untouched by AI in the next five years, let alone the next 20.
This year has seen the start of the very necessary public discussion about the different uses of AI – and its real dangers. Elon Musk, Stephen Hawking, and some of the smartest people on the planet are leading this conversation on different kinds of AI, and its potential perils.
It’s not the easiest of debates to understand, but it needs to happen. Above all, we need to increase awareness of the difference between strong and narrow AI, and how they are worlds apart.
Let’s start with the scary one. Strong AI is focused on creating machines that are human-like. As the internet accumulates massive amounts of data, it edges ever closer to something resembling consciousness. It is this data, plugged into an AI, that allows it to learn to become more human.
The aim of this AI is, ultimately, to create more AI. Humans would no longer have any control, and would not set the parameters of what’s acceptable. It is here that the really loud alarms should start ringing.
Narrow AI – the safer, everyday AI – is focused on specifically targeted pieces of intelligence to optimise a certain task. It offers huge potential benefits for humanity, and will likely result in more extensive change across all sectors than computers have delivered since the 70s.
It is the equivalent of throwing 100,000 experts at a problem.
This type of AI’s real power is its ability to learn from the data to improve itself. Think about the impact on the fight against disease, for example.
Narrow AI is being used in groundbreaking ways to help humanity, from medical cures and treatments, to autonomous rescue vehicles. Yes, it will be disruptive, but in a positive way. However, there is a risk that backlash against strong AI will have an impact on these more visible and understandable positive uses.
Much like Musk and Hawking, I naturally have my concerns about strong AI, but I believe that there are real and valuable ways humans can use narrow AI to improve or enhance existing business processes and services.
For example, AI is transforming the consumer research industry. It is being used to understand the billions of conversations on the internet, and express them back to us as if they were a survey.
As pretty much every conversation, opinion, and action is now recorded on social media, this is effectively turning the internet into the world’s biggest focus group, giving brands real-time, high-quality insight into consumer behaviour and trends.
Then there’s the use of AI to analyse food wastage by airlines every day, across the world. Going way beyond how many sandwiches are binned, it offers airlines a view of how individual ingredients are being wasted across the entire supply chain.
On airlines alone, AI can cut wastage by 20 per cent, and save around £1bn. It’s easy to see how this is directly applicable right across retail – not to mention avoiding tonnes of unnecessary waste.
To date, it’s ultimately been left to individual chief executives to determine the right and wrong applications of AI. This should never have been the case. The good news is that governments, as well as business leaders and scientists, are starting to take these concerns seriously, and are asking how they can claw back control.
The risks, and potential benefits, are huge. Let’s hope we’re not having this conversation too late.