Artificial Intelligence – beyond the hype, the experts, its weak spots and losing your job

In our household we love our Google Home. My girls use it continuously to play music, ask silly questions, request jokes, do calculations and spell difficult words, amongst other things. I use it to voice-command my shopping list, set a timer and book meetings in my agenda. Speaking to and interacting with a little machine in my living room, one that has the capacity to both understand and create communication, is a great example of how Artificial Intelligence has infiltrated our lives.

Today, there are four key areas of AI achieving advances and market penetration:

  • Robotics: engineering and science blended to make physical constructs that move and act
  • Vision: systems that recognize objects based on visual input
  • Artificial neural networks: combinations of computing nodes that, together, provide a method of analysis very loosely based on human thought processes
  • Natural language: the capacity to both understand and create communications, both written and spoken

There is a lot of hype and mystique around AI – “the theory and development of computer systems able to perform tasks normally requiring human intelligence.”

However, despite all the mystique around it, the core of AI is simply a bunch of algorithms that recognise patterns in data. When fed a lot of data and a clear set of rules, the algorithms can be self-learning and self-adapting (machine learning). After recognising a pattern, AI can make software come into action. This can result in speech, predictions, logical decisions, movements, search or other follow-up actions, including telling me a joke in my living room.
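To make "recognising patterns in data" concrete, here is a toy sketch of one of the simplest pattern-recognition algorithms there is: a nearest-centroid classifier that "learns" one average point per label from examples and then classifies new points. The data and labels are invented for illustration; real AI systems use far more sophisticated algorithms, but the principle – learn a pattern from data, then act on it – is the same.

```python
# Toy pattern recognition: learn one centroid (average point) per label
# from labelled examples, then classify new points by nearest centroid.
# All data here is made up for illustration.

def train(examples):
    """Learn one centroid per label from (point, label) pairs."""
    sums, counts = {}, {}
    for point, label in examples:
        acc = sums.setdefault(label, [0.0] * len(point))
        for i, v in enumerate(point):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, point):
    """Assign the label whose centroid is closest (squared distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(centroids[label], point))

# Two made-up clusters: "small" items near (1, 1), "large" items near (8, 9).
data = [((1, 2), "small"), ((2, 1), "small"),
        ((8, 9), "large"), ((9, 8), "large")]
model = train(data)
print(predict(model, (1.5, 1.5)))  # → small
print(predict(model, (8.5, 8.5)))  # → large
```

Feed it more labelled data and the centroids shift accordingly – that, in miniature, is the "self-learning" part.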

Let’s look beyond the hype and see why there are only a few AI experts, why you don’t need to be one of them, why AI doesn’t always work and why AI is not all doom and gloom for the future of work.


Why there are few AI experts

Despite all the articles we are bombarded with on social media, there are very few real AI experts. Tencent estimated that there are about 300,000 AI engineers in the world, where there is a need for millions. Some estimate there are only 10,000 individuals worldwide with the right skills to spearhead serious new AI projects. Even fewer will be able to bring AI into mainstream, useful applications in our lives.

There are only a few real AI experts, simply because an in-depth understanding of AI is not easy to acquire. It requires deep knowledge of statistics, mathematics and algorithms. As some say: “AI isn’t magic, it’s just maths – albeit really hard maths.”

A quick look online at a university tells me that before students can follow the course Artificial Intelligence, they need to complete the course Algorithms and Analysis. This course covers the following key algorithmic design paradigms: brute force, divide and conquer, decrease and conquer, transform and conquer, greedy, dynamic programming and iterative improvement.

When I studied operations research over 20 years ago, I learned some of these algorithms, the maths behind them and how to program them. In my final thesis, I used a very simple greedy heuristic to calculate the optimal allocation of products in a warehouse in order to minimize picking time.
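A greedy heuristic of that kind can be sketched in a few lines. This is not my original thesis code – the product names, pick frequencies and travel times below are invented – but it shows the greedy idea: put the most frequently picked products in the slots closest to the packing station.

```python
# Greedy slotting heuristic (illustrative, not the original thesis code):
# assign the most frequently picked products to the slots with the
# shortest travel time, to reduce total expected picking time.

def greedy_slotting(pick_frequency, slot_travel_time):
    """pick_frequency: {product: picks per week}
       slot_travel_time: {slot: seconds to reach from the packing station}
       Returns a {product: slot} assignment."""
    products = sorted(pick_frequency, key=pick_frequency.get, reverse=True)
    slots = sorted(slot_travel_time, key=slot_travel_time.get)
    return dict(zip(products, slots))

picks = {"A": 120, "B": 45, "C": 300, "D": 10}   # made-up weekly demand
slots = {"S1": 5, "S2": 12, "S3": 20, "S4": 40}  # made-up travel times

print(greedy_slotting(picks, slots))
# → {'C': 'S1', 'A': 'S2', 'B': 'S3', 'D': 'S4'}
```

Like all greedy heuristics, it makes the locally best choice at every step; for this simple cost model that happens to be optimal, but in general greedy methods trade optimality for speed and simplicity.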

The maths behind these algorithms can be difficult, and back then this was reflected in the class sizes. In my operations research class there were fewer than 50 students; in general (less mathematical) economics there were over 400. My bet is that studying AI in depth is still a pretty hard thing to accomplish and will remain limited to the mathematically inclined.


Why it is OK not to be an AI expert

Although developing AI – finding new and better algorithms, tuning them and making them smarter – will be for a few mathematical geniuses, the application of AI can be for the masses. Just as a demand planner can choose a best-fit forecasting algorithm to create a baseline forecast, proven AI algorithms can be selected by the masses to solve other supply chain problems.

That’s fine, and it is no different from the current supply chain profession. Hardly any supply chain professional can mathematically prove forecasting algorithms, the Economic Order Quantity, or why the normal distribution can be used to estimate the safety stock needed for a 98% customer service level. Still, these concepts are widely – and unfortunately sometimes incorrectly – used.
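Those two concepts illustrate the point nicely: the textbook formulas are a one-liner to apply, even if proving them is hard. The sketch below uses the standard EOQ formula, EOQ = √(2DS/H), and the common cycle-service-level safety stock formula, safety stock = z · σ · √(lead time); all input numbers are invented for illustration.

```python
# Applying (not proving) two classic supply chain formulas.
# All demand and cost figures below are invented for illustration.
from math import sqrt
from statistics import NormalDist

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Economic Order Quantity: sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def safety_stock(service_level, demand_std_per_period, lead_time_periods):
    """Safety stock = z * sigma * sqrt(lead time), with z from the
    standard normal distribution (z ≈ 2.05 for a 98% service level)."""
    z = NormalDist().inv_cdf(service_level)
    return z * demand_std_per_period * sqrt(lead_time_periods)

print(round(eoq(annual_demand=12000, order_cost=50, holding_cost_per_unit=3)))   # → 632
print(round(safety_stock(0.98, demand_std_per_period=40, lead_time_periods=4)))  # → 164
```

A planner can use these results daily without ever deriving why the square root appears or why the normal distribution is a reasonable demand model – which is exactly the argument for why AI users won't need to understand the maths inside the algorithms they apply.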

And to use my Google Home, search the internet or use the automatic lane change in my car, I don’t have to understand AI; I’m just a user performing an action or solving a problem.


Why Artificial Intelligence doesn’t always work

AI is pretty good at some stuff. It can also do some pretty average stuff. For me, the sweet spot of AI at this point in time is an environment that combines a clear goal and clear rules with big data and high-frequency, low-impact, logical transactions. Let me explain further.

Big data versus small data

AI needs lots of data to be self-learning and to find the best pattern in data or make the best decision. When Deep Blue was fed all the chess games ever played on record in 1997, it used brute calculation force to beat then world champion Garry Kasparov.

With only a few chess games or data points available, these AI algorithms pretty much lose their value. With only a few data points, Google and Facebook’s search algorithms wouldn’t be too helpful and Tesla’s autopilot wouldn’t work either.

To make accurate predictions – with 99.9% accuracy – AI needs exponential amounts of data. And when human lives are at stake, anything less is simply not good enough.

Clear rules versus a complex, dynamic system

Google took AI a step further when its AI system AlphaGo first beat the world’s best player at the ancient Chinese game Go, a game exponentially more complex than chess. Then Google developed AlphaGo Zero and loaded all the Go rules into this AI system. With perfect knowledge of the rules of Go, this system was capable of teaching itself Go, and within three days it was able to beat its AI predecessor. An amazing achievement, done in a deterministic environment with clear rules.

However, if a business has to assess in detail how a merger or acquisition impacts its company culture and the behaviour of its employees, could an algorithm do this? Or, thinking about my two daughters, could AI predict how they will develop as employees in the workforce?

With some clear and some ambiguous social and cultural norms; many-to-many personal relations and interactions; emotions driven by external influences, hormones and internal chemical reactions; and behaviour influenced by peers, family life, social media, nutrition and sleep, to name just a few, AI has a while to go before tackling a complex, dynamic system like this.

High-frequency, low-impact versus low-frequency, high-impact decision making

Many can accept outsourcing small replenishment decisions to AI, or production and transportation movements to robots. These are the so-called 3Ds: dull, dirty and dangerous jobs. They are mostly high-frequency and repetitive, with a single transaction having limited impact.

However, when a business has to decide to build a new production plant or warehouse costing dozens of millions, would it outsource this to an algorithm? These low frequent, high value decisions might take some clever AI analysis as input, but AI wouldn’t make the final decision itself.

An example of a poor AI result in a low-data, low-frequency, high-impact environment is the soccer World Cup. This tournament has been around for less than a century, is played once every 4 years and it’s a pretty big deal to win it. A group of scientists used AI to predict the winner of the 2018 World Cup. They predicted that Spain was the most likely winner, followed by Germany and Brazil. This result is logical based on World Cup history, but how much more wrong could AI and this group of scientists get it? (Germany was eliminated in the group stage, and France won the tournament.)

Logic versus humanity

AI is good at logic, because it is maths. It is not so good at social skills and feelings: feeling empathy, showing kindness, being creative and understanding social context and relations. Microsoft’s chatbot Tay famously went from an innocent chatbot to a bully (and that’s a euphemism) in less than 24 hours.

Solving a problem versus choosing a problem to solve

Last but not least, AI needs a problem to solve – a clear goal. Then, if the right (self-learning) algorithms are applied and clear rules or data are available, AI can solve almost any problem. However, machines don’t know which problem is the relevant one. When there is no goal, for now at least, AI wouldn’t really know what problem to solve.


The impact of Artificial Intelligence and automation on jobs

The application of AI and automation will grow, and likely in areas we can’t even imagine yet. McKinsey estimates that firms will derive between $1.3trn and $2trn a year in economic value from using AI in supply chains and manufacturing. McKinsey also estimates that about 50% of human work activities can be automated and that in 60% of occupations, at least a third of activities can be automated.

But it’s not all doom and gloom. McKinsey also thinks that very few occupations will completely disappear in the next decade. Very few occupations – less than 5 percent – consist of activities that can be fully automated. Deloitte estimates that over 30% of high-paid jobs will be social and essentially human by nature, and PwC recently estimated that AI will create as many jobs as it destroys.

However, the impact differs across sectors. Supply chain is amongst the hardest hit, with AI displacing 38% of transport jobs and 30% of manufacturing jobs. An example of automation in transport is the first iron ore delivery by autonomous train in Australia. In mining, driverless trucks have been in use since 2015. Robots have been used in car manufacturing for decades, and their deployment will continue to increase. However, there are already counterexamples: Mercedes-Benz, faced with hyper-customization, made plans to reduce the number of robots on a production line and replace them with human labor, as it was too costly to keep reprogramming the robots. Elon Musk recently admitted that ‘too much automation’ was slowing Tesla’s Model 3 production, and that he wants to use more humans in the factory to speed up production.

Human and machine collaboration

Everyone agrees that the future of work and jobs will change. Maybe the most important thing the workforce needs is to develop a mindset that can deal with continuous change and continuous learning, and to further develop the soft, human elements like empathy and kindness that complement AI. Leave other tasks to machines and learn how to work best together.

Maybe this is best said by Garry Kasparov, one of the first humans who, in front of the whole world, was on the receiving end of the brute force of AI when he lost a chess game to IBM’s Deep Blue in 1997.

“With so much power now brought by machines, we have to find a refuge in our humanity. It’s about our creativity, our intuition, our human qualities that machines will always lack. So we have to define the territory where machines should concentrate their efforts. This is a new form of collaboration where we recognize what we’re good at and not interfere with machines where they’re superior – even if it hurts our pride.”

The winning businesses of the future will focus on creating cultures that embrace and develop human qualities – re-imagining work and business models – and add value together with AI and technology. The shortsighted, losing businesses will focus on AI and technology only.

photo credit: Wharton

