Many believe that the Internet of Things, artificial intelligence or automation will solve their problems for good, only to find themselves in a differently flavoured soup post-implementation. Read on to find out why this happens.
Once upon a time…
Once upon a time (read: several years ago), in the early days of my career, while I was working with a home appliance manufacturing company, an interesting incident happened.
We reported to a Senior Manager back then, and he was quite steadfast in pushing for things once he had made up his mind. One day he asked us to prepare a report in which certain items were to be highlighted in italics and/or coloured fonts, print it, and hand it to him.
The only (big) problem was that we were using a program called “WordStar”, and all we had was a dot-matrix printer! For those who do not know, WordStar was a DOS (Disk Operating System) based word processor. And I do not have to explain why a dot-matrix printer was a problem for the given task.
Our obvious response was: not possible! He wouldn’t listen and kept pushing us to make it happen. He insisted, saying something along the lines of: don’t try to fool me, I know the computer can do anything; your boss promised us that when we bought it!
So, we stared at our boss, with rather mixed feelings, waiting for him to own up to it (after all, he had promised the impossible) and fix it! Eventually, he did something behind the scenes, and we were saved.
But that incident left a very strong impression on my ‘green-self’. This was not just a case of over-promising but also one where business expectations were inappropriate. Our interactions with that Senior Manager remained full of friction thereafter, as we had lost some credibility in the battle.
Then and now…
Fast forward almost twenty years, and here we are, still dealing with the same types of problems! It feels like déjà vu… but why?
For some funny reason, people are increasingly assuming that computers are better than humans and can do wonders. What is more, they assume that humans may get it wrong sometimes, but computers will not, ever! This assumption is posing a different set of challenges to us.
These challenges are aggravated as computers become more pervasive and take part in our daily lives in many ways. It does not take a genius to tell that computers do not have brains of their own, let alone intelligence or conscience. It is the developer who instructs and teaches the computer what to do. If developers make mistakes and design or develop the code poorly, it will be a problem. If they do not test their work appropriately, use sub-par hardware, or fundamentally misunderstand the user requirements, the computer will perform poorly!
So, what is the problem…
The problem is not emerging technology or the bright future it promises.
Our adamant belief that a sunny technological future is just around the corner is the issue!
Many businesses, or rather the senior responsible managers in those businesses, still believe that IoT, AI, or automation will solve their problems for good, only to find themselves in a differently flavoured soup post-implementation. Why might this be the case?
In my view, it starts with expectations being raised early in the adoption process, when the person in charge of such initiatives presents only one side of the story. They usually do not tell the other side, either because they do not know it (lack of full knowledge) or because they have a special interest in withholding it (as often happens with technology vendors). Accepting the fact that we do not know what we do not know is quite critical here.
This problem grows with unrealistic expectations of the technology, as well as a failure to define acceptance criteria upfront. Failing to define such criteria upfront only invites the endowment effect: implementation teams try to justify (much later) that whatever has been developed should be consumed because they worked so hard on it; the worse part is when they retrofit the acceptance criteria just to make it pass.
So essentially, businesses do not have stubborn goals (read: acceptance criteria) and do not have a handle on the means. This results in a somewhat uncontrollable situation. Development teams may tell you that the machine will learn eventually, but they will not tell you when and how it will improve.
Garbage in will always result in garbage out, no matter how many years you keep feeding it and how intelligent the machine is!
Know thy limits…
No machine or AI can differentiate between right and wrong; it can only choose what is popular in the data it has learned from.
There is, unfortunately, a fundamental limitation of Artificial Intelligence: it learns from the data fed to it. Whether the learning is supervised or unsupervised does not matter; the data must be good and balanced. If we want to teach the machine with examples, the examples must be good.
If, for some reason, clean data cannot be ensured, then the testing of the developed AI must be flawless. If testing has gaps and the data is bad, bad AI will rise. It will not only turn garbage in into garbage out, but will do so at a much faster rate and at scale. No one would want that. Therefore, we need to know these limitations and deal with emerging technologies accordingly.
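To make the “choose what is popular” point concrete, here is a minimal sketch in plain Python, using made-up ‘ok’/‘fault’ labels (not any real dataset), of how a skewed training set rewards a model that simply parrots the majority: it posts an impressive-looking accuracy figure while never catching a single fault.

```python
# A minimal sketch (hypothetical data) of garbage in, garbage out:
# on an imbalanced set, a "model" that always predicts the most
# popular label looks accurate yet is completely useless.

from collections import Counter

# Hypothetical sensor readings labelled "ok" / "fault"; 95% are "ok".
training_labels = ["ok"] * 95 + ["fault"] * 5

# The laziest possible learner: always predict the most common label seen.
majority_label, _ = Counter(training_labels).most_common(1)[0]

test_labels = ["ok"] * 95 + ["fault"] * 5
predictions = [majority_label] * len(test_labels)

accuracy = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)
faults_caught = sum(p == t == "fault" for p, t in zip(predictions, test_labels))

print(f"accuracy: {accuracy:.0%}")        # 95% -- looks impressive
print(f"faults caught: {faults_caught}")  # 0   -- the one thing we needed
```

This is, of course, a caricature, but it shows why an accuracy number alone is meaningless without balanced data and acceptance criteria that test the cases the business actually cares about.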
There are several challenging aspects that an AI machine cannot handle. Virtues such as fairness, morality, and ethics cannot be taught to a machine, and hence a machine cannot make certain judgement calls based on them.
Many narrow AI programs are not flawless per se. These programs merely try to imitate human behaviour (which itself is sometimes questionable). When choices are black-and-white, they work well, but they soon fold when problems move into grey areas. Poorly designed programs then tend to make random (often wrong) choices, costing businesses heaps of money. Many businesses feel this is an acceptable error rate, which can be an unfounded assumption, especially when acceptance criteria were not fixed before starting the journey.
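As a rough illustration of that grey-area folding (with hypothetical scores and a hypothetical approve/reject threshold, not anyone’s real system), consider a hard cut-off decision under a little measurement noise: clear-cut cases are decided consistently, while borderline cases become a coin flip.

```python
# A sketch (made-up numbers) of why a hard-threshold decision works on
# black-and-white cases but "folds" in the grey area: inputs near the
# cut-off flip their outcome under tiny amounts of noise.

import random

random.seed(42)

THRESHOLD = 0.5  # hypothetical cut-off separating "approve" from "reject"

def decide(score: float) -> str:
    """Classify a score after adding a little real-world measurement noise."""
    noisy = score + random.uniform(-0.05, 0.05)
    return "approve" if noisy >= THRESHOLD else "reject"

for score in (0.10, 0.50, 0.90):
    outcomes = [decide(score) for _ in range(1_000)]
    approve_rate = outcomes.count("approve") / len(outcomes)
    print(f"score={score:.2f}: approved {approve_rate:.0%} of the time")

# Clear-cut scores (0.10, 0.90) get the same decision every time;
# the grey-area score (0.50) comes out as a near coin flip.
```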
However, looking at the bigger picture, narrow AI is the lesser of two evils. Anything beyond it would mean defining and codifying a lot of grey matter, and humans have limits!
What are the takeaways…
For sure, there is a lot to discuss about teaching machines morality and ethics, about working in grey areas, and the like. However, we cannot wait for all the lights to turn green; we must keep moving forward, learning, and improvising.
But the biggest takeaway, for now, is to remain positively sceptical, keep our sensibility hats on, and adopt the technology with a grain of salt.
Machines make mistakes, just as humans do, and they will keep making them in the future too. Businesses must accept this fact and understand that machines, much like humans, need attention, retraining, and a performance improvement plan before they go live again.
Businesses must make sure that machines, just like humans, are progressively trained and rigorously tested before being given more responsibility. Any failure in a machine’s performance should be dealt with, if anything, more strictly than a human’s.
I also suggest that businesses establish, or augment their existing HR department into, a HAIR (Human & Artificial Intelligent Resources) department. Appropriate policies should be developed for managing those AI resources, just as they are for humans. This may sound a bit silly for now, but the direction we are heading in will soon dictate it. A movement towards making AI transparent is already catching on.
Lastly, do not get carried away and assume that just because we have cool technology, we can use it to solve every problem around us. Emerging technologies are new hammers; let us avoid treating every problem as a nail and avoid rushing into the emerging future. It is extremely difficult to undo strategic and technological mistakes these days.
Sometimes it is better to deal with humans than machines; sanity is the key!
This article was first published online on 14 March 2019 and was recently published in the July 2019 issue of EFY.