It is a common phenomenon that if you repeat a word enough times, it loses its meaning. This is already happening with artificial intelligence (AI). Although AI has finally made it to the mainstream, and rather quickly at that, its journey is going to be rockier than it was for other technologies in the past.
AI, as a concept, is not new; it has been around for centuries. However, it took off in earnest during the 1950s, when Alan Turing explored it further. Progress was limited, though, by the state of the computer hardware available at the time.
In later years, computers became faster and more affordable, with far more storage and computing power. Since then, research in AI has grown steadily. There was a time when we had mere 1MB memory systems housed in a big box; now we have 128GB memory systems in a credit-card-sized device. Advances in hardware have propelled the technology forward by leaps and bounds.
In the past few years, there has been sudden growth in all activities related to AI, underpinned mainly by the realisation of the Internet of Things (IoT) and complementary technologies such as Big Data and cloud computing. And since last year, we have been seeing AI implementations appear one after another.
There is no doubt that AI is still in its infancy, but it has reached a critical mass where research and application can happen simultaneously. We can confidently say that we have changed gears.
AI is already making several decisions that affect our lives, whether we like it or not, and it has covered significant ground in recent years.
AI is not everywhere yet
While it would be natural to think that AI has penetrated almost every vertical or market, that is far from the truth. At best, there are a few technology spot-fires in select industries where AI is making its mark.
Unfortunately, as always, marketing gimmicks are at play to make everyone feel that AI has covered everything, while several sectors remain untouched.
Many image-recognition systems are now better at detecting cancer or micro-fractures from a patient’s MRI or X-ray reports. Many pattern-recognition systems can correlate several pathological reports and make a fairly precise prediction of a patient’s health status. And yet, medical recommendations without a doctor’s explicit approval are not commonplace. This is a good thing, because when human life is at stake, systems should never make the final call. Therefore, as far as the medical field is concerned, AI might only ever reach the status of assisted intelligence, and may not be permitted (indeed, should not be allowed) to become a mainstream phenomenon at all.
While companies continually take humans out of the customer service sector and replace them with AI-driven chatbots or automated responders, the human touch is becoming expensive. At one event where startups were pitching their companies and products, one startup’s primary differentiator was that it provides personal, human support for all queries. Overall, we are seeing an interesting shift in how AI-based and non-AI-based offerings are positioned.
Self-learning applications are another area where AI is making an entry. With customised content, pacing and recommendations, such applications are becoming quite popular. However, as that happens, teaching, coaching and mentoring will become high-touch services and remain in demand. It is therefore difficult to say whether AI has truly touched this sector or merely morphed it into something else.
Another area that AI has not yet touched, and might never truly affect, is live entertainment and art. These are such personalised and creative pursuits that, without a human in them, they would not have the same meaning. There have been a few experiments with AI creating art, but those works have quite a different flavour to them. AI systems can only create art based on what they have been trained on, and much of it consists of geometrical, systematic shapes or pictures, nothing that a human would necessarily draw, with its slightly imperfect yet natural imbalance. Real authorship of a work of art cannot yet be bestowed upon an artificial system.
Creativity is part process and part randomness, which is the exact opposite of a rule-based method. AI is therefore unlikely to contribute directly to the creative industry any time soon.
Users and employees have mixed reactions
As far as end-users of AI technology are concerned, there is a high level of fear, uncertainty and doubt (FUD) amongst the majority. The sheer duality of this technology is a significant concern: AI is a powerful tool and, just like any other tool, humans can use it for good or for bad. Moreover, since people are not yet actively talking about how to handle potential misuse of AI, this remains a growing concern.
Another reason for a sceptical outlook towards AI is a plausible fear of job losses. If massive numbers of people lose their jobs without an alternative system in place, it would undoubtedly be dangerous and could create chaos.
But then again, if you think about it deeply, you will realise that it is not the loss of a job itself that concerns most people. What they usually worry about is having nothing better to do once their mainstream work is disrupted.
Unfortunately, the majority of AI implementation projects do not address this issue upfront; instead, it is handled as an afterthought. This is perhaps the most substantial reason for scepticism about AI.
At a superficial level, many of us appreciate the ease and convenience these AI solutions provide. However, our comfort erodes as they expand their scope and touch critical areas of our lives, such as banking, social benefits, security, healthcare, jobs and driving.
Bias and racism have been front-runners in the list of reasons for distrust in AI. People also fear that AI may show blatant disregard for human control. This has no precedent yet, but it is practically possible and is, therefore, a legitimate concern.
Errors-at-scale is not a widely known issue, but those who have been victims of it in the past see it as one of the significant concerns about using AI in daily life. Imagine a public AI system cancelling the credit cards of thousands of people because of some error; the scale of chaos this could cause is the main reason for the concern.
As a general observation, everyone is comfortable as long as applications are not touching or affecting core life matters. People welcome AI in areas of entertainment and luxury, but not so much when critical aspects of life are in its hands.
But enterprises have different views
Despite mixed feelings and heightened expectations, the business world still manages to see AI in a relatively balanced manner. People from a wide range of industries agree that AI is tricky to deploy, and that it can consume a lot of money and time before becoming useful. It can be costly, and the initial payoff can be quite modest (and sometimes lower than that). The overall payback period for an AI project has not been attractive, and in many cases it is hard to establish objectively.
Several experts find it unsettling that some vendors push AI systems before they have even figured out their purpose, yet claim to know what problems these systems will solve. Some businesses discourage this approach and take a prudent view, but they are a minority; most companies blindly believe the claims.
It is one thing to see breakthroughs in gaming AI, such as in Go and chess, or to have devices that turn on music at a voice command. It is quite another to use AI to make step changes in businesses, especially those that are not fundamentally digital.
When it comes to improving and changing how business gets done, AI and other tools form only small cogs in a giant wheel. Changes that have company-wide repercussions are a different ballgame.
The change management aspect has not been easy to handle in the past, and that is not going to change in the future either. Several experts from various business domains need to be involved for any significant change to occur, and they have to be the best ones if we want effective outcomes. This essentially means pulling the best people away from routine business work and letting them focus on AI implementation, which is a challenging proposition for a business of any size.
What the future holds
The trend of technological innovation has always headed upwards. AI and other emerging technologies, apart from bringing efficiencies, are also opening up new possibilities, which in turn create new business models and opportunities. This will continue as we progress.
Most daily tasks that depend on best estimates or guesswork will also see a significant shift because of the abundance of data. With access to more data, the need for devices that can process it at the edge will increase, and this will be a key driver in maintaining the progression.
One of the significant drivers of these technological advances is the democratisation of resources, as seen in the Internet revolution and the open source hardware and software movements. As AI becomes a part of our daily lives, we will see more of this democratisation, and it will keep boosting progress.
As of now, most AI applications follow a supervised learning approach. In the years to come, we will start seeing more and more unsupervised learning, which will keep systems updated continuously. However, there is one significant barrier to cross: trust. Unless the trust factor improves, supervision will remain a necessity.
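To make the distinction concrete, here is a minimal sketch, assuming Python and the widely used scikit-learn library (an illustrative choice, not something referenced elsewhere in this article). A supervised classifier is trained on data that humans have already labelled, while an unsupervised clustering algorithm is left to find structure in the same data without any labels.

# A minimal sketch, assuming Python 3 with scikit-learn installed.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 200 two-dimensional points drawn from three groups.
X, y = make_blobs(n_samples=200, centers=3, random_state=42)

# Supervised learning: the model learns from human-provided labels (y),
# i.e., someone has already defined what a "correct" answer looks like.
classifier = LogisticRegression(max_iter=1000).fit(X, y)
print("Supervised predictions:", classifier.predict(X[:5]))

# Unsupervised learning: the model sees only the raw data (X) and must
# discover structure on its own, with no labels and no human supervision.
clusterer = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("Unsupervised cluster assignments:", clusterer.labels_[:5])

The trust barrier mentioned above arises precisely here: in the unsupervised case, no human has vetted what the system decides a group or pattern means before it acts on that decision.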
There is no accepted or standard definition of good AI. However, good AI is one that can help users understand various options, explain the tradeoffs among multiple possible choices and then help them make a decision. Good AI will always honour the final decision made by humans.
On the consumer front, virtual support tools will multiply and become mainstream. It will be almost expected to encounter these bots before talking to any human at all. However, only businesses that demonstrate a customer-centric approach will thrive in this scenario, while others will struggle to adopt the right technology. And, most importantly, “What do you want to do when you grow up?” will soon become an obsolete question.
AI will change the job market entirely, as the demand for soft skills grows while most hard skills are automated. This will pose a significant challenge for the Indian economy in particular, since we have mostly relied on hard skills for local as well as global opportunities and will now have to cope with declining demand. We will be forced to come up with new business models, not just as businesses but also as an economy.
Maintaining a balanced approach
Regardless of what the near or long-term future with AI looks like, there are a few points that we must understand and accept in their entirety. Most of these points align with the OECD’s AI principles released in early 2019.
AI systems should benefit humans, the overall ecosystem and the planet itself by driving inclusive growth, sustainable development and the well-being of all. These systems must always be designed to respect and follow the rule of law and the rights of that ecosystem (humans, animals, etc), as well as general human values and the diversity they exhibit. More importantly, there must be appropriate safeguards so that humans are always in the loop when necessary, and can intervene whenever they feel the need, regardless of necessity. After all, a fair and just society should be the goal of any advancement.
Creators of AI systems should always demonstrate transparency and responsible disclosure about the functionality and methodology of the system. People involved in, and affected by, these systems need to understand how outcomes are derived and, if required, should be able to challenge them.
An AI system should not cause harm to its users or to living beings in general, and must always function in a robust, secure and safe way throughout its lifecycle. The creators and managers of these systems have the responsibility to continually assess and manage any risks in this regard.
Most importantly, on the accountability front, anyone creating, developing, deploying, operating or managing AI systems must always be held accountable for the system’s functioning and outcomes. Accountability can drive positive behaviours and thereby help ensure that all the above principles are adhered to.
There is a general feeling that over-regulation limits innovation and advancement. However, there is no point in racing to be first; instead, let us strive to be better. Being fast and first by compromising on ethics and quality is not an acceptable approach.
It is unlikely that in the next ten years or so we will have robots controlling humans. However, technology consuming us, our time, our feelings and our mindfulness is very much a reality even today, and it is getting worse by the day.
Just one wrong turn in this fast lane is all it will take to set society back. The rise of AI should not lead to the fall of humanity. Let us work towards keeping technology, AI or otherwise, in our control, always!
Anand Tamboli is a serial entrepreneur, speaker, award-winning published author and emerging technology thought leader