
Bad Users Can Fail A Good AI System


A good artificial intelligence (AI) solution in the hands of bad users can be disastrous, while an average AI solution in the hands of good users can be a great success. Hence, it is important to educate users so that they extract maximum positive value from the solution.

If users or other interacting systems are not good enough, then no matter how intelligent your AI system is, it will eventually fail to deliver. Outright failure is not the only possible outcome either; in some cases, poor use may also create business risks.

AI systems are not standalone; they often interact with several other systems and with humans too. At each interaction point, there is a chance of failure or of degraded performance.


There are a variety of users

We can classify computer users by their roles or by their expertise levels. Role-based classifications give us administrators, standard users, and guests, whereas skill-based groupings put users in categories such as dummy, general user, power user, geek, or hacker.

All these categories assume users who are just good enough to use the computer or the software installed on it. However, users whose expertise is borderline, barely good enough or below, soon become bad users of technology; so much so that they can bring a relatively good computer system, including an AI system, to a halt.

Additionally, I have seen that the following user categories are dangerous enough to cause problems:

Creative folks

Creative users are generally skilled enough to use a tool, but they often push it beyond its specified use. Doing so may render the tool useless or break it.

I remember an interesting issue from my tenure with LG Electronics. One of the products LG manufactured was the washing machine, a typical home appliance that a normal user would use for washing clothes.

However, when several field failure reports arrived from service centres, especially from the north-west part of India, we were stunned by the creativity of washing machine users.

Restaurant owners in Punjab and nearby regions were using these machines for churning lassi at a large scale. Churning lassi requires considerable human strength because of its thick texture, especially when it is made in large commercial quantities.

That is why restaurant owners started using top loader washing machines for making lassi. This unintended and unspecified usage of the appliance caused operational issues and resulted in an influx of service calls. Such creativity looks interesting at face value but certainly causes problems with technology tools.

Another example of such creativity is the use of Microsoft Excel in organisations. How many companies have you seen where Excel is used not only for tabulation and record-keeping but also for small scale automation through macros? And how many times have you seen people using PowerPoint for writing reports instead of creating presentations?

All these are creative uses of tools and may be okay once in a while. Mostly, however, such users are abusing the system and the tools, which can cause unintended damage and losses to the organisation and expose the company to more substantial risks.

Naughty users

Naughty users are not productive, but they do not mean any direct harm either. They are merely toying with the system, and in doing so they may cause unknown issues, especially with AI systems.

If your AI system has a feedback loop through which it gathers data for continuous training and adjustment, this playfulness becomes a real issue: erroneous or random data can disturb the established process and models.
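To make this concrete, here is a minimal sketch of a guard one could place in front of such a feedback loop. Everything here is hypothetical: the record schema (a dict with a numeric 'score' field), the function name, and the threshold are for illustration only.

```python
import statistics

def filter_feedback(records, z_threshold=3.0):
    """Screen out feedback whose score is a statistical outlier
    before it enters the retraining dataset."""
    if len(records) < 3:
        return records  # too few samples to judge outliers
    scores = [r["score"] for r in records]
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    if stdev == 0:
        return records  # all scores identical, nothing to flag
    return [r for r in records
            if abs(r["score"] - mean) / stdev <= z_threshold]
```

Anything screened out is better quarantined for human review than silently discarded; an unusual entry may be noise from a playful user, but it may also be a genuine edge case.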

Saboteurs

Users who deliberately act badly and try to sabotage the system are often disgruntled employees.

Sometimes these users think that the AI system is no better than they are and must be taught a lesson, so they deliberately attempt to make the system fail at every chance they get.

Such saboteurs mostly act with a plan, which makes them difficult to spot in the early stages.

Luddites

A classic example of bad users would be Luddites: people who are, in principle, opposed to new technology or to new ways of working.

The original Luddites were a secret oath-based organisation of English textile workers in the 19th century, a radical faction that destroyed textile machinery as a form of protest. The group was protesting against the use of machinery in a 'fraudulent and deceitful manner' to get around standard labour practices. Luddites feared that the time spent learning the skills of their craft would go to waste as machines replaced their role in the industry.

We now use the term for people who oppose industrialisation, automation, computerisation, or new technologies in general. Such users, mostly employees, feel threatened and affected by the implementation of new AI systems. If your change management function is doing a good job, this type is easy to spot.

Bad user versus incompetent user

Incompetence can mean different things to different people. However, in general, it indicates the inability to do a specified job at a satisfactory level.

If users can operate the system without (human) errors and in the way it was meant to be used, you can call them competent. Incompetent users fail to use the system flawlessly on account of their own ability (not the system's problems), and they often need considerable help from others.

Bad users, on the other hand, may be excellent at using the system, but their intent is not a good one.

All incompetent users are inherently bad users of the system; however, bad users may or may not be incompetent. The reason we need this distinction is that one condition is curable while the other is not: you can make incompetent users competent by training them, but no amount of training will help intentionally bad users.

Importance of change management

Most of the other system interaction issues are the result of poor or no change management during the full term of the project.

While AI has the power to transform organisations radically, substantial adoption numbers are difficult to achieve without an effective change management strategy in place. Cover all the bases before you begin the implementation, and continue the effort for as long as necessary.

When you have a complete understanding of how an AI solution will help end users at all levels of the company, it becomes easy to convey its benefits.

Merely quoting the feature list of your new AI solution will not help; you need to explain what exactly the solution is going to do and change, and how it will help everyone do their job more effectively.

A safer, less risky approach is to pick tech-savvy users for the first round of deployments. They will not only provide useful feedback about the AI system you are deploying but also highlight potential roadblocks for a full rollout, and they can help you determine whether the solution works as expected for their purposes.

These users then become your advocates within the organisation and help coach their peers when needed. They also help create significant buy-in within teams, potentially reducing the number of bad users down the track.

Educating users for better adoption

A proper training plan uses real-life scenarios and hands-on sessions, and established training practice welcomes user feedback and acts on it before moving forward. Not doing so means leaving much of the right talent out, and unhappy.

If you want to ensure a smooth transition and user adoption, start user education early in the process, and tailor it to each stakeholder group. Provide baseline information and knowledge about AI technology as a whole, and then deeper insights into the specific application you are deploying. This helps set expectations; every involved member must understand the benefits of the AI solution.

Through educational initiatives, you can quickly dispel misconceptions about AI. For some stakeholders and users, especially those unfamiliar with how AI can help, futuristic technologies can be intimidating. This intimidation begets a defensive response and brings out the bad user in them in various forms.

With proper education, the benefits of AI become apparent to your team members, fostering positive uptake.

Centre the education on the fact that the AI solution will enhance employees' daily work and make routine tasks easier to handle. When communicating with your employees, focus on the purpose of the change and emphasise the positive outcomes it will bring.

Even executive leaders must understand what is happening and know the capabilities and limitations of the AI system you are deploying. By investing time in appropriate education, executives become able to ask the right questions at the right time; being more involved is necessary for them.

It is hard to recover from a lack of end-user adoption if you have not invested enough in user education, so make sure you spend an adequate budget on educating users for better AI adoption. Create multiple formats that are readily available on various devices, and include offline in-person sessions. When you roll out the training, measure the uptake and the types of resources employees use most; this tells you which medium is most effective so you can leverage it further.
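As a sketch of what measuring uptake could look like, the snippet below tallies reach and usage per training format from a hypothetical usage log; the column names and data are invented for illustration.

```python
import pandas as pd

# Hypothetical usage log: one row per employee interaction
# with a training resource format.
usage = pd.DataFrame({
    "employee": ["a", "b", "c", "a", "d", "e", "b"],
    "format":   ["video", "video", "doc", "in_person",
                 "video", "doc", "in_person"],
})

# Reach (unique employees) and total interactions per format.
uptake = usage.groupby("format")["employee"].agg(
    reach="nunique", interactions="count")
print(uptake.sort_values("reach", ascending=False))
```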

Going all out on education and training materials can minimise the chances of failure when employees start using the systems.

When you deploy new systems, there is typically a spike in productivity loss, generally the result of slow adoption and a long learning period. You can minimise this loss with a proper approach: to ensure successful AI deployment, pair education planning with training.

Moreover, as a rule of thumb, education and training should not end after solution deployment. They must become periodic activities to ensure that you sustain all the positive gains.

Checking the performance and gaps

It is reasonable to expect a human user to demonstrate the same performance repeatedly for any given set of scenarios. You would also expect other connected systems to exhibit similar consistent behaviour for you to be able to trust the whole system.

It is essential to check performance for consistency and to find any gaps as early as possible in the deployment phase. AI systems usually produce probabilistic outcomes, so some variation at the solution level is accepted by design. When you couple this inherent variation with the variation of several humans and other systems, it can quickly become unmanageable: although each variation might be acceptable independently, combined they can result in poor overall performance.
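A quick worked example of how individually acceptable variations stack up: if the variation sources are independent, their variances add, so the combined spread is the root sum of squares of the individual standard deviations. The figures below are invented for illustration.

```python
import math

# Hypothetical standard deviations of independent variation sources,
# each acceptable on its own against a per-source tolerance of 2.0.
model_sigma = 1.5      # inherent variation of the AI solution's outputs
user_sigma = 1.2       # variation across human users
upstream_sigma = 1.8   # variation from connected upstream systems

# Independent variances add, so the combined spread is larger
# than any single source and breaches the 2.0 tolerance.
combined_sigma = math.sqrt(model_sigma**2 + user_sigma**2 + upstream_sigma**2)
print(f"combined sigma = {combined_sigma:.2f}")  # about 2.63
```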

That is why performance must be checked for these gaps once you deploy the AI solution. When the solution is in use, several of the systems interacting with it may go haywire, and if you did not plan for systemic changes before the deployment, this can soon become a roadblock.

Performing Gauge R&R (gauge repeatability and reproducibility) tests can reveal several actionable findings. Gauge R&R is a statistical technique for quantifying the variation introduced by multiple operators, and you can use it to test how various users interact with the same system, or how multiple systems interact with your AI solution.

The outcome of a Gauge R&R study indicates the causes of variation in performance. These findings can help you formulate training plans to fix user performance, as well as system change requirements to make everything work seamlessly.
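Below is a deliberately simplified, averages-based sketch of the idea; a real study would use the full ANOVA method. A few hypothetical users score the same AI outputs twice each, and we separate within-user variation (repeatability) from between-user variation (reproducibility). All names and numbers are invented.

```python
import pandas as pd

# Hypothetical Gauge R&R data: three users score the same two
# AI outputs ("parts"), with two repeat trials per pair.
data = pd.DataFrame({
    "operator": ["u1"] * 4 + ["u2"] * 4 + ["u3"] * 4,
    "part":     ["p1", "p1", "p2", "p2"] * 3,
    "score":    [7.1, 7.3, 5.0, 5.2,
                 7.0, 7.2, 5.1, 4.9,
                 8.0, 8.4, 6.1, 6.3],
})

# Repeatability: average variance of repeat trials by the same
# user on the same output (within-user variation).
repeatability = data.groupby(["operator", "part"])["score"].var().mean()

# Reproducibility: variance of the users' overall averages
# (between-user variation; the parts are common to all users).
reproducibility = data.groupby("operator")["score"].mean().var()

print(f"repeatability   = {repeatability:.3f}")    # about 0.030
print(f"reproducibility = {reproducibility:.3f}")  # about 0.406
```

Here reproducibility dwarfs repeatability, which points at user-to-user differences (user u3 scores consistently higher) and hence at a training gap rather than an inconsistent system.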

Continuously monitoring user and system interactions, and periodically conducting systematic checks (and tests), can help you manage incorrect usage of your AI solution.
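One lightweight way to operationalise such monitoring is to track the share of problematic interactions per user and flag outliers for follow-up. The log format, outcome labels, and threshold below are all assumptions made for the sketch.

```python
from collections import Counter

# Hypothetical interaction log: (user, outcome) pairs, where an
# outcome other than "ok" marks a problematic interaction.
log = [("amit", "ok"), ("amit", "ok"), ("raj", "rejected_input"),
       ("raj", "rejected_input"), ("raj", "override"), ("mei", "ok")]

totals, problems = Counter(), Counter()
for user, outcome in log:
    totals[user] += 1
    if outcome != "ok":
        problems[user] += 1

# Flag users whose problem share crosses an (arbitrary) 50% threshold;
# they are candidates for retraining or a support conversation.
flagged = [u for u in totals if problems[u] / totals[u] > 0.5]
print(flagged)  # ['raj']
```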

Handling user testing and feedback

No matter how much content you put into the training material, it is not always possible to cover all the questions users may have. This makes it essential to establish an easy-to-use, quickly accessible communication channel between users and the responding team.

Making it clear who the contact person is, how long a response will take, and how to escalate if needed helps gain users' confidence and gives them clarity about the AI deployment. It also encourages users to come to you whenever they encounter issues.

Giving users confidence that their feedback is valuable and will always be taken on board can go a long way. Moreover, once feedback is received, do not just consume it; act on it.

Sincerely checking every piece of feedback and fine-tuning your AI application accordingly improves the users' experience and gives them confidence in the deployed system. It also significantly reduces the number of bad and incompetent users, thereby quickly reducing your overall risk exposure.

Augmenting HR teams

Until now, HR teams have been responsible for managing the performance of the (human) workforce. This is changing as machines become smarter and AI becomes mainstream. So, how do you plan to handle the new type of workforce, which is fully automatic (AI only) or augmented by smart machines (humans + AI)?

HR members will have to manage performance gaps, issues related to system malfunctions, and the retraining requirements of both humans and machines. Any impact on human performance caused by poor-quality AI systems will have to be handled differently from typical human (only) performance improvement.

Generally speaking, AI systems are smart, but they seriously lack a key human characteristic: common sense! With the deployment of digital twins of your human employees, managing this gap may become an essential requirement.

Humans in charge of powerful technologies will have to be trained, coached, and managed effectively.

It would be a good idea to take steps towards establishing a new HAIR (human and AI resources) team, or to augment the existing HR team to accommodate these new challenges. Developing appropriate policies and procedures must be core to its initial tasks.

Start looking beyond the technology

No matter how smart the technology, and AI in particular, it cannot apply common sense and a human perspective.

Therefore, merely nailing the technical element of AI is not enough; you need to balance it with the human aspect. Understanding the surrounding environment in which you are using AI is crucial.

Technology teams need to demonstrate cognitive intelligence if they want to be successful. As much as the development and deployment of an AI solution are critical, the user aspect is equally important. Without proper use (and users), AI success will surely hang by a thread.

A good AI solution in the hands of bad users can be disastrous, while an average AI solution in the hands of good users can be a great success. Users have the full power to make or break it; your goal should be to enable your users and extract maximum positive value from the system.


Anand Tamboli is a serial entrepreneur, speaker, award-winning published author and emerging technology thought leader
