
Cybersecurity: Supervising Your AI With The Red Team


We have supervised machine learning as a concept, but supervising artificial intelligence (AI) itself has not been thought through as deeply. In this article we focus on ongoing controls over an AI solution, look at what supervising AI involves, and elaborate on the concept of a red team in the context of AI solutions.

In military parlance, the red team is the one that uses its skills to imitate an attack and the techniques a potential enemy might use. The other group, which acts as the defence, is called the blue team. Cybersecurity has adopted these terms with the same meaning.

A red team is an independent group of people that challenges an organisation to improve its defences and effectiveness by acting as an adversary or attacker. In layman’s terms, it is a form of ethical hacking: a way to test how well an organisation would do in the face of a real attack.


If effectively implemented, red teams can expose vulnerabilities and risks not only in the technology infrastructure but also in people and physical assets.

You must have heard the saying, “The best defence is a good offence.” A red team is exactly that offence, and setting one up is the right step towards a good defence. Whether it is a complex artificial intelligence (AI) solution or merely a basic technology solution, red-teaming can give the business a competitive edge.

A comprehensive red team exercise may involve penetration testing (also known as pen testing), social engineering and physical intrusion. The red team may carry out all of these, or whatever combination it sees fit, to expose vulnerabilities.

If there exists a red team, then there must also exist a blue team. This assumption stems from the premise that system development is done in-house. With AI systems, however, this can change, and the actual blue team may be the technology vendor.
Building the red team

Ideally, the red team needs at least two people to be effective, though many teams have up to five. A large company might need fifteen or more members working on several fronts.

Depending on the type of AI deployed, the red team’s composition and skillsets might change, so structure is necessary to maximise the team’s effectiveness.

Typically, you would need physical security experts, such as those who understand and can deal with physical locks, door codes and similar controls. You would also need a social engineer to extract information through phishing emails, phone calls, social media and other channels.

Most importantly, you would need a technology expert, preferably a full-stack one, to exploit the hardware and software aspects of the system. These skillset requirements are the minimum. If the application and systems are highly complex, it makes sense to hire specialists for individual elements and build a mixed team of experts.

Top five red team skills

The most important skill any red team member can have is the ability to think as negatively as possible and remain as creative as possible while executing the job.

Creative negative thinking

The core goal is to continually find new tools and techniques to invade systems and, ultimately, protect the organisation’s security. Exposing flaws in a system also takes a level of negative thinking that counters an individual’s inbuilt optimism.

Profound system-wide knowledge

It is imperative for red team members to have a deep understanding of computer systems, hardware and software alike. They should also be aware of typical design patterns in use.

Additionally, this knowledge should not be limited to computer systems alone; it must span many heterogeneous systems.

Penetration testing (or pen testing)

This is a common and fundamental requirement in the cybersecurity world, and for red teams it is an essential, almost standard, procedure.

Software and hardware development

Having this skill means that, as a red team member, you can envisage and develop the tools required to execute tasks. Knowing how AI systems are designed in the first place also means you are likely to know their failure points. One of the critical risks AI systems pose is logical errors: errors that do not break the system but make it behave in an unintended way, which may cause even more damage.

If a red team member has experience in software and hardware development, it is highly likely that he or she has seen all the typical logical errors and can exploit these to accomplish the job.
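
Here is a minimal sketch of the kind of logical error a red team hunts for. The function names and values are purely hypothetical: nothing crashes, yet the pipeline quietly turns a faulty sensor reading into a confident, wrong score.

```python
# A minimal sketch (hypothetical function and values) of a logical error:
# nothing crashes, yet a faulty sensor reading becomes a confident, wrong score.

def normalise_sensor_reading(raw_value, expected_max=100.0):
    """Scale a raw sensor reading into the 0-1 range the model expects."""
    # Logical error: out-of-range readings are silently clipped instead of
    # rejected, so a sensor feeding values 10x too large goes unnoticed.
    clipped = min(max(raw_value, 0.0), expected_max)
    return clipped / expected_max

def fake_model_score(normalised_value):
    """Stand-in for a trained model; returns a risk score between 0 and 1."""
    return 1.0 - normalised_value

if __name__ == "__main__":
    # A reading of 950 almost certainly means a unit mismatch or sensor fault,
    # but the pipeline happily returns a plausible-looking score anyway.
    for reading in (42.0, 950.0):
        score = fake_model_score(normalise_sensor_reading(reading))
        print(f"raw reading={reading:>6} -> risk score={score:.2f}")
```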

Social engineering

This goes without saying: manipulating people into doing something that leads the red team to its goal is essential. The people aspect is also one of the failure vectors a real attacker would use, and human error is one of the most frequent causes of cyber incidents.

In-house or outsourced?

The next key question is: should you hire team members and build the red team in-house, or outsource the activity?

We all know that security is one of those areas where funding is mostly dry. It is a hard sell because the ROI on security initiatives cannot be proven easily; unless something goes wrong and is prevented, the benefit is nearly impossible to visualise. This makes it difficult to convince anyone that the investment in security is worth making.

A quick answer to the in-house or outsourced question would be: it depends on company size and budget.

If you have deployed AI systems for long-term objectives, then an in-house red team would be the right choice, as it would be engaged continuously. However, that comes with an additional ongoing budget overhead.

If, on the other hand, you are unsure about the overall outlook, outsourcing is a better way to start. This way, you can test your business case for in-house hiring in the long run.

From a privacy and control perspective, an in-house red team is highly justifiable. Red and blue team activities are like a cat-and-mouse game: when done correctly, each round adds to the skillset of both teams, which, in turn, enhances the organisation’s security.

You can use the outsourcing option if you are planning to run a more extensive simulation, or if you need specialised help or a specific skillset for a particular strategy.

Objectives of a red team

Primarily, the red team exists to break the AI system and its attached processes by assuming the role of a malicious actor. It should go beyond the technology aspect and work on the entire chain that involves the AI system. Doing this makes its efforts more effective, as it ensures that upstream as well as downstream processes and systems are tested.

A red team should consider the full ecosystem and figure out how a determined threat actor might break it. Instead of working only towards breaking a Web app or a particular technology application, it should combine several attack vectors. These attack vectors could be outside the technology domain, such as social engineering or physical access, if needed. This is necessary because, although the ultimate goal is to reduce the AI system’s risks, those risks can come from many places and in many forms.

To maximise a red team’s value, you should consider a scenario- and goal-based exercise. If you are developing the model in-house, the red team should get into motion as soon as the primary machine training is complete. If you are outsourcing the trained model(s), then the red team must be activated as soon as sourcing completes.

The primary goal of a red team is to produce a plausible scenario in which the current AI system’s behaviour would be unacceptable, or even catastrophically so. If the red team succeeds, you can feed its scenarios back to the machine training team for retraining the model. If it does not succeed, you can be reasonably confident that the trained model will behave reliably in real-world scenarios, too.
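
As a rough illustration of such a scenario- and goal-based exercise, the sketch below assumes the AI system exposes a predict() method and collects the scenarios it fails, so they can be fed back for retraining. Every name and structure here is illustrative, not a prescribed framework.

```python
# A rough sketch of a scenario-and-goal-based harness. It assumes the AI system
# exposes a predict() method; every name and structure here is illustrative.

from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    inputs: list          # what the red team feeds the system
    acceptable: callable  # predicate: is the system's behaviour acceptable?

@dataclass
class RedTeamReport:
    failures: list = field(default_factory=list)

def run_scenarios(model, scenarios):
    """Run each staged scenario and collect the ones the model fails."""
    report = RedTeamReport()
    for scenario in scenarios:
        outputs = [model.predict(x) for x in scenario.inputs]
        if not all(scenario.acceptable(o) for o in outputs):
            # Failed scenarios go back to the training team for retraining.
            report.failures.append((scenario.name, scenario.inputs, outputs))
    return report

if __name__ == "__main__":
    class DummyModel:
        def predict(self, x):
            return x * 2

    scenarios = [Scenario("negative demand forecast", [-1, -5],
                          acceptable=lambda out: out >= 0)]
    print(run_scenarios(DummyModel(), scenarios).failures)
```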

Carefully staging potentially problematic scenarios and exposing the whole AI system to those situations should be one of the critical objectives. This activity need not be entirely digital, either. The red team can generate these scenarios by any means available and in any format that seems plausible in a real-life situation.

One way the red team can attempt to fail the AI system is by feeding garbage inputs into the primary or feedback loop and seeing how it responds. If the system is smart, it will detect the garbage and act accordingly. However, if the system magnifies or operates on the garbage input, you will know that you have work to do. These garbage inputs can then become training inputs for machine retraining.
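
A minimal sketch of such a garbage-input probe follows. It assumes the system exposes a predict() call returning a label and a confidence value; the inputs, names and threshold are illustrative assumptions only.

```python
# A minimal garbage-input probe, assuming the system exposes a predict() call
# that returns (label, confidence). Inputs, names and the threshold are
# illustrative assumptions only.

import random

def garbage_inputs(n=5):
    """Obviously malformed inputs: NaN, infinities, extreme values, wrong types."""
    extremes = [float("nan"), float("inf"), -1e12, 1e12]
    return extremes + [random.choice([None, "garbage", [], {}]) for _ in range(n)]

def probe(predict):
    findings = []
    for bad in garbage_inputs():
        try:
            label, confidence = predict(bad)
        except (TypeError, ValueError):
            continue  # rejecting garbage outright is acceptable behaviour
        if confidence > 0.5:
            # A confident answer on a garbage input is a finding to report.
            findings.append((bad, label, confidence))
    return findings

if __name__ == "__main__":
    def careless_predict(x):
        # A careless wrapper that coerces anything into a number and stays confident.
        value = x if isinstance(x, (int, float)) else 0.0
        return ("anomaly" if value > 0 else "normal"), 0.9

    for finding in probe(careless_predict):
        print("confident answer on garbage input:", finding)
```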

The red team can also create synthetic inputs and see how the system responds, then use the output to examine the AI system’s internal mechanics. Based on this understanding, the synthetic data can be made progressively more authentic to test the system’s limits, responses and overall behaviour. Once you identify failure situations, they are easier to fix.
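
One simple way to do this is to perturb one feature of a known-good input and watch how the output moves, as in the sketch below. The predict() function, baseline and step sizes are assumptions made for illustration.

```python
# A sketch of probing with synthetic inputs: perturb one feature of a known-good
# input and watch how the output moves. The predict() function, baseline and
# step sizes are assumptions made for illustration.

def sensitivity_sweep(predict, baseline, feature_index, steps=(-0.5, -0.1, 0.1, 0.5)):
    """Return (perturbation, change in output) pairs for one feature."""
    base_output = predict(baseline)
    results = []
    for step in steps:
        synthetic = list(baseline)
        synthetic[feature_index] += step
        results.append((step, predict(synthetic) - base_output))
    return results

if __name__ == "__main__":
    # Toy stand-in model: a weighted sum, so the sweep simply recovers the weights.
    weights = [0.2, 1.5, -0.7]

    def predict(x):
        return sum(w * v for w, v in zip(weights, x))

    print(sensitivity_sweep(predict, [1.0, 1.0, 1.0], feature_index=1))
```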

The red team may not always try to break the system outright. Sometimes it may merely cause a drift in the system’s continuous learning by feeding it wrong inputs or modifying parameters, thereby causing it to fail much later.
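
How might such a slow drift be spotted? The sketch below shows one crude signal: comparing recent model outputs against a frozen baseline window. The numbers and the threshold are illustrative assumptions, not recommendations.

```python
# A crude drift signal that a blue team could watch for: compare recent model
# outputs against a frozen baseline window. The numbers and the threshold of 3
# are illustrative assumptions, not recommendations.

from statistics import mean, pstdev

def drift_score(baseline_outputs, recent_outputs):
    """Shift of the recent mean, measured in baseline standard deviations."""
    spread = pstdev(baseline_outputs) or 1e-9  # avoid division by zero
    return abs(mean(recent_outputs) - mean(baseline_outputs)) / spread

if __name__ == "__main__":
    baseline = [0.48, 0.52, 0.50, 0.47, 0.53, 0.51]
    slowly_poisoned = [0.55, 0.58, 0.61, 0.64, 0.66, 0.70]
    score = drift_score(baseline, slowly_poisoned)
    print(f"drift score: {score:.1f}", "-> investigate" if score > 3 else "-> ok")
```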

A point where the AI system takes input from another piece of software or from a human can be a weak link. So can a point where the AI system’s output feeds another API or an ERP system. By nature, these interfaces are highly vulnerable spots in the whole value chain.

The red team should identify and target all such weak links, whether they exist between two software systems or at the boundary of software-hardware or software-human interaction.
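
As a simple example of hardening one such weak link, the sketch below validates an AI system’s output before it is handed to a downstream API or ERP system. The field names and ranges are hypothetical.

```python
# A sketch of hardening one weak link: the hand-off from the AI system to a
# downstream API or ERP system. The field names and ranges are hypothetical.

def validate_handoff(ai_output):
    """Check an AI result before it is passed to the next system in the chain."""
    problems = []
    if not isinstance(ai_output, dict):
        return ["output is not a structured record"]
    score = ai_output.get("score")
    if not isinstance(score, (int, float)):
        problems.append("missing or non-numeric 'score'")
    elif not 0.0 <= score <= 1.0:
        problems.append("'score' outside the agreed 0-1 range")
    if not ai_output.get("request_id"):
        problems.append("missing 'request_id' for traceability")
    return problems

if __name__ == "__main__":
    # The kind of malformed record a red team would try to push across the boundary.
    print(validate_handoff({"score": 7.3}))
```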

A red team is not for testing defences

The core objective of the red team is not to test defence capabilities. It is to do anything and everything to break the functional AI system, in as many ways as possible, by thinking outside the box. Ultimately, this should strengthen the whole organisation in the process.

Having this broader remit enables the red team to follow intuitive methodologies and build a reliable, ongoing learning system for the organisation. It is a promising approach to many of the severe problems in controlling AI systems.

However, remember that red-teaming is not equivalent to a testing team that generates test cases. Test cases usually follow well-defined failure conditions, whereas the red team’s objective is much broader, and its methods are undefined and often limitless.

In a nutshell, the red team should evaluate the AI system on three key parameters (a simple scoring sketch follows the list):

  • Severity of consequences of a failure vector
  • Probability of occurrence as found
  • Likelihood of early detection of failure
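
One way to combine these three parameters, loosely borrowed from FMEA-style scoring, is to rate each on a 1-10 scale and multiply them into a single risk score. The scales, example findings and their ratings below are illustrative assumptions.

```python
# One way to combine the three parameters, loosely borrowed from FMEA-style
# scoring: rate each on a 1-10 scale and multiply. The scales, example findings
# and their ratings below are illustrative assumptions.

def risk_priority(severity, occurrence, detectability):
    """Higher is worse; detectability is scored 10 = hardest to detect early."""
    for rating in (severity, occurrence, detectability):
        if not 1 <= rating <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return severity * occurrence * detectability

if __name__ == "__main__":
    findings = {
        "garbage input accepted into feedback loop": (8, 6, 7),
        "hand-off to ERP lacks range checks": (6, 4, 3),
    }
    for name, ratings in sorted(findings.items(),
                                key=lambda item: risk_priority(*item[1]),
                                reverse=True):
        print(f"{risk_priority(*ratings):>4}  {name}")
```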

The red team is functional: what next?

A functional red team is not just about finding holes in the AI system; it is also about providing comprehensive guidance and a playbook to improve those weak points, plug the holes and strengthen the system continuously along the way.

Moreover, an effective red team operation does not end once a weakness is found; that is just the beginning. The team’s next role is to provide remediation assistance and re-testing, and, more importantly, to keep doing this for as long as necessary.

There may be significant work involved in comprehending the findings and their impact, likelihood, criticality and detectability. The suggested remediations must then be carried out and the machine retrained with new data before the blue team can declare itself ready for the next round of testing.

The whole process of the red team finding weaknesses and the blue team fixing them has to be ongoing, with regular checks and balances. Avoid the temptation to do it once for the sake of it; make sure you do it regularly and consistently. Doing so will help you keep watch on the risk score of each aspect and monitor your progress against the established risk mitigation plan. Your target for each risk item on the list should be to reduce its risk score to near zero.
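
A tiny sketch of that ongoing watch: record each finding’s risk score round by round so progress against the mitigation plan stays visible. The findings and scores are made up for illustration and reuse the scoring idea sketched earlier.

```python
# Track each finding's risk score round by round so progress against the
# mitigation plan stays visible. The findings and scores are illustrative and
# reuse the earlier severity x occurrence x detectability scoring idea.

risk_register = {
    "garbage input accepted into feedback loop": [336, 180, 96, 24],
    "hand-off to ERP lacks range checks": [72, 30, 8],
}

for finding, scores in risk_register.items():
    trend = " -> ".join(str(score) for score in scores)
    status = "near zero" if scores[-1] <= 25 else "still open"
    print(f"{finding}: {trend}  ({status})")
```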


Anand Tamboli is a serial entrepreneur, speaker, award-winning published author and emerging technology thought leader
