Saturday, November 2, 2024

Improving Cyber-Security: Stringent Analysis for Software

Professor Michael A. Hennell, founder of LDRA Ltd.

Q. What would you say is the most interesting trend that you have observed about producing secure and safe software?

A. One of the many things that has become obvious to me over the years is that you can achieve the same aims in many different ways. Some people can produce software very quickly, whereas for others it can take a very long time. It all depends on the skills of the people involved and their experience. The first time takes a very long time, the second time you can do it much faster, and over the years it speeds up a lot more. When we first started, we were very cautious because our guys were dealing with nuclear power, so they had to be very careful. As we moved on to more commercial sectors such as aerospace, our teams found themselves under the same time pressures as everybody else. Now the drive for efficiency is paramount, but even so, we still have to be careful.

Q. What would you say are the elements that have affected this learning curve?

A. As time goes by, the care and attention to detail required has increased. That has been impacted not only by the ability of people to learn, but also by the input of regulatory authorities whose positions on many things have changed significantly over the years. More and more industries are becoming regulated and each industry has its own distinctive way of doing things.

Q. How would you say the way industries and regulatory authorities interact has changed?

A. When we began, no one knew the regulatory authorities well, and there was almost no communication between them. Each industry was answerable to itself. The nuclear industry’s software specific regulations have always been under scrutiny, so they just had to be right. Aviation was the first commercial sector to put in place regulations, and both I and the company were very involved in the setting up of those early standards. Fast forward to today, and we have a much more mature regulatory system in place.


Q. I saw a discussion from 2009, where a group of people were comparing the benefits of static analysis and dynamic analysis but didn’t really conclude either way. Could you share some insights on how test engineers look at static and dynamic analysis today?

A. Dynamic analysis implies the execution of code. If you open your testing with dynamic analysis, the tendency is that you will find a number of faults. As each one appears, you have a problem because you have to change the program before you can continue. However, once you do that, you have to start the testing all over again because it is a new program and you cannot be sure what other parts of the system have been impacted by your change.

By applying static analysis first, you will recognise and correct many of the problems before any code is even executed. The sensible way to go is therefore to perform static analysis, to correct the problems that the static analysis shows up, and then do the dynamic analysis. That way, with a bit of luck you might get right through the dynamic testing without having to change the program. So, the two are very complementary, and to my mind it’s utterly stupid not to do them both.

Q. How would you recommend that a design or development team should look at this when there are significant cost constraints before their first prototype?

A. Excluding either static or dynamic analysis would be a gamble. Static analysis enables you to find a certain class of faults – basically the technical faults – where the language has been misused or people have made silly mistakes in their programming. But static analysis reveals nothing about the application faults. Only when you use dynamic analysis to execute code can you expose problems with how the application itself implements the system’s requirements. So again, there is a difference between the type of faults you can find with each technique, making it clear that it is always best to do both.
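As an illustrative sketch (not from the interview), the two fault classes might look like this in Python: a mutable default argument is a language misuse a static checker can flag without ever running the code, while a wrong boundary condition against an assumed requirement only surfaces when the code is actually executed. All names and the requirement itself are hypothetical.

```python
import ast

# A tiny module containing both kinds of fault (hypothetical example):
SOURCE = '''
def log_event(event, history=[]):      # technical fault: mutable default argument
    history.append(event)
    return history

def discount(total):
    # Assumed requirement: orders of 100 or more get 10% off.
    if total > 100:                    # application fault: boundary should be >=
        return total * 0.9
    return total
'''

def static_check(source):
    """Flag mutable default arguments without executing the code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(f"{node.name}: mutable default argument")
    return findings

# Static analysis finds the technical fault without running anything...
print(static_check(SOURCE))   # -> ['log_event: mutable default argument']

# ...but only dynamic analysis (executing the code against the requirement)
# exposes the application fault at the boundary:
namespace = {}
exec(SOURCE, namespace)
print(namespace['discount'](150))   # -> 135.0, as required
print(namespace['discount'](100))   # -> 100, violating the assumed requirement
```

The static pass would let you fix the default-argument misuse before any test run, which is exactly the ordering argued for above.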

Q. Could you give an example of how algorithmic errors might be found using dynamic analysis?

A. Suppose a car is under software control, and a circumstance arises where the system fails to apply the brakes when it should have done, potentially resulting in a collision. Static analysis could not foresee that eventuality. Dynamic analysis could.
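A minimal sketch of how dynamic, scenario-driven testing could expose the kind of braking fault described here. The controller model, deceleration figure, and scenarios are all assumed for illustration; nothing here is real automotive code.

```python
def should_brake(gap_m, speed_mps):
    """Hypothetical controller: brake when the stopping distance threatens the gap.
    Simplified physics: stopping distance ~ speed^2 / (2 * deceleration)."""
    DECEL = 7.0                               # assumed deceleration, m/s^2
    stopping_distance = speed_mps ** 2 / (2 * DECEL)
    return stopping_distance > gap_m          # fault: no safety margin at the boundary

# Static analysis sees no language misuse here; only executing scenarios
# (dynamic analysis) reveals the requirements fault:
scenarios = [
    (50.0, 30.0),   # stopping distance ~64.3 m exceeds a 50 m gap: must brake
    (64.3, 30.0),   # boundary case: controller fails to brake with no margin left
]
for gap, speed in scenarios:
    print(gap, speed, should_brake(gap, speed))
```

The second scenario is exactly the "circumstance" in the answer above: the code is syntactically clean, yet running it shows the brakes are not applied when they should be.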

Q. You currently work with critical systems in the aerospace, defence, automotive, rail, and energy sectors. Given the explosion of connectivity brought to the world by the IoT, can you visualize other sectors where software may need to be subject to stricter testing and regulations?

A. Considering current trends both in terms of applications and new regulations, it seems certain that almost all software will have to be subject to quite stringent analysis. Connectivity means it is no longer obvious whether a simple piece of software doing something apparently innocuous can be compromised by an aggressor, perhaps by using it to access a more critical part of the system.

Q. You have mentioned that you expect there to be a lot of new users wanting to use your tools to secure and certify their systems. Do you foresee a situation mirroring that in the Test & Measurement sector, where the number of skilled people in software automation and test teams could not meet demand? In that case, the market response was to provide test-as-a-service through test centres.

A. This seems very likely to become a trend. We already do this ourselves for some of our customers who do not have the skills to negotiate the regulations by themselves – for large projects at the moment, but I can imagine it being scaled down for smaller firms in the consumer markets. I also suspect that other companies will enter this market as it develops.

Q. What are your views on whether consumer devices should be subjected to a similar level of certification to your traditional markets? After all, the advent of IoT has seen consumer products interacting with sensitive and critical equipment.

A. Although these devices may well function perfectly in isolation, a common problem we have witnessed is that insufficient thought has gone into the design of their interfaces. For example, suppose an infotainment head unit and a more critical control system are both placed on a common CAN bus in a car. If communications between the devices are tightly controlled, then it is difficult for an aggressor to compromise the critical system from the head unit. Too often in such situations, we find that very general interfaces are used and so that defence is gone. Clearly, such an approach does not represent best practice, meaning that it would be very difficult to defend in court.
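A sketch of what "tightly controlled" communications might mean in practice: a gateway that only forwards an explicit allow-list of frame IDs from the head unit onto the critical bus. The IDs, lengths, and structure below are assumptions for illustration, not any real vehicle's interface.

```python
# Hypothetical CAN gateway between an infotainment head unit and a critical bus.
# Allow-list maps permitted CAN IDs to their expected data length (all assumed):
ALLOWED_FROM_HEAD_UNIT = {
    0x3E8: 8,   # e.g. a volume/status frame, exactly 8 data bytes
}

def forward_to_critical_bus(can_id, data):
    """Forward a frame only if its ID is allow-listed and its length matches.
    A very general interface (forward everything) removes this defence."""
    expected_len = ALLOWED_FROM_HEAD_UNIT.get(can_id)
    if expected_len is None or len(data) != expected_len:
        return False          # drop: not part of the controlled interface
    return True               # frame would be placed on the critical bus here

print(forward_to_critical_bus(0x3E8, bytes(8)))   # True: allowed frame passes
print(forward_to_critical_bus(0x123, bytes(8)))   # False: unknown ID is dropped
print(forward_to_critical_bus(0x3E8, bytes(4)))   # False: malformed length is dropped
```

With such a narrow interface, an aggressor who compromises the head unit still cannot inject arbitrary traffic onto the critical bus, which is the defence described above.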

Q. With the impending rise of self-driving cars and the level of development work required to implement them, do you anticipate that their development cycle will be as lengthy as those in the aerospace sector ten years ago?

A. It will certainly have to be rigorous, and it’s even debatable whether existing best practices are going to be good enough. Because at the end of the day, self-driving cars could baulk at an unanticipated set of circumstances, do something silly, and kill people. The automotive industry has already witnessed the consequences of un-commanded acceleration in a conventional car – imagine that happening in the middle of town under computer control! We already have too much evidence of how effective cars can be as killing machines. So, I suspect that there is a great deal of work to be done.

Q. Any thoughts on how it would be possible to test semi-intelligent systems? Will we need something beyond today’s static and dynamic analysis?

A. The problem is combinatorics. It is necessary to consider the myriad of possible combinations and circumstances, and to be sure that they all work together in a satisfactory way. At the present time we don’t consider combinatorics; we just look at circumstances in isolation. I don’t think that is going to be good enough. So, I think providing convincing evidence that this software is OK is going to take a great deal of effort. Combinatorial mathematical techniques alone are unlikely to provide the answer because understanding the requirements of these complex systems is stunningly difficult. Countries like India see a huge variety of vehicles and objects on the road, and I am not sure that a computer system will be able to handle them all.
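The combinatorial growth described here can be made concrete with a toy scenario model. The parameters and their values are invented for illustration: testing each circumstance in isolation scales additively, while testing their combinations scales multiplicatively.

```python
from itertools import product
from math import prod

# Assumed scenario parameters for a driving situation (illustrative only):
parameters = {
    "weather":  ["clear", "rain", "fog", "snow"],
    "lighting": ["day", "dusk", "night"],
    "obstacle": ["pedestrian", "cyclist", "rickshaw", "cow", "none"],
    "road":     ["highway", "urban", "unpaved"],
}

# Testing each circumstance in isolation: one case per parameter value.
isolated = sum(len(values) for values in parameters.values())

# Testing every combination of circumstances together.
combined = prod(len(values) for values in parameters.values())

print(isolated)   # -> 15 test cases
print(combined)   # -> 180 test cases, growing multiplicatively with each parameter

# Sanity check: enumerating the combinations gives the same count.
assert combined == len(list(product(*parameters.values())))
```

Even this four-parameter toy jumps from 15 cases to 180; adding parameters multiplies the total again, which is why isolated-circumstance testing falls short of the evidence being asked for.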

Q. Given that no connected software can be perfectly secure, what would your golden rules or guidelines be for engineers working on connected critical systems?

A. I think the problem is that outside the safety critical world, there remains a casual approach to software development in general. New regulations are likely to mean a requirement to demonstrate that you have applied best practice to minimize the potential for other people to be harmed as a result of your software being compromised, however indirectly. That is going to be very painful for the software industry.

Q. Can lessons learned by individual manufacturers be learned by the industry as a whole?

A. Yes. For example, the Lexus problem has been discussed in great depth amongst the Japanese automotive association. I know that because of the feedback from them to the MISRA organisation in Britain. Because MISRA was originally focused on the automotive sector, we have retained close relationships with the industry and we frequently receive requests to implement appropriate rules and guidelines.

Q. I have spoken to engineers who suggest that the systems for each manufacturer are so proprietary that such industry-wide lessons cannot be learned. What are your thoughts on that?

A. There is also commonality of purpose, such as not wanting to hurt people and not wanting to be sued! There is general acceptance in the industry that manufacturers all need to learn from each other, and already the potential for trading of intellectual property is becoming a more commonplace occurrence. As long as this is of mutual benefit, it can only be a good thing. No-one wants casualties, and no-one wants legal headaches.

