While the dream of fully autonomous driving appears to have taken a back seat, other automotive applications of Artificial Intelligence (AI) have become increasingly sophisticated. For example, AI now plays an ever-greater role in Advanced Driver-Assistance Systems (ADAS). Although it does not replace the driver, ADAS can make the driver’s role easier, more comfortable, and safer. As ADAS software grows in complexity and takes on more of the driver’s responsibilities, it is more important than ever that the underlying software framework in which the AI operates is safe to use.

Despite the sophistication of AI-driven ADAS, we must not forget that the platforms these systems are built on are not so different from yesterday’s. Making ADAS safe requires the whole platform to be safe, including the compilers and libraries involved in generating it. Fortunately, Solid Sands offers the tools, techniques, and methodologies to demonstrate their trustworthiness today and tomorrow.

It is virtually impossible to demonstrate the correctness of Machine Learning-based AI models, because their behavior is derived from pre-built training sets rather than from a specification. There is no specification of their required behavior when they encounter a new situation, nor is there any source code for their learned behavior. Embedding them safely in ADAS therefore requires additional, well-defined failsafe mechanisms that can take over when the AI starts to cut corners. That is all the more reason to ensure the quality of the platform.
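As a minimal illustration of such a failsafe mechanism, consider a rule-based envelope around an AI-produced steering request. The function name, parameters, and limits below are hypothetical, not a real ADAS API: the point is that the override logic is small, fully specified, and testable, unlike the model it guards.

```python
# Hypothetical failsafe envelope (illustrative names and limits, not a real
# ADAS interface): a simple, fully specified rule that bounds how far the
# AI's steering request may move the actuator in a single control cycle.

def clamp_steering(ai_request_deg: float,
                   last_cmd_deg: float = 0.0,
                   max_rate_deg: float = 5.0) -> float:
    """Limit the per-cycle change in steering angle to a verified bound."""
    delta = ai_request_deg - last_cmd_deg
    if abs(delta) > max_rate_deg:
        # The AI asked for more than the envelope allows: cap the change.
        delta = max_rate_deg if delta > 0 else -max_rate_deg
    return last_cmd_deg + delta

print(clamp_steering(12.0))   # request exceeds the envelope, so it is capped
print(clamp_steering(2.0))    # request within the envelope passes through
```

Because the envelope is a few lines of deterministic code, its correctness can be demonstrated by conventional means, which is exactly what the surrounding platform must support.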

To make matters more interesting, AI can force changes to the platform – in particular, by driving the adoption of small floating-point formats of less than 32 bits. Outside their training environment, which typically runs on high-performance GPUs, deployed AI models do not need the fidelity of a full 32-bit floating-point format. We have seen floating-point formats with 16, 8, and even fewer bits. Such changes affect more than the hardware: they also require support in the compiler and, when mathematical functions are involved, in the library.

AI is a new technology that is not yet battle-hardened. It is therefore extremely important to acknowledge that its impact on the hardware, compiler, and library means you cannot rely on a ‘confidence-in-use’ argument. Using AI as safely as possible requires you to go the extra mile and step up your quality and safety efforts.

Continuous enhancements to test suites are critical, not only for embracing new standards such as small floating-point formats but also for adhering to rigorous safety regulations. SuperTest already includes many data-model-specific test suites to verify the compiler’s arithmetic, and our mathematical function test generator can even handle a 4-bit floating-point sine function if required. As ADAS technologies advance, the precision and reliability of compilers and libraries remain as crucial as ever.
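SuperTest’s generator itself is proprietary, but the idea behind testing a reduced-precision math function can be sketched under simple assumptions: simulate a 16-bit sine (argument and result both rounded to binary16 via the standard `struct` module), compare it against the double-precision reference, and check the error stays within a bound derived from the format’s 10-bit fraction. The tolerance below is an assumption for this sketch, not a conformance requirement.

```python
import math
import struct

def to_half(x: float) -> float:
    """Round-trip through IEEE 754 binary16 to model a 16-bit library result."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def check_half_sin(tolerance: float = 2e-3) -> bool:
    """Compare a simulated half-precision sin against the double reference.

    Sweeps the interval [-pi, pi]; the tolerance reflects argument and
    result rounding in a 10-fraction-bit format (illustrative bound).
    """
    for i in range(-100, 101):
        x = i / 100.0 * math.pi
        ref = math.sin(x)                     # double-precision reference
        got = to_half(math.sin(to_half(x)))   # argument and result in 16 bits
        if abs(got - ref) > tolerance:
            return False
    return True

print(check_half_sin())
```

A real conformance suite would sweep far more points, exercise special values (infinities, NaNs, subnormals), and state its error bound in ULPs of the target format rather than an absolute tolerance, but the structure is the same: a trusted reference, a systematic sweep, and a precise pass/fail criterion.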

Dr. Marcel Beemster, CTO
