Presentation
Workshop on Design Automation for the Certification of Autonomous Systems (DAC-AS)
Time: Sunday, July 10th, 8am - 5pm PDT
Location: 3003, Level 3
Event Type: Workshop
Topics: AI, Autonomous Systems
Description: The integration of increasingly rich software functions into complex, autonomous systems raises
concerns about their trustworthiness with respect to safety, security, and other dependability measures. While its definition can vary by industry, trust is often achieved through a process of certification, in which the residual risks associated with the deployment of a system in a specified environment are evaluated and deemed acceptable. However, current certification processes heavily depend on human judgment. A certification or regulatory authority is expected to determine whether a system is trustworthy by analyzing large amounts of evidence about a product and its development.
Such a lengthy process can result in superficial, incomplete, biased, and costly evaluations. The situation is exacerbated by the emergence of artificial intelligence (AI) and machine learning (ML) solutions in consumer applications, which have revolutionized the industry by enabling features that were not possible with traditional methods. Safety-critical and mission-critical domains such as aerospace, automotive, medical, and nuclear are eager to leverage AI-enabled software in their products as well, but there is a lack of consensus on how to ensure that such software is trustworthy. Certification standards such as DO-178C and IEC 62304 do not provide explicit guidance for certifying software containing AI components. Without a clear pathway to certification, the risks of developing AI-enabled high-integrity systems remain a barrier to adoption.
This workshop investigates the potential of design automation to mitigate these risks. Design automation concepts can help streamline the certification process by aiding the construction of comprehensive and defensible arguments for system correctness, for example, in the form of assurance cases. In turn, new design methods and tools can facilitate the analysis of AI-enhanced components and the generation of evidence to support correctness claims. The workshop aims to bring together the certification, design automation, and artificial intelligence communities in both academia and industry to discuss promising methods for increasing trust in autonomous systems. The one-day workshop will consist of the following three sessions:
1. A set of invited talks covering: (i) argument-based certification, an emerging approach in the medical and aerospace domains that allows developers to provide their own means of regulatory compliance, as defined by structured arguments; (ii) design and verification methods that can increase the assurance of AI-enabled systems. Invited talks will be given by established researchers from both academia and industry.
2. An academic and industrial panel on “Certification of AI-Enabled Systems” to foster closer interactions with the audience.
3. A set of working sessions, including parallel breakout sessions, in which participants will discuss, in groups, structured arguments for the assurance of AI-enabled components. A plenary session will follow to summarize the outcomes of the breakout sessions and share them with all participants.
Meant as a highly interactive workshop, DAC-AS will help the certification, design automation, and artificial intelligence communities to share their experience, identify potential focus areas, and foster collaborations that could lead to breakthroughs in the certification of autonomous systems.