Over the last decade, we have observed a rise in software systems using machine learning (ML). In particular when it comes to handling uncertainty, ML techniques promise a higher level of flexibility and automation. They typically observe large amounts of data relevant to tackling this uncertainty and encode their observations into an ML model. Software incorporating ML components bases part of its automated decision making, in particular for unseen data, on such ML models.

In this seminar, we focus on the challenges that arise when adapting quality assurance techniques to software incorporating ML components. Tackling these challenges becomes increasingly important given the growing integration of ML components, particularly in safety-critical systems. We first study selected approaches specifically developed to test and verify ML components, exploring in particular their inherent limitations. From there, we examine the additional challenges that arise when assuring the quality of the overall software system, and we evaluate first approaches aimed at tackling these challenges.