Partners in the project are the Department of Computer Science at the University of Innsbruck, with Assoc. Prof. Dr Michael Felderer as overall project lead, the Software Competence Centre Hagenberg, and the company Gepardec IT Services GmbH.
Modern software systems are developed continuously, which means that new functions and updates can be made available to users quickly. Continuous development also means that quality must be assured continuously. It is no longer sufficient to test functions and updates on a test system before release and assume that the user's system will behave exactly like the test system. This assumption no longer holds for many modern applications, which run on a wide variety of end devices and in a wide variety of environments, from edge devices to the cloud. Moreover, with modern systems it is often impossible to predict at testing time how users will actually use the system in the field and what data the AI algorithms will be confronted with. Continuous testing therefore means that the test phase does not end abruptly at release but extends into ongoing operation.
However, the continuous approach poses a difficult question for testing: how can one decide whether the result of a future use of the system, which cannot be foreseen at the time of testing, is correct or incorrect?
The ConTest project is researching a fundamentally new test oracle approach that uses robust distance measures based on statistical moments to make the results of different uses comparable and thus reliably detect quality differences and faulty system behaviour, even when changes occur only gradually. "However, the results of the mathematically formalised statistical measurement must also be understandable and interpretable for the tester," says project lead Associate Professor Michael Felderer. "That is why the statistical measurement results are mapped onto a risk model that can be used to make application-specific quality statements (how good is the system?) and to assess the risk (how critical are deviations or errors?)."
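To give a rough idea of what a moment-based comparison could look like, the following Python sketch compares a reference sample recorded on the test system with a window of production observations via their first four statistical moments, turns the difference into a single distance value, and maps that value onto coarse risk categories. This is purely illustrative: the function names, the scaling, and the thresholds are assumptions for the example and are not the measures or values developed in the ConTest project.

import numpy as np

def moment_vector(samples: np.ndarray) -> np.ndarray:
    # First four (standardised) moments: mean, standard deviation, skewness, excess kurtosis.
    mean = samples.mean()
    std = samples.std()
    z = (samples - mean) / std
    return np.array([mean, std, (z ** 3).mean(), (z ** 4).mean() - 3.0])

def moment_distance(reference: np.ndarray, observed: np.ndarray) -> float:
    # Distance as the norm of the difference of the moment vectors,
    # scaled by the reference moments so the value is roughly unit-free (illustrative choice).
    ref_m, obs_m = moment_vector(reference), moment_vector(observed)
    scale = np.maximum(np.abs(ref_m), 1e-9)  # avoid division by zero
    return float(np.linalg.norm((obs_m - ref_m) / scale))

def risk_level(distance: float, warn: float = 0.5, critical: float = 2.0) -> str:
    # Map the numeric distance onto application-specific risk categories;
    # the thresholds here are placeholders, not project values.
    if distance < warn:
        return "acceptable"
    if distance < critical:
        return "review"
    return "critical"

# Example: response times measured on the test system vs. a production window
# whose behaviour has drifted gradually.
rng = np.random.default_rng(0)
test_runs = rng.normal(loc=120.0, scale=10.0, size=5000)
production = rng.normal(loc=135.0, scale=18.0, size=5000)

d = moment_distance(test_runs, production)
print(f"moment distance: {d:.2f} -> risk: {risk_level(d)}")

In this toy setup the gradual shift in mean and spread produces a distance above the "critical" threshold even though no single production result is obviously wrong, which is the kind of deviation a moment-based oracle is meant to surface.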
In the project "ConTest: Continuous Software Testing: Solving the Oracle Problem in Production", researchers from the Department of Computer Science at the University of Innsbruck are working together with the Software Competence Centre Hagenberg and Gepardec IT Services GmbH on a solution specifically for AI-based systems and cloud services. The project is funded by the Austrian Research Promotion Agency FFG as part of the BRIDGE programme.
