Lunchtime Seminar

Archive Summer Semester 2019

The County Fair Cyber Loss Distribution: Drawing Inferences from Insurance Prices

Daniel Woods
University of Oxford

Date: Thursday, 27th of June 2019, 12:00 – 1:00


Quantifying the probability and impact of cyber loss events has proved an elusive quest.  Doing so could help organisations determine the optimal amount of security investment.  This talk looks to the insurance industry to introduce a new approach.  The first part investigates the pricing tables and algorithms used by 26 insurance providers in the USA.  We provide empirical observations on how cyber insurance premiums vary by coverage type, amount, policyholder type, and over time.  The second part introduces a method using Particle Swarm Optimisation to iterate through candidate parameterised distributions with the goal of reducing error in predicting observed prices. We then aggregate the inferred loss models across 6,828 observed prices from all 26 insurers to derive the County Fair Cyber Loss Distribution. We demonstrate its value in decision support by applying it to a theoretical retail firm with annual revenue of $50M. The results suggest that the expected cyber liability loss is $428K, and that the firm faces a 2.3% chance of experiencing a cyber liability loss between $100K and $10M each year.  The method could help organisations better manage cyber risk, regardless of whether they purchase insurance.
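The inference step can be pictured with a minimal sketch. Assume, purely for illustration (the talk's actual pricing model, data, and parameterisation are not reproduced here), that the premium for a coverage limit equals the expected capped loss under a lognormal severity distribution; a basic Particle Swarm Optimisation then recovers the distribution parameters from observed prices:

```python
import math
import random
from statistics import NormalDist

def expected_capped_loss(mu, sigma, cap, n=100):
    """Mean of min(L, cap) for lognormal L, via a deterministic quantile grid."""
    nd = NormalDist(mu, sigma)
    qs = [(i + 0.5) / n for i in range(n)]
    return sum(min(math.exp(nd.inv_cdf(q)), cap) for q in qs) / n

# Hypothetical coverage limits and "unknown" severity parameters used
# only to generate synthetic observed premiums for this sketch.
LIMITS = [1e5, 5e5, 1e6, 5e6]
TRUE = (12.0, 1.5)
PRICES = [expected_capped_loss(*TRUE, c) for c in LIMITS]

def sse(params):
    """Squared error between predicted and observed premiums."""
    mu, sigma = params
    return sum((expected_capped_loss(mu, sigma, c) - p) ** 2
               for c, p in zip(LIMITS, PRICES))

def pso(objective, bounds, particles=20, iters=60, seed=1):
    """Plain global-best PSO over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]
    pval = [objective(x) for x in xs]
    g = min(range(particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])
            v = objective(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval

best, err = pso(sse, [(8.0, 16.0), (0.2, 3.0)])
```

Because the synthetic prices were generated from the same model, the swarm's best candidate should land near the generating parameters; with real pricing tables the objective landscape would be far noisier.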

The research exchange of Daniel Woods is kindly supported by a BritInn Incoming fellowship.

Adversarial machine learning

Pavel Laskov 
Universität Liechtenstein

Date: Thursday, 13th of June 2019, 12:00 – 1:00


"Data is the new oil". This succinct metaphor fuels an intense scholarly debate about the genuine value of data in the modern economy and society. Tremendous recent progress in the methods and applications of machine learning has brought about new products, services and capabilities that would have appeared to be science fiction even a decade ago. As machine learning is increasingly deployed in security- and safety-critical applications, the robustness of learning algorithms to unexpected data perturbations, commonly known as "adversarial examples", becomes a crucial property. In this presentation, I will introduce two general scenarios for learning in adversarial environments and present exemplary techniques for attacks against machine learning algorithms. I will further discuss the recent state of the art in the detection of adversarial perturbations and the development of security guarantees for machine learning.
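The core idea of an adversarial example can be shown on the simplest possible model. The sketch below, a toy linear classifier with made-up weights (not any specific method from the talk), perturbs an input in the direction that most decreases the classifier's score, which is the linear-model analogue of the fast gradient sign method:

```python
def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

# Toy linear classifier: predicts "positive" when w.x + b > 0.
w = [0.5, -1.0, 0.25]   # illustrative weights
b = 0.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [1.0, -1.0, 2.0]    # clean input, classified positive
eps = 1.5               # L-infinity perturbation budget

# For a linear model the worst-case perturbation within the budget is
# exactly -eps * sign(w): each coordinate moves to lower the score most.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

clean, adv = score(x), score(x_adv)
```

A perturbation of at most 1.5 per coordinate flips the decision, even though the input barely changes; deep networks exhibit the same phenomenon at far smaller budgets.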

Compiler generated progress estimation for parallel programs

Peter Zangerl
Researcher at DPS group, University of Innsbruck

Date: Thursday, 6th of June 2019, 12:00 – 1:00


Task-parallel runtime systems have to tune several parameters and make scheduling decisions during program execution to achieve the best performance. To decide whether a change was beneficial to program performance, the runtime needs some kind of feedback mechanism on the program's progress after such a parameter change has been performed. Traditionally, this feedback is derived from metrics that are only indirectly related to the progress of the program.

To mitigate this drawback, we propose a fully automatic compiler analysis and transformation which generates progress estimates for sequential and OpenMP programs. Combined with a runtime system interface for progress reporting, this enables the runtime system to receive direct feedback on the progress of the executed program.

Our evaluation results show a significant improvement in estimation accuracy over traditional estimation methods, with an increasing advantage for larger degrees of parallelism.
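As a rough illustration of the idea (the actual compiler analysis and runtime interface are not reproduced here, and all names below are invented), the transformation can be pictured as the compiler inserting calls to a progress-reporting hook at points whose cost it has estimated statically:

```python
# Hypothetical runtime-side progress sink; the interface is illustrative only.
class ProgressReporter:
    def __init__(self, total_units):
        self.total = total_units
        self.done = 0

    def report(self, units):
        self.done += units

    def fraction(self):
        return self.done / self.total

def process(items, reporter):
    # The compiler would estimate, e.g., one work unit per iteration and
    # emit the report() call automatically; here it is written by hand.
    out = []
    for it in items:
        out.append(it * it)
        reporter.report(1)
    return out

rep = ProgressReporter(total_units=5)
result = process([1, 2, 3, 4, 5], rep)
```

The runtime can then compare progress rates before and after a tuning decision, rather than inferring progress from proxies such as instruction counts.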

Automated Dependency Detection Between Test Cases Using Machine Learning

Michael Felderer
Researcher at QE group, University of Innsbruck

Date: Thursday, 23rd of May 2019, 12:00 – 1:00


Knowing about dependencies and similarities between test cases is beneficial for prioritizing them for cost-effective test execution. This holds especially true for the time-consuming, manual execution of system-level test cases written in natural language. Test case dependencies are typically derived from requirements and design artifacts. However, such artifacts are not always available, and the derivation process can be very time-consuming. In this presentation, we propose, apply and evaluate a novel approach that derives test cases’ similarities and functional dependencies directly from the test specification documents written in natural language, without requiring any other data source. Our approach uses an implementation of the Doc2Vec algorithm to detect text-semantic similarities between test cases and then groups them using two clustering algorithms, HDBSCAN and FCM. The correlation between test case text-semantic similarities and their functional dependencies is evaluated in the context of an industrial on-board train control system. For this system, the dependencies between the test cases were previously derived and are compared to the results of our approach. The results show that, of the two evaluated clustering algorithms, HDBSCAN performs better than FCM and a dummy classifier. The classification methods’ results are of reasonable quality and especially useful from an industrial point of view.
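The pipeline shape can be sketched with stand-ins: bag-of-words cosine similarity in place of Doc2Vec embeddings, and greedy threshold grouping in place of HDBSCAN/FCM. The test-case texts and the threshold are invented for the example:

```python
from collections import Counter
from math import sqrt

def vec(text):
    """Crude stand-in for a Doc2Vec embedding: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    return dot / (sqrt(sum(v * v for v in a.values())) *
                  sqrt(sum(v * v for v in b.values())))

def cluster(texts, threshold=0.5):
    """Greedy grouping: join a text to the first cluster whose
    representative is similar enough, else start a new cluster."""
    labels, reps = [], []
    for t in texts:
        v = vec(t)
        for i, r in enumerate(reps):
            if cosine(v, r) >= threshold:
                labels.append(i)
                break
        else:
            reps.append(v)
            labels.append(len(reps) - 1)
    return labels

tests = [
    "verify brake activation when speed exceeds limit",
    "verify brake release when speed falls below limit",
    "check door open command is logged",
]
labels = cluster(tests)
```

The two brake-related test cases share most of their vocabulary and end up in one group, while the door test stands alone; the approach in the talk captures such relations semantically rather than by word overlap.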

Learned Preconditioning for Sparse Systems of Linear Equations

Johannes Sappl
Researcher at IGS group, University of Innsbruck

Date: Thursday, 16th of May 2019, 12:00 – 1:00


Approximating the solution to the incompressible Navier–Stokes equations with an operator splitting approach, the so-called projection method, leads to a well-known Poisson equation for the pressure inside the fluid. After a discretization step, this results in a huge sparse system of linear equations that needs to be solved. Since the underlying matrix is symmetric positive definite and direct factorization methods are too expensive, one usually relies on iterative techniques such as conjugate gradients. The speed of convergence, which depends on the condition number of the system matrix, can be increased via preconditioning, i.e., transforming the original system of equations into a better-conditioned one with the same set of solutions by means of a non-singular linear mapping. Our novel approach is to train a convolutional neural network to come up with preconditioning matrices depending on the boundary conditions inside the fluid domain. When comparing this method to some well-established techniques, we found that the performance of the convolutional neural network is at least comparable to, and in some cases even better than, the state of the art.
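The role of the preconditioner can be seen in miniature. The sketch below runs preconditioned conjugate gradients with a plain Jacobi (diagonal) preconditioner on a tiny SPD system; the learned preconditioner of the talk would replace the diagonal step, and the 2x2 system is of course only illustrative:

```python
def pcg(A, b, minv, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradients for SPD A.
    minv applies the preconditioner's inverse to a vector."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = [bi - ri for bi, ri in zip(b, matvec(A, x))]
    z = minv(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) < tol:
            break
        z = minv(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]
x = pcg(A, b, jacobi)
```

Any non-singular mapping that clusters the eigenvalues of the preconditioned system speeds up convergence; the interesting question in the talk is how to learn such a mapping from the boundary conditions.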

Action representations in robotics: A taxonomy and systematic classification

Erwan Renaudo
Researcher at IIS group, University of Innsbruck

Date: Thursday, 9th of May 2019, 12:00 – 1:00


Understanding and defining the meaning of “action” is fundamental for robotics research. This becomes especially evident when aiming to equip autonomous robots with robust manipulation skills for action execution. Unfortunately, to this day we still lack both a clear understanding of the concept of an action and a set of established criteria that ultimately characterize an action. In this survey we thus first review existing ideas and theories on the notion and meaning of action. Subsequently we discuss the role of action in robotics and attempt to give a seminal definition of action in accordance with its use in robotics research. Given this definition we then introduce a taxonomy for categorizing action representations in robotics along various dimensions. Finally, we provide a meticulous literature survey on action representations in robotics, categorizing the relevant literature along our taxonomy. After discussing the current state of the art we conclude with an outlook towards promising research directions.

Verification of the LLL basis reduction algorithm

René Thiemann
Researcher at CL group, University of Innsbruck

Date: Thursday, 2nd of May 2019, 12:00 – 1:00


The LLL basis reduction algorithm was the first polynomial-time algorithm to compute a reduced basis of a given lattice, and hence also a short vector in the lattice. It thereby approximates an NP-hard problem, where the approximation quality depends solely on the dimension of the lattice, but not on the lattice itself. The algorithm has several applications in number theory, computer algebra and cryptography.

In this talk we present a formalization of the LLL algorithm in the proof assistant Isabelle/HOL. Both its soundness and its polynomial running-time have been verified.

This is joint work with Ralph Bottesch, Jose Divasón, Max W. Haslbeck, Sebastiaan J. C. Joosten and Akihisa Yamada.
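To give a feel for what basis reduction does, here is the two-dimensional special case (Lagrange–Gauss reduction, to which LLL essentially specialises in dimension 2); the example basis is made up, and nothing here reflects the Isabelle/HOL formalization itself:

```python
def norm2(v):
    """Squared Euclidean norm of a 2-vector."""
    return v[0] * v[0] + v[1] * v[1]

def gauss_reduce(b1, b2):
    """Lagrange–Gauss reduction: returns a shortest basis of the
    2-dimensional lattice spanned by b1 and b2."""
    while True:
        if norm2(b1) > norm2(b2):
            b1, b2 = b2, b1            # keep the shorter vector first
        mu = round((b1[0] * b2[0] + b1[1] * b2[1]) / norm2(b1))
        if mu == 0:
            return b1, b2
        # Subtract the nearest integer multiple of b1 from b2.
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])

# (5, 2) and (7, 3) span the whole integer lattice Z^2 (determinant -1),
# so reduction recovers short vectors of length 1.
r1, r2 = gauss_reduce((5, 2), (7, 3))
```

The size-reduction and swap steps here are exactly the two ingredients LLL applies pairwise in higher dimensions, where the Lovász condition governs when to swap.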

Enhancing visual rendering for interactive AR

Jodok Huber
Researcher at IGS group, University of Innsbruck

Date: Thursday, 11th of April 2019, 12:00 – 1:00


Augmented Reality is defined as an interactive experience where the user’s real environment is enhanced by virtual elements that are directly registered to the physical environment, so that both appear to coexist in the same space. The research topic has gained traction over recent years and has been used in a variety of fields including entertainment, education, construction and maintenance, navigation, and medical applications. Augmented Reality combines a wide spectrum of research areas, dealing with the hardware needed to capture and display real and mixed environments, methods for tracking a user’s position, and techniques for capturing real-world geometry, material and lighting properties and combining this information with real-time rendering to achieve consistent, visually plausible results. In this thesis we build an Augmented Reality framework for a Head-Mounted Display equipped with stereo cameras for video see-through, focusing on combining the real and virtual environments in a plausible way with proper occlusion and advanced physically based rendering techniques. Our method is flexible and does not rely on any external sensors or other prerequisites in the scene, and, except for an optional static light probe used for indirect lighting, does not require any precomputation or setup.

IT governance, risk & compliance – Theory and practice

Christian Russ
ZHAW School of Management and Law, Institut für Wirtschaftsinformatik

Date: Thursday, 4th of April 2019, 12:00 – 1:00


The area of Governance, Risk and Compliance (GRC) is still an emerging field among corporate management, auditors and the academic community. The aim of IT GRC, as an umbrella term, is to align the business strategy with the IT strategy and to utilize the available resources in a way that delivers the maximum performance, conformance and value for the organization. Generally speaking, IT GRC should define, measure and control, together with executive management, the “what” (effectiveness), while IT management should execute the “how” (efficiency).
Nevertheless, the concept behind the acronym still leaves considerable room for misunderstanding and misinterpretation. Today’s business pressure for successful digital transformation and radical change calls all the more for a flexible and enabling IT governance framework. However, IT GRC is often seen as over-engineered, bureaucratic and rigid, hindering agility and speed. Leading governance frameworks such as COBIT 2019 have taken this criticism into account and now offer new ways of designing and implementing leaner and more tailor-made governance systems.
Based on theoretical concepts, frameworks and existing studies, several perspectives on the need for, benefits of and approaches to IT GRC will be discussed. Additionally, the practical GRC application fields used by IT professionals are illustrated. Special attention is given to the gap between theory and practice in understanding, applying and harvesting the benefits of GRC. This includes specific aspects such as goals and objectives, the purpose of the concepts, key stakeholders, methodology and requirements, as well as critical success factors and problems/barriers. Finally, the issues, concerns and diverse views on IT GRC will be discussed, along with an outlook on future trends.

Detecting token systems on Ethereum

Michael Fröwis
Researcher at SEC group, University of Innsbruck

Date: Thursday, 21st of March 2019, 12:00 – 1:00


We propose and compare two approaches to identify smart contracts as token systems by analyzing their public bytecode. The first approach symbolically executes the code in order to detect token systems by their characteristic behavior of updating internal accounts. The second approach serves as a comparison baseline and exploits the common interface of ERC-20, the most popular token standard. We present quantitative results for the Ethereum blockchain and validate the effectiveness of both approaches using a set of curated token systems as ground truth. We observe 100% recall for the second approach. A recall of 89% for the first approach (with well-explainable missed detections) indicates that it may also identify “hidden” or undocumented token systems that intentionally do not implement the standard. One possible application of the proposed methods is to facilitate regulators’ tasks of monitoring and policing the use of token systems and their underlying platforms.
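The interface-based approach can be sketched as follows. Solidity's function dispatcher typically loads each function's 4-byte selector with a PUSH4 (0x63) instruction, so scanning the hex bytecode for 0x63 followed by a known ERC-20 selector gives a rough first-pass detector. The selectors below are the standard ones; the sample bytecode is fabricated, and a real detector must handle false positives and non-standard dispatchers:

```python
# Well-known ERC-20 function selectors (first 4 bytes of the keccak256
# hash of the function signature).
ERC20_SELECTORS = {
    "a9059cbb": "transfer(address,uint256)",
    "70a08231": "balanceOf(address)",
    "18160ddd": "totalSupply()",
}

def detect_erc20_functions(bytecode_hex):
    """Return the ERC-20 signatures whose selector appears right after a
    PUSH4 (0x63) opcode in the bytecode. Naive: a matching byte pattern
    inside a data section would also count as a hit."""
    code = bytecode_hex.lower()
    if code.startswith("0x"):
        code = code[2:]
    return {sig for sel, sig in ERC20_SELECTORS.items()
            if "63" + sel in code}

# Fabricated bytecode fragment containing two PUSH4 selector loads.
sample = "0x6080604052600436106063a9059cbb81146041576370a0823114605d57"
found = detect_erc20_functions(sample)
```

A contract exposing only part of the interface, or none of it, escapes this check entirely, which is why the behavioral, symbolic-execution approach in the talk is the more interesting detector.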

Budget-Constrained Workflow Scheduling in IaaS Cloud

Hamid Faragardi
Researcher at DPS group, University of Innsbruck

Date: Thursday, 14th of March 2019, 12:00 – 1:00


Nowadays, the huge amount of computing capacity provided by the cloud computing paradigm enables users to execute massive workflow applications faster than ever. However, this enormous computing power provided by Infrastructure as a Service (IaaS) clouds is not free: users are charged according to a pay-per-use model. The more powerful the computing resources, the more expensive the cloud services. Accordingly, if users intend to run their workflow applications on cloud resources within a specific budget, they have to adjust their demands for cloud resources with respect to that budget.
Although several studies have aimed to minimize the completion time of workflows on a set of heterogeneous IaaS cloud resources within a given budget, the hourly-based cost model used by some well-known cloud providers (e.g., Amazon EC2) makes them inefficient for scheduling workflows in such cloud systems.
In our research work, we propose multiple efficient static solution frameworks, each of which includes both resource provisioning and workflow scheduling algorithms, for minimizing the completion time of a given workflow subject to a budget constraint under the hourly-based cost model.
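The relevance of the hourly billing model is easy to see in numbers: a VM busy for 61 minutes is billed for two full hours, so packing tasks into already-paid partial hours can beat renting additional instances. A toy cost function (prices and runtimes invented for illustration):

```python
from math import ceil

def billed_cost(busy_minutes_per_vm, price_per_hour):
    """Each VM is billed for every started hour it is in use."""
    return sum(ceil(m / 60) * p
               for m, p in zip(busy_minutes_per_vm, price_per_hour))

# Two plans for the same workload (minutes of busy time per rented VM):
spread = billed_cost([61, 61], [1.0, 1.0])   # 2 VMs, each just over 1h
packed = billed_cost([122], [1.0])           # 1 VM, 2h 2min
```

Both plans perform the same total work, but the spread plan pays for four billed hours and the packed plan for three, which is exactly the effect a scheduler for hourly-billed clouds has to exploit.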

Rewrite-Based Analysis of Expression Simplifications, or: Why Your LLVM Compiler May Not Terminate

Sarah Winkler
Researcher at CL group, University of Innsbruck

Date: Thursday, 7th of March 2019, 12:00 – 1:00


Simplifying expressions of some kind is a task common to many applications. Naturally, in simplification processes nontermination is a critical source of errors. At the same time, uniqueness of results is often a desirable property. Such simplification processes can be modeled by logically constrained term rewrite systems (LCTRSs), a general yet practical rewriting formalism.

In this talk I propose term rewriting techniques using LCTRSs to investigate nontermination and confluence. The usefulness of the presented methods is illustrated by analyzing LLVM peephole optimizations, that is, expression simplifications in the LLVM compilation toolchain.
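A miniature rewriting engine makes the termination issue concrete: a simplification rule like x + 0 → x reaches a normal form, while an orientation-free rule like x + y → y + x can be applied to its own output forever. (This toy engine is only an illustration, not the LCTRS machinery of the talk.)

```python
def apply_rules(expr, rules):
    """Return the result of one rewrite step at the root, or None."""
    for rule in rules:
        out = rule(expr)
        if out is not None:
            return out
    return None

def normalize(expr, rules, fuel=50):
    """Rewrite until no rule applies; return None if fuel runs out,
    a symptom of a non-terminating rule set."""
    for _ in range(fuel):
        nxt = apply_rules(expr, rules)
        if nxt is None:
            return expr
        expr = nxt
    return None

# x + 0 -> x : a terminating simplification rule.
drop_zero = lambda e: (e[1] if isinstance(e, tuple)
                       and e[0] == "add" and e[2] == 0 else None)
# x + y -> y + x : applicable to its own output, hence non-terminating.
commute = lambda e: (("add", e[2], e[1]) if isinstance(e, tuple)
                     and e[0] == "add" else None)

good = normalize(("add", "x", 0), [drop_zero])   # reaches a normal form
bad = normalize(("add", 1, 2), [commute])        # loops until fuel runs out
```

Real peephole rules carry side conditions on bit-widths and constants, which is precisely what the logical constraints of LCTRSs model and what the talk's analysis reasons about.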

