Lunchtime Seminar

Archive Winter Semester 2018/2019

Dynamic scheduling using cloud volatile resources

Sashko Ristov
Researcher at DPS group, University of Innsbruck

Date: Thursday, 31st of January 2019, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Public cloud providers offer their resources as a service on demand, promising very high availability. These resources are reliable and their prices are fixed for the duration of the lease. During periods of under-provisioning, public cloud providers can lease their unused resources at a much cheaper price. The trade-off for the lower price is that the leased resources are volatile (unreliable), because they can be reclaimed by the provider at any time. The price also fluctuates in real time according to current user demand.
This talk will present the (un)reliability and performance behavior of volatile resources. Building on these characteristics, ongoing work towards a new dynamic scheduling algorithm for scientific computing on cloud volatile resources will be presented.
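The price/reliability trade-off described above can be made concrete with a back-of-the-envelope cost model. The sketch below is not the speaker's algorithm; it is a minimal illustration with hypothetical prices, under the simplifying assumption that a revoked spot run is restarted from scratch and each attempt fails independently:

```python
def expected_cost(price_per_hour, runtime_h, revocation_prob=0.0):
    # If each attempt is revoked with probability p (independently), the
    # expected number of attempts is geometric: 1 / (1 - p).
    return price_per_hour * runtime_h / (1.0 - revocation_prob)

def choose_resource(runtime_h, on_demand_price, spot_price, revocation_prob):
    # Pick whichever option is cheaper once expected re-executions are priced in.
    spot = expected_cost(spot_price, runtime_h, revocation_prob)
    on_demand = expected_cost(on_demand_price, runtime_h)
    return ("spot", spot) if spot < on_demand else ("on-demand", on_demand)

# Hypothetical numbers: a 2-hour task, on-demand at $0.40/h, spot at $0.12/h
# with a 30% chance of revocation per attempt.
print(choose_resource(2.0, 0.40, 0.12, 0.30))
```

With these numbers the volatile resource still wins; as the revocation probability grows, the expected re-execution cost eventually makes the reliable resource cheaper.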

Bobtail: Improved Blockchain Security with Low-Variance Mining

George Bissias
Research scientist at the College of Information and Computer Science at the University of Massachusetts at Amherst

Date: Thursday, 24th of January 2019, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Blockchain systems are designed to produce blocks at a constant average rate. The most popular systems currently employ a Proof of Work (PoW) algorithm as a means of creating these blocks. An unfortunate limitation of all deployed PoW blockchain systems is that the time between blocks has high variance. For example, Bitcoin produces, on average, one block every 10 minutes. However, 5% of the time, Bitcoin’s inter-block time is at least 40 minutes.
In this paper, we show that high variance is at the root of several fundamental attacks on PoW blockchains. We propose an alternative process for PoW-based block discovery that results in an inter-block time with significantly lower variance. Our algorithm, called Bobtail, generalizes the current algorithm by comparing the mean of the k-lowest order statistics to a target. We show that the variance of inter-block times decreases as k increases. Bobtail significantly thwarts doublespend and selfish mining attacks, and makes detection of eclipse attacks trivial and quick. For example, for Bitcoin and Ethereum, a doublespending attacker with 40% of the mining power will succeed with 53% probability when the merchant sets up an embargo of 1 block; however, when k ≥ 40, the probability of success for the same attacker falls to less than 1%. Similarly, for Bitcoin and Ethereum currently, a selfish miner with 49% of the mining power will claim about 95% of blocks; however, when k ≥ 20, the same miner will find that selfish mining is less successful than honest mining. We also investigate attacks newly made possible by Bobtail and show how they can be defeated. The primary costs of our approach are larger blocks and increased network traffic. 
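The variance reduction from averaging the k lowest proof values can be illustrated statistically. The sketch below is not the Bobtail protocol itself; it approximates inter-block times as a sum of k exponential arrivals rescaled to a common mean (a Gamma model), which captures why the relative spread shrinks roughly like 1/sqrt(k):

```python
import random

def simulate_interblock_times(k, mean_time=600.0, trials=10000):
    """Approximate inter-block times under a k-lowest-order-statistics rule
    as the sum of k exponential arrivals rescaled to the same mean; k = 1
    recovers ordinary PoW with exponentially distributed inter-block times."""
    rng = random.Random(42)
    return [sum(rng.expovariate(1.0) for _ in range(k)) * (mean_time / k)
            for _ in range(trials)]

def coeff_of_variation(xs):
    # std / mean: a scale-free measure of variance
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / mean

cv1 = coeff_of_variation(simulate_interblock_times(k=1))
cv40 = coeff_of_variation(simulate_interblock_times(k=40))
print(cv1, cv40)  # the CV shrinks roughly like 1/sqrt(k)
```

Both settings have the same 600-second mean, but the k = 40 times cluster much more tightly around it, which is the property the paper exploits against doublespend and selfish-mining attacks.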

Time-varying surface animation from multi-view reconstruction of actors

Ludovic Blache
Researcher at IGS group, University of Innsbruck

Date: Thursday, 17th of January 2019, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

4D multi-view reconstruction technologies are used in media production for their ability to produce a virtual clone of an actor from a video acquisition performed by a set of multi-viewpoint cameras. This approach is a major advance for the composition of 3D scenes mixing real actors and virtual environments. Its drawback is that it is not well suited to the reconstruction of dynamic scenes: the output is a time series describing the successive poses of the actor, represented as a sequence of static objects. We propose a new approach to transform these initial results into a dynamic 3D object in which the actor is represented as an animated character. The resulting mesh and its texture can then be edited directly in post-production.

Quantum computation, matchgate circuits, and compressed quantum computation

Martin Hebenstreit
Researcher at the Institute for Theoretical Physics, University of Innsbruck

Date: Thursday, 10th of January 2019, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck


Although it is believed that quantum computation cannot, in general, be efficiently simulated classically, there exist certain restricted classes of quantum circuits for which classical simulation is indeed possible. The most prominent example is the class of Clifford circuits. Here, we consider another such class, the so-called matchgate circuits (MGCs) [1,2]. MGCs can be classically efficiently simulated and can, moreover, be performed as a compressed quantum computation, i.e., the computation can be carried out on a quantum computer using exponentially fewer qubits with only polynomial overhead in runtime [3]. We elaborate on and extend recent results [4] on the classical simulability of MGCs. To this end, we discuss the notion of magic states in this context.

[1] L. Valiant, SIAM J. Computing 31, 1229 (2002), B. Terhal and D. DiVincenzo, Phys. Rev. A 65, 032325 (2002)

[2] R. Jozsa and A. Miyake, Proc. R. Soc. A 464, 3089 (2008)

[3] R. Jozsa, B. Kraus, A. Miyake, J. Watrous, Proc. R. Soc. A 466, 809 (2010)

[4] D. J. Brod, Phys. Rev. A 93, 062332 (2016)

Searching for architecture knowledge in online developer communities

Matthias Galster
Head of the Software Engineering Research and Applications Lab, University of Canterbury, New Zealand

Date: Thursday, 13th of December 2018, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Software developers need to know and understand suitable architecture solution alternatives to make informed software architecture design decisions. However, with the rapid change and continuous increase of architecture solution alternatives (e.g., technologies, architecture patterns, tactics), it is challenging for developers to acquire architecture knowledge and to keep that knowledge up to date. In this talk we explore how to improve the way architects search for architecturally relevant information in online developer communities. We discuss a new search approach for architecturally relevant information, using Stack Overflow as an example of an online developer community. The proposed approach differs from a conventional keyword-based search in that it considers semantic information about architecturally relevant concepts in Stack Overflow. Experiments with practitioners showed that the new approach outperforms a conventional keyword-based search.
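A toy contrast between the two search styles, with an entirely hypothetical hand-built concept map (the actual approach derives its semantic information from Stack Overflow itself rather than from a fixed table):

```python
# Hypothetical concept relations; a real system would mine these.
CONCEPT_MAP = {
    "message queue": ["rabbitmq", "kafka", "pub/sub", "broker"],
    "caching": ["redis", "memcached", "cache invalidation"],
}

def keyword_search(query, posts):
    # Conventional search: literal substring match on the query only.
    return [p for p in posts if query in p.lower()]

def concept_search(query, posts):
    # Concept-aware search: also match posts mentioning related concepts.
    terms = [query] + CONCEPT_MAP.get(query, [])
    return [p for p in posts if any(t in p.lower() for t in terms)]

posts = [
    "How do I size a RabbitMQ broker cluster?",
    "Choosing between REST and gRPC",
    "Message queue vs direct RPC for microservices",
]
print(keyword_search("message queue", posts))
print(concept_search("message queue", posts))
```

The keyword search misses the RabbitMQ post even though it is architecturally relevant to the query; the concept-aware search recovers it.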

Blockchain superoptimizer

Julian Nagele
Postdoctoral Research Assistant at the School of Electronic Engineering and Computer Science at the Queen Mary University of London

Date: Thursday, 6th of December 2018, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

With increased reliance on smart contracts in many areas, the volume and frequency of transactions have grown dramatically in recent years. Under this increased pressure, the overhead of transaction execution has become a bottleneck.
We built a tool that automatically finds optimizations for Ethereum smart contracts. The tool implements a method called unbounded superoptimization, which relies on a constraint solver to guarantee the correctness of each transformation. We analyzed over 50K smart contracts and, using our tool, identified missed optimization opportunities in many of them, achieving a total reduction of 2,259,871 in "gas", i.e., the cost of executing a contract.
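The idea of superoptimization can be sketched on a toy stack machine. The example below is not the authors' tool: it searches for a strictly shorter instruction sequence and checks equivalence only on a handful of sample stacks, whereas unbounded superoptimization uses a constraint solver (an SMT solver) to prove equivalence on all inputs:

```python
from itertools import product

def run(program, stack):
    """Interpret a tiny EVM-like stack machine."""
    stack = list(stack)
    for op in program:
        if op[0] == "PUSH":
            stack.append(op[1])
        elif op[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == "SWAP":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op[0] == "POP":
            stack.pop()
    return stack

CANDIDATE_OPS = [("PUSH", 0), ("PUSH", 1), ("ADD",), ("SWAP",), ("POP",)]
TEST_STACKS = [(3, 5), (0, 7), (-2, 4), (9, 9)]

def superoptimize(program):
    """Search for a strictly shorter program that agrees on all test stacks.
    Equivalence here is only tested, not proved; the real tool discharges
    this check to a constraint solver."""
    target = [run(program, s) for s in TEST_STACKS]
    for length in range(len(program)):
        for cand in product(CANDIDATE_OPS, repeat=length):
            try:
                if [run(list(cand), s) for s in TEST_STACKS] == target:
                    return list(cand)
            except IndexError:   # candidate underflows the stack
                continue
    return program

# "PUSH 0; ADD" adds zero to the top of the stack -- a no-op the search removes.
print(superoptimize([("PUSH", 0), ("ADD",)]))
```

Removing such a sequence saves the gas charged for both instructions on every execution of the contract.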

Improving the quality of art market data with Linked Open Data and machine learning

Dominik Filipiak
Research Assistant at the Poznan University of Economics and Business

Date: Thursday, 22nd of November 2018, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Among the numerous research studies devoted to art markets, very little attention is given to the quality of the data. The use of Linked Open Data and machine learning can pave the way to improving data quality and, as a consequence, enriching the results of other art market research, such as index construction. Most quantitative research on the art market focuses on building indices to yield an easy-to-interpret outlook and to compare art with other collectibles or forms of investment. Since these methods rely mostly on a variety of linear regression models, they are tightly coupled to the data. A single lot (a painting, for example) can be deconstructed into a set of features describing it, such as its author, size, style or technique. Methods used to construct art market indices usually pair the hammer price with these features in a linear model to examine their influence on the price. What makes art market data useful is the large number of features available to such models; therefore, the richer and better a data source is, the more accurate a model will be. However, art market data are often incomplete or contain human errors, inconsistencies, and ambiguities. This motivation leads to the main research problem: how to improve the quality and usefulness of data for the Polish art market? The approach to this problem is twofold and consists of using Linked Open Data and deep convolutional networks.
There are two ways of using LOD to enrich an art market dataset. Since a single observation contains information about the lot's author and (sometimes) its description, each can be analysed separately. DBpedia, tightly coupled with Wikipedia, provides access to numerous extracted concepts, especially from infoboxes, which are available in an easy-to-parse form. Making a vast amount of crowd-sourced data structured, it comes with a SPARQL endpoint that facilitates querying sophisticated structures. The artists' names and surnames can therefore be linked with DBpedia entities, from which it is possible to extract, e.g., an artist's date of birth or death, nationality or style. The second method, description analysis, requires Natural Language Processing, including stemming, lemmatisation and Named Entity Recognition; this makes it possible to extract individual concepts from the description. The second sub-stage analyses paintings' style and genre: by employing deep convolutional neural networks it is possible to classify images with respect to these features. A classifier can be trained on WikiArt data, a good source of labeled artworks, which makes it possible to connect observations with their styles and features. Having a wide range of relevant data is one of the most important steps in the index calculation process, and this is where the proposed approach can be used to yield more accurate indices.
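As a small illustration of the artist-linking step, the function below builds a DBpedia SPARQL query for an artist's dates and movement. The dbo:/rdfs: terms are standard DBpedia vocabulary, but the query shape and the naive exact-label match are illustrative assumptions; a real pipeline would also handle name variants and disambiguation:

```python
def artist_enrichment_query(artist_name):
    """Build a SPARQL query fetching birth/death dates and artistic movement
    for an artist from DBpedia, matched here by exact English label."""
    return f"""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?artist ?birth ?death ?movement WHERE {{
        ?artist rdfs:label "{artist_name}"@en .
        OPTIONAL {{ ?artist dbo:birthDate ?birth . }}
        OPTIONAL {{ ?artist dbo:deathDate ?death . }}
        OPTIONAL {{ ?artist dbo:movement ?movement . }}
    }}
    """

query = artist_enrichment_query("Jan Matejko")
print(query)
```

Sending such a query to the public DBpedia endpoint would return the extracted infobox facts, which can then be attached as extra features to each auction observation.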

Overview of the PAN initiative and its tasks in the field of natural language processing

Michael Tschuggnall
Researcher at DBIS group, University of Innsbruck

Date: Thursday, 15th of November 2018, 12:00 – 1:00

Venue: SR 2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

PAN is an internationally renowned initiative which organizes workshops and competitions in the field of natural language processing, especially in stylometry. As a co-organizer of the latest PAN events, I will give an insight into the reproducibility concept followed by PAN as well as an overview of the tasks proposed in the 2018 competitions: (i) style change detection, (ii) cross-domain authorship attribution, (iii) author profiling and (iv) author obfuscation. For each task, a short problem overview as well as a summary of the most promising approaches and their results is given.

The rise and fall of cryptocurrencies

Marie Vasek
Assistant Professor at the Computer Science Department, University of New Mexico

Date: Thursday, 8th of November 2018, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Since Bitcoin’s introduction in 2009, interest in cryptocurrencies has soared. One manifestation of this interest has been the explosion of newly created coins. This talk will examine the dynamics of coin creation, competition and destruction in the cryptocurrency industry. In addition, this talk will discuss the growing trend of cryptocurrency pump and dump scams. While pump and dump scams have been around for the past century, the recent explosion of nearly 2,000 cryptocurrencies in a largely unregulated environment has greatly expanded the scope of abuse.

Children's Privacy Protection Engine from Smart Anthropomorphic Toys to Robots

Patrick C. K. Hung
Researcher at the Faculty of Business and IT, University of Ontario Institute of Technology, Canada

Date: Thursday, 31st of October 2018, 12:00 – 1:00

Venue: 3W04, 2nd floor, ICT Building, Technikerstraße 21a, 6020 Innsbruck

A toy is an item or product intended for learning or play, which can have various benefits for childhood development. Children's toys have become increasingly sophisticated over the years, with a growing shift from simple physical products to toys that engage the digital world. Toy makers are seizing this opportunity to develop products that combine the characteristics of traditional toys, such as dolls and stuffed animals, with computing software and hardware. A smart anthropomorphic toy is defined as a device consisting of a physical toy component in humanoid form that connects to a computing system through networking and sensory technologies to enhance the functionality of a traditional toy. Many studies have found that anthropomorphic designs result in greater user engagement: children trusted that such designs serve a good purpose and felt less anxious about privacy.
While there have been many efforts by governments and international organizations such as UNICEF to encourage the protection of children's data online, there is currently no standard privacy-preserving framework for mobile toy computing applications. Children's privacy is becoming a major concern for parents who wish to protect their children from potential harms related to the collection or misuse of their private data, particularly their location. This talk presents the related research issues with a case study of Mattel's Hello Barbie. Further, this talk will also discuss the current research works for companion robots.

Transaction fees, block size and auctions in Bitcoin & Bitcoin mining as a contest

Nicola Dimitri
Professor at the Department of Economics and Statistics, University of Siena

Date: Thursday, 25th of October 2018, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck



A fundamental element of the blockchain technology supporting Bitcoin is the block size. Recently, the issue of the preferred block size has raised a lively debate within the Bitcoin community, which led to the birth of Bitcoin Cash, a new cryptocurrency based on principles similar to Bitcoin's but with a larger block size. From an economic perspective, this paper presents a discussion of what could be an optimal block size for Bitcoin miners. We argue that transaction fees play a crucial role in the analysis, where they are interpreted as bids in an auction selling block space for transaction registration. We analyse the Nash equilibria of this auction game and show that the optimal block size for a miner depends on the distribution of users' willingness to pay.
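The auction view can be illustrated with a toy model of a revenue-maximizing miner who fills block space with the highest fee-per-byte bids first. This greedy sketch is an illustration with hypothetical numbers, not the equilibrium analysis of the talk:

```python
def pack_block(transactions, block_size):
    """transactions: list of (fee, size_bytes) bids for block space.
    A revenue-maximizing miner takes the highest fee-per-byte bids first
    (a greedy knapsack approximation)."""
    ranked = sorted(transactions, key=lambda t: t[0] / t[1], reverse=True)
    chosen, used, revenue = [], 0, 0.0
    for fee, size in ranked:
        if used + size <= block_size:
            chosen.append((fee, size))
            used += size
            revenue += fee
    return chosen, revenue

# Five equal-sized transactions whose fees reflect users' willingness to pay.
mempool = [(50, 250), (30, 250), (10, 250), (5, 250), (1, 250)]
for limit in (250, 500, 1250):
    _, revenue = pack_block(mempool, limit)
    print(limit, revenue)
```

Note how the marginal revenue of extra block space falls off with the tail of the willingness-to-pay distribution: the first slot earns 50, the next 30, and the last two together only 6.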


A very important feature of Bitcoin is the so-called mining activity. It is performed by specialised nodes of the network, currently endowed with massive computational power. These nodes, the miners, are engaged in solving a cryptopuzzle which allows them to register and confirm outstanding transactions in the next block in exchange for a reward. There is no strategy for solving the puzzle; its solution can only be found by brute-force trial and error. For this reason, the energy expenditure associated with mining has recently become huge, and it is natural to ask if and under what conditions mining is profitable. Moreover, are such high costs leading to dominant positions? By modelling this race to the solution as a simple contest we obtain two main findings. First, the decision to be an active miner depends only on the relative cost structure and not on the reward. Second, there is no intrinsic element in the mining activity that would lead to a monopoly, though (oligopolistic) dominant positions cannot be excluded.
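Because brute-force search is memoryless, the race to the solution can be modelled as competing exponential clocks, so a miner's chance of winning a round equals its share of total hashrate. The small simulation below is an illustration of that baseline, not the contest model of the talk:

```python
import random

def simulate_race(hashrates, rounds=20000):
    """Each miner's time-to-solution is exponential with rate equal to its
    hashrate (brute-force search is memoryless); the fastest miner wins the
    round. Returns each miner's empirical win frequency."""
    rng = random.Random(7)
    wins = [0] * len(hashrates)
    for _ in range(rounds):
        times = [rng.expovariate(h) for h in hashrates]
        wins[times.index(min(times))] += 1
    return [w / rounds for w in wins]

freqs = simulate_race([4.0, 3.0, 2.0, 1.0])
print(freqs)  # close to the theoretical shares 0.4, 0.3, 0.2, 0.1
```

On top of this win probability, each miner weighs its expected reward against its own cost of hashing, which is where the contest analysis of entry decisions begins.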

Shared Cyber Threat Intelligence for Information Security Risk Management

Clemens Sauerwein
Researcher at QE group, University of Innsbruck

Date: Thursday, 18th of October 2018, 12:00 – 1:00

Venue: 3W04, 2nd floor, ICT Building, Technikerstraße 21a, 6020 Innsbruck

In order to counteract today’s sophisticated and growing number of cyber threats, the timely acquisition of cyber threat intelligence regarding changing risks, emerging attacks, attackers’ courses of action and indicators of compromise has become indispensable in information security risk management. As a result, employees tasked with information security risk management rely on a large variety of information sources. These range from publicly available cyber threat intelligence sources (e.g., mailing lists, ...) to inter-organizational cyber threat intelligence sharing platforms, the result of a recent trend of forming sharing communities to collectively protect against today’s complex cyber attacks.
Since empirical research in the field of cyber threat intelligence sharing is rare, comprehensive analyses of the characteristics of the different types of cyber threat intelligence sources, and of how they can be uniformly integrated into information security risk management processes, are missing. Moreover, little is known about how companies utilize cyber threat intelligence and what requirements must be fulfilled in order to support information security risk management processes.
The goal of our research is to investigate these gaps empirically by conducting qualitative research, in the form of interviews and workshops with cyber security experts responsible for information security risk management at their companies, combined with empirical investigations of different cyber threat intelligence sources.

Towards a theory of JPEG block convergence

Cecilia Pasquini
Researcher at SEC group, University of Innsbruck

Date: Thursday, 11th of October 2018, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Image forensics develops techniques to reveal the processing history of digital images, with the purpose of inferring information about their authenticity. Among many scenarios, this includes discovering the application of one or more lossy compressions, mostly in JPEG format. When the compression scheme is applied repeatedly with the same parameters and high compression quality, the 8x8 blocks processed separately in the compression pipeline become stable (i.e., unaltered) after a number of compression operations. This is referred to as JPEG block convergence and has been shown to be a useful tool for the forensic analysis of high-quality compressed images, as it is largely independent of the image content.
While current approaches are based solely on empirical observations and the convergence mechanism is still obscure, in our talk we outline a theoretical analysis explaining the case of grayscale images and maximum-quality JPEG compression (i.e., quality factor equal to 100). The approximate distribution of the stable block ratio at different compression stages is derived. We apply our theory to discriminate never-compressed images from images compressed once at maximum quality, via a calibration-free maximum likelihood classification rule. Tests on image patches of different sizes and content validate the theoretical results.
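The recompression fixed point can be reproduced in miniature: at quality factor 100 the quantization step is 1, so one compression amounts to rounding the DCT coefficients and then rounding the reconstructed pixels. The sketch below (a simplified grayscale model, not the analysis presented in the talk) iterates this map and counts how many recompressions a random block needs before it stops changing:

```python
import math
import random

N = 8  # JPEG processes 8x8 pixel blocks

def _alpha(k):
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct1(x):
    # Orthonormal 1D DCT-II
    return [_alpha(k) * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                            for n in range(N)) for k in range(N)]

def idct1(X):
    # Orthonormal 1D DCT-III, the inverse of dct1
    return [sum(_alpha(k) * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k in range(N)) for n in range(N)]

def dct2(block):
    rows = [dct1(r) for r in block]
    cols = [dct1([rows[i][j] for i in range(N)]) for j in range(N)]
    return [[cols[j][i] for j in range(N)] for i in range(N)]

def idct2(coeffs):
    cols = [idct1([coeffs[i][j] for i in range(N)]) for j in range(N)]
    rows = [[cols[j][i] for j in range(N)] for i in range(N)]
    return [idct1(r) for r in rows]

def recompress(block):
    """One maximum-quality 'compression': with quantization step 1 the DCT
    coefficients are merely rounded; reconstructed pixels are rounded and
    clipped to [0, 255]."""
    coeffs = [[round(c) for c in row] for row in dct2(block)]
    return [[min(255, max(0, round(p))) for p in row] for row in idct2(coeffs)]

def iterations_to_stability(block, max_iter=50):
    for i in range(max_iter):
        nxt = recompress(block)
        if nxt == block:
            return i        # block is now a fixed point of recompression
        block = nxt
    return None             # did not stabilize within max_iter

rng = random.Random(1)
blocks = [[[rng.randrange(256) for _ in range(N)] for _ in range(N)] for _ in range(10)]
print([iterations_to_stability(b) for b in blocks])
```

In this model, stability arises because the rounding in the coefficient and pixel domains eventually reaches a block that both roundings leave unchanged; the talk's analysis characterizes how the ratio of such stable blocks evolves across compression stages.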

Twitter trolls and bots in politics

Melih Kirlidog
IFI visiting researcher

Date: Thursday, 4th of October 2018, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

There is intensive troll and bot activity on Twitter, and a significant part of it concerns politics. Although the term "troll" implies an individual activity, some political parties employ volunteer or salaried trolls to spread the word. Trolling also implies online harassment and intimidation.
Trolls are supported by bots to create large "echo chambers." This is usually a quantifiable activity, such as positioning in the trending topic (TT) ranks. As this activity is open to manipulation and abuse, Twitter suspended more than 70 million bot accounts in mid-2018. Although there are several ways to detect bots, the most reliable is a large number of posts within a single second.
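The posts-per-second heuristic is easy to state in code. A minimal sketch with hypothetical data and an arbitrarily chosen threshold:

```python
from collections import Counter

def flag_probable_bots(tweets, max_per_second=2):
    """tweets: list of (user, unix_timestamp) pairs. Flags accounts that
    post more than max_per_second tweets within the same second -- a burst
    rate implausible for a human."""
    bursts = Counter((user, int(ts)) for user, ts in tweets)
    return sorted({user for (user, _), n in bursts.items() if n > max_per_second})

tweets = [
    ("bot_a", 1000.1), ("bot_a", 1000.4), ("bot_a", 1000.9),  # 3 in one second
    ("human", 1000.2), ("human", 1060.0), ("human", 1123.5),
]
print(flag_probable_bots(tweets))
```

Run against a large tweet archive, the same grouping can be expressed as a single aggregation query over the stored JSON timestamps.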
More than 120 million tweets have been collected since July 2017. The collection mainly covers Turkish and German politics, although data from 18 other countries, including Austria, have also been collected. Tweets are stored in JSON format in a PostgreSQL database.
The dynamics of troll-bot cooperation will be discussed in the talk.
