Lunchtime Seminar

Archive Summer Semester 2016

Analysing the Usage of Wikipedia on Twitter

Lecturer:
Eva Zangerle
Postdoctoral researcher at DBIS group, University of Innsbruck

Date: Thursday, 23rd of June 2016, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Wikipedia is a central source of information: 450 million people consult the online encyclopaedia every month to satisfy their information needs. Some of these users also refer to Wikipedia within their tweets. Firstly, we analyse the usage of Wikipedia on Twitter by looking into the languages used on both platforms, content features of posted articles, and recent edits of those articles. Secondly, we analyse links within tweets that refer to a Wikipedia of a language different from the tweet’s language. To this end, we investigate causes for the usage of such inter-language links by comparing the tweeted article and its counterpart in the tweet’s language (if there is any) in terms of article quality. We find that the main cause for inter-language links is the non-existence of the article in the tweet’s language. Furthermore, we observe that the quality of the tweeted articles is consistently higher than that of their counterparts, suggesting that users choose the article of higher quality even when tweeting in another language. Moreover, we find that English is the most dominant target for inter-language links.


Certified Automated Confluence Analysis of Rewrite Systems

Lecturer:
Julian Nagele
Research assistant at CL group, University of Innsbruck

Date: Thursday, 16th of June 2016, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Term rewriting is a simple yet Turing-complete model of computation. Equipped with clear semantics, it underlies much of declarative programming and automated reasoning. Arguably one of the most important properties of rewrite systems is confluence, which guarantees that computations are deterministic in the sense that any two diverging computation paths can eventually be joined, thus ensuring that results are unique. Although confluence is undecidable in general, much effort has been devoted to its analysis, and recently powerful, automatic tools have been developed. However, with great power comes great software complexity, and consequently such tools may contain errors and produce wrong answers and proofs. The predominant solution is to develop independent, highly trusted certifiers that can be used to verify the proofs generated by untrusted tools. To ensure correctness of the certifier itself, its soundness is formally shown in a proof assistant.
This talk discusses automated confluence analysis of rewrite systems, with special focus on certification.
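The joinability property at the heart of confluence can be made concrete on a toy string-rewriting system. The sketch below is purely illustrative Python, unrelated to any actual confluence tool: it exhaustively explores all rewrite sequences of a (terminating) system and checks that every start term reaches a unique normal form. All function names and the example rules are invented for this sketch.

```python
def rewrites(term, rules):
    """Yield every term reachable from `term` in one rewrite step."""
    for lhs, rhs in rules:
        start = 0
        while True:
            i = term.find(lhs, start)
            if i == -1:
                break
            yield term[:i] + rhs + term[i + len(lhs):]
            start = i + 1

def normal_forms(term, rules):
    """All normal forms reachable from `term` (assumes termination)."""
    succs = list(rewrites(term, rules))
    if not succs:
        return {term}          # no step applies: term is a normal form
    nfs = set()
    for s in succs:
        nfs |= normal_forms(s, rules)
    return nfs

# Two overlapping rules that erase an 'a' adjacent to a 'b'.
rules = [("ab", "b"), ("ba", "b")]
# "aba" diverges to "ba" and "ab", but both rejoin at "b":
print(normal_forms("aba", rules))  # {'b'}
```

A unique normal form for every input is exactly what confluence (together with termination) guarantees; a certifier checks a proof of this property rather than recomputing it by brute force.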


The Art of MPI Benchmarking

Lecturer:
Sascha Hunold
Assistant professor at Research Group Parallel Computing, Vienna University of Technology

Date: Thursday, 9th of June 2016, 12:00 – 1:00

Venue: HSB 7, Hörsaaltrakt-BI-Gebäude, Technikerstraße 13b, EG, 6020 Innsbruck

Abstract:
The Message Passing Interface (MPI) is the prevalent programming model used on current supercomputers. MPI library developers therefore strive for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome these issues, we show which experimental factors have an impact on the run-time of blocking collective MPI operations and how to measure their effect. We present a new experimental method that allows us to obtain reproducible and statistically sound measurements of MPI functions. To obtain reproducible measurements, it is common to synchronize all processes before executing an MPI collective operation. We therefore take a closer look at two commonly used process synchronization schemes: (1) relying on MPI_Barrier, or (2) applying a window-based scheme using a common global time. We analyze both schemes experimentally and show the strengths and weaknesses of each approach. Lastly, we propose an automatic way to check whether MPI libraries respect self-consistent performance guidelines. In this talk, we take a closer look at the PGMPI framework, which can benchmark MPI functions and detect violations of performance guidelines.
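The window-based synchronization scheme mentioned in the abstract can be illustrated without any actual MPI code: all processes agree on a schedule of measurement windows on a common global clock, and each process delays locally until the next window opens, instead of passing through a barrier. The Python sketch below simulates only that scheduling logic; the function names, the clock offsets, and all numbers are invented for illustration.

```python
def schedule_windows(global_start, window_len, n_rounds):
    """Start time of each measurement round on the common global clock."""
    return [global_start + i * window_len for i in range(n_rounds)]

def local_wait_time(round_start_global, clock_offset, now_local):
    """How long a process whose clock is `clock_offset` ahead of the
    reference clock must wait locally so all processes start together."""
    round_start_local = round_start_global + clock_offset
    return max(0.0, round_start_local - now_local)

# Three measurement rounds, 0.5 s apart, starting at global time 100 s:
starts = schedule_windows(global_start=100.0, window_len=0.5, n_rounds=3)
print(starts)  # [100.0, 100.5, 101.0]

# A process whose clock runs 0.02 s ahead of the reference clock and whose
# local time is currently 99.9 s waits ~0.12 s before entering round 0:
print(local_wait_time(starts[0], clock_offset=0.02, now_local=99.9))
```

The appeal of this scheme over MPI_Barrier is that no extra communication happens between the synchronization point and the measured operation; its weakness, as the talk discusses, is its dependence on the accuracy of the common global clock.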


Robots learning like a child

Lecturer:
Justus Piater
Head of the IIS research group, University of Innsbruck

Date: Thursday, 2nd of June 2016, 12:00 – 1:00

Venue: HSB 7, Hörsaaltrakt-BI-Gebäude, Technikerstraße 13b, EG, 6020 Innsbruck

Abstract:
General-purpose autonomous robots for deployment in unstructured domains such as service and household settings require a high level of understanding of their environment. For example, they need to understand how to handle objects, how to operate devices, the function of objects and their important parts, etc. How can such understanding be made available to robots? Hard-coding is not feasible, and conventional machine learning approaches will not work in such high-dimensional, continuous perception-action spaces with realistic amounts of training data. One way to get robots to learn higher-level concepts may be to focus on simple learning problems first, and then learn harder problems in ways that make use of the simpler problems already learned. For example, learning problems can be stacked by making the output of lower-level learners available as input to higher-level learning problems, effectively turning hard problems into easier ones by expressing them in terms of highly predictive attributes. This talk discusses how this can be done, including further boosting learning efficiency by active learning, and automatic, unsupervised structuring of sets of learning problems and their interconnections. Following a stacked learning approach, we discuss how symbolic planning operators can be formed in the continuous sensorimotor space of a manipulator robot that explores its world, and how the acquired symbolic knowledge can be further used to develop higher-level reasoning skills.
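The stacking idea can be sketched in a few lines: the output of a lower-level learner becomes an input attribute of a higher-level one. In the toy Python sketch below, both "learners" are hand-written threshold rules rather than trained models, and every name and number is invented purely to make the structure concrete.

```python
def low_level(width, height):
    """Lower-level concept: is the object graspable? (toy rule
    standing in for a learner trained on raw perception)"""
    return 1 if width < 5 and height < 5 else 0

def high_level(graspable, weight):
    """Higher-level concept expressed in terms of the lower-level
    output: can the robot lift and carry the object?"""
    return 1 if graspable == 1 and weight < 2.0 else 0

def predict(width, height, weight):
    g = low_level(width, height)   # low-level prediction ...
    return high_level(g, weight)   # ... fed in as a high-level attribute

print(predict(3, 3, 1.0))  # 1: small and light -> carryable
print(predict(3, 3, 9.0))  # 0: small but too heavy
```

The point is that the high-level problem never sees raw width and height; it is expressed over the highly predictive attribute "graspable", which is what makes it an easier learning problem.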


Risk Assessment for Socio-Technical Systems

Lecturer:
Christian W. Probst
Associate professor at Department of Applied Mathematics and Computer Science, Technical University of Denmark

Date: Thursday, 19th of May 2016, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Attacks on systems and organisations increasingly exploit human actors, for example through social engineering. This non-technical aspect of attacks complicates their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified through brainstorming by experts. In this talk we will present some results of the TREsPASS project for risk assessment of socio-technical systems. Based on an analysis of the system under scrutiny, we identify all possible attacks on the system and measure their potential impact, likelihood of success, and cost. Together, these factors provide us with the means to assess the risk faced by the system and to identify relevant countermeasures.
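The kind of quantitative assessment described, with an impact, a likelihood of success, and a cost per identified attack, can be sketched as follows. This is a generic expected-impact ranking, not the TREsPASS methodology itself, and the attack list and all numbers are invented for illustration.

```python
# Each identified attack carries an impact (e.g. in EUR), a likelihood
# of success, and a cost to the attacker; the defender ranks attacks by
# expected impact to prioritise countermeasures.
attacks = [
    {"name": "phishing email",   "impact": 50_000,  "likelihood": 0.30, "cost": 100},
    {"name": "tailgating entry", "impact": 80_000,  "likelihood": 0.05, "cost": 500},
    {"name": "server exploit",   "impact": 200_000, "likelihood": 0.03, "cost": 5_000},
]

def risk(attack):
    """Expected impact of the attack: impact weighted by likelihood."""
    return attack["impact"] * attack["likelihood"]

ranked = sorted(attacks, key=risk, reverse=True)
print([a["name"] for a in ranked])
# ['phishing email', 'server exploit', 'tailgating entry']
```

The attacker's cost is carried along because it feeds a different question, namely which attacks a rational adversary would actually attempt; a fuller model would combine both views.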


Predicting Soft Tissue Deformations Using Patient-Specific Meshless Model for Whole-Body CT Image Registration

Lecturer:
Mao Li
Postdoctoral researcher at IGS group, University of Innsbruck

Date: Thursday, 12th of May 2016, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Non-rigid registration algorithms that align source and target images play an important role in image-guided surgery and diagnosis. For problems involving large differences between images, such as registration of whole-body radiographic images, biomechanical models have been proposed in recent years. Biomechanical registration has been dominated by the Finite Element Method (FEM). In practice, a major drawback of FEM is the long time required to generate patient-specific finite element meshes and to divide (segment) the image into non-overlapping constituents with different material properties. We eliminate time-consuming mesh generation through application of the Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithm, which utilises a computational grid in the form of a cloud of points. To eliminate the need for segmentation, we use a fuzzy tissue classification algorithm to assign material properties to the meshless grid. Comparison of the organ contours in the registered (i.e. source image warped using deformations predicted by our patient-specific meshless model) and target images indicates that our meshless approach facilitates accurate registration of whole-body images with local misalignments of no more than two voxels.


Fixation Patterns During Process Model Creation: Initial Steps Toward Neuro-adaptive Process Modeling Environments

Lecturer:
Manuel Neurauter
Research assistant at QE group, University of Innsbruck

Date: Thursday, 28th of April 2016, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Despite their wide adoption in practice, industrial business process models often display a wide range of quality issues. While significant efforts have been undertaken to better understand the factors impacting process model comprehension, only a few studies have focused on the creation of process models. Neuro-adaptive information systems provide promising perspectives for better supporting users during task execution and reducing cognitive overload. This talk presents a set of fixation patterns derived from a modeling session with 120 participants, during which the behavior of the modelers was recorded in terms of eye movements as well as model interactions. The identified patterns can be used for automatic real-time detection of the activities a user is performing, an essential building block for the development of a neuro-adaptive environment for process modeling that best fits the task at hand and the user’s individual processing capacities.


Three-Way Replication Industry Standard – High Storage Cost in a Bandwidth Limited Regime?

Lecturer:
Nishant Saurabh
Research assistant at DPS group, University of Innsbruck

Date: Thursday, 21st of April 2016, 12:00 – 1:00

Venue: HSB 7, Hörsaaltrakt-BI-Gebäude, Technikerstraße 13b, EG, 6020 Innsbruck

Abstract:
Three-way replication has been widely adopted in large-scale distributed storage systems to enhance fault tolerance. However, maintaining three replicas, each typically gigabytes in size or larger, incurs a major storage cost overhead. Furthermore, in a bandwidth-limited regime, data extraction hits a major roadblock, resulting in application overheads uniquely defined for each storage resource. In this work, we identify the Virtual Machine Image (VMI) as a resource to be stored and consider its application overheads in terms of VMI distribution to the cloud provider. As an alternative to replication, we focus on erasure coding, a technique initially used for secure information dispersal, to achieve similar availability and scalability at a lower storage cost. We also propose a decentralized erasure-coded VMI storage repository architecture as a middleware system, with a view to reducing the aforementioned overheads and providing services to federated cloud models, negating the issue of provider lock-in for rapid VM provisioning.
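The storage-cost argument behind the abstract can be put in numbers with a back-of-the-envelope sketch contrasting three-way replication with (k, m) erasure coding. The image size and the coding parameters below are illustrative, not from the talk.

```python
def replication_storage(size_gb, copies=3):
    """Total bytes stored under plain n-way replication."""
    return size_gb * copies

def erasure_storage(size_gb, k, m):
    """Reed-Solomon-style (k data + m parity) erasure coding: the object
    is split into k fragments and m parity fragments are added, so the
    stored size is size * (k + m) / k. Any k fragments reconstruct it,
    i.e. up to m fragment losses are tolerated."""
    return size_gb * (k + m) / k

vmi_gb = 10  # a typical multi-gigabyte VM image
print(replication_storage(vmi_gb))        # 30  -- tolerates 2 lost copies
print(erasure_storage(vmi_gb, k=6, m=3))  # 15.0 -- tolerates 3 lost fragments
```

With these parameters, erasure coding halves the storage footprint while tolerating more failures than triple replication; the trade-off, relevant in the bandwidth-limited setting the abstract describes, is that reads must gather and decode k fragments instead of fetching one whole replica.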


Detection of Copy-Move Forgeries in Scanned Text Documents

Lecturer:
Svetlana Abramova
Research assistant at SEC group, University of Innsbruck

Date: Thursday, 14th of April 2016, 12:00 – 1:00

Venue: 3W04, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
A copy-move image forgery refers to copying a portion of an image and re-inserting it (or a filtered version of it) elsewhere in the same image, with the intent of hiding undesirable content or duplicating particular objects of interest. The detection of such forgeries has been studied extensively; however, all known methods were designed and evaluated for digital images depicting natural scenes. In this talk, I will address the problem of detecting and localizing copy–move forgeries in images of scanned text documents. The purpose of the analysis is to study how block-based detection of near-duplicates performs in this application scenario, considering that even authentic scanned text contains multiple similar-looking glyphs (letters, numbers, and punctuation marks). I will present the results of a series of experiments on scanned documents, carried out to examine how several feature representations fare with respect to the correct detection of copied image segments and the minimization of false positives. The findings indicate that, subject to specific threshold and parameter values, block-based methods show modest performance in detecting copy–move forgery in scanned documents. I will present strategies to further adapt block-based copy–move forgery detection approaches to this relevant application scenario.
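The block-based approach discussed above can be sketched in its simplest form: slide a window over the image, compute a feature per block, and report blocks whose features collide at different positions. A real detector would use robust features (DCT coefficients, PCA projections, and the like) so that filtered copies still match; here an exact block hash stands in, purely for illustration, and the tiny image is invented.

```python
def find_duplicate_blocks(image, block=2):
    """image: 2D list of pixel values. Returns pairs of top-left
    coordinates of blocks whose contents are identical."""
    h, w = len(image), len(image[0])
    seen, matches = {}, []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            # Exact block contents as the feature (a stand-in for a
            # robust feature vector in a real detector).
            key = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                        for dy in range(block))
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

img = [
    [1, 2, 0, 1, 2],
    [3, 4, 0, 3, 4],
    [5, 6, 7, 8, 9],
]
print(find_duplicate_blocks(img))
# [((0, 0), (0, 3))]: the 2x2 block [[1, 2], [3, 4]] appears twice
```

On scanned text, the catch the abstract points out shows up immediately: repeated glyphs produce genuine near-duplicate blocks, so thresholds and feature choices must separate copied regions from the letter "e" simply occurring twice.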


Region-based Software Auto-tuning

Lecturer:
Juan Durillo
Postdoctoral researcher at DPS group, University of Innsbruck

Date: Thursday, 7th of April 2016, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Software auto-tuning is the process of automatically tuning the code of an application. From initial approaches aimed at reducing the execution time of programs to more sophisticated techniques that optimize several criteria simultaneously, the last decade has witnessed an ever-growing interest in this field. The success of auto-tuners relies on two basic properties: (1) efficient exploration of different ways to execute an application; and (2) portability, as auto-tuners can easily be run on any hardware architecture.
Despite their popularity, a wider adoption of auto-tuners to optimize real-world applications is far from being a reality. On the one hand, for real-world scientific applications, the number of ways to execute them explodes as a consequence of an ever-increasing number of tunable opportunities offered at both the software and hardware levels. On the other hand, most current auto-tuners have proved successful only for specific classes of applications, and only when applied to small applications composed of a few lines of code, while even small-to-medium real applications consist of at least a few thousand lines of code.
Our goal is to advance the current state of the art in software auto-tuning, aiming for a wider adoption of auto-tuning methods to optimize real-world applications. In our model, applications are partitioned into different regions, which are the units of tuning. This way, regions with different characteristics are optimized in different ways: CPU-intensive operations, for example, can be performed with the CPU at its highest clock frequency to reduce execution time; conversely, memory access operations can be performed at a low frequency, reducing energy consumption with minimal impact on performance. We face three major challenges in tuning applications using a region-based approach: (1) how to partition an application into different regions; (2) how to select those regions of a program that are worth the tuning effort, and which should not be tuned at all or only up to a given level; and (3) how to effectively and efficiently tune complex regions to optimize several criteria.
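The per-region tuning idea can be sketched with a toy cost model: each region is tuned independently, and the tuner picks the CPU frequency that best fits the region's character under a weighted time-plus-energy objective. The model, the frequencies, and the region parameters below are invented solely to make the idea concrete; they do not describe the actual tuner.

```python
FREQS_GHZ = [1.2, 2.0, 3.0]  # hypothetical available CPU frequencies

def region_cost(region, freq):
    """Toy model: CPU-bound work scales with 1/freq, memory-bound work
    does not; energy grows with frequency. Lower score is better."""
    time = region["cpu_work"] / freq + region["mem_work"]
    energy = freq * (region["cpu_work"] + region["mem_work"])
    return time + 0.1 * energy  # weighted multi-criteria objective

def tune(regions):
    """Pick the best frequency for each region independently."""
    return {r["name"]: min(FREQS_GHZ, key=lambda f: region_cost(r, f))
            for r in regions}

regions = [
    {"name": "dense_kernel", "cpu_work": 10.0, "mem_work": 1.0},
    {"name": "stream_copy",  "cpu_work": 1.0,  "mem_work": 10.0},
]
print(tune(regions))
# The CPU-bound region lands on the highest frequency, the memory-bound
# one on the lowest -- matching the intuition in the paragraph above.
```

Even this toy version exhibits the three challenges listed: the regions were given by hand (challenge 1), both were deemed worth tuning (challenge 2), and the two criteria were collapsed into one weighted score rather than explored as a trade-off (challenge 3).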


Automated Complexity Analysis of Programs

Lecturer:
Michael Schaper
Research assistant at CL group, University of Innsbruck

Date: Thursday, 17th of March 2016, 12:00 – 1:00

Venue: HSB 7, Technikerstraße 13b, 6020 Innsbruck

Abstract:
Automatically checking programs for correctness has attracted the attention of the computer science research community since the birth of the discipline. In this talk we present an abstract combination framework for the automated complexity analysis of programs and its implementation in the Tyrolean Complexity Tool (TcT).


The Symbiosis Relationship between Computer Vision, Machine Learning and Neuroscience

Lecturer:
Antonio Rodríguez-Sánchez
Assistant Professor at the IIS group, University of Innsbruck

Date: Thursday, 10th of March 2016, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Computer Vision and Machine Learning benefit from the latest knowledge in Neuroscience to build systems that approach human efficiency in those areas. Conversely, Neuroscience benefits from Computer Vision and Machine Learning to test hypotheses on how the connections in the brain (more specifically, in the visual cortex) work. I will present two approaches concerning how Computer Vision benefits from Neuroscience and how Machine Learning provides hypotheses on how to obtain neural populations resembling those in the visual cortex. For the former, I will present a 3D descriptor that is inspired by recent findings from neurophysiology. The descriptor incorporates surface curvatures and distributions of local surface point projections that represent flatness, concavity and convexity in a 3D object-centered and view-dependent representation. For the latter, I will talk about how utilizing diversity priors can discover early visual features that resemble their biological counterparts. The study is mainly motivated by the sparsity and selectivity of activations of visual neurons in area V1. A diversity prior is introduced in this work for training Restricted Boltzmann Machines (RBMs). We find that the diversity prior can indeed ensure sparsity and selectivity of neuron activations simultaneously.

