Contact:

University of Innsbruck

Dept. of Mathematics

Technikerstraße 13

6020 Innsbruck

Austria

karin.schnass[?]uibk.ac.at

Tel.: +43 512 507 53881

# Karin Schnass

## Hello creative brains!

No open jobs for the moment, unless you are feeling daring or desperate enough to start a PhD with only 2 years guaranteed funding. Nothing better to test your courage or desperation than browsing the START research page and reading some of our papers!

If you have a solid background in linear algebra, probability theory or functional analysis and all the interest in dictionary learning, but don't have a master's project yet, come for a chat (msc)!

Finally, if your head is still spinning from the introductory lectures but you think that messing around with images could be fun, check out the bachelor projects (bsc)!

### News

**[Jan20]**

1 review of a 67 page paper + 2 weeks of hearings for 2 hiring committees = January.

**[Dec19]**

Finished the revision of the compressed dictionary learning paper, feeling a ton lighter. Unfortunately, the manifesto and the associated 2x2-matrix hell I inhabit while turning the proof sketch into a proof are still waiting. On the positive side, Marie is fed up with real data, so we've started to think about theory for a cute little greedy algorithm we (re)discovered and dubbed adaptive pursuit, because it works without knowledge of the sparsity or noise level.

**[Nov19]**

You know that obvious thing in a paper, which you try to prove and cannot, so you search the literature for the proof, and the only thing you find is that it's obvious? I am proud to say that Simon and I found a proof for an obvious thing.

**[Oct19]**

I signed my new contract! This means that I am officially tenured and will only research interesting things from now on. No more low-hanging fruit... well maybe for students if it's interesting... anyway, the interesting low-hanging fruit tend to have this Tartarusian behaviour, keyword Tantalus.

**[Sep19]**

Went to Vienna to weasel a four-eye explanation of her SPARS talk out of Monika Dörfler. Ran into Markus Faulhuber, and small talk led to a date for explanations of his lattice problems. The rendezvous arrangements were overheard by Dennis Elbrächter, who immediately volunteered a lesson on his stuff. Conclusion: since people seem happy to give private lessons, next year I'll do a fully organised week of mathematical tourism!

Changing sides, I then went to Chemnitz to give a crash course in dictionary learning and to be stunned by the enormous width of the streets.

**[Aug19]**

Did what felt like a million reviews, took a shot at being the project fairy (= reviewing a friend's project proposal) and then went on holidays!!!

**[Jul19]**

Family trip to SPARS in Toulouse, with three presentations! The week after, Yong Sheng Soh visited to explain his manifesto to us. I not only learned a lot, but we got on so well that we will have a shot at analysing the MOD algorithm together.

**[Jun19]**

Papers are coming home to be revised and, in the case of the manifesto, to be pimped with the new results I got with Marie.

**[May19]**

Got confused over my scientific identity (mathematician, computer scientist, engineer or maybe statistician?) at the very enjoyable Oberwolfach Workshop Statistical and Computational Aspects of Learning with Complex Structure.

**[Apr19]**

Gave the first talk - shamelessly plagiarising Romeo and Juliet - about average OMP results at the Workshop on Mathematical Signal and Image Analysis in Raitenhaslach and found out how liberally Bavarians define a cycling path (section Tittmoning - Raitenhaslach).

**[Mar19]**

Habemus PhD-studentem: Simon Ruetz will join the START project mid-May. Freedman's inequality did its job; unfortunately now some other estimate turned out to be too crude, aargh. Luckily, the algorithmic side of dictionary learning is also turning out to be fascinatingly messy. In particular, Cristian and I are having a hard time transferring adaptivity from ITKrM to K-SVD; first results indicate that either OMP is a little too greedy, or image data is a little too heterogeneously sparse.

**[Feb19]**

Happiness: I got paper identifier 101 at SPARS - meaning I was the first! Despair: Azuma's inequality is not strong enough, something more Bernsteiny is needed. Still lacking my stochastic oracle, I therefore pressured Alex Steinicke into explaining to me the weird probabilistic notation encountered in Freedman's inequality. And Marie hoped it would be finished soon; PhD students are adorable in their innocence.

**[Jan19]**

I decided that I want PhD students; then I got the accounting for the project and found out that I'm not that good at math and that I overlooked quite a chunk of funds. It came to me in my sleep: I need someone to help me with my math, to be exact a stochastic oracle, meaning a postdoc in probability theory... I wonder if I can find one who will speak in rhyme but not riddle.

**[Dec18]**

This year I'll do December without life-threatening stress levels - couple of reviews, some project writing and slowly getting back into research gear. I also should decide how to spend the rest of the START money... leaning towards PhD students... best to get them before May next year... maybe a concise call would help...

**[Nov18]**

Habilitation thesis submitted! It's a virtual thesis, though. So if you are interested but not too desperate to read the acknowledgements or the two-page introduction, I recommend reading all my papers that have not been included in my PhD thesis (arXiv numbers 1008 to 1809). The good news of this and last month together with moving house must have addled my brains, because I found myself volunteering to rework the big project proposal.

**[Oct18]**

Still one third of October left and I've already been so lucky that I'm scared of being run over by a bus! The START-project has been prolonged for 3 years, the nightmare paper has been accepted and the average OMP-paper has been accepted within 1 month and 1 week (this also means that SPL is now my favourite journal).

**[Sep18]**

For all you OMP lovers out there, who are fed up with being bullied by the BP mafia for using an 'inferior' algorithm, here is the reason why I'm using OMP.

**[Aug18]**

More holidays and writing up the OMP results. Let's see if I can keep it to 4 pages, double column of course.

**[Jul18]**

This time I got it, at least I produced some theory and some very interesting curves about the success rates of OMP and BP, best of all: theory predicts curves. Now I'm going for a lot of medical check-ups to be in perfect shape for the pending end of the world and for holidays to get over the shock of realisation.

**[Jun18]**

I went to Strobl and, as requested, enchanted people with the 'dictionary learning - from local to global and adaptive' talk, featuring the amazing grapefruit slide to explain regions of convergence. Also, Cristian Rusu and I joined forces on adaptive dictionary learning, so he visited for a week.

**[May18]**

The START-evaluation report is submitted! Now we have to wait till November to know if we get to keep our jobs, so motivation is soaring. Actually it's not as bad as expected. Things being beyond my control turns out to be quite liberating. So after cleaning my code for the adaptive dictionary learning toolbox, and adding pseudocode to the manifesto, I sharpened my pencil for a bit of dragon-slaying - the still elusive average case analysis of OMP.

**[Apr18]**

It is 47 pages long or - to sound more attractive - 4 pages per figure. Behold the new paper on dictionary learning - from local to global and adaptive, which is - I am absolutely unbiased here - pretty cool, because apart from the theory getting closer to what we see in simulations, we can do adaptive dictionary learning, meaning automatic choice of sparsity level and dictionary size. Toolbox soon, alas I first have to submit the START evaluation report. Also, for greater enjoyment, the m-files had better be converted to a humanly readable format.

**[Mar18]**

Michi submitted his thesis, found himself a real job and before leaving for good in April went on holiday for the rest of the month. Who is going to shock me with his code, block my brain with wooden puzzles and send me brilliant music videos? Best to ignore the conundrum as long as possible, invitations to workshops in Paris and Oberwolfach help.

**[Feb18]**

We have a date!!! The mid-term report needs to be submitted by the 9th of May. So full throttle till the 9th and lethargy from the 10th on. Luckily the stochastics exercises are finished and next semester I'm teaching something with prior knowledge! Just need to convince enough students that channel coding is cool, so cool that in the workshop for high school students, they gave me as many positive mentions as they gave to gaming and free food!!!

**[Jan18]**

The nightmare paper is accepted (pdf), long live the nightmare paper!! Actually the other nightmare paper has also been improved, if you want to verify that we really have a fast analysis operator learning algorithm, have a look at the updated toolbox. It's also been resubmitted as a test to see if I will ever do a review for TSP again - couldn't in good conscience force my reviews on them, since they are probably as bad as my papers.

**[Dec17]**

Busy doing the millions of simulations required to revamp the masked dictionary learning paper. While doing that, I'm starting to wonder if it's not time to ask the FWF about the prolongation of the project, meaning when the mid-term report is due, meaning when we really should have finished all those papers... on second thought, better to stay blissfully ignorant until next year.

**[Nov17]**

Say it with an Offspring song: This rejection got me so low... Well, it wasn't an outright rejection this time, more of a: we will be happy to reject it once you have wasted 6 more weeks doing millions of additional simulations to improve the manuscript. Sometimes I ask myself: Why don't you get a (real) job? Always finish on a positive note... the website has been successfully transferred!

**[Oct17]**

Went to Rennes to be the female jury member in Nicolas Keriven's thesis committee, compressed sample results here. Otherwise drinking coffee and forcing myself to write the paper. At least the transfer of the website is in capable hands, meaning not mine.

**[Sep17]**

As usual once back from holidays I realised that the semester is looming and that I made a horrible mistake when teaching was distributed, meaning I was motivated to learn something new, volunteered for something I forgot a long time ago/never actually knew and now will have to educate myself really fast. And they will kill the server hosting this page, so the page will have to be moved, aargh.

**[Aug17]**

Going mad with finding the correct cut-off for adaptive dictionary learning, ie. how to distinguish between atoms and rubbish in the case of noisy data. Fortunately, after the engineering/cooking approach failed, going back to theory worked and I have a criterion!! I can't prevent rubbish from entering, but I can weed it out afterwards. Now there is really no reason not to write a paper, but given a week of holidays I'm sure I can come up with something. So off I go!!

**[Jul17]**

Family trip to FoCM in Barcelona!! While Flavio, Michi and Marie were sweating, my brain finally reached operating temperature. List of achievements: pretty talk on adaptive dictionary learning, magic new step size for analysis operator learning, squeezing a promise of help with random subdictionaries out of Joel (Tropp). Also, cervical supercooling is now the official reason for slow publishing.

**[Jun17]**

Went to SPARS in Lisbon with Flavio and Michi, where we told people about compressed dictionary learning and analysis operator learning, more details here. On this occasion I witnessed for the first time in my life left-over-cake at a conference, amazing!!!

**[May17]**

Tinkering with theory for nightmare paper went well, so now I'm back to tweaking millions of screws to make it work in practice. I'm also trying to improve myself and my teaching with a lot of personnel development courses. Let's see if Flavio notices any improvement once he is back from learning audio dictionaries in Rennes.

**[Apr17]**

Michi hit the arXiv submit button! If you can't bear to leave my homepage to go to arXiv, the analysis operator learning nightmare paper is also available here and we even have a toolbox to play around with!

**[Mar17]**

The WDI2 workshop took place on the 10th. Nobody complained, so I'd say success! More good news: both submitted SPARS abstracts were accepted, so I'm going to Lisbon to cheerlead Flavio and Michi with their posters. Eventful month, because Marie finished her MSc thesis and so I now have a second PhD student. As happens in these cases, Michi, the older kid, got jealous and started to actually write the paper; let's see when we finish.

**[Feb17]**

The nightmare lecture is over. In the end I learned a lot. Other than that - can't remember what I did in February. Well it is a short month, subtract school holidays and it gets even shorter. Probably I was tinkering around with theory for the nightmare paper to motivate my replacement strategy.

**[Jan17]**

Things I never expected to happen, happened.

1) I've almost survived the nightmare lecture - doing much better at the end talking about PCA and clustering, ie. stuff I know something about.

2) The project proposal is finished.

3) After wading knee-deep in bugblood and submitting millions of jobs to the cluster, Valeriya and I finished the nightmare paper on dictionary learning from incomplete data, also featuring a toolbox.

To celebrate, the START project page is going online!

**[Dec16]**

To increase my stress level from unbearable to life-threatening, I wisely decided to contribute to writing another project proposal. Conclusion: I shall learn to say no, I shall learn to say no, I shall learn to say no.

**[Nov16]**

Marie Pali joined the START project for a 3.5-month internship to have a look at whether she can bear to do a 3.5-year PhD with us. She is working on extending recovery results for dictionary learning to non-homogeneous coefficient models and providing me with chocolate and cigarettes. And Michi and I are organising the next edition of the Donau-Isar-Inn Workshop (WDI2). Have a look at the workshop page and register!!

**[Oct16]**

Due to the nightmare lecture, research, paper writing and life in general have been suspended until further notice... well ok the first two have been reduced to brainless low level activities like feeding simulations to the cluster to make pretty pictures and the third to sleeping.

**[Sep16]**

Post-holiday disaster: Michi managed to convince me that our simple analysis operator learning algorithm can't be made to work on real data - too unstable. Glimpse of hope: he managed to do something smart but more complicated to make it work. I think he also just wants to avoid writing the paper, which is completely understandable. After going to the Dagstuhl Seminar: Foundations of Unsupervised Learning, my remaining holiday relaxation was efficiently annihilated by the realisation that I have one week to prepare another lecture in the famous series "today I learn, tomorrow I teach".

**[Aug16]**

The nightmare paper with the fastest (working) dictionary learning algorithm in the west has been accepted! The nightmare paper is dead! Long live the nightmare paper(s)! The only way progress could be slower would be if I started deleting lines - time for holidays. To ensure a smooth transition I first went to the iTWIST workshop where I was allowed to ramble on for 45 minutes.

**[Jul16]**

Fascinating how time is flying when you are writing up results or rather not writing up results. I am behind all schedules, my excuse is that I was forced to also include a real data simulation in the nightmare paper. At this point thanks to Deanna Needell for sending me fantastic Fabio, to be found on page 17!

**[Jun16]**

Praise the lord - Michi has finally acceded to writing up our stuff about analysis operator learning, so there's hope for a paper once I have done some translation from Shakespearean to modern English. Actually I'm now in triple paper hell because Valeriya and I solved our last algorithmic issues and have given each other deadlines for the various chapters.

**[May16]**

I'm writing up the dictionary learning with replacement etc. stuff, I'm not enjoying myself, I'm grumpy, I won't make a longer entry.

**[Apr16]**

Flavio Teixeira has joined the START project! He will solve all my dictionary learning problems and remind me to water the plant. Also, I have created a monster, uaahahahaha, a dictionary learning algorithm that is fast and does not need to know the sparsity level or the number of atoms. Needless to say, we will never be able to prove anything. However, the really bad part is that it also does sensible things with image patches, meaning real data, so there is no way to avoid writing a paper anymore.

**[Mar16]**

I corrected and resubmitted the ITKrM (aka nightmare) paper, which now also features pretty pictures. According to plan all of my time goes into preparing the lecture - well ok secretly I'm comparing the ITKrM algorithm to its cheap version which means watching a lot of jumping points.

**[Feb16]**

Went to the Mathematics of Signal Processing Workshop in Bonn, nice! Then I tried to decide which of my nicely working algorithms should be turned into a paper or rather which aspect should go to which paper to stay below 30 pages and how much math do I really need to add as opposed to how much math would I like to add. Luckily I then found a suitable excuse not to decide anything in the preparation of the time-frequency lecture :D.

**[Jan16]**

Utter frustration: I have tons of algorithms that work nicely, but I can't prove it. Also I'm in panic, because I have to get 6 papers accepted in 5 years to keep my job. At the current rate I'm not going to make it.

**[Dec15]**

Habemus postdocem! If we manage to actually lift the administrative burden, Flavio Teixeira will join the START project in spring. The advertising talk for masked dictionary learning in Berlin sold well - too bad the product is not finished yet. In hindsight the trip counts as pretty disastrous, since I paid the hotel for a single, but as it turned out a week later, counting also the bed bugs, I was definitely not alone in the room.

**[Nov15]**

I hate writing papers. It is so much easier to watch the points in my simulations jump around and improve the algorithms than to actually hammer out all the nasty little details of why the masked dictionary learning stuff is locally converging. Obviously, after the happy, enthusiastic phase, Michi and I are now experiencing some setbacks with our analysis operator learning - time to ask Andi to teach us how to use the supercomputer. Good news: we potentially bagged a researcher at the medical university, who will give us real-world problems and free coffee! Finally, anybody interested in our AOL or masked DL stuff has the chance to see it/us in Berlin!

**[Oct15]**

Andi Kofler will join the dictionary learning crowd for 4 months with the goals of a) making dictionary learning faster and b) finding out whether he wants to get a real job or do a PhD with me. The codes seminar is organised, luckily teaching stuff I don't know much about is getting easier every time I do it. Soon I'll be ready for PDEs .... hahaha no way.

**[Sep15]**

I set a new personal record in the category number of means of transport to arrive at a scientific event: 9, to cover the distance Innsbruck - Dagstuhl, where again I was allowed to entertain people with a talk called 'dictionary learning - fast and dirty' (slides, etc) and on top of that lost my fear of the machine learning community, who successfully convinced me that I only need to be afraid of the computer vision community. Then I went on holidays and afterwards into a panic about having to organise the seminar on codes.

**[Aug15]**

I managed to write the final report of the Schroedinger project. Obviously this involved a lot of procrastination. To atone, I thought I'd turn the latex file into a template. So here is an absolutely not official latex template for the FWF final project report (Endbericht) - I cannot disclaim enough, but hope someone will find it useful. Also I went to a workshop in Oberwolfach, where I was allowed to entertain people with a talk called 'dictionary learning - fast and dirty' (report).

**[Jul15]**

If I were a rational person I would start to deal with the small problems of the project in order to get it prolonged and then go for the big ones. On the other hand if I were rational I would have gotten a real job 5 years ago, so I'm going straight for the big bastard problem, ie. the global minimum. All help welcome. As threatened I gave a talk about the ITKrM (aka nightmare) paper at SPARS in Cambridge and I think they plan to put the slides online.

**[Jun15]**

Started with the Finnish flu and with the decision to stay in Innsbruck (sigh, there goes TU Munich), meaning that this is the first but not the last month of the START project; proper (longer) page later. It also means that I am again looking for a postdoc and 1-2 PhD students. In the meantime Michi Sandbichler has agreed to temporarily act as my first minion, and we will work on analysis operator learning. After running the simulations for the nightmare paper, I decided that the corresponding figures would not be enlightening, because you actually can't test the message of the theorems, so here is the figure-free but didactically improved submitted version - code for itkrm and simulation results on request. Last thing: as part of our ongoing efforts to save the world, Valeriya Naumova and I started to learn dictionaries from incomplete data, and in synthetic experiments it even works :D.

**[May15]**

Last month of the Schroedinger project. Since my brain is still blocked with the Innsbruck - Munich decision don't expect mega scientific progress but there is the chance to see a talk about the first part of the nightmare paper at the minisymposium 'Learning Subspaces' at AIP in Helsinki. If you can't make it to Helsinki, don't despair, you have the chance to see a similar talk at SPARS in Cambridge.

**[Apr15]**

While Matlab is running simulations to make nice pictures, I made a more readable version of the nightmare paper, second try. For all those out there interested in dictionary learning, the alternative introduction to dictionary learning, written for the bulletin of the Austrian Mathematical Society, is now available and gives an overview of the theory up to 2014.

And finally a note to all associate editors: I consider my review quota for this year filled and so, until the end of 2015, will not feel guilty about declining uninteresting reviews. You might still get lucky with theory of sparse recovery (compressed-sensing-less) or theory of dictionary learning.

**[Mar15]**

Ohoh, I've been lucky again, so now I have to decide between Innsbruck and TU Munich; that will be a hard one. In any case, even thinking about the decision has to be postponed in order to finish the nightmare paper. Here is a first try, but I'm not 100% convinced by the structure. The inner mathematician and the inner engineer are fighting over where to put the proofs and whether to do some work-intensive simulations to include enlightening figures.

**[Feb15]**

I've been a good girl because I've reviewed 3 conference and 3 journal papers (don't tell Gitta or I'll immediately get a new one), because I've written an introduction to dictionary learning for the journal of the Austrian Mathematical Society and because I've augmented the web-page with a student area and prepared part of the seminar. But actually I've been a bad girl because I only did the good things in order to avoid finishing the nightmare paper.

**[Jan15]**

Darkness and depression make you miss the cosy wet island. Luckily there is a light at the end of the constants tunnel: I decided them all! Finishing the paper will still be postponed indefinitely, since I have been spammed with reviews.

**[Dec14]**

Hello world from Innsbruck! I have overcome huge obstacles, ie. the instructions of the ZID on how to access my webfolder, but finally there it is: a homepage at my institution.

I also decided that new institution, new country means new news section and that the old news will go into an old news section. Btw, I realise that this is a somewhat lengthy entry but that is to force the ugly warning flag of the university down.

Good news: the non-tight dl paper was accepted to JMLR, so soon you can get it for free from there, here, or arXiv. If you prefer a mathy style where proofs are not banished to the appendix go here, but beware the bugs.

Bad news: Still haven't decided all the constants.