Portrait of Karin Schnass

Contact:

University of Innsbruck
Dep. of Mathematics
Technikerstraße 13
6020 Innsbruck
Austria

karin.schnass[?]uibk.ac.at
Tel.: +43 512 507 53881

Karin Schnass

Welcome!

To my personal homepage. If you are interested in dictionary learning and other research, have a look at my group page, where you can read some papers!

If you have most of the skills in linear algebra, some of the skills in probability theory or harmonic analysis and all the interest in dictionary learning but don't have a master's project yet, come for a chat (msc)!

If your head is still spinning from the introductory lectures but you think that messing around with images could be fun, check out the bachelor projects (bsc)!

Finally, to share my woes and joys in research - have a look at the news (and olds) below.

News


[Jun24]
A month of revision nightmare interspersed with wonderful talks and visits. First Felix Krahmer and Ayush Bhandari told me about unlimited sensing, or how to reconstruct from quantization noise, and we talked a lot about sparse approximation in this and other contexts.
Then WDI2 turned out to be as interesting as it was selective, with very enjoyable talks and posters.


[Apr24]
I am happy that Felix Krahmer will visit mid-June to give a colloquium talk on unlimited sensing. Be there or be somewhere less interesting!


[Mar24]
The next edition of WDI2 - Workshop on Approximation Theory and Applications - will take place in Innsbruck on Friday, the 28th of June. It will feature talks by Diana Carbajal, David Krieg and Antoine Maillard, as well as posters and cookies. More details here.
I am looking forward to seeing you in Innsbruck!


[Dec23]
I am looking forward to Sjoerd Dirksen's visit at the end of January. He will give a talk on random two-layer networks at our colloquium. Be there or be somewhere less interesting!


[Nov23]
Congratulations to Mo Kühmeier for successfully defending his master's thesis! Also, welcome to the joys and sorrows of life as a PhD student!


[Sep23]
The end of my sabbatical. On a more positive note, Mo(rris-Luca) Kühmeier has handed in his master's thesis. If, after reading our paper on the convergence of MOD and ODL, you are asking yourself 'but what about varying the distributions of the non-zero coefficients between atoms?', you can find an answer in his thesis.


[May23]
The last month of START-project Y760 - we haven't solved all problems in dictionary learning but we've for sure drilled a deep hole into it! So a big thanks to the whole team, but especially Michi, Andi, Flavio, Marie and Simon - it's been a pleasure to break our heads together.


[Apr23]
Simon's last month in the project but will it also be his last month in academia? If, after reading his latest preprint on the convergence of MOD and ODL (aKSVD) for dictionary learning, you think 'no way', I will happily forward any job offers.


[Mar23]
Praise Urania and all gods in charge of math, the nightmare paper has been accepted to Information and Inference. If you want to learn a dictionary but don't know the size or sparsity level, give it a try. If you want to see some nice convergence results, give it a try. If you want to know why your own algorithm got stuck, and want to unstick it, give it a try!


[Dec22]
Congratulations to Dr. Ruetz, the PhD student formerly known as Simon!
Also, right on schedule, we have a new preprint collecting all you ever wanted to know about the hottest topic of the sixties: inclusion probabilities in rejective sampling.


[Sep22]
Holidays have been taken, fall semester preparations have started with unprecedented chaos, and there was the chance to get a preview of the new results for MOD and KSVD at ICCHA2022.


[Aug22]
Simon has handed in his thesis! Congratulations!! Now we can both collapse, go on holidays in September, play a round of tennis in October and start turning the chapters into some very nice papers in November.


[June22]
Simon has a new preprint out on adapted variable density subsampling for compressed sensing, which will tell you how to give your CS a simple boost with a bit of statistical info about your data! Find out more by watching his talk.


[May22]
The nightmare paper has left the pipeline with favourable reviews, meaning it could be only a couple more years until publication.


[Mar22]
Congratulations to Elli, and welcome to the world, Emma Christina - born on pi day - how cool is that!


[Feb22]
Marie is leaving me at the end of the month. Fortunately she is not going far, so I can still meet her for lunch and coffee near the city centre.


[Dec21]
Aargh, the first conference in 2 years and I get corona just in time! Looking on the bright side, the vaccination seems to be doing its job and I will most likely survive. Also online conference participation is actually a nice way to spend your quarantine.


[Aug21]
Congratulations to Dr. Pali, the scientist formerly known as Marie!!


[Jul21]
Joscha Prochno and I won a prize!! So a big thanks to Michaela Szölgyenyi, Monika Dörfler and Philipp Grohs for nominating me and to the ÖMG for giving it to me.


[Jun21]
If there is one good thing that came out of corona, it's the new tablet presentation possibilities. Oh my dear, do I enjoy any opportunity to methodically scrawl all over my slides.


[May21]
The random subdictionary paper has been accepted. Normally the acceptance of a paper means the death and resurrection of the nightmare paper, but, alas, it is still firmly stuck in the pipeline.


[Apr21]
Marie has handed in her thesis!! And we have revised the manifesto. Also Simon and I got fantastically thorough reviews for the random subdictionary paper.


[Mar21]
I said yes once too often and landed myself with the job of being responsible for the math undergraduate programmes. Hopefully the confusion will subside with time. I will also be a proud new co-organiser of the 1W-MINDS Seminar from July on. Currently I'm learning a lot about time zones and which countries use daylight saving time.


[Feb21]
Marie and Andi's paper on dictionary learning for adaptive MRI is listed as editor's choice. I'm so proud of them I could burst. If I had stronger magnets, I'd forever attach it to my fridge. For those interested in the conditioning of submatrices, I gave a talk at the CodEX Seminar in Colorado, which is available on YouTube.


[Jan21]
I have been upgraded and so the group is now listed on the math department's homepage under the fancy new label mathematical data science. Thanks to the Applied Math Group for hosting us until now!


[Dec20]
Simon and I have a new paper. If you want to know about the conditioning of submatrices when you don't draw the atoms uniformly at random, you should definitely take a look. Looking on the bright side of corona, Flavio could join the Christmas beer & pub quiz, which was chosen as a low-effort replacement for the Christmas pasta.
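If 'conditioning of submatrices' sounds abstract, here is a toy numpy experiment (my illustration, with made-up numbers, not code or results from the paper): draw sub-dictionaries of 20 atoms either uniformly at random or with a bias towards a coherent cluster of atoms, and compare the typical condition numbers - the biased draws come out noticeably worse.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, S = 64, 128, 20

# random dictionary with unit-norm atoms, where the first 30 atoms
# form a coherent cluster (perturbations of one common direction)
D = rng.standard_normal((d, K))
D[:, :30] = D[:, [0]] + 0.5 * rng.standard_normal((d, 30))
D /= np.linalg.norm(D, axis=0)

def median_condition(weights, trials=500):
    """Median condition number over random draws of S atoms."""
    conds = [np.linalg.cond(D[:, rng.choice(K, S, replace=False, p=weights)])
             for _ in range(trials)]
    return np.median(conds)

uniform = np.full(K, 1.0 / K)
biased = np.where(np.arange(K) < 30, 3.0, 1.0)  # favour the cluster
biased /= biased.sum()

print("uniform draws:", median_condition(uniform))
print("biased draws :", median_condition(biased))  # noticeably larger
```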


[Nov20]
I found a new PhD student to lavish all my money on! Elli Schneckenreiter, MSc in teaching math and chemistry, started at the beginning of this month. In my defense, Marie, Simon and I did everything in our power to discourage her and thus not keep her from saving our schools. It's not our fault they closed 2 weeks later. For more lamentation over closed schools, home schooling etc. see the March entry. Luckily Andi Kofler saved all my future research with this piece of wisdom: good coffee should not be 100% arabica but should also contain robusta!!


[Oct20]
Marie's nightmare paper has been accepted, long live Marie's new nightmare paper, aka the manifesto, which incidentally has been my nightmare paper for some years already.


[Jul20]
Found out that the OMP paper I was so happy about is not wrong, but definitely overkill. It seems that all you need to perfectly describe the shape of the experimental curve is a random support, decaying coefficients and the correct p-norm in a matrix-vector bound. Random signs are completely unnecessary!


[Jun20]
The destination of this year's family trip has been decided. The exotic place of choice is Graz, where we will crash the Austrian Stochastics Days, in the hope of finding someone to solve our probability problems. The victim we have in mind is Joscha Prochno and the strategy is inspired by Jurassic Park.


[May20]
I'm officially old now. I'm putting on a brave show pretending I don't mind. Luckily, since I only celebrate round and interestingly numbered birthdays, I got the celebration of my 40th already out of the way two years ago - 38 is just too ugly a number to warrant any recognition. For those who stop by for scientific content, I gave a talk about some new results to an empty auditorium (weird), which some people in the online audience told me they liked, so here: slides. Also I managed to pin down Sigrid (Neuhauser) long enough to have her explain to me her imaging problems and to sketch a small project as part of a big project together - fingers crossed.


[Apr20]
Success! I have survived the first part of the home-learning lecture and reached the Easter holidays alive. Time to check how many of the students managed to do the same. Thank heavens, or rather Alex, for lending me his tablet, so we can do some Q&A sessions and continue with almost normal lectures after the holidays. But first a well-earned nervous breakdown.


[Mar20]
I'm expanding my teaching portfolio. It now covers: how many jumps of 3 units does the bullfrog need to make to go 12 units, the difference in English between present simple and progressive, the fairy tale, horses, cows and rodents, cross-multiplications, the boreal zone with its perifluvial and periglacial features (which personally I consider quite advanced for 10-11 year olds), doubling of consonants after short vowels,...


[Feb20]
Went to Genova to visit Cristian (Rusu) and Lorenzo (Rosasco). Lorenzo installed the nicest visiting system - I will steal it. He also deserves a medal for his efforts to explain to me the ideas behind the recently discovered W shape of statistical learning. I'm motivated to look for the paper.


[Jan20]
1 review of a 67 page paper + 2 weeks of hearings for 2 hiring committees = January.


[Dec19]
Finished the revision of the compressed dictionary learning paper, feeling a ton lighter. Unfortunately the manifesto, and the associated hell of 2x2 matrices I inhabit while turning the proof sketch into a proof, are still waiting. On the positive side, Marie is fed up with real data, so we've started to think about theory for a cute little greedy algorithm we (re)discovered and dubbed adaptive pursuit, because it works without knowledge of the sparsity or noise level.
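To give a flavour of what 'adaptive' means here, a toy greedy pursuit in numpy with a made-up stopping rule - explicitly not the adaptive pursuit from the paper, just an illustration of how a pursuit can stop on its own instead of being handed the sparsity or noise level:

```python
import numpy as np

def toy_adaptive_pursuit(D, y, tol=0.1):
    """Toy greedy pursuit (illustration only, NOT the algorithm from
    the paper): stop once the best remaining atom captures less than
    a tol-fraction of the residual energy, so neither the sparsity
    level nor the noise level needs to be known in advance."""
    d, K = D.shape
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(d):
        if np.linalg.norm(residual) < 1e-12:
            break
        corr = np.abs(D.T @ residual)
        k = int(np.argmax(corr))
        # atoms are unit norm, so corr[k] / ||residual|| lies in [0, 1]
        # and measures how much the best atom still explains
        if corr[k] < tol * np.linalg.norm(residual):
            break
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(K)
    x[support] = coef
    return x
```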


[Nov19]
Do you know the obvious thing in a paper, which you try to prove and cannot, so you search the literature for the proof, and the only thing you find is that it's obvious? I am proud to say that Simon and I found a proof for an obvious thing.


[Oct19]
I signed my new contract! This means that I am officially tenured and will only research interesting things from now on. No more low-hanging fruit... well maybe for students if it's interesting... anyway, the interesting low-hanging fruit tend to have this Tartarusian behaviour, keyword Tantalus.


[Sep19]
Went to Vienna to weasel a one-on-one explanation of her SPARS talk out of Monika Dörfler. Ran into Markus Faulhuber, and small talk led to a date for explanations of his lattice problems. The rendezvous arrangements were overheard by Dennis Elbrächter, who immediately volunteered a lesson on his stuff. Conclusion: since people seem happy to give private lessons, next year I'll do a fully organised week of mathematical tourism!
Changing sides, I then went to Chemnitz to give a crash course in dictionary learning and to be blown away by the enormously wide streets.


[Aug19]
Did what felt like a million reviews, gave a shot at being the project fairy (= reviewing a friend's project proposal) and then went on holidays!!!


[Jul19]
Family trip to SPARS in Toulouse, with three presentations! The week after, Yong Sheng Soh visited to explain his manifesto to us. I not only learned a lot, but we got on so well that we will take a shot at analysing the MOD algorithm together.


[Jun19]
Papers are coming home to be revised and, in the case of the manifesto, to be pimped with the new results I got with Marie.


[May19]
Got confused over my scientific identity (mathematician, computer scientist, engineer or maybe statistician?) at the very enjoyable Oberwolfach Workshop Statistical and Computational Aspects of Learning with Complex Structure.


[Apr19]
Gave the first talk - shamelessly plagiarising Romeo and Juliet - about average OMP results at the Workshop on Mathematical Signal and Image Analysis in Raitenhaslach and found out how liberally Bavarians define a cycling path (section Tittmoning - Raitenhaslach).


[Mar19]
Habemus PhD-studentem: Simon Ruetz will join the START project mid-May. Freedman's inequality did its job; unfortunately some other estimate has now turned out to be too crude, aargh. Luckily, the algorithmic side of dictionary learning is also turning out to be fascinatingly messy. In particular, Cristian and I are having a hard time transferring adaptivity from ITKrM to K-SVD; first results indicate that either OMP is a little too greedy, or image data is a little too heterogeneously sparse.


[Feb19]
Happiness: I got paper identifier 101 at SPARS - meaning I was the first! Despair: Azuma's inequality is not strong enough, something more Bernsteiny is needed. Still lacking my stochastic oracle, I therefore pressured Alex Steinicke into explaining to me the weird probabilistic notation encountered in Freedman's inequality. And Marie hoped it would be finished soon; PhD students are adorable in their innocence.


[Jan19]
I decided that I want PhD students; then I got the accounting for the project and found out that I'm not that good at math and overlooked quite a chunk of funds. It came to me in my sleep: I need someone to help me with my math, to be exact a stochastic oracle, meaning a postdoc in probability theory... I wonder if I can find one who will speak in rhyme but not riddle.


[Dec18]
This year I'll do December without life-threatening stress levels - couple of reviews, some project writing and slowly getting back into research gear. I also should decide how to spend the rest of the START money... leaning towards PhD students... best to get them before May next year... maybe a concise call would help...


[Nov18]
Habilitation thesis submitted! It's a virtual thesis, though. So if you are interested, but not too desperate to read the acknowledgements or the two-page introduction, I recommend reading all my papers that have not been included in my PhD thesis (arXiv numbers 1008 to 1809). The good news of this and last month, together with moving house, must have addled my brains, because I found myself volunteering to rework the big project proposal.


[Oct18]
Still one third of October left and I've already been so lucky that I'm scared of being run over by a bus! The START-project has been prolonged for 3 years, the nightmare paper has been accepted and the average OMP-paper has been accepted within 1 month and 1 week (this also means that SPL is now my favourite journal).


[Sep18]
For all you OMP lovers out there, who are fed up with being bullied by the BP-mafia for using an 'inferior' algorithm, here is the reason why I'm using OMP.
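And for anybody who has only ever met the mafia side of the argument, a minimal numpy sketch of plain OMP (the generic textbook version, nothing specific to the paper):

```python
import numpy as np

def omp(D, y, S):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then re-fit the coefficients by
    least squares on the selected atoms. D has unit-norm columns."""
    support, residual = [], y.copy()
    for _ in range(S):
        k = int(np.argmax(np.abs(D.T @ residual)))
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# quick check: a 4-sparse signal in a random 64 x 128 dictionary
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x0 = np.zeros(128)
x0[[3, 17, 42, 99]] = rng.standard_normal(4)
x_hat = omp(D, D @ x0, 4)  # recovers x0 for generic draws like this
```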


[Aug18]
More holidays and writing up the OMP results. Let's see if I can keep it to 4 pages, double column of course.


[Jul18]
This time I got it, or at least I produced some theory and some very interesting curves about the success rates of OMP and BP; best of all, the theory predicts the curves. Now I'm going for a lot of medical check-ups, to be in perfect shape for the pending end of the world, and for holidays, to get over the shock of realisation.


[Jun18]
I went to Strobl and, as requested, enchanted people with the dictionary learning - from local to global and adaptive talk, featuring the amazing grapefruit slide to explain regions of convergence. Also, Cristian Rusu and I joined forces on adaptive dictionary learning, so he visited for a week.


[May18]
The START-evaluation report is submitted! Now we have to wait till November to know if we get to keep our jobs, so motivation is soaring. Actually it's not as bad as expected. Things being beyond my control turns out to be quite liberating. So after cleaning my code for the adaptive dictionary learning toolbox, and adding pseudocode to the manifesto, I sharpened my pencil for a bit of dragon-slaying - the still elusive average case analysis of OMP.


[Apr18]
It is 47 pages long or - to sound more attractive - 4 pages per figure. Behold, the new paper on dictionary learning - from local to global and adaptive, which is - I am absolutely unbiased here - pretty cool, because apart from the theory getting closer to what we see in simulations, we can do adaptive dictionary learning, meaning automatic choice of the sparsity level and dictionary size. Toolbox soon; alas, I first have to submit the START-evaluation report. Also, for greater enjoyment, the m-files had better be converted to a humanly readable format.


[Mar18]
Michi submitted his thesis, found himself a real job and before leaving for good in April went on holiday for the rest of the month. Who is going to shock me with his code, block my brain with wooden puzzles and send me brilliant music videos? Best to ignore the conundrum as long as possible, invitations to workshops in Paris and Oberwolfach help.


[Feb18]
We have a date!!! The mid-term report needs to be submitted by the 9th of May. So full throttle till the 9th and lethargy from the 10th on. Luckily the stochastics exercises are finished, and next semester I'm teaching something I actually have prior knowledge of! Just need to convince enough students that channel coding is cool, so cool that in the workshop for high school students they gave me as many positive mentions as they gave to gaming and free food!!!


[Jan18]
The nightmare paper is accepted (pdf), long live the nightmare paper!! Actually, the other nightmare paper has also been improved; if you want to verify that we really have a fast analysis operator learning algorithm, have a look at the updated toolbox. It's also been resubmitted, as a test to see if I will ever do a review for TSP again - I couldn't in good conscience force my reviews on them, since they are probably as bad as my papers.


[Dec17]
Busy doing the millions of simulations required to revamp the masked dictionary learning paper. While doing that, I'm starting to wonder if it's not time to ask the FWF about the prolongation of the project, meaning when the mid-term report is due, meaning when we really should have finished all those papers... on second thought, better to stay blissfully ignorant until next year.


[Nov17]
Say it with an Offspring song: This rejection got me so low... well, it wasn't an outright rejection this time, more of a: we will be happy to reject it once you have wasted 6 more weeks doing millions of additional simulations to improve the manuscript. Sometimes I ask myself: Why don't you get a (real) job? Always finish on a positive note... the website has been successfully transferred!


[Oct17]
Went to Rennes to be the female jury member in Nicolas Keriven's thesis committee, compressed sample results here. Otherwise drinking coffee and forcing myself to write the paper. At least the transfer of the website is in capable hands, meaning not mine.


[Sep17]
As usual, once back from holidays, I realised that the semester is looming and that I made a horrible mistake when teaching was distributed, meaning I was motivated to learn something new, volunteered for something I forgot a long time ago or never actually knew, and now will have to educate myself really fast. Also, they will kill the server hosting this page, so the page will have to be moved, aargh.


[Aug17]
Going mad with finding the correct cut-off for adaptive dictionary learning, i.e. how to distinguish between atoms and rubbish in the case of noisy data. Fortunately, after the engineering/cooking approach failed, going back to theory worked and I have a criterion!! I can't prevent rubbish from entering, but I can weed it out afterwards. Now there is really no reason not to write a paper, but given a week of holidays I'm sure I can come up with something. So off I go!!


[Jul17]
Family trip to FoCM in Barcelona!! While Flavio, Michi and Marie were sweating, my brain finally reached operating temperature. List of achievements: a pretty talk on adaptive dictionary learning, a magic new step size for analysis operator learning, and squeezing a promise of help with random subdictionaries out of Joel (Tropp). Also, cervical supercooling is now the official reason for slow publishing.


[Jun17]
Went to SPARS in Lisbon with Flavio and Michi, where we told people about compressed dictionary learning and analysis operator learning, more details here. On this occasion I witnessed, for the first time in my life, leftover cake at a conference, amazing!!!


[May17]
Tinkering with the theory for the nightmare paper went well, so now I'm back to tweaking millions of screws to make it work in practice. I'm also trying to improve myself and my teaching with a lot of personnel development courses. Let's see if Flavio notices any improvement once he is back from learning audio dictionaries in Rennes.


[Apr17]
Michi hit the arXiv submit button! If you can't bear to leave my homepage to go to arXiv, the analysis operator learning nightmare paper is also available here and we even have a toolbox to play around with!


[Mar17]
The WDI2 workshop took place on the 10th. Nobody complained, so I'd say success! More good news: both submitted SPARS abstracts have been accepted, so I'm going to Lisbon to cheerlead Flavio and Michi with their posters. An eventful month, because Marie finished her MSc thesis, so I now have a second PhD student. As happens in these cases, Michi, the older kid, got jealous and actually started to write the paper; let's see when we finish.


[Feb17]
The nightmare lecture is over. In the end I learned a lot. Other than that - can't remember what I did in February. Well it is a short month, subtract school holidays and it gets even shorter. Probably I was tinkering around with theory for the nightmare paper to motivate my replacement strategy.


[Jan17]
Things I never expected to happen, happened.
1) I've almost survived the nightmare lecture - doing much better at the end talking about PCA and clustering, i.e. stuff I know something about.
2) The project proposal is finished.
3) After wading knee-deep in bugblood and submitting millions of jobs to the cluster, Valeriya and I finished the nightmare paper on dictionary learning from incomplete data, also featuring a toolbox.
To celebrate, the START project page is going online!


[Dec16]
To increase my stress level from unbearable to life-threatening, I wisely decided to contribute to writing another project proposal. Conclusion: I shall learn to say no, I shall learn to say no, I shall learn to say no.


[Nov16]
Marie Pali joined the START project for a 3.5 month internship to have a look if she can bear to do a 3.5 year PhD with us. She is working on extending recovery results for dictionary learning to non-homogeneous coefficient models and providing me with chocolate and cigarettes. And Michi and I are organising the next edition of the Donau-Isar-Inn Workshop (WDI2). Have a look at the workshop page and register!!


[Oct16]
Due to the nightmare lecture, research, paper writing and life in general have been suspended until further notice... well, ok, the first two have been reduced to brainless low-level activities, like feeding simulations to the cluster to make pretty pictures, and the third to sleeping.


[Sep16]
Post-holiday disaster: Michi managed to convince me that our simple analysis operator learning algorithm can't be made to work on real data - too unstable. Glimmer of hope: he managed to do something smart, but more complicated, to make it work. I think he also just wants to avoid writing the paper, which is completely understandable. After going to the Dagstuhl Seminar: Foundations of Unsupervised Learning, my remaining holiday relaxation was efficiently annihilated by the realisation that I have one week to prepare another lecture in the famous series "today I learn, tomorrow I teach".


[Aug16]
The nightmare paper with the fastest (working) dictionary learning algorithm in the west has been accepted! The nightmare paper is dead! Long live the nightmare paper(s)! The only way progress could be slower would be if I started deleting lines - time for holidays. To ensure a smooth transition I first went to the iTWIST workshop where I was allowed to ramble on for 45 minutes.


[Jul16]
Fascinating how time flies when you are writing up results, or rather not writing them up. I am behind all schedules; my excuse is that I was forced to also include a real-data simulation in the nightmare paper. At this point thanks to Deanna Needell for sending me fantastic Fabio, to be found on page 17!


[Jun16]
Praise the lord - Michi has finally acceded to writing up our stuff about analysis operator learning, so there's hope for a paper once I have done some translation from Shakespearean to modern English. Actually, I'm now in triple paper hell, because Valeriya and I solved our last algorithmic issues and have given each other deadlines for the various chapters.


[May16]
I'm writing up the dictionary learning with replacement etc. stuff, I'm not enjoying myself, I'm grumpy, I won't make a longer entry.


[Apr16]
Flavio Teixeira has joined the START project! He will solve all my dictionary learning problems and remind me to water the plant. Also, I have created a monster, uaahahahaha, a dictionary learning algorithm that is fast and does not need to know the sparsity level or the number of atoms. Needless to say, we will never be able to prove anything. However, the really bad part is that it also does sensible things with image patches, meaning real data, so there is no way to avoid writing a paper anymore.


[Mar16]
I corrected and resubmitted the ITKrM (aka nightmare) paper, which now also features pretty pictures. According to plan, all of my time goes into preparing the lecture - well, ok, secretly I'm comparing the ITKrM algorithm to its cheap version, which means watching a lot of jumping points.
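In case you only know ITKrM as 'the nightmare': the name stands for iterative thresholding and K residual means, and one iteration is simple enough to sketch in a few lines of numpy. What follows is my stripped-down paraphrase, simplifications mine - the authoritative version lives in the paper and the toolbox:

```python
import numpy as np

def itkrm_step(D, Y, S):
    """One simplified ITKrM iteration (rough sketch, not the official
    implementation). D: (d, K) dictionary with unit-norm atoms,
    Y: (d, N) training signals, S: sparsity level."""
    d, K = D.shape
    D_new = np.zeros((d, K))
    for y in Y.T:
        ip = D.T @ y
        # thresholding: keep the S atoms with the largest |<d_k, y>|
        I = np.argsort(np.abs(ip))[-S:]
        # residual after projecting y onto the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, I], y, rcond=None)
        res = y - D[:, I] @ coef
        # each selected atom collects its signed residual contribution
        for k in I:
            D_new[:, k] += np.sign(ip[k]) * (res + D[:, k] * ip[k])
    norms = np.linalg.norm(D_new, axis=0)
    return D_new / np.where(norms > 0, norms, 1)  # renormalise atoms
```

Iterating this map, plus safeguards like the replacement strategy for rubbish atoms mentioned elsewhere on this page, is essentially the whole algorithm.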


[Feb16]
Went to the Mathematics of Signal Processing Workshop in Bonn, nice! Then I tried to decide which of my nicely working algorithms should be turned into a paper, or rather which aspect should go into which paper to stay below 30 pages, and how much math I really need to add as opposed to how much math I would like to add. Luckily I then found a suitable excuse not to decide anything in the preparation of the time-frequency lecture :D.


[Jan16]
Utter frustration: I have tons of algorithms that work nicely, but I can't prove it. Also I'm in panic, because I have to get 6 papers accepted in 5 years to keep my job. At the current rate I'm not going to make it.


[Dec15]
Habemus postdocem! If we manage to actually lift the administrative burden, Flavio Teixeira will join the START project in spring. The advertising talk for masked dictionary learning in Berlin sold well - too bad the product is not finished yet. In hindsight the trip counts as pretty disastrous, since I paid the hotel for a single room but, as it turned out a week later, I was definitely not alone in the room when also counting the bed bugs.


[Nov15]
I hate writing papers. It is so much easier to watch the points in my simulations jump around and improve the algorithms than to actually hammer out all the nasty little details of why the masked dictionary learning stuff is locally converging. Obviously, after the happy, enthusiastic phase, Michi and I are now experiencing some setbacks with our analysis operator learning - time to ask Andi to teach us how to use the supercomputer. The good news is that we potentially bagged a researcher at the medical university, who will give us real-world problems and free coffee! Finally, anybody interested in our AOL or masked DL stuff has the chance to see it/us in Berlin!


[Oct15]
Andi Kofler will join the dictionary learning crowd for 4 months with the goals of a) making dictionary learning faster and b) finding out whether he wants to get a real job or do a PhD with me. The codes seminar is organised; luckily, teaching stuff I don't know much about is getting easier every time I do it. Soon I'll be ready for PDEs... hahaha, no way.


[Sep15]
I set a new personal record in the category 'number of means of transport to arrive at a scientific event', that is, 9 to cover the distance Innsbruck - Dagstuhl, where again I was allowed to entertain people with a talk called 'dictionary learning - fast and dirty' (slides, etc.) and on top of that lost my fear of the machine learning community, who successfully convinced me that I only need to be afraid of the computer vision community. Then I went on holidays, and afterwards into a panic about having to organise the seminar on codes.


[Aug15]
I managed to write the final report of the Schroedinger project. Obviously this involved a lot of procrastination. To atone, I thought I'd turn the LaTeX file into a template. So here is an absolutely not official LaTeX template for the FWF final project report (Endbericht) - I cannot disclaim enough, but I hope someone will find it useful. Also I went to a workshop in Oberwolfach, where I was allowed to entertain people with a talk called 'dictionary learning - fast and dirty' (report).


[Jul15]
If I were a rational person, I would start by dealing with the small problems of the project in order to get it prolonged and then go for the big ones. On the other hand, if I were rational, I would have gotten a real job 5 years ago, so I'm going straight for the big bastard problem, i.e. the global minimum. All help welcome. As threatened, I gave a talk about the ITKrM (aka nightmare) paper at SPARS in Cambridge, and I think they plan to put the slides online.


[Jun15]
Started with the Finnish flu and with the decision to stay in Innsbruck (sigh, there goes TU Munich), meaning that this is the first but not the last month of the START project; proper (longer) page later. It also means that I am again looking for a postdoc and 1-2 PhD students. In the meantime Michi Sandbichler has agreed to temporarily act as my first minion, and we will work on analysis operator learning. After running the simulations for the nightmare paper, I decided that the corresponding figures would not be enlightening, because you actually can't test the message of the theorems, so here is the figure-free but didactically improved submitted version - code for itkrm and simulation results on request. Last thing: as part of our ongoing efforts to save the world, Valeriya Naumova and I started to learn dictionaries from incomplete data, and in synthetic experiments it even works :D.


[May15]
Last month of the Schroedinger project. Since my brain is still blocked with the Innsbruck - Munich decision, don't expect mega scientific progress, but there is the chance to see a talk about the first part of the nightmare paper at the minisymposium 'Learning Subspaces' at AIP in Helsinki. If you can't make it to Helsinki, don't despair: you have the chance to see a similar talk at SPARS in Cambridge.


[Apr15]
While Matlab is running simulations to make nice pictures, I made a more readable version of the nightmare paper, second try. For all those out there interested in dictionary learning, the alternative introduction to dictionary learning, written for the bulletin of the Austrian Mathematical Society, is now available and gives an overview of the theory up to 2014.
And finally, a note to all associate editors: I consider my review quota for this year filled and so, until the end of 2015, will not feel guilty about rejecting uninteresting reviews. You might still get lucky with theory of sparse recovery (compressed-sensing-less) or theory of dictionary learning.


[Mar15]
Ohoh, I've been lucky again, so now I have to decide between Innsbruck and TU Munich; that will be a hard one. In any case, even thinking about the decision has to be postponed in order to finish the nightmare paper. Here is a first try, but I'm not 100% convinced by the structure. The inner mathematician and the inner engineer are fighting over where to put the proofs and whether to do some work-intensive simulations to include enlightening figures.


[Feb15]
I've been a good girl, because I've reviewed 3 conference and 3 journal papers (don't tell Gitta or I'll immediately get a new one), because I've written an introduction to dictionary learning for the journal of the Austrian Mathematical Society, and because I've augmented the webpage with a student area and prepared part of the seminar. But actually I've been a bad girl, because I only did the good things in order to avoid finishing the nightmare paper.


[Jan15]
Darkness, depression - it makes you miss the cosy wet island. Luckily there is light at the end of the constant tunnel: I decided them all! Finishing the paper will still be postponed indefinitely, since I have been spammed with reviews.


[Dec14]
Hello world from Innsbruck! I have overcome huge obstacles, i.e. the instructions of the ZID on how to access my webfolder, but finally there it is: a homepage at my institution.
I also decided that new institution, new country means a new news section, and that the old news will go into an old news section. Btw, I realise that this is a somewhat lengthy entry, but that is to force the ugly warning flag of the university down.
Good news: the non-tight dl paper was accepted to JMLR, so soon you can get it for free from there, here, or arXiv. If you prefer a mathy style where the proofs are not banished to the appendix, go here, but beware the bugs.
Bad news: I still haven't decided all the constants.

old news 
