

Contact:

Karin Schnass
University of Innsbruck
Dep. of Mathematics
Technikerstraße 13
6020 Innsbruck
Austria

karin.schnass[?]uibk.ac.at
Tel: +43 512 507 53881



Optimisation Principles, Models and Algorithms for Dictionary Learning

Hello creative brains!

This is the homepage of FWF START project Y760, which ran from June 2015 to May 2023.
We were mainly studying theoretical dictionary learning, with the occasional foray into signal processing, sparse approximation or random matrices.
If you want to find out more, have a look at the research page or read some of our papers and play with the code.

If you feel that you would have enjoyed being part of the project, check out the page of the mathematical data science group, which evolved from it.

News


[May23]
Last month of the project - we haven't solved all problems in dictionary learning, but we have made a big dent! So a big thanks to the whole team, especially Michi, Andi, Flavio, Marie and Simon - it's been a pleasure to break our heads together.
Finally, if you want to hear about the coolest results of the project, you have the chance to do so at FoCM in Paris in June and at the Meeting of the Austrian Mathematical Society in Graz in September.


[Apr23]
It's Simon's last month in the project, but will it also be his last month in academia? If, after reading his latest preprint on the convergence of MOD and ODL (aKSVD) for dictionary learning, you think 'no way', I will happily forward any job offers.


[Mar23]
Praise Urania and all gods in charge of math, the nightmare paper has been accepted to Information and Inference. If you want to learn a dictionary but don't know the size or sparsity level, give it a try. If you want to see some nice convergence results, give it a try. If you want to know why your own algorithm got stuck, and want to unstick it, give it a try!


[Dec22]
Congratulations to Dr. Ruetz, the PhD student formerly known as Simon!
Also, right on schedule, we have a new preprint collecting all you ever wanted to know about the hottest topic of the sixties: inclusion probabilities in rejective sampling.


[Sep22]
Holidays have been taken, fall semester preparations have started with unprecedented chaos, and there was the chance to get a preview of the new results for MOD and K-SVD at ICCHA2022.


[Aug22]
Simon has handed in his thesis! Congratulations!! Now we can both collapse, go on holidays in September, play a round of tennis in October and start turning the chapters into some very nice papers in November.


[June22]
Simon has a new preprint out on adapted variable density subsampling for compressed sensing, which will tell you how to give your CS a simple boost with a bit of statistical info about your data! Find out more by watching his talk.


[May22]
The nightmare paper has left the pipeline with favourable reviews, meaning it could be only a couple more years until publication.


[Mar22]
Congratulations to Elli and welcome to the world Emma Christina - born on pi day - how cool is that!


[Feb22]
Marie is leaving us at the end of the month. Fortunately she is not going far, so we can still meet her for lunch and coffee near the city centre.


[Dec21]
Karin is in mourning, because she caught corona and so can only participate virtually in the first conference in 2 years.


[Aug21]
Congratulations to Dr. Pali, the scientist formerly known as Marie!!


[May21]
The random subdictionary paper has been accepted. Normally the acceptance of a paper means the death and resurrection of the nightmare paper but, alas, it is still firmly stuck in the pipeline.


[Apr21]
Marie has handed in her thesis!! Congratulations!!
Also we have revised the manifesto and received fantastically thorough reviews for the random subdictionary paper.


[Mar21]
Karin said yes once too often and landed herself with the job of being responsible for the math undergraduate programmes. Also, she will be the proud new co-organiser of the 1W-MINDS Seminar from July on. Marie is crawling towards the finishing line of her PhD, Simon is wrestling several monsters in dictionary learning all at the same time, and Elli is suffering in silence.


[Feb21]
Congratulations to Marie and Andi, their paper on dictionary learning for adaptive MRI is listed as editor's choice! For those interested in the conditioning of submatrices, there is a new talk from the CodEX Seminar in Colorado available on YouTube.


[Jan21]
We have been upgraded to research group, meaning we are now listed on the math department's homepage under the fancy new label mathematical data science. Thanks to the Applied Math Group for hosting us until now!


[Dec20]
We have a new paper. If you want to know about the conditioning of submatrices when you don't draw the atoms uniformly at random, you should definitely take a look. Looking on the bright side of corona, Flavio could join the Christmas beer & pub quiz, which replaced the more traditional Christmas pasta. Again Andi provided the highlight - this time in the form of his very special haircut.


[Nov20]
We have a new PhD student! Elli Schneckenreiter, MSc in teaching math and chemistry, joined us at the beginning of this month. In our defense, we did everything humanly possible to discourage her and thus not keep her from saving our schools. It's not our fault they closed 2 weeks later. For more lamentation over closed schools, home schooling etc. see the March entry. Instead, we will share Andi Kofler's new-found wisdom, which should have a major impact on the world of mathematics: good coffee should not be 100% arabica but should also contain robusta!!


[Oct20]
Marie's nightmare paper has been accepted, long live Marie's new nightmare paper aka the manifesto.


[Jul20]
Collective breaking of heads over OMP. Outcome: the average OMP paper Karin was so happy about is not wrong but definitely overkill, at least in the noiseless case. Seems that all you need to perfectly describe the shape of the experimental curve is a random support, decaying coefficients and the correct p-norm in a matrix-vector bound. Random signs completely unnecessary!
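For illustration only - this is not the project's code - here is a minimal numpy sketch of the kind of noiseless average-case OMP experiment described above: signals with random supports and decaying coefficients, no random signs. The dictionary size, decay factor and trial count are made-up values.

    import numpy as np

    rng = np.random.default_rng(0)

    def omp(A, y, S):
        # plain OMP: greedily pick S atoms by maximal correlation with the residual
        support = []
        residual = y.copy()
        for _ in range(S):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        return set(support)

    d, K, trials = 64, 128, 200
    A = rng.standard_normal((d, K))
    A /= np.linalg.norm(A, axis=0)                    # unit-norm atoms

    for S in (4, 8, 12, 16, 20):
        hits = 0
        for _ in range(trials):
            I = rng.choice(K, size=S, replace=False)  # random support
            c = 0.9 ** np.arange(S)                   # decaying coefficients, no random signs
            hits += omp(A, A[:, I] @ c, S) == set(I)  # exact support recovery?
        print(f"S={S:2d}  success rate {hits / trials:.2f}")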


[Jun20]
The destination of this year's family trip has been decided. The exotic place of choice is Graz, where we will crash the Austrian Stochastics Days, in the hope of finding someone to solve our probability problems. The victim we have in mind is Joscha Prochno and the strategy is inspired by Jurassic Park.


[Mar20]
Corona - home office - home schooling - distance teaching. The status of our research can be summarised as follows: it harms no one.


[Dec19]
Marie is fed up with real data and keen to think about theory for a cute little greedy algorithm we (re)discovered and dubbed adaptive pursuit, because it works without knowledge of the sparsity or noise level. Meanwhile Simon and Karin are exploding their heads over decoupling and the precision of Joel's matrix Bernstein, Chernoff and Freedman inequalities.


[Nov19]
With the minor additional assumption of drunk zombie behaviour, Escape from New York is analysed (check it out!!) and Achilles is waiting for job offers from Hollywood.


[Oct19]
Despair in pairs - Simon and Karin are trying to prove something obvious, Flavio and Karin are revising the compressed dictionary learning paper and Marie and Andi are facing the challenge of presenting their results in a way that is accepted by physicians and mathematicians alike.


[Sep19]
If you had wanted to know more about dictionary learning, the Applied Analysis Summer School in Chemnitz would have been the perfect opportunity. Maybe we should have mentioned that before. Marie and Andi are applying ITKrM and its adaptive version to the reconstruction of MRI images from undersampled measurements (it's working!!!!), Simon is reading, reading, reading and Achilles is analysing the mathematics behind Escape from New York.


[Aug19]
August - weirdly enough, nothing extraordinary happened, unless you count convincing Marie to take 2 weeks of holidays.


[Jul19]
Family trip to SPARS in Toulouse, with three presentations! The week after, Yong Sheng Soh visited us to explain his manifesto. We not only learned a lot but got on so well that we will have a shot at analysing the MOD algorithm together.


[Jun19]
Papers are coming home to be revised and, in the case of the manifesto, being pimped with the improved analysis resulting from Marie's suffering.


[May19]
Marie started working with Andi on MRI restoration and thus swapped her source of frustration from theory to programming, while Karin got confused over her scientific identity (mathematician, computer scientist, engineer or maybe statistician?) at the very enjoyable Oberwolfach Workshop Statistical and Computational Aspects of Learning with Complex Structure. We have also doubled the group size, since Simon Ruetz joined us as a PhD student and Achilles Tzioufas as a postdoc (the stochastic oracle).


[Apr19]
Karin gave the first talk - shamelessly plagiarising Romeo and Juliet - about average OMP results at the Workshop on Mathematical Signal and Image Analysis in Raitenhaslach and found out how liberally Bavarians define a cycling path (section Tittmoning - Raitenhaslach).


[Mar19]
Habemus PhD-studentem, Simon Ruetz will join our team in mid-May. Freedman's inequality did its job; unfortunately, some other estimate now turns out to be too crude, aargh. Luckily, the algorithmic side of dictionary learning is also turning out to be fascinatingly messy. In particular, we are having a hard time transferring adaptivity from ITKrM to K-SVD; first results indicate that either OMP is a little too greedy or image data is a little too heterogeneously sparse.


[Feb19]
Seems that Azuma's inequality is not strong enough for our needs. Still lacking our stochastic oracle, we have therefore pressured Alex Steinicke into explaining to us the weird probabilistic notation encountered in Freedman's inequality. Marie is falling into despair over the never-ending problems, while Karin, who is more experienced with the research hydra, is happy because her SPARS19 submission got number 101, so she thinks she was the first.


[Jan19]
Flavio left the project to keep his residence permit, so Marie, Karin and the plant are huddling together for warmth. To feel less lonely we are looking for PhD students.
Addendum: Karin found out just how bad she is at math and how much money she overlooked. To help her with the math we are therefore looking for a postdoc in probability theory - fondly to be known as the stochastic oracle (preferably not speaking in riddles).


[Dec18]
The START-Christmas-pasta-lunch this year included a clothesline of tagliatelle and several werewolf casualties => everybody happy. Science-wise, Flavio is discovering the beauty of theory to make algorithms work, while Marie and Karin are cutting down the 40-page mess to something readable.


[Nov18]
Flavio is working on a version of the iterative thresholding and K-residual means algorithm for non-negative matrix factorisation, while Marie is increasing her modular mess (last count: 40 pages). Karin is in post-habilitation-thesis-submission bliss, which led to her volunteering for a dirty, project-reworking job; nothing useful can be expected from her this month.


[Oct18]
The START project has been prolonged for 3 years!!! This means that not only do we all get to keep our jobs but also that we can be more. If you love dictionary learning or sparse approximation as much as we do, contact us for PhDs, postdocs or research visits!!
Non-trivial side note: the nightmare paper and the average OMP paper have been accepted.


[Sep18]
For all you OMP lovers out there, who are fed up with being bullied by the BP-mafia for using an 'inferior' algorithm, here is the reason why I'm using OMP.


[Aug18]
Marie has turned some paper mess into LaTeX mess and is now trying the modular approach, using lemmata and such stuff, to convert it to a humanly readable format. Hours of her life get wasted on deciding constants. Karin is fighting to keep the OMP results to 4 pages and Flavio is on fire, meaning he is visiting the FLAME project at ARI Vienna to do pitch identification.


[Jul18]
Flavio hit the submit button on the compressed dictionary learning paper. Marie is trying to convert her paper mess into LaTeX mess, to then convert into a pretty paper. Karin produced some theory for OMP and some very interesting curves about the success rates of OMP and BP; best of all, the theory predicts the curves. Downside: she now insists on being addressed as Karin Dragon Slayer.


[Jun18]
Karin went to Strobl and, as requested, enchanted people with the dictionary learning - from local to global and adaptive talk, featuring the amazing grapefruit slide to explain regions of convergence. Also Cristian Rusu visited for a week to join forces on adaptive dictionary learning.


[May18]
The mid-evaluation report is submitted! Now we have to wait till November to know if we get to keep our jobs, so motivation is soaring. Actually it's not as bad as expected. Things being beyond our control turns out to be quite liberating. Flavio found a bug and now all simulation results are much better, and Marie is writing up a first version of the results for the approximate sparsity level. Karin, after cleaning her code for the adaptive dictionary learning toolbox and adding pseudocode to the manifesto, has sharpened her pencil to go after her favourite dragon - the elusive average case analysis of OMP.


[Apr18]
It is 47 pages long or - to sound more attractive - 4 pages per figure. Behold, the new paper on dictionary learning - from local to global and adaptive, which is - we are absolutely unbiased here - pretty cool, because apart from the theory getting closer to what we see in simulations, we can do adaptive dictionary learning, meaning automatic choice of the sparsity level and dictionary size. Toolbox coming soon; alas, we first have to submit the evaluation report. Also, for greater enjoyment, the m-files should first be converted to a humanly readable format.
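For a rough idea of what the underlying algorithm does, here is a minimal numpy sketch of one plain (non-adaptive) ITKrM iteration - thresholding followed by K residual means - written from the description in the papers rather than taken from the toolbox; details such as the exact sign convention may differ, and the adaptive version, which additionally adjusts sparsity level and dictionary size, is not shown.

    import numpy as np

    def itkrm_step(D, Y, S):
        # one ITKrM-style iteration: threshold each signal to its S best atoms,
        # then update every atom as a signed mean of residuals plus its own contribution
        d, K = D.shape
        corr = D.T @ Y                                    # inner products <d_k, y_n>
        supports = np.argsort(-np.abs(corr), axis=0)[:S]  # S largest per signal
        D_new = np.zeros_like(D)
        for n in range(Y.shape[1]):
            I, y = supports[:, n], Y[:, n]
            coef, *_ = np.linalg.lstsq(D[:, I], y, rcond=None)
            res = y - D[:, I] @ coef                      # residual after projection onto span(D_I)
            for k in I:
                D_new[:, k] += np.sign(D[:, k] @ y) * (res + D[:, k] * (D[:, k] @ y))
        return D_new / np.maximum(np.linalg.norm(D_new, axis=0), 1e-12)  # renormalise atoms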


[Mar18]
Michi submitted his thesis, found a job and went on holiday before leaving us for good in April. The others are trying to cope, each in their own way: Marie near tears, getting a new tattoo; Karin in denial, going to Paris and Oberwolfach; and Flavio, in compensation, doubling his job submissions to the computing cluster.


[Feb18]
We have a date!!! The mid-term report needs to be submitted by the 9th of May. So full throttle till the 9th and lethargy from the 10th on.


[Jan18]
The nightmare paper is accepted (pdf), long live the nightmare paper!! Actually the other nightmare paper has also been improved; if you want to verify that we really have a fast analysis operator learning algorithm, have a look at the updated toolbox. It's also been resubmitted as a test to see if we will ever do a review for TSP again - we couldn't in good conscience force our reviews on them, since they are probably as bad as our papers.


[Dec17]
Except for Marie, everybody is busy doing the millions of simulations. While revamping the masked dictionary learning paper, Karin is starting to wonder if it's not time to ask the FWF about the prolongation of the project - as in continued salaries for all of us, but also meaning the mid-term report is due, meaning we should really finish all those papers... on second thought, better to stay blissfully ignorant until next year.


[Nov17]
Family trouble: Flavio is blocking the cluster with simulations and Michi is pissed off because he just wants to run the last simulations to finish the paper in November. Karin, in her endless wisdom, is calming the waves, saying don't worry, finishing in December is already ambitious and in any case nothing's going to happen over Christmas.


[Oct17]
Morale is low, Michi only managed to nail down convergence to the generating operator with some help from the oracle support fairy. Disgusted with analysis operators, he went to cry with Papa Felix (Krahmer). Karin is finding out how much she does not know about probability, Flavio is trying to make horribly complicated results look simple and Marie is suffering her first PhD year in silence.


[Sep17]
Nice: the better step size for learning analysis operators actually leads to better results. OK, smarter algorithms lead to even better results even faster, but if you insist on learning analysis operators online, that can now be done.


[Aug17]
In between holidays, the group is driven mad by Karin trying to find the correct cut-off for adaptive dictionary learning, i.e. how to distinguish between atoms and rubbish in the case of noisy data. Fortunately, after the engineering/cooking approach failed, going back to theory worked, i.e. we have a criterion and sanity is preserved!! We are also implementing the better step size for analysis operator learning and writing up the big paper on compressed dictionary learning - slowly (compare July entry).


[Jul17]
Family trip to FoCM in Barcelona!! While Flavio, Michi and Marie were sweating, Karin's brain finally reached operating temperature. List of achievements: a pretty talk on adaptive dictionary learning, a magic new step size for analysis operator learning, squeezing a promise of help with random subdictionaries out of Joel (Tropp), and realising yet again that in most talks we know as much as Jon Snow. Also cervical supercooling is now the official reason for slow publishing.


[Jun17]
We (Karin, Flavio and Michi) went to SPARS in Lisbon to entertain people with compressed dictionary learning and analysis operator learning, more details here. On this occasion we also witnessed for the first time in our lives left-over cake at a conference, amazing!!!


[May17]
Tinkering with theory for the nightmare paper went well, so Karin is back to tweaking millions of screws to make it work in practice. Flavio is off to Rennes learning audio dictionaries. Michi is praying for paper acceptance and dreaming of the end of his PhD, while Marie is thoroughly enjoying the misery and self-doubt of her first PhD year.


[Apr17]
Michi hit the arXiv submit button! If you can't bear to leave this page to go to arXiv, the analysis operator learning nightmare paper is also available here and we even have a toolbox to play around with!


[Mar17]
We hosted the WDI2 workshop; nobody complained, so I'd say success! More good news: both our SPARS abstracts were accepted, so we (Flavio, Michi, Karin) are going to Lisbon. Eventful month - congratulations to Marie for successfully defending her MSc thesis and officially starting the PhD!! Useful side effect: Michi, the older kid, got jealous and started to actually write the paper; let's see when we finish.


[Feb17]
February - Karin tinkering with theory for her nightmare paper, Flavio exploding the cluster with simulations, Michi on holidays in Cuba and Marie putting the finishing touches on her MSc thesis - nothing much happening - well it is a short month.


[Jan17]
Hello World!!!
To celebrate the acceptance of the first co-funded paper [pdf], the submission of the first fully funded paper [pdf], the quasi-completeness of the team and the organisation of our first workshop [wdi2]... voilà, a website!!!

 
