Thursday, 22nd of April 2021, 12:00 – 1:00

Conflicting Bundles: Adapting Architectures Towards the Improved Training of Deep Neural Networks

Venue: 
Online
Please use this link & sign in with your real name

Lecturer:
David Peer
Researcher at IIS, University of Innsbruck

Abstract: 

Designing neural network architectures is a challenging task, and knowing which specific layers of a model must be adapted to improve performance is almost a mystery. In this talk, I will describe a novel method that we developed to identify layers that decrease the test accuracy of trained models. More precisely, we identify those layers that worsen performance because they produce conflicting training bundles. I will show theoretically and empirically why the occurrence of conflicting bundles during training decreases the accuracy of neural networks. Based on these findings, I will also describe a novel neural-architecture-search algorithm that we introduced to automatically remove performance-decreasing layers already at the beginning of training. Finally, I will show that the same method makes it possible to remove around 60% of the layers of an already trained residual neural network with no significant increase in the test error.
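The core idea can be illustrated with a minimal sketch: at a given layer, samples whose activations have become numerically indistinguishable form a "bundle", and a bundle is conflicting when its members carry different labels, since no later layer can separate them. The function below is a hypothetical illustration of that idea (names, tolerance, and the toy data are assumptions, not the authors' reference implementation):

```python
import numpy as np

def conflicting_bundles(activations, labels, tol=1e-6):
    """Group samples whose activation vectors agree within `tol` into
    bundles; return the bundles that mix more than one label.
    Hypothetical sketch of the idea from the abstract, not the authors'
    reference implementation."""
    n = len(activations)
    assigned = np.full(n, -1)
    bundles = []
    for i in range(n):
        if assigned[i] >= 0:
            continue
        # all not-yet-assigned samples within tol of sample i
        members = np.where(
            np.max(np.abs(activations - activations[i]), axis=1) <= tol
        )[0]
        members = members[assigned[members] < 0]
        assigned[members] = len(bundles)
        bundles.append(members)
    # a bundle conflicts if its members carry more than one distinct label
    return [b for b in bundles if len(set(labels[b])) > 1]

# Toy example: samples 0 and 1 collapse to the same activation vector but
# have different labels, so they form one conflicting bundle.
acts = np.array([[0.5, 0.5], [0.5, 0.5], [1.0, 0.0]])
labels = np.array([0, 1, 0])
print(len(conflicting_bundles(acts, labels)))  # prints 1
```

A layer whose output produces many such conflicting bundles is, in this view, a candidate for removal or adaptation.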


References
Peer, D., Stabinger, S., & Rodriguez-Sanchez, A. (2021). Conflicting Bundles: Adapting Architectures Towards the Improved Training of Deep Neural Networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 256-265).

Peer, D., Stabinger, S., & Rodriguez-Sanchez, A. (2021). Auto-tuning of Deep Neural Networks by Conflicting Layer Removal. arXiv preprint arXiv:2103.04331.

