
Controlled Sparsity via Constrained Optimization


Abstract:

Penalty-based regularization is extremely popular in ML. However, this powerful technique can require an expensive trial-and-error process for tuning the penalty coefficient. In this paper, we take sparse training of deep neural networks as a case study to illustrate the advantages of a constrained optimization approach: improved tunability, and a more interpretable hyperparameter. Our proposed technique (i) has a negligible computational overhead, (ii) reliably achieves arbitrary sparsity targets "in one shot" while retaining high accuracy, and (iii) scales successfully to large residual models and datasets.
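
To make the contrast concrete (the notation below is illustrative, not taken verbatim from the paper): a penalty method folds the sparsity term into the training objective with a coefficient lambda whose effect on the final sparsity is hard to predict, whereas the constrained formulation keeps the training loss as the objective and states the desired sparsity level directly as a constraint.

% Penalty formulation: lambda must be tuned by trial and error
\min_{\theta} \; L(\theta) + \lambda \, \|\theta\|_0

% Constrained formulation: epsilon is the target density, an interpretable quantity
\min_{\theta} \; L(\theta) \quad \text{s.t.} \quad \frac{\|\theta\|_0}{\dim(\theta)} \le \epsilon

Here epsilon is simply the maximum fraction of non-zero weights one is willing to keep, which is the kind of directly interpretable hyperparameter the abstract refers to.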

In this talk, I will also give a brief introduction to Cooper, a general-purpose, deep-learning-first library for constrained optimization in PyTorch. Cooper was developed as part of the research direction above, and was born out of the need to handle constrained optimization problems for which the loss or constraints may not be "nicely behaved" or "theoretically tractable", as is often the case in DL.
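
For readers who want a feel for the mechanics, below is a minimal, self-contained PyTorch sketch of the underlying idea: the constrained problem is turned into a Lagrangian min-max game, with gradient descent on the model parameters and projected gradient ascent on a non-negative multiplier. Everything in the snippet (the toy loss, the smooth density surrogate, the 10% target) is a made-up placeholder, and it deliberately does not use Cooper's actual API.

import torch

# Toy parameters and a scalar Lagrange multiplier for one inequality constraint.
theta = torch.randn(100, requires_grad=True)
lmbda = torch.zeros((), requires_grad=True)

primal_opt = torch.optim.SGD([theta], lr=1e-2)                # descend on theta
dual_opt = torch.optim.SGD([lmbda], lr=1e-2, maximize=True)   # ascend on lambda

target_density = 0.10  # interpretable target: keep at most ~10% of the weights "active"

def loss(theta):
    # Stand-in for the task loss of the network being trained.
    return ((theta - 1.0) ** 2).mean()

def density(theta):
    # Smooth placeholder for the fraction of non-zero weights; a real sparse-training
    # setup would measure sparsity differently (e.g. via stochastic gates).
    return torch.sigmoid(theta).mean()

for step in range(1000):
    primal_opt.zero_grad()
    dual_opt.zero_grad()

    defect = density(theta) - target_density   # positive when the constraint is violated
    lagrangian = loss(theta) + lmbda * defect

    lagrangian.backward()
    primal_opt.step()                          # gradient descent step on theta
    dual_opt.step()                            # gradient ascent step on lambda
    with torch.no_grad():
        lmbda.clamp_(min=0.0)                  # inequality multipliers stay non-negative

The multiplier grows while the density target is violated, increasing the pressure towards sparsity, and relaxes back towards zero once the constraint is satisfied, which is what allows a single run to hit the target "in one shot" instead of sweeping over penalty coefficients.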

