Deep Learning Paper Recap - Redundancy Reduction and Sparse MoEs
This week’s Deep Learning Paper Reviews are Barlow Twins: Self-Supervised Learning via Redundancy Reduction and Sparse MoEs Meet Efficient Ensembles.
This week’s Deep Learning Paper Reviews are Decision Transformer: Reinforcement Learning via Sequence Modeling and SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training.
This week's Deep Learning Paper Review is VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning.
In recent years, the Transducer has become the dominant ASR architecture, surpassing both CTC and LAS. In this article, we will take a closer look at the Transducer architecture and compare it to the more common CTC architecture.
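As a rough illustration of that comparison, here is a minimal sketch of the inputs the two training losses consume. It assumes PyTorch with torchaudio ≥ 0.10 (which provides torchaudio.functional.rnnt_loss), and the tensor shapes are invented for illustration, not taken from the article: CTC scores each acoustic frame independently, while the Transducer's RNN-T loss works over a joint lattice that also conditions on the label history.

```python
# Sketch: CTC vs. Transducer (RNN-T) loss inputs.
# Assumes PyTorch + torchaudio >= 0.10; shapes below are illustrative.
import torch
import torchaudio

T, N, U, C = 50, 4, 10, 32  # frames, batch, target length, vocab size (incl. blank)

# CTC: per-frame log-probs of shape (T, N, C); each frame is scored
# independently, with no conditioning on previously emitted labels.
log_probs = torch.randn(T, N, C).log_softmax(-1)
targets = torch.randint(1, C, (N, U), dtype=torch.int32)  # label 0 reserved for blank
input_lengths = torch.full((N,), T, dtype=torch.int32)
target_lengths = torch.full((N,), U, dtype=torch.int32)
ctc = torch.nn.CTCLoss(blank=0)(
    log_probs, targets.long(), input_lengths.long(), target_lengths.long()
)

# Transducer: the joint network produces a (N, T, U+1, C) lattice, so each
# frame's scores are conditioned on the label history via the prediction network.
logits = torch.randn(N, T, U + 1, C)
rnnt = torchaudio.functional.rnnt_loss(
    logits, targets, input_lengths, target_lengths, blank=0
)

print(ctc.item(), rnnt.item())
```

The extra U+1 axis in the Transducer's logits is where conditioning on previously emitted labels enters, which is the core architectural difference from CTC's frame-independent output.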
In recent years, pretraining has proved to be an essential ingredient for success in the fields of NLP and computer vision. In this week's Deep Learning Paper Review, we look at "Pretraining Representations for Data-Efficient Reinforcement Learning".