Optimization using Adam on Sparse Tensors
Adaptive optimization methods, such as Adam and Adagrad, maintain running statistics of the variables and gradients over time (e.g., moments) that affect the effective learning rate. These statistics can be inaccurate when working with sparse tensors, where most of the elements are zero or near zero. We investigated the effects…
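Below is a minimal NumPy sketch (not the code from this post) of Adam's per-parameter moment updates, illustrating how mostly-zero gradients influence the statistics; the function name `adam_step` and the toy sparsity pattern are assumptions for illustration only.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # First and second moment estimates (exponential moving averages).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized moments.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# A sparse gradient: most entries are zero on every step, only a few are non-zero.
rng = np.random.default_rng(0)
theta = rng.normal(size=1000)
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 101):
    grad = np.zeros_like(theta)
    active = rng.choice(theta.size, size=10, replace=False)  # ~1% non-zero entries
    grad[active] = rng.normal(size=active.size)
    theta, m, v = adam_step(theta, grad, m, v, t)

# Coordinates that rarely receive a non-zero gradient still have their moments
# decayed by the zero updates, so m and v mostly reflect the zeros rather than
# the occasional informative gradient.
print("mean |m| over all coordinates:", np.abs(m).mean())
```

Note that the dense update above touches every coordinate on every step; sparse-aware variants (for example, PyTorch's `torch.optim.SparseAdam`) instead update the moments only for coordinates that actually receive a gradient.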