Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression

Published in 2nd Sparse Neural Networks Workshop at International Conference on Machine Learning (ICML), 2022