@omarsar0
🎓 TinyML and Efficient Deep Learning Computing

You can now catch up on the efficient ML space with this new set of YouTube lectures by MIT. Topics include model compression, pruning, quantization, neural architecture search, distributed training, data/model parallelism, gradient compression, and on-device fine-tuning. Looking ahead, the importance of this topic is becoming very clear. I might summarize a few of these lectures and share my takeaways/notes in an upcoming post. https://t.co/Tu6k4WFurd
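To give a flavor of the pruning topic from the lectures, here is a minimal sketch of magnitude-based weight pruning using NumPy. The function name and threshold strategy are illustrative assumptions, not taken from the course material: weights with the smallest absolute values are zeroed out until a target sparsity is reached.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with the smallest magnitudes.

    This is the simplest form of unstructured pruning: rank weights by
    absolute value and remove the bottom fraction. (Illustrative sketch,
    not the exact method used in the MIT lectures.)
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy example: prune half of a 2x2 weight matrix
w = np.array([[0.1, -0.8], [0.05, 0.9]])
pruned = magnitude_prune(w, sparsity=0.5)
# The two smallest-magnitude weights (0.1 and 0.05) are zeroed:
# [[ 0.  -0.8]
#  [ 0.   0.9]]
```

In practice this is usually followed by fine-tuning to recover accuracy, since zeroing weights perturbs the network's function.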