Reading List

Fundamentals (Required Reading)

  1. Complete the fundamentals of Python and PyTorch; work through tutorials that include code
  2. From Word Embedding to BERT: The Evolution of Pre-training in NLP (https://zhuanlan.zhihu.com/p/49271699)
  3. Kaiming He's first MIT lecture: Deep Learning Bootcamp
  4. Kaiming He's 2023 talk at the Future Scientists Conference
  5. Understand the three elements in Chapter 1 of Li Hang's Statistical Learning Methods (《统计学习方法》): model, strategy, and algorithm
  6. Neural Networks and Deep Learning (《神经网络与深度学习》) by Prof. Xipeng Qiu, Fudan University

Note: these classics reward repeated reading. Revisit them often to refresh what you know and discover something new.
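Items 1 and 5 above pair naturally: the three elements from Chapter 1 of Statistical Learning Methods (model, strategy, algorithm) can be exercised in a few lines of Python. A minimal sketch using NumPy and a toy linear-regression setup (the data and hyperparameters here are purely illustrative):

```python
import numpy as np

# The three elements from Statistical Learning Methods, Chapter 1,
# illustrated with toy linear regression:
#   model:     f(x) = w * x + b        (the hypothesis space)
#   strategy:  empirical risk = mean squared error
#   algorithm: gradient descent on (w, b)

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # noisy line

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b                   # model: make predictions
    err = pred - y
    loss = np.mean(err ** 2)           # strategy: empirical risk
    w -= lr * np.mean(2 * err * x)     # algorithm: gradient step on w
    b -= lr * np.mean(2 * err)         # algorithm: gradient step on b

print(f"w={w:.2f}, b={b:.2f}")  # values near the true 3.0 and 0.5
```

The same loop maps one-to-one onto a PyTorch training loop (forward pass, loss, `backward()`, optimizer step), which is a good first exercise for item 1.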

Classic Deep Learning Papers (Required Reading)

  1. Deep Residual Learning for Image Recognition
  2. Efficient Estimation of Word Representations in Vector Space
  3. Focal Loss for Dense Object Detection
  4. Momentum Contrast for Unsupervised Visual Representation Learning
  5. Non-local Neural Networks
  6. Attention Is All You Need
  7. Learning Transferable Visual Models From Natural Language Supervision
  8. ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision
  9. Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
  10. Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering
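Several of the papers above (notably 6, Attention Is All You Need) revolve around a single operation: scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch of just that operation can help before reading the full paper (the shapes and toy inputs are illustrative, not from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation of 'Attention Is All You Need':
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                            # weighted sum of values

# toy example: 3 queries/keys/values of dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per query
```

The full Transformer then wraps this in learned projections, multiple heads, and residual connections, but this one function is the kernel to understand first.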

Hands-On Practice

  1. Read and understand a paper's code, run it successfully on Colab, and try to reproduce the experimental results reported in the paper

推荐课程

  1. Carnegie Mellon University course 11777: Multimodal Machine Learning
  2. Carnegie Mellon University course 11776: Multimodal Affective Computing

Revisit Often

  1. Find a gap + fill the gap + tell the story (spot an open problem, solve it, then write up a convincing narrative)

Optional Papers

Multimodal Affective Computing

  1. MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis
  2. Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis
  3. CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotations of Modality
  4. Awesome Multimodal Sentiment Analysis (curated resource list)

Long-Tailed Learning

  1. Feature Space Augmentation for Long-Tailed Data
  2. Long-Tailed Recognition by Routing Diverse Distribution-Aware Experts
  3. BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition

Continual Learning

  1. iCaRL: Incremental Classifier and Representation Learning
  2. Gradient Episodic Memory for Continual Learning
  3. Large Scale Incremental Learning

Semi-Supervised Learning

  1. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
  2. MixMatch: A Holistic Approach to Semi-Supervised Learning
  3. Mean Teachers Are Better Role Models: Weight-Averaged Consistency Targets Improve Semi-Supervised Deep Learning Results

Learning with Noisy Labels

  1. Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels
  2. Meta-Weight-Net: Learning an Explicit Mapping for Sample Weighting
  3. DivideMix: Learning with Noisy Labels as Semi-Supervised Learning