Based on Class-Incremental Learning: A Survey

Data

  • Data Replay
    • Direct Replay
    • Generative Replay
  • Data Regularization

The core idea of replay-based incremental learning is to "review the old to learn the new": while training on a new task, a representative subset of old data is retained and used to let the model rehearse previously acquired knowledge. The main questions for this family of methods are therefore which part of the old tasks' data to keep, and how to train the model on the old data jointly with the new data.
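The retain-and-rehearse idea can be sketched with a reservoir-sampled exemplar buffer, a common choice for direct replay. This is a minimal illustration, not any specific paper's implementation; all names are hypothetical:

```python
import random

class ReservoirBuffer:
    """Fixed-size exemplar memory filled by reservoir sampling.

    Every sample seen so far is kept with equal probability, so the
    buffer stays representative of the whole data stream across tasks.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # Replace a stored exemplar with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        """Draw a rehearsal mini-batch of old exemplars."""
        return self.rng.sample(self.data, min(k, len(self.data)))
```

During training, each new-task mini-batch would be mixed with `buffer.sample(k)` so the joint loss covers both old and new classes.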

Direct Replay

Generative Replay

Data Regularization

Model

  • Dynamic Networks
    • Neuron Expansion
    • Backbone Expansion
    • PEFT Expansion
  • Parameter Regularization

Neuron Expansion

Backbone Expansion

DER

Dynamically Expandable Representation for Class Incremental Learning
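DER freezes the feature extractors learned on previous tasks, adds a new trainable extractor for each incoming task, and feeds the concatenated features to a unified classifier. A toy sketch of the feature-concatenation step, with plain functions standing in for frozen backbones (purely illustrative, not the paper's code):

```python
def der_features(old_extractors, new_extractor, x):
    """Concatenate features from frozen old backbones and the new one.

    In a real implementation old_extractors would be frozen networks
    (no gradients); here they are plain functions for illustration.
    """
    feats = []
    for f in old_extractors:
        feats.extend(f(x))   # frozen: preserves old-task representations
    feats.extend(new_extractor(x))  # trainable: learns the new task
    return feats
```

The unified classifier is then retrained on the growing concatenated feature vector, which is why DER's memory cost grows with the number of tasks (FOSTER's compression stage addresses exactly this).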

FOSTER

Feature Boosting and Compression for Class-Incremental Learning

PEFT Expansion

Prompt

Learning to Prompt for Continual Learning

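L2P maintains a pool of learnable prompts, each paired with a learnable key; a query feature from the frozen pre-trained model (e.g., its [CLS] embedding) selects the top-k best-matching keys, and the corresponding prompts are prepended to the input tokens. A sketch of the key–query matching with pure-Python vectors (illustrative, not the official implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_prompts(query, keys, top_k=2):
    """Return indices of the top_k prompt keys closest to the query.

    In L2P the selected prompts are then prepended to the frozen
    transformer's input sequence; only prompts and keys are trained.
    """
    ranked = sorted(range(len(keys)),
                    key=lambda i: cosine(query, keys[i]),
                    reverse=True)
    return ranked[:top_k]
```

Because the backbone stays frozen and only the small prompt pool is updated, this is rehearsal-free: no old data needs to be stored.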

DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning


Generating Instance-level Prompts for Rehearsal-free Continual Learning


Steering Prototype with Prompt-tuning for Rehearsal-free Continual Learning


CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning


S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning


Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning


Adapter

Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need


Parameter Regularization

Algorithm

  • Knowledge Distillation
    • Logit Distillation
    • Feature Distillation
    • Relational Distillation
  • Model Rectify
    • Feature Rectify
    • Logit Rectify
    • Weight Rectify

The main idea of regularization-based incremental learning is to protect old knowledge from being overwritten by imposing constraints on the new task's loss function. These methods usually need no old data for rehearsal, which makes them the most elegant family of incremental learning methods.
By introducing an extra loss term that corrects the gradient, regularization-based methods preserve previously learned knowledge and offer a way to mitigate catastrophic forgetting under specific conditions. However, even though modern deep models are over-parameterized, model capacity is ultimately finite, so a trade-off between old-task and new-task performance is usually still unavoidable.
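A classic instantiation of the "extra loss term" is an EWC-style quadratic penalty that anchors parameters in proportion to their importance for old tasks. A minimal sketch, with plain floats standing in for parameter tensors (names are illustrative):

```python
def ewc_penalty(params, old_params, fisher, lam):
    """EWC-style regularizer added to the new task's loss.

    fisher[i] approximates the i-th parameter's importance for old
    tasks (diagonal Fisher information, estimated after each task);
    lam trades off stability (old tasks) against plasticity (new task).
    """
    return 0.5 * lam * sum(
        f * (p - p0) ** 2
        for p, p0, f in zip(params, old_params, fisher)
    )
```

Parameters with near-zero importance can move freely toward the new task's optimum, while important ones are held close to their old values, which is exactly the stability–plasticity trade-off described above.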

Logit Distillation

LwF

Learning without Forgetting

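LwF stores no old data; instead, the previous model (frozen) is run on the new task's inputs, and its temperature-softened outputs serve as soft targets for the current model's old-class logits. A minimal sketch of that distillation term with plain lists instead of tensors (illustrative, not the paper's code):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def lwf_distillation_loss(new_logits, old_logits, T=2.0):
    """Cross-entropy between the frozen old model's softened outputs
    (targets) and the current model's softened outputs on old classes."""
    targets = softmax(old_logits, T)
    probs = softmax(new_logits, T)
    return -sum(t * math.log(p) for t, p in zip(targets, probs))
```

The total training objective is then the usual cross-entropy on new classes plus this distillation term, which penalizes the current model for drifting away from the old model's predictions.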

Feature Distillation

Relational Distillation

Feature Rectify

Logit Rectify

Weight Rectify

Survey

Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models

Do large models really forget during continual learning? Revisiting incremental learning based on pre-trained language models.