
paper, blog

Details

Preliminaries

Replay methods : save a small subset (about 1%) of the old data in an external memory and replay it while learning new tasks. Pseudo-rehearsal instead generates historical samples with a generative model. https://ffighting.tistory.com/entry/iCaRL-%ED%95%B5%EC%8B%AC-%EB%A6%AC%EB%B7%B0

Regularization-based methods : constrain important parameters so they do not drift too far from their old values (e.g. EWC).

Parameter isolation : train separate parameters (or a separate subnetwork) for each task, and decide later how to combine them.

TL;DR

  • task : class-incremental learning / domain-incremental learning / task-agnostic learning
  • problem : catastrophic forgetting.
  • idea : prompt learning! Given an image, select the N closest prompts from a pool of M learnable prompts (Learning to Prompt, L2P) and prepend them to the ViT's input token sequence to classify the image.
  • architecture : ViT-B/16
  • objective : CrossEntropyLoss + a surrogate key-matching term that encourages diverse prompt selection.
  • baseline : CIL variants (sequential finetuning, BiC, EWC, DER++, ...)
  • data : Split CIFAR-100, 5-datasets
  • result : SOTA, only slightly below the i.i.d. finetuning upper bound.
  • contribution : reaches SOTA with a simple idea and architecture.
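The selection step above can be sketched as a learnable prompt pool with one key per prompt: the image's frozen-ViT feature acts as a query, the top-N prompts by cosine similarity are prepended, and a key-matching term is added to the cross-entropy loss. A minimal PyTorch sketch, assuming pool size, prompt length, and class/parameter names (these are illustrative, not the paper's exact implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    """Sketch of L2P-style prompt selection (shapes/names are assumptions)."""
    def __init__(self, pool_size=10, top_n=5, prompt_len=5, dim=768):
        super().__init__()
        # M learnable prompts, each a short sequence of prompt tokens
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim) * 0.02)
        # one learnable key per prompt, matched against the query feature
        self.keys = nn.Parameter(torch.randn(pool_size, dim) * 0.02)
        self.top_n = top_n

    def forward(self, query):
        # query: [B, dim], e.g. the frozen ViT's [CLS] feature of the image
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)  # [B, M]
        idx = sim.topk(self.top_n, dim=-1).indices        # [B, N] closest prompts
        selected = self.prompts[idx]                      # [B, N, prompt_len, dim]
        selected = selected.flatten(1, 2)                 # [B, N*prompt_len, dim]
        # surrogate loss: pull the chosen keys toward the query
        # (added to the classification cross-entropy during training)
        key_loss = (1.0 - sim.gather(1, idx)).mean()
        return selected, key_loss

pool = PromptPool()
q = torch.randn(4, 768)                                   # a batch of 4 image features
prompt_tokens, key_loss = pool(q)
print(prompt_tokens.shape)                                # torch.Size([4, 25, 768])
```

The returned `prompt_tokens` would be concatenated in front of the patch tokens before the ViT encoder, and `key_loss` is weighted into the total objective.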

ETC.