Research Focus
[Topics evolve stage by stage, following the trending challenges in AI]
Efficient Pretraining, Serving and Adaptation
- Naive pretraining on Chest X-ray Data: [UniChest (TMI'24)]
- Training acceleration from the perspective of data selection: [DivBS (ICML'24)]
- Efficient serving: [LoRKD (CVPR'24)]
- Adaptation: [RD (MICCAI'24)]
Generalized Imbalanced Learning
Imbalanced learning is a long-standing topic in machine learning that still lacks a solid foundation from theory to algorithm, even though it has been studied for (at least) two decades. We revisit this problem because the generalization of algorithms in a broad sense has recently drawn more attention, especially under the pretraining paradigm. The underlying evaluation metric, a holistic measure over each class, each task, and each domain, coincides with the fine-grained measures used in imbalanced learning. This motivates us to extend imbalanced learning to boost typical paradigms such as self-supervised learning, weakly-supervised learning, and generative modeling, in order to enhance generalization. The following taxonomy is organized mainly by the aspect each research work considers, although in practice these imbalance types may mix.
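To make the coincidence concrete, the toy sketch below (an illustrative example written for this page, not code from the cited works; the class sizes and the majority-biased classifier are hypothetical) shows how overall accuracy can look strong under class imbalance while the per-class balanced accuracy, the kind of fine-grained measure used in imbalanced learning, remains poor.

```python
# Illustrative toy example: overall accuracy vs. per-class (balanced) accuracy
# under class imbalance. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced test set: 950 majority-class samples, 50 minority-class samples.
y_true = np.array([0] * 950 + [1] * 50)

# A classifier that predicts the majority class 99% of the time.
y_pred = np.where(rng.random(1000) < 0.99, 0, 1)

# Overall accuracy is dominated by the majority class.
overall_acc = (y_pred == y_true).mean()

# Balanced accuracy: average of per-class recalls (the fine-grained measure).
per_class_recall = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
balanced_acc = float(np.mean(per_class_recall))

print(f"overall accuracy:  {overall_acc:.3f}")   # high, hides the minority class
print(f"balanced accuracy: {balanced_acc:.3f}")  # low, exposes the minority class
```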
Noise Robust Machine Learning
Perturbation is ubiquitous in real-world data, and in the proper amount it can actually robustify the training of machine learning models. This is common in training practice, e.g., label smoothing, dropout, and randomized data augmentation. However, when perturbation is excessive or deliberate, or only emerges during serving, special designs are needed to reduce its negative impact. Motivated by this belief, we have developed a range of methods along this direction.
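As one concrete instance of "proper" perturbation, the minimal sketch below (assuming PyTorch; the function name and toy data are hypothetical, and it is not code from the papers listed next) implements label smoothing, which deliberately mixes a small uniform component into the targets as a mild regularizer. Heavier or adversarial corruption is what the methods below are designed to handle explicitly.

```python
# Minimal label-smoothing sketch (PyTorch assumed); recent PyTorch versions
# also expose a label_smoothing argument in nn.CrossEntropyLoss.
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, eps=0.1):
    """Cross-entropy against one-hot targets mixed with a uniform distribution."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, num_classes).float()
    # Keep (1 - eps) of the true label, spread eps uniformly over all classes.
    soft_targets = (1.0 - eps) * one_hot + eps / num_classes
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Toy usage with random logits and labels.
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(smoothed_cross_entropy(logits, targets, eps=0.1).item())
```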
- Label-noise learning under the class-conditional assumption [NeurIPS'18, AAAI'19, TPAMI'23]
- Label-noise learning with selection, correction, or regularization [ICML'19, TIP'19, AAAI'21, NeurIPS'23, ICML'24]
- Adversarial learning with distillation, slack, or representation learning [ICLR'22, ICLR'23, ICML'23]
- Out-of-distribution detection with intrinsic capacity, extrapolation, or calibration [ICML'23, NeurIPS'23]