Lifelong deep learning (LDL) enables neural networks to learn continuously across tasks while preserving prior knowledge. We propose Task-Aware Multi-Expert (TAME), a novel algorithm that leverages task similarity to guide expert model selection and knowledge transfer. TAME maintains a pool of pretrained neural networks and activates the most relevant expert for each new task. A shared dense layer integrates the selected expert’s features to generate predictions. To mitigate catastrophic forgetting, TAME employs a replay buffer that stores representative samples and embeddings from past tasks, keeping prior knowledge available during training on new tasks. An attention mechanism further prioritizes the stored knowledge most relevant to the current task. Together, these components allow TAME to adapt flexibly across diverse learning scenarios. Experiments on CIFAR-100–derived classification tasks show that TAME improves classification accuracy while maintaining performance on earlier tasks, demonstrating its effectiveness in balancing adaptation and retention over evolving task sequences.
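The abstract's core loop can be illustrated with a toy sketch: similarity-based expert selection, attention over a replay buffer, and a combined prediction. This is a minimal illustration under our own assumptions, not the paper's implementation; the class name `TAMESketch`, the random-projection "experts", and the cosine-similarity selector are all stand-ins for the actual pretrained networks and task-similarity measure.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity with a small epsilon to avoid division by zero
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

class TAMESketch:
    """Toy sketch of task-aware expert selection with a replay buffer.

    Each 'expert' here is just a fixed random projection paired with a
    task embedding; real experts would be pretrained networks.
    """
    def __init__(self, n_experts, dim, seed=0):
        rng = np.random.default_rng(seed)
        # each expert: (task embedding, projection standing in for a network)
        self.experts = [(rng.normal(size=dim), rng.normal(size=(dim, dim)))
                        for _ in range(n_experts)]
        self.buffer = []  # (task embedding, feature) pairs from past tasks

    def select_expert(self, task_emb):
        # pick the expert whose stored task embedding is most similar
        sims = [cosine(task_emb, e_emb) for e_emb, _ in self.experts]
        return int(np.argmax(sims))

    def attend(self, query):
        # softmax attention over replay-buffer embeddings
        if not self.buffer:
            return np.zeros_like(query)
        keys = np.stack([e for e, _ in self.buffer])
        vals = np.stack([f for _, f in self.buffer])
        scores = keys @ query / np.sqrt(query.size)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ vals

    def forward(self, task_emb, x):
        # route input through the most relevant expert, mix in attended
        # replay context, and store the result for future tasks
        idx = self.select_expert(task_emb)
        _, W = self.experts[idx]
        feat = np.tanh(W @ x)
        ctx = self.attend(task_emb)
        self.buffer.append((task_emb, feat))
        return feat + ctx, idx
```

A query whose embedding matches an expert's stored task embedding routes to that expert, while the attention step blends in features retained from earlier tasks, mirroring the adaptation/retention balance the abstract describes.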
@InProceedings{Wang2025TAME,
author = {Wang, Jianyu and Sheikh, Jacob Nean-Hua and Le, Cat P. and Bidkhori, Hoda},
title = {Task-Aware Multi-Expert Architectures for Lifelong Deep Learning},
booktitle = {Proceedings of the 2025 Winter Simulation Conference},
month = {December},
year = {2025},
url = {https://jianyuwang0511.github.io/tameldl}
}