Currently, I am researching how to train models that can learn new tasks continuously. Current models cannot learn new tasks without retraining the whole model: each time the model learns a new task, it optimizes its weights for the new data distribution alone, causing the problem known as Catastrophic Forgetting, a drop in performance on previously learned tasks.
One way to mitigate this problem is to restrict the modification of the learned weights, which forces the model to recall past tasks but limits its ability to learn new ones, creating the stability-plasticity dilemma. Another way to tackle this problem is to encourage the model to learn representations that are useful across tasks. This helps avoid forgetting previous tasks and increases knowledge transfer between them, since the weights encode knowledge shared by all tasks.
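A common instance of the weight-restriction idea is a quadratic penalty that anchors weights deemed important for old tasks near their previous values, in the spirit of Elastic Weight Consolidation. The sketch below is a minimal, illustrative version; the function name, importance values, and numbers are made up for the example, not taken from any specific method of mine.

```python
# Hypothetical sketch of a weight-anchoring penalty (EWC-style).
# All names and numbers here are illustrative, not a real implementation.

def anchored_loss(task_loss, weights, old_weights, importance, lam=0.5):
    """Add a quadratic penalty that discourages moving weights that
    were important for previously learned tasks."""
    penalty = sum(
        f * (w - w_old) ** 2
        for w, w_old, f in zip(weights, old_weights, importance)
    )
    return task_loss + lam * penalty

# Toy example: two weights, the first one important for an old task.
old_w = [1.0, -2.0]
new_w = [1.5, -2.0]        # the first weight drifted while learning a new task
importance = [10.0, 0.1]   # per-weight importance estimated on the old task
print(anchored_loss(0.2, new_w, old_w, importance))  # approximately 1.45
```

The larger the importance of a weight, the more the penalty resists changing it, which is exactly the trade-off between stability (recalling past tasks) and plasticity (learning new ones).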
My research focuses on devising ways to train models for this scenario, where the model learns a representation general enough to transfer knowledge across tasks yet specific enough to solve the current task.
As part of my PhD, I am a member of IALab, the artificial intelligence lab at UC. I also participate in the Millennium Institute of Foundational Research on Data. I am part of the Continual Learning Community, actively participating in its reading groups and helping to organize the 2nd Workshop on Continual Learning in Computer Vision. Additionally, I have participated in activities of Latinx in AI.