- Final-year PhD candidate at the University of Cambridge, focusing on Machine Unlearning
- Currently interning as a Student Researcher at Google DeepMind (🇬🇧)
- Previously an AI Security & Privacy Research Intern at IBM Research (🇮🇪)
- Pre-PhD experience:
  - Associate at Boston Consulting Group (BCG) (🇨🇭)
  - Co-founder of an ESA BIC-funded deep tech startup (🇦🇹)
I work on making machine learning models forget undesired data without having to retrain the whole model from scratch ($$$). Such data can be privacy-infringing, copyrighted, erroneous, poisoned, outdated, or otherwise problematic.
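For intuition, here is a minimal, generic sketch of approximate unlearning in PyTorch: descend on a "retain" set while ascending on a "forget" set. The model, the synthetic data, and the negated-loss objective (including the 0.1 weighting) are illustrative placeholders only, not the methods from the papers below (e.g. Selective Synaptic Dampening), which are considerably more careful.

```python
# Toy sketch of approximate unlearning (illustrative only, not a paper's method):
# take a trained model and nudge its parameters to keep performance on retained
# data while "un-fitting" a small forget set via a negated loss term.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(10, 2)                      # stand-in for an already-trained model
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Hypothetical data: a retain set we still care about, a forget set to remove.
x_retain, y_retain = torch.randn(64, 10), torch.randint(0, 2, (64,))
x_forget, y_forget = torch.randn(8, 10), torch.randint(0, 2, (8,))

for _ in range(50):
    opt.zero_grad()
    # Descend on retain data, ascend on forget data (negative loss term).
    loss = loss_fn(model(x_retain), y_retain) - 0.1 * loss_fn(model(x_forget), y_forget)
    loss.backward()
    opt.step()
```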
🪄: Machine Unlearning Papers
| Title | Authorship | Venue |
|---|---|---|
| Potion: Towards Poison Unlearning | First author | Journal of Data-Centric Machine Learning Research (DMLR) |
| Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening | Equal contribution | AAAI 2024 |
| Loss-Free Machine Unlearning | Equal contribution | ICLR 2024 Tiny Paper |
| Parameter-Tuning-Free Data Entry Error Unlearning with Adaptive Selective Synaptic Dampening | First author | Preprint |
| Zero-Shot Machine Unlearning at Scale via Lipschitz Regularization | Third author | Preprint |
| Learning to Forget using Hypernetworks | Co-supervised MPhil thesis | NeurIPS 2024 Workshop |
| CONDA: Fast Federated Unlearning with Contribution Dampening | Fourth author | Preprint |
📖: Other Research (Red Teaming, Agentic LLMs, and more)
| Title | Authorship | Venue |
|---|---|---|
| Identifying contributors to manufacturing outcomes in a multi-echelon setting: a decentralised uncertainty quantification approach | First author | IEEE Transactions on Industrial Informatics |
| Using Reinforcement Learning for the Three-Dimensional Loading Capacitated Vehicle Routing Problem | First author | IJCAI 2023 Workshop |
| Attack Atlas: A Practitioner’s Perspective on Challenges and Pitfalls in Red Teaming GenAI | Second author | NeurIPS 2024 Workshop |
| Agentic LLMs in the Supply Chain: Towards Autonomous Multi-Agent Consensus-Seeking | Co-supervised MSc thesis | Preprint |