Overview
I currently work on cryptography and machine learning, with a focus on efficient and privacy-preserving machine learning systems.
Main Research Interests
Efficient Machine Learning
My main interest is efficient machine learning, in particular quantization, quantized neural networks, and their applications. This includes:
- Quantization techniques: Developing methods to reduce the precision of neural network weights and activations while maintaining performance
- Hardware-specific optimizations: Adapting neural network architectures for efficient deployment on specialized hardware
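To make the first bullet concrete, here is a minimal sketch of uniform affine post-training quantization, the most basic form of the precision-reduction techniques described above. The helper names and the 8-bit setting are illustrative choices, not a specific method from my work.

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Map a float tensor onto a num_bits unsigned integer grid.

    Returns the quantized values together with the (scale, zero_point)
    pair needed to map them back to floats. Illustrative helper.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    # zero_point aligns the real value 0.0 with an integer grid point
    zero_point = int(round(-x.min() / scale))
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from its quantized form."""
    return (q.astype(np.float32) - zero_point) * scale
```

The round-trip error of this scheme is bounded by one quantization step (`scale`), which is why 8-bit weights often preserve accuracy: the perturbation is small relative to typical weight magnitudes.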
Privacy-Preserving Machine Learning (PPML)
A significant part of my research focuses on privacy-preserving machine learning, particularly:
- Fully Homomorphic Encryption (FHE): Designing neural network architectures that are compatible with FHE schemes, enabling computation on encrypted data
- Privacy in Machine Learning: Exploring methods to train and deploy machine learning models while protecting sensitive data
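The core constraint behind FHE-friendly architectures is that FHE schemes natively evaluate only additions and multiplications, so non-polynomial activations such as ReLU must be replaced by low-degree polynomials. The sketch below illustrates this in plain NumPy with a squaring activation (an approach popularized by CryptoNets); the two-layer network is a hypothetical example, not an architecture from my papers.

```python
import numpy as np

def square_activation(x):
    # Degree-2 polynomial: expressible as one ciphertext multiplication,
    # unlike ReLU, which has no exact polynomial form.
    return x * x

def fhe_friendly_forward(x, w1, w2):
    """Two-layer dense network built only from FHE-compatible operations:
    matrix products (additions + multiplications) and squaring."""
    h = square_activation(x @ w1)
    return h @ w2

# Illustrative plaintext run; under FHE the same circuit would be
# evaluated on encrypted inputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w1 = rng.normal(size=(8, 16))
w2 = rng.normal(size=(16, 3))
out = fhe_friendly_forward(x, w1, w2)
```

In practice the design difficulty is keeping the multiplicative depth of the whole network low, since each squaring layer consumes ciphertext noise budget.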
Explainable AI (XAI)
I am also interested in making machine learning models more interpretable and explainable:
- Interpretable neural network architectures: Developing models that are both accurate and interpretable
- Rule-based models: Creating neural network-based rule models that provide clear decision-making processes
Current Work
My PhD research, under the supervision of Thomas Peyrin, focuses on bridging the gap between cryptography and machine learning, with particular emphasis on:
- Designing FHE-friendly neural network architectures
- Developing scalable and verifiable neural networks
- Creating interpretable models for real-world applications
For more details about my publications, please see my Resume or visit my Homepage.