I am currently working as a Research Scientist in Artificial Intelligence in the research team of the AXA Group.
Previously, I did my Ph.D. in Machine Learning at Sorbonne Université in Paris, at the LIP6 lab (LFI team). My thesis, supervised by Marie-Jeanne Lesot, Christophe Marsala and Marcin Detyniecki, focused on machine learning interpretability. Before starting my Ph.D., I graduated from ENS Cachan and ENSAE ParisTech in France with degrees in Data Science and Machine Learning, and worked for a year and a half as a Data Scientist at Deezer.
Research interests: My research mainly focuses on machine learning interpretability. I am also interested in adversarial learning, fairness in machine learning, active learning, and causal inference. Feel free to contact me if you would like more information or simply want to discuss any of these topics.
Imperceptible Adversarial Attacks on Tabular Data. Vincent Ballet, Xavier Renard, Jonathan Aigrain, Thibault Laugel, Pascal Frossard, Marcin Detyniecki. NeurIPS 2019 Workshop on Robust AI in Financial Services: Data, Fairness, Explainability, Trustworthiness, and Privacy
@inproceedings{ijcai2019-388,
title = {The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations},
author = {Laugel, Thibault and Lesot, Marie-Jeanne and Marsala, Christophe and Renard, Xavier and Detyniecki, Marcin},
booktitle = {Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, {IJCAI-19}},
publisher = {International Joint Conferences on Artificial Intelligence Organization},
pages = {2801--2807},
year = {2019}
}
@inproceedings{Laugel2018WHI,
author={Laugel, Thibault and Renard, Xavier and Lesot, Marie-Jeanne and Marsala, Christophe and Detyniecki, Marcin},
title={Defining Locality for Surrogates in Post-hoc Interpretability},
booktitle={ICML Workshop on Human Interpretability in Machine Learning (WHI 2018)},
year={2018}
}
@InProceedings{Laugel2018,
author="Laugel, Thibault
and Lesot, Marie-Jeanne
and Marsala, Christophe
and Renard, Xavier
and Detyniecki, Marcin",
title="Comparison-Based Inverse Classification for Interpretability in Machine Learning",
booktitle="Information Processing and Management of Uncertainty in Knowledge-Based Systems",
year="2018",
pages="100--111",
}
Other Talks
I gave the following talks about my work:
'Post-hoc Local Interpretability for Black-Box Classifiers' (SINCLAIR, Joint lab by EDF-Total-Thalès), October 2020
PhD Defense (Sorbonne Université), July 2020
'Local Border Detection for Post-hoc Interpretability' (ISFA, Université Lyon 1), March 2019 [slides]
'Instance-based Method for Post-hoc Interpretability: a Local Approach' (Workshop on Explainability in Machine Learning, Université d'Orléans), October 2018
'Defining Locality for Surrogates in Post-hoc Interpretability' (CNRS AI Explainability Workshop), October 2018
'Comparison-based Interpretability' (AXA group, Paris), October 2017
Teaching
My experience as a teaching assistant includes the following courses:
AI for Data Science, with Python (Sorbonne Université, 2018):
Introduction to supervised learning (k-nearest neighbors, perceptron, decision trees, random forests) and unsupervised learning (hierarchical clustering, k-means)
Statistics (Université Panthéon-Sorbonne, 2013):
Introduction to probability theory and Bayesian statistics
I also supervised a group of master's students for their computer science project on machine learning interpretability.
Other Projects
GdR-IA
I am part of the 'GT Explicabilité', a group of researchers working on machine learning interpretability and part of the research group GdR-IA of the CNRS (French research organization).
Data Science Game
In 2014, I co-founded the Data Science Game, an association that organizes an annual machine learning competition for students from all around the world.
We were invited to present the 2nd edition of the Data Science Game at the NIPS 2016 CiML workshop. Link to poster.