Aleksandar Bojchevski
Hi, I’m Aleks. I’m a tenure-track faculty member at the CISPA Helmholtz Center for Information Security. I’m broadly interested in trustworthy machine learning, that is, models and algorithms that are not only accurate and efficient but also robust, privacy-preserving, fair, uncertainty-aware, interpretable, etc.
I completed my PhD on machine learning for graphs in the DAML group at TU Munich, advised by Stephan Günnemann. I hold an MSc in computer science from TU Munich, where I worked at Rostlab on natural language mutation mentions.
Research interests: adversarial robustness, provable guarantees, fairness, privacy-preserving machine learning, uncertainty estimation, interpretability, graph neural networks, graph representation learning, and (deep) generative models. If you are interested in working with us on these (or adjacent) topics, don’t hesitate to get in touch. We have multiple open positions!
news
Sep 22  Two papers on robustness, one on adaptive attacks and one on robustness certificates, were accepted at NeurIPS 2022.

Sep 22  This semester I am co-teaching the Elements of Machine Learning lecture with Prof. Jilles Vreeken.
Jan 22  Our paper on generalization of combinatorial solvers was accepted at ICLR 2022. 
Oct 21  This semester I am teaching the Trustworthy Graph Neural Networks seminar. 
Sep 21  Our paper on robustness of GNNs at scale was accepted at NeurIPS 2021. 
Sep 21  I have joined CISPA as a tenure-track faculty member. We have multiple open positions!
Jun 21  I gave a talk on Trustworthy Machine Learning for Graphs with Guarantees at the Math of AI Seminar @ LMU. 
Mar 21  I gave a talk on Trustworthy Machine Learning for Graphs at CISPA. 
Mar 21  I gave a talk on Provably Robust Machine Learning on Graphs at NEC Labs Europe. 
Feb 21  Our paper on the curse of dimensionality for randomized smoothing was accepted at AISTATS 2021. 
selected publications [full list]

Are Defenses for Graph Neural Networks Robust? In Neural Information Processing Systems, NeurIPS 2022