Aleksandar Bojchevski

Hi, I’m Aleks :wave: I’m a tenure-track faculty member at the CISPA Helmholtz Center for Information Security. I’m broadly interested in trustworthy machine learning, that is, models and algorithms that are not only accurate and efficient but also robust, privacy-preserving, fair, uncertainty-aware, and interpretable.

I completed my PhD on machine learning for graphs in the DAML group at TU Munich, advised by Stephan Günnemann. I hold an MSc in computer science from TU Munich, where I worked at Rostlab on mutation mentions in natural-language text.

Research interests: adversarial robustness, provable guarantees, fairness, privacy-preserving machine learning, uncertainty estimation, interpretability, graph neural networks, graph representation learning, and (deep) generative models. If you are interested in working with us on these (or adjacent) topics, don’t hesitate to get in touch. We have multiple open positions!

news

Sep 22 Two papers on robustness, one on adaptive attacks and one on robustness certificates, were accepted at NeurIPS 2022.
Sep 22 This semester I am co-teaching the Elements of Machine Learning lecture with Prof. Jilles Vreeken.
Jan 22 Our paper on generalization of combinatorial solvers was accepted at ICLR 2022.
Oct 21 This semester I am teaching the Trustworthy Graph Neural Networks seminar.
Sep 21 Our paper on robustness of GNNs at scale was accepted at NeurIPS 2021.
Sep 21 I have joined CISPA as a tenure-track faculty :microscope:. We have multiple open positions!
Jun 21 I gave a talk on Trustworthy Machine Learning for Graphs with Guarantees at the Math of AI Seminar @ LMU.
Mar 21 I gave a talk on Trustworthy Machine Learning for Graphs at CISPA.
Mar 21 I gave a talk on Provably Robust Machine Learning on Graphs at NEC Labs Europe.
Feb 21 Our paper on the curse of dimensionality for randomized smoothing :skull: was accepted at AISTATS 2021.

selected publications [full list]

  1. NeurIPS
    Are Defenses for Graph Neural Networks Robust?
    Felix Mujkanovic, Simon Geisler, Aleksandar Bojchevski, and Stephan Günnemann
    In Neural Information Processing Systems, NeurIPS 2022
  2. ICLR
    Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks
    Jan Schuchardt, Aleksandar Bojchevski, Johannes Klicpera, and Stephan Günnemann
    In International Conference on Learning Representations, ICLR 2021
  3. ICML
    Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More
    Aleksandar Bojchevski, Johannes Klicpera, and Stephan Günnemann
    In International Conference on Machine Learning, ICML 2020
  4. NeurIPS
    Certifiable Robustness to Graph Perturbations
    Aleksandar Bojchevski and Stephan Günnemann
    In Neural Information Processing Systems, NeurIPS 2019
  5. ICML Oral
    Adversarial Attacks on Node Embeddings via Graph Poisoning
    Aleksandar Bojchevski and Stephan Günnemann
    In International Conference on Machine Learning, ICML 2019