I'm a Ph.D. candidate in theoretical computer science at Boston University, under the excellent supervision of Ran Canetti. I am currently seeking full-time positions in TCS and AI/ML research (postdoc or industry) to begin after I graduate in Summer 2024.


I research algorithmic and complexity-theoretic aspects of machine learning.
1) I want to develop theoretical justifications for empirical phenomena observed in the practice of ML/AI. Currently, I'm thinking about how to formalize the advantages of multimodal over unimodal data in machine learning. I'm open to new projects in ML theory; feel free to reach out!
2) I also study the theory of meta-complexity as a way of making formal connections between cryptography, complexity, and learning. In the spring semester of 2023, I visited the Simons Institute for the Theory of Computing at UC Berkeley, where I participated in the Meta-Complexity program. Since then, I have been very interested in developing our understanding of what kinds of algorithms are implied by natural circuit lower bounds. See my research presented at ITCS and ALT 2024 for more on this.

I am also interested in privacy, security, and responsibility issues in machine learning and AI.
1) In the past, I designed algorithms for conducting secret experiments. This work has consequences for the theory and practice of model stealing attacks, as well as for information security in data curation and annotation. I recently wrote a series of blog posts that explain some of the theoretical difficulty of defending against model stealing attacks, and how that relates to possible challenges in abuse prevention for LLM chatbots like ChatGPT. The series is based in part on my research published at TCC '21 and SaTML '23, and can be found at the links below.

"Model Extraction, LLM Abuse, Steganography, and Covert Learning" - part 1, part 2, part 3


Publications and manuscripts

  • "On Stronger Computational Separations Between Multimodal and Unimodal Machine Learning."
    Ari Karchmer
    In submission.
    ArXiv preprint.

  • "Agnostic Membership Query Learning with Nontrivial Savings: New Results and Techniques."
    Ari Karchmer
    Appeared at ALT 2024.
    ArXiv preprint.

  • "Distributional PAC-Learning from Nisan's Natural Proofs."
    Ari Karchmer
    Appeared at ITCS 2024. Winner of the Best Student Paper Award. Invited for publication in TheoretiCS.
    ArXiv preprint.

  • "Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses."
    Ari Karchmer
    Appeared at SaTML 2023.
    IACR ePrint.

  • "Covert Learning: How to Learn with an Untrusted Intermediary."
    Ran Canetti and Ari Karchmer
    Appeared at TCC 2021. Invited to the Journal of Cryptology special issue of selected papers from TCC.
    IACR ePrint.

Teaching fellowships


Select talks

  • "Cryptography and Complexity Theory in the Design and Analysis of ML" - Vector Institute, Toronto, CA, April '24 Slides
  • "Learning from Nisan's Natural Proofs" - MIT CIS Seminar, March '24 Slides
  • "Distributional PAC-learning from Nisan's Natural Proofs" - ITCS, Simons Institute, Jan '24 Video
  • "Undetectable Model Stealing and more with Covert Learning" - Google Research MTV, algorithms seminar, Jan '24 Slides
  • "New Approaches to Heuristic PAC-Learning vs. PRFs" - Lower Bounds, Learning, and Average-case Complexity Workshop at Simons Institute, Feb '23 Simons talk
  • "The Limits of Provable Security Against Model Extraction" - Privacy Preserving Machine Learning Workshop at Crypto, Aug '22 PPML talk
  • "Covert Learning: How to Learn with an Untrusted Intermediary" - Charles River Crypto Day at MIT, Nov '21 Crypto day talk
