ABOUT


- LinkedIn - Google Scholar - X

In my professional life, I'm a member of the Machine Learning Research team at Morgan Stanley. Prior to joining Morgan Stanley, I obtained my Ph.D. from Boston University under the supervision of Ran Canetti in the spring of 2024. I was then a postdoc at Harvard University in 2024-2025, where I was hosted by Seth Neel.

I'm interested in Machine Learning, AI, and Theoretical Computer Science. My current research interests are in the "convex hull" of (in no particular order):

  • Steering and adaptation of AI models
  • AI interpretability, alignment, and safety
  • Data efficiency and generalization

See my research posts on these topics below. These pieces are works in progress! Comments and feedback are welcome.

POSTS


I generally believe that AI will benefit humanity, but only if we can properly communicate with it, steer it, and ensure it generalizes sufficiently. I also believe that a human in the loop provides not only trust and safety but also Pareto improvements.

As a Ph.D. student and postdoc, I was a theoretical computer scientist (mostly computational learning theory). See the sections below for more details.

ACADEMIC RESEARCH

While at BU and Harvard, I focused mainly on theoretical work. I worked on research problems in a variety of areas, including:

  • Machine Learning theory, especially complexity separations (e.g., random features vs. deep learning with gradient descent, multimodal vs. unimodal learning)
  • Machine Learning interpretability, including data attribution and verifiability of attribution
  • Computational Learning and Complexity theory, especially meta-complexity and the relationship between circuit lower bounds and computational learning theory (my thesis)
  • Cryptography and Machine Learning security, including the model stealing problem and "Covert Learning"

The best way to learn about my research in these areas is from my refereed publications below.

REFEREED PUBLICATIONS

  1. Efficiently Verifiable Proofs of Data Attribution. NeurIPS 2025.
    Ari Karchmer, Martin Pawelczyk and Seth Neel.
    ArXiv preprint.

  2. The Power of Random Features and the Limits of Distribution Free Gradient Descent. ICML 2025.
    Ari Karchmer and Eran Malach.
    ArXiv preprint.

  3. On Stronger Computational Separations Between Multimodal and Unimodal Machine Learning. ICML 2024.
    Ari Karchmer.
    ArXiv preprint. "Spotlight" paper (3.5% acceptance rate).

  4. Agnostic Membership Query Learning with Nontrivial Savings: New Results and Techniques. ALT 2024.
    Ari Karchmer.
    ArXiv preprint.

  5. Distributional PAC-Learning from Nisan's Natural Proofs. ITCS 2024.
    Ari Karchmer.
    ArXiv preprint. Winner of the Best Student Paper Award.

  6. Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses. SaTML 2023.
    Ari Karchmer.
    IACR ePrint.

  7. Covert Learning: How to Learn with an Untrusted Intermediary. TCC 2021. (ab)
    Ran Canetti and Ari Karchmer.
    IACR ePrint.

QUOTES

Is there a better description of a cube than that of its construction? — The Brutalist (2024)

Vita brevis, ars longa — Hippocrates (460-370 B.C.E.)

CAFE MUSEUMS

I try to take photos of cafes I visit.

Berkeley cafe museum
New York City cafe museum