Call for Book Chapters

Artificial Intelligence and Security: Recent Developments

To be published in the Springer Lecture Notes in Computer Science (LNCS) series

The integration of artificial intelligence (AI) and security, especially cryptography, is advancing rapidly. AI techniques are increasingly used in the analysis, design, and attack of cryptographic systems—spanning applications such as deep learning-based cryptanalysis, side-channel attacks, and AI-assisted cryptographic design. Conversely, cryptographic techniques are becoming essential tools for enhancing the security, privacy, and accountability of AI systems, especially in light of growing threats such as model extraction, backdooring, and malicious use of generative models.

Building on the themes explored in the AICrypt workshop series, this edited volume aims to present a comprehensive overview of recent developments at the intersection of AI and security. It will serve as a reference for both researchers and practitioners interested in the interplay between cryptography and AI, showcasing novel methodologies, results, and perspectives from academia and industry alike.

We invite chapter contributions presenting original, unpublished research, as well as surveys and systematizations of knowledge in this interdisciplinary area.

Submission Guidelines

Each chapter is expected to be around 20 pages, formatted according to the Springer LNCS guidelines. Instructions and templates will be provided to accepted contributors.

Authors are invited to submit an abstract of up to 1 page (excluding references), outlining the intended scope and main contributions of the proposed chapter. Abstracts should be submitted via email to the editors:

• Lejla Batina (Radboud University): lejla@cs.ru.nl

• Luca Mariot (University of Twente): l.mariot@utwente.nl

• Stjepan Picek (Radboud University): stjepan.picek@ru.nl

Topics of Interest

Topics relevant to this volume include, but are not limited to:

• Deep learning-based cryptanalysis (e.g., neural distinguishers)

• Implementation attacks and AI

• Application of homomorphic encryption and secure MPC for privacy-preserving ML

• Fuzzing

• Malware analysis

• LLMs and security (bug fixes, code generation, etc.)

• Physically Unclonable Functions and AI

• Adversarial attacks

• Backdoor attacks

• Federated learning

• Watermarking of AI models

• Privacy attacks

We are particularly interested in works that bridge the gap between AI and cryptography, either by applying AI to improve security techniques, or by using cryptographic tools to enhance the trustworthiness and accountability of AI systems.

Important dates (AoE)

Abstract submission deadline: September 5, 2025

Notification of abstract acceptance: September 15, 2025

Full chapter submission deadline: December 15, 2025

Revised version deadline (after editorial review): March 15, 2026

About AICrypt

The intersection of artificial intelligence (AI) and security has gained significant attention, driven by the need for secure solutions that deploy AI. Cryptography, in particular, stands as a notable example of the benefits of AI integration. AI techniques and methods are already being applied to address challenges in cryptography, such as improving cryptanalysis, physical attacks, and the relevant countermeasures. Additionally, the use of cryptography to address security and privacy issues in AI systems is emerging as a crucial area of focus. As attacks on AI systems become more powerful, there is a growing need to explore how cryptographic strategies can mitigate these threats. Examples include the development of cryptographic backdoors in neural networks, the use of cryptographic techniques to watermark the output of LLMs, and model stealing attacks based on cryptanalysis techniques.

Our goal is to bring together experts from academia and industry, each with a unique perspective on cryptography and AI, to foster knowledge exchange and collaborative innovation. We are particularly interested in exploring how techniques can be transferred across different cryptographic applications and used to enhance AI security mechanisms. Moreover, we will review recent advancements, including those discussed at previous AICrypt events, to provide a comprehensive understanding of this rapidly evolving field.

Topics of Interest

Authors interested in giving a contributed talk at AICrypt are invited to submit an extended abstract of at most 2 pages (excluding references) via EasyChair.

Topics of interest for this workshop include, but are not limited to:

• Deep learning-based cryptanalysis (e.g., neural distinguishers)
• Explainability and interpretability of AI models for cryptanalysis
• Deep learning techniques for side-channel analysis
• AI-assisted design of cryptographic primitives and protocols
• AI-driven attacks on cryptographic protocols (e.g., searchable symmetric encryption)
• Application of homomorphic encryption and secure multiparty computation for privacy-preserving ML
• Cryptographic approaches to enforce security and traceability of AI models (e.g., cryptographic backdoors in neural networks and statistical watermarking of LLM-generated content)

Submission

We encourage researchers working on all aspects of AI and cryptography to use AICrypt as an opportunity to share their work and participate in discussions. Authors are invited to submit an extended abstract via the EasyChair submission system. All submitted abstracts must follow the original LNCS format, with a page limit of 2 pages (excluding references), and must be submitted electronically in PDF format.

The workshop publishes no formal proceedings; authors may therefore submit extended abstracts related to work submitted to or recently published at other venues, or to work in progress that they plan to submit elsewhere.

Speakers will be invited to present their work based on the workshop chairs' evaluation of suitability and interest to the AICrypt audience. Every accepted submission must have at least one author registered for the workshop.

Important dates (AoE)

Abstract submission deadline: March 14, 2025

Notification to authors: March 28, 2025

Workshop date: May 3, 2025

Registration

Workshop registration goes through the Eurocrypt registration process; see the Eurocrypt website for further information.

Keynote

Some thoughts on AI for Crypto, and Crypto for AI

In ten years, Large Language Models (LLMs) have grown from systems that could sometimes spell words correctly to systems that can solve PhD-level math problems and write code at the level of competitive programmers.

In this talk, I lay out several directions where I think AI could be used to solve problems in the crypto community, and where the crypto community can help solve problems in AI. These range from concrete technical problems that need to be solved and where crypto could help (e.g., model stealing or watermarking) to indirect applications where having a background in crypto would help (e.g., formalizing definitions of robustness or unlearning). In the reverse direction, I also consider where it may be possible to use recent advances in LLMs to solve problems in crypto (e.g., applications to cryptanalysis).

Nicholas Carlini is a research scientist at Anthropic working at the intersection of machine learning and computer security; for this work he has received best paper awards from USENIX Security, ICML, and IEEE S&P. He received his PhD from UC Berkeley under David Wagner.

Accepted Abstracts

Adversarial-Resistant AI Using Cryptographic Primitives: A Commitment-Based Approach to Secure Explainability and Confidentiality

Sumitra Biswal

Willow: Secure Aggregation with One-Shot Clients

James Bell-Clark, Adria Gascon, Baiyu Li, Mariana Raykova and Phillipp Schoppmann

Spot-Check: Integrity Verification for Outsourced ML via Hidden Backdoors

Artem Grigor and Ivan Martinovic

Private Deep Neural Network Inference Engines with Homomorphic Encryption

Antonio J. Peña, Lena Martens, Priyam Mehta, Zaira Pindado and Thomas Spendlhofer

Farfetch'd: Side-Channel Privacy Attacks in Confidential VMs

Ruiyi Zhang, Albert Cheu, Adria Gascon, Daniel Moghimi, Phillipp Schoppmann, Michael Schwarz and Octavian Suciu

An LLM Framework For Cryptography Over Chat Channels

Danilo Gligoroski, Mayank Raikwar and Sonu Kumar Jha

Private Deep Learning on Vertically Partitioned Datasets

Parker Newton

Oblivious Defense in ML Models: Backdoor Removal without Detection

Shafi Goldwasser, Jonathan Shafer, Neekon Vafa and Vinod Vaikuntanathan

Generic Partial Decryption as Feature Engineering for Neural Distinguishers

Rocco Brunelli, David Gerault, Emanuele Bellini, Anna Hambitzer and Marco Pedicini

Program

The program starts at 09:25 CEST (UTC+2).

Session 1: Cryptographic Backdoors in ML (09:25 - 10:30)

09:25 - 09:30  Opening Remarks
09:30 - 10:00  Oblivious Defense in ML Models: Backdoor Removal without Detection
               Shafi Goldwasser, Jonathan Shafer, Neekon Vafa and Vinod Vaikuntanathan
10:00 - 10:30  Spot-Check: Integrity Verification for Outsourced ML via Hidden Backdoors
               Artem Grigor and Ivan Martinovic

10:30 - 11:00  Coffee Break

Session 2: Cryptography for Privacy-Preserving ML (11:00 - 13:00)

11:00 - 11:30  Willow: Secure Aggregation with One-Shot Clients
               James Bell-Clark, Adria Gascon, Baiyu Li, Mariana Raykova and Phillipp Schoppmann
11:30 - 12:00  Private Deep Neural Network Inference Engines with Homomorphic Encryption
               Antonio J. Peña, Lena Martens, Priyam Mehta, Zaira Pindado and Thomas Spendlhofer
12:00 - 12:30  Private Deep Learning on Vertically Partitioned Datasets
               Parker Newton
12:30 - 13:00  Farfetch'd: Side-Channel Privacy Attacks in Confidential VMs
               Ruiyi Zhang, Albert Cheu, Adria Gascon, Daniel Moghimi, Phillipp Schoppmann, Michael Schwarz and Octavian Suciu

13:00 - 14:15  Lunch Break

Session 3: Keynote Talk (14:15 - 15:15)

14:15 - 15:15  Keynote Talk: Some thoughts on AI for Crypto, and Crypto for AI
               Nicholas Carlini

15:15 - 15:45  Coffee Break

Session 4: Neural Distinguishers, Adversarial Resistance and LLMs for Cryptography (15:45 - 17:15)

15:45 - 16:15  Adversarial-Resistant AI Using Cryptographic Primitives: A Commitment-Based Approach to Secure Explainability and Confidentiality
               Sumitra Biswal
16:15 - 16:45  Generic Partial Decryption as Feature Engineering for Neural Distinguishers
               Rocco Brunelli, David Gerault, Emanuele Bellini, Anna Hambitzer and Marco Pedicini
16:45 - 17:15  An LLM Framework For Cryptography Over Chat Channels
               Danilo Gligoroski, Mayank Raikwar and Sonu Kumar Jha

17:15 - 17:20  Closing Remarks

Organizers

Stjepan Picek

Associate Professor

Radboud University

Luca Mariot

Assistant Professor

University of Twente