Publications
Notation: ‘*’ indicates equal contribution. Also see my Google Scholar.
- ACE: A Security Architecture for LLM-Integrated App Systems. Evan Li*, Tushin Mallick*, Evan Rose*, William Robertson, Alina Oprea, and Cristina Nita-Rotaru. In Proceedings of the Network and Distributed System Security Symposium (NDSS), 2026.
LLM-integrated app systems extend the utility of Large Language Models (LLMs) with third-party apps that are invoked by a system LLM using interleaved planning and execution phases to answer user queries. These systems introduce new attack vectors where malicious apps can cause integrity violation of planning or execution, availability breakdown, or privacy compromise during execution. In this work, we identify new attacks impacting the integrity of planning, as well as the integrity and availability of execution in LLM-integrated apps, and demonstrate them against IsolateGPT, a recent solution designed to mitigate attacks from malicious apps. We propose Abstract-Concrete-Execute (ACE), a new secure architecture for LLM-integrated app systems that provides security guarantees for system planning and execution. Specifically, ACE decouples planning into two phases by first creating an abstract execution plan using only trusted information, and then mapping the abstract plan to a concrete plan using installed system apps. We verify that the plans generated by our system satisfy user-specified secure information flow constraints via static analysis on the structured plan output. During execution, ACE enforces data and capability barriers between apps, and ensures that the execution is conducted according to the trusted abstract plan. We show experimentally that our system is secure against attacks from the InjecAgent benchmark, a standard benchmark for control flow integrity in the face of indirect prompt injection attacks, and our newly introduced attacks. Our architecture represents a significant advancement towards hardening LLM-based systems containing system facilities of varying levels of trustworthiness.
@inproceedings{li2026ace, title = {{ACE}: A Security Architecture for {LLM}-Integrated App Systems}, author = {Li, Evan and Mallick, Tushin and Rose, Evan and Robertson, William and Oprea, Alina and Nita-Rotaru, Cristina}, booktitle = {Proceedings of the {Network and Distributed System Security Symposium} ({NDSS})}, year = {2026}, address = {San Diego, California}, publisher = {Internet Society}, url = {https://arxiv.org/abs/2504.20984}, }
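The ACE abstract above mentions verifying user-specified information-flow constraints via static analysis over a structured plan before anything executes. As a rough illustration of that kind of check (not ACE's actual analysis or data structures; `PlanStep`, the trust labels, and the example apps are all invented for this sketch), here is a minimal toy that walks a plan graph and flags any path from an untrusted app's output into a sensitive sink:

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    """One step of a (hypothetical) structured concrete plan."""
    app: str
    trusted: bool                                 # whether the invoked app is trusted
    inputs: list = field(default_factory=list)    # names of upstream steps whose outputs feed this step
    sensitive_sink: bool = False                  # e.g. sends email, writes files, spends money

def violates_flow_policy(plan: dict) -> list:
    """Return (source, sink) pairs where untrusted-app output reaches a sensitive sink.

    A toy information-flow check over a plan graph, not the static analysis used by ACE itself.
    """
    violations = []
    for name, step in plan.items():
        if not step.sensitive_sink:
            continue
        # Walk upstream dependencies and flag any untrusted source.
        frontier, seen = list(step.inputs), set()
        while frontier:
            upstream = frontier.pop()
            if upstream in seen or upstream not in plan:
                continue
            seen.add(upstream)
            if not plan[upstream].trusted:
                violations.append((upstream, name))
            frontier.extend(plan[upstream].inputs)
    return violations

plan = {
    "fetch_reviews": PlanStep("third_party_reviews", trusted=False),
    "summarize":     PlanStep("system_llm", trusted=True, inputs=["fetch_reviews"]),
    "send_email":    PlanStep("mail_app", trusted=True, inputs=["summarize"], sensitive_sink=True),
}
print(violates_flow_policy(plan))   # [('fetch_reviews', 'send_email')]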
- Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks. Isha Gupta, Hidde Lycklama, Emanuel Opel, Evan Rose, and Anwar Hithnawi. Preprint, arXiv:2410.08872, 2024.
As machine learning models become increasingly complex, concerns about their robustness and trustworthiness have become more pressing. A critical vulnerability of these models is data poisoning attacks, where adversaries deliberately alter training data to degrade model performance. One particularly stealthy form of these attacks is subpopulation poisoning, which targets distinct subgroups within a dataset while leaving overall performance largely intact. The ability of these attacks to generalize within subpopulations poses a significant risk in real-world settings, as they can be exploited to harm marginalized or underrepresented groups within the dataset. In this work, we investigate how model complexity influences susceptibility to subpopulation poisoning attacks. We introduce a theoretical framework that explains how overparameterized models, due to their large capacity, can inadvertently memorize and misclassify targeted subpopulations. To validate our theory, we conduct extensive experiments on large-scale image and text datasets using popular model architectures. Our results show a clear trend: models with more parameters are significantly more vulnerable to subpopulation poisoning. Moreover, we find that attacks on smaller, human-interpretable subgroups often go undetected by these models. These results highlight the need to develop defenses that specifically address subpopulation vulnerabilities.
@misc{gupta2024fragile, title = {Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks}, author = {Gupta, Isha and Lycklama, Hidde and Opel, Emanuel and Rose, Evan and Hithnawi, Anwar}, year = {2024}, eprint = {2410.08872}, journal = {arXiv}, archiveprefix = {arXiv}, primaryclass = {cs.LG}, url = {https://arxiv.org/abs/2410.08872}, }
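The subpopulation-poisoning setting studied above is easy to picture with a small, self-contained example. The sketch below is only a toy version of the threat model (synthetic 2-D data, a logistic-regression victim, and a label-flipping attack confined to one cluster); the paper's experiments use large image and text models and different attack constructions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two classes in 2-D (stand-ins for real image/text features).
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Define subpopulations by clustering the features (one common convention).
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
target = 3                                   # hypothetical target subpopulation

# Subpopulation poisoning: inject copies of target-cluster points with flipped labels.
idx = np.where(clusters == target)[0]
n_poison = int(0.02 * len(X))                # small poisoning budget
poison_idx = rng.choice(idx, size=n_poison, replace=True)
X_poison, y_poison = X[poison_idx], 1 - y[poison_idx]

clean = LogisticRegression().fit(X, y)
poisoned = LogisticRegression().fit(np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

mask = clusters == target
print("target-subpopulation accuracy, clean   :", clean.score(X[mask], y[mask]))
print("target-subpopulation accuracy, poisoned:", poisoned.score(X[mask], y[mask]))
print("overall accuracy, poisoned             :", poisoned.score(X, y))
```

The point of the toy is the measurement: accuracy on the targeted cluster drops while overall accuracy stays roughly flat, which is what makes these attacks stealthy.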
- UTrace: Poisoning Forensics for Private Collaborative Learning. Evan Rose, Hidde Lycklama, Harsh Chaudhari, Anwar Hithnawi, and Alina Oprea. Preprint, arXiv:2409.15126, 2024.
Privacy-preserving machine learning (PPML) enables multiple data owners to contribute their data privately to a set of servers that run a secure multi-party computation (MPC) protocol to train a joint ML model. In these protocols, the input data remains private throughout the training process, and only the resulting model is made available. While this approach benefits privacy, it also exacerbates the risks of data poisoning, where compromised data owners induce undesirable model behavior by contributing malicious datasets. Existing MPC mechanisms can mitigate certain poisoning attacks, but these measures are not exhaustive. To complement existing poisoning defenses, we introduce UTrace: a framework for User-level Traceback of poisoning attacks in PPML. UTrace computes user responsibility scores using gradient similarity metrics aggregated across the most relevant samples in an owner’s dataset. UTrace is effective at low poisoning rates and is resilient to poisoning attacks distributed across multiple data owners, unlike existing unlearning-based methods. We introduce methods for checkpointing gradients with low storage overhead, enabling traceback in the absence of data owners at deployment time. We also design several optimizations that reduce traceback time and communication in MPC. We provide a comprehensive evaluation of UTrace across four datasets from three data modalities (vision, text, and malware) and show its effectiveness against 10 poisoning attacks.
@misc{rose2024utrace, title = {UTrace: Poisoning Forensics for Private Collaborative Learning}, author = {Rose, Evan and Lycklama, Hidde and Chaudhari, Harsh and Hithnawi, Anwar and Oprea, Alina}, year = {2024}, eprint = {2409.15126}, journal = {arXiv}, archiveprefix = {arXiv}, primaryclass = {cs.CR}, url = {https://arxiv.org/abs/2409.15126}, }
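UTrace's core scoring idea, as summarized above, is a per-owner responsibility score built from gradient similarity between a suspicious prediction and the most relevant samples in each owner's dataset. The plaintext sketch below illustrates that idea for a simple logistic model; it leaves out everything that makes UTrace practical in MPC (secure computation, gradient checkpointing, and the communication optimizations), and the function and variable names are my own:

```python
import numpy as np

def logreg_grad(w, x, y):
    """Per-example gradient of the logistic loss w.r.t. weights w (bias folded into x)."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def user_responsibility_scores(w, users, x_event, y_event, top_k=5):
    """Score each data owner by gradient similarity to a suspicious test event.

    users: dict mapping owner id -> (X, y) arrays for that owner's dataset.
    Returns owner id -> mean cosine similarity over the owner's top_k most aligned samples.
    """
    g_event = logreg_grad(w, x_event, y_event)
    g_event /= np.linalg.norm(g_event) + 1e-12
    scores = {}
    for uid, (X, y) in users.items():
        sims = []
        for x_i, y_i in zip(X, y):
            g_i = logreg_grad(w, x_i, y_i)
            sims.append(float(g_i @ g_event / (np.linalg.norm(g_i) + 1e-12)))
        # Aggregate over the most relevant (most aligned) samples only.
        scores[uid] = float(np.mean(sorted(sims, reverse=True)[:top_k]))
    return scores

# Tiny synthetic usage: two hypothetical data owners and one suspicious event.
rng = np.random.default_rng(1)
w = rng.normal(size=3)
users = {u: (rng.normal(size=(50, 3)), rng.integers(0, 2, size=50)) for u in ["alice", "bob"]}
print(user_responsibility_scores(w, users, x_event=rng.normal(size=3), y_event=1))
```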
- Poisoning Attacks and Subpopulation Susceptibility. Evan Rose, Fnu Suya, and David Evans. In The 5th Workshop on Visualization for AI Explainability, 2022.
Machine learning is susceptible to poisoning attacks, in which an attacker controls a small fraction of the training data and chooses that data with the goal of inducing some behavior unintended by the model developer in the trained model. We consider a realistic setting in which the adversary with the ability to insert a limited number of data points attempts to control the model’s behavior on a specific subpopulation. Inspired by previous observations on disparate effectiveness of random label-flipping attacks on different subpopulations, we investigate the properties that can impact the effectiveness of state-of-the-art poisoning attacks against different subpopulations. For a family of 2-dimensional synthetic datasets, we empirically find that dataset separability plays a dominant role in subpopulation vulnerability for less separable datasets. However, well-separated datasets exhibit more dependence on individual subpopulation properties. We further discover that a crucial subpopulation property is captured by the difference in loss on the clean dataset between the clean model and a target model that misclassifies the subpopulation, and a subpopulation is much easier to attack if the loss difference is small. This property also generalizes to high-dimensional benchmark datasets. For the Adult benchmark dataset, we show that we can find semantically-meaningful subpopulation properties that are related to the susceptibilities of a selected group of subpopulations. The results in this paper are accompanied by a fully interactive web-based visualization of subpopulation poisoning attacks found at https://uvasrg.github.io/visualizing-poisoning.
@inproceedings{rose2022poisoning, title = {Poisoning Attacks and Subpopulation Susceptibility}, author = {Rose, Evan and Suya, Fnu and Evans, David}, booktitle = {The 5th Workshop on Visualization for AI Explainability}, year = {2022}, }
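The subpopulation property highlighted in the abstract is a loss difference measured on the clean data: how much worse a "target" model that misclassifies the subpopulation fits the clean dataset compared to the clean model. A minimal way to compute such a quantity is sketched below; constructing the target model by flipping the subpopulation's labels and retraining is just one convenient choice for illustration, not necessarily how the paper builds its target models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def loss_difference(X, y, subpop_mask):
    """Clean-data loss gap between a clean model and a target model that
    misclassifies the given subpopulation. A small gap suggests the
    subpopulation is easier to attack."""
    clean = LogisticRegression().fit(X, y)
    y_target = y.copy()
    y_target[subpop_mask] = 1 - y_target[subpop_mask]   # force misclassification of the subpopulation
    target = LogisticRegression().fit(X, y_target)
    clean_loss  = log_loss(y, clean.predict_proba(X))
    target_loss = log_loss(y, target.predict_proba(X))
    return target_loss - clean_loss

# Synthetic usage with a made-up subpopulation.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)
subpop = X[:, 1] > 1.0
print(loss_difference(X, y, subpop))
```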
- Machines as Craftsmen: Localized Parameter Setting Optimization for Fused Filament Fabrication 3D Printing. John Gardner, Kevin Hunt, Austin Ebel, Evan Rose, Sean Zylich, Benjamin Jensen, Kristopher Wise, Emilie Siochi, and Godfrey Sauti. Advanced Materials Technologies, 2019.
Quality control and repeatability of 3D printing must be enhanced to fully unlock its utility beyond prototyping and noncritical applications. Machine learning is a potential solution to improving 3D printing performance and is explored for areas including flaw identification and property prediction. However, critical problems must be resolved before machine learning can truly enable 3D printing to reach its potential, including the very large data sets required for training and the inherently local nature of 3D printing where the optimum parameter settings vary throughout the part. This work outlines an end-to-end tool for integrating machine learning into the 3D printing process. The tool selects the ideal parameter settings at each location, taking into consideration factors such as geometry, hardware and material response times, and operator priorities. The tool demonstrates its usefulness by correcting for visual flaws common in fused filament fabrication parts. An image recognition neural network classifies local flaws in parts to create training data. A gradient boosting classifier then predicts the local flaws in future parts, based on location, geometry, and parameter settings. The tool selects optimum parameter settings based on the aforementioned factors. The resulting prints show increased quality over prints that use global parameters only.
@article{gardner2019machines, author = {Gardner, John and Hunt, Kevin and Ebel, Austin and Rose, Evan and Zylich, Sean and Jensen, Benjamin and Wise, Kristopher and Siochi, Emilie and Sauti, Godfrey}, title = {Machines as Craftsmen: Localized Parameter Setting Optimization for Fused Filament Fabrication 3D Printing}, journal = {Advanced Materials Technologies}, volume = {4}, number = {3}, pages = {1800653}, keywords = {3D printing, additive manufacturing, artificial intelligence, machine learning, process automation}, doi = {10.1002/admt.201800653}, url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/admt.201800653}, eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1002/admt.201800653}, year = {2019} }
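The abstract above describes a pipeline in which an image-recognition network labels local flaws to create training data, a gradient boosting classifier predicts flaws from location, geometry, and parameter settings, and the tool then picks the settings with the best predicted outcome for each region. The sketch below mimics only the last two stages with synthetic features and scikit-learn's `GradientBoostingClassifier`; the feature set, candidate settings, and thresholds are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: five local features (e.g. position, overhang angle,
# layer time) plus two candidate parameter settings (speed, temperature), labeled
# by a flaw class that an image-recognition step would have produced.
features = rng.normal(size=(5000, 7))
flawed = (features[:, 3] - 0.5 * features[:, 5] + rng.normal(scale=0.5, size=5000) > 0.8).astype(int)

flaw_model = GradientBoostingClassifier().fit(features, flawed)

def best_setting(local_geometry, candidate_settings):
    """Pick the (speed, temperature) pair with the lowest predicted flaw
    probability for one local region of the part."""
    rows = np.array([np.concatenate([local_geometry, s]) for s in candidate_settings])
    flaw_prob = flaw_model.predict_proba(rows)[:, 1]
    return candidate_settings[int(np.argmin(flaw_prob))]

candidates = [np.array([s, t]) for s in (30.0, 45.0, 60.0) for t in (200.0, 215.0, 230.0)]
print(best_setting(local_geometry=rng.normal(size=5), candidate_settings=candidates))
```

In a real slicer integration, a function like `best_setting` would be called per region and further constrained by hardware and material response times and operator priorities, as the paper emphasizes.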