Florian Tramèr (also published as: Florian Simon Tramèr)

Person information
- affiliation: ETH Zurich, Switzerland
2020 – today
- 2024
- [c53] Javier Rando, Florian Tramèr: Universal Jailbreak Backdoors from Poisoned Human Feedback. ICLR 2024
- [c52] Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr: Stealing part of a production language model. ICML 2024
- [c51] Shanglun Feng, Florian Tramèr: Privacy Backdoors: Stealing Data with Corrupted Pretrained Models. ICML 2024
- [c50] Francesco Pinto, Nathalie Rauschmayr, Florian Tramèr, Philip Torr, Federico Tombari: Extracting Training Data From Document-Based VQA Models. ICML 2024
- [c49] Florian Tramèr, Gautam Kamath, Nicholas Carlini: Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining. ICML 2024
- [c48] Lukas Fluri, Daniel Paleka, Florian Tramèr: Evaluating Superhuman Models with Consistency Checks. SaTML 2024: 194-232
- [c47] Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr: Evading Black-box Classifiers Without Breaking Eggs. SaTML 2024: 408-424
- [c46] Edoardo Debenedetti, Giorgio Severi, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Eric Wallace, Nicholas Carlini, Florian Tramèr: Privacy Side Channels in Machine Learning Systems. USENIX Security Symposium 2024
- [i69] Jonathan Hayase, Ema Borevkovic, Nicholas Carlini, Florian Tramèr, Milad Nasr: Query-Based Adversarial Prompt Generation. CoRR abs/2402.12329 (2024)
- [i68] Nicholas Carlini, Daniel Paleka, Krishnamurthy (Dj) Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr: Stealing Part of a Production Language Model. CoRR abs/2403.06634 (2024)
- [i67] Shanglun Feng, Florian Tramèr: Privacy Backdoors: Stealing Data with Corrupted Pretrained Models. CoRR abs/2404.00473 (2024)
- [i66] Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramèr, Hamed Hassani, Eric Wong: JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models. CoRR abs/2404.01318 (2024)
- [i65] Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, Benjamin L. Edelman, Zhaowei Zhang, Mario Günther, Anton Korinek, José Hernández-Orallo, Lewis Hammond, Eric J. Bigelow, Alexander Pan, Lauro Langosco, Tomasz Korbak, Heidi Zhang, Ruiqi Zhong, Seán Ó hÉigeartaigh, Gabriel Recchia, Giulio Corsi, Alan Chan, Markus Anderljung, Lilian Edwards, Yoshua Bengio, Danqi Chen, Samuel Albanie, Tegan Maharaj, Jakob N. Foerster, Florian Tramèr, He He, Atoosa Kasirzadeh, Yejin Choi, David Krueger: Foundational Challenges in Assuring Alignment and Safety of Large Language Models. CoRR abs/2404.09932 (2024)
- [i64] Javier Rando, Francesco Croce, Krystof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tramèr: Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs. CoRR abs/2404.14461 (2024)
- [i63] Michael Aerni, Jie Zhang, Florian Tramèr: Evaluations of Machine Learning Privacy Defenses are Misleading. CoRR abs/2404.17399 (2024)
- [i62] Edoardo Debenedetti, Javier Rando, Daniel Paleka, Silaghi Fineas Florin, Dragos Albastroiu, Niv Cohen, Yuval Lemberg, Reshmi Ghosh, Rui Wen, Ahmed Salem, Giovanni Cherubin, Santiago Zanella Béguelin, Robin Schmid, Victor Klemm, Takahiro Miki, Chenhao Li, Stefan Kraft, Mario Fritz, Florian Tramèr, Sahar Abdelnabi, Lea Schönherr: Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition. CoRR abs/2406.07954 (2024)
- [i61] Robert Hönig, Javier Rando, Nicholas Carlini, Florian Tramèr: Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI. CoRR abs/2406.12027 (2024)
- [i60] Edoardo Debenedetti, Jie Zhang, Mislav Balunovic, Luca Beurer-Kellner, Marc Fischer, Florian Tramèr: AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. CoRR abs/2406.13352 (2024)
- [i59] Debeshee Das, Jie Zhang, Florian Tramèr: Blind Baselines Beat Membership Inference Attacks for Foundation Models. CoRR abs/2406.16201 (2024)
- [i58] Fredrik Nestaas, Edoardo Debenedetti, Florian Tramèr: Adversarial Search Engine Optimization for Large Language Models. CoRR abs/2406.18382 (2024)
- [i57] Francesco Pinto, Nathalie Rauschmayr, Florian Tramèr, Philip Torr, Federico Tombari: Extracting Training Data from Document-Based VQA Models. CoRR abs/2407.08707 (2024)
- 2023
- [c45] Maura Pintor, Florian Simon Tramèr, Xinyun Chen: AISec '23: 16th ACM Workshop on Artificial Intelligence and Security. CCS 2023: 3666-3668
- [c44] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang: Quantifying Memorization Across Neural Language Models. ICLR 2023
- [c43] Nicholas Carlini, Florian Tramèr, Krishnamurthy (Dj) Dvijotham, Leslie Rice, Mingjie Sun, J. Zico Kolter: (Certified!!) Adversarial Robustness for Free! ICLR 2023
- [c42] Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Guha Thakurta, Nicolas Papernot, Chiyuan Zhang: Measuring Forgetting of Memorized Training Examples. ICLR 2023
- [c41] Chawin Sitawarin, Florian Tramèr, Nicholas Carlini: Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. ICML 2023: 32008-32032
- [c40] Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini: Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy. INLG 2023: 28-53
- [c39] Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Pang Wei Koh, Daphne Ippolito, Florian Tramèr, Ludwig Schmidt: Are aligned neural networks adversarially aligned? NeurIPS 2023
- [c38] Matthew Jagielski, Milad Nasr, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini, Florian Tramèr: Students Parrot Their Teachers: Membership Inference on Model Distillation. NeurIPS 2023
- [c37] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini: Counterfactual Memorization in Neural Language Models. NeurIPS 2023
- [c36] Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman: SNAP: Efficient Extraction of Private Properties with Poisoning. SP 2023: 400-417
- [c35] Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis: Tight Auditing of Differentially Private Machine Learning. USENIX Security Symposium 2023: 1631-1648
- [c34] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace: Extracting Training Data from Diffusion Models. USENIX Security Symposium 2023: 5253-5270
- [e2] Maura Pintor, Xinyun Chen, Florian Tramèr: Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, AISec 2023, Copenhagen, Denmark, 30 November 2023. ACM 2023 [contents]
- [i56] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace: Extracting Training Data from Diffusion Models. CoRR abs/2301.13188 (2023)
- [i55] Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis: Tight Auditing of Differentially Private Machine Learning. CoRR abs/2302.07956 (2023)
- [i54] Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr: Poisoning Web-Scale Training Datasets is Practical. CoRR abs/2302.10149 (2023)
- [i53] Keane Lucas, Matthew Jagielski, Florian Tramèr, Lujo Bauer, Nicholas Carlini: Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. CoRR abs/2302.13464 (2023)
- [i52] Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr: Evading Black-box Classifiers Without Breaking Eggs. CoRR abs/2306.02895 (2023)
- [i51] Lukas Fluri, Daniel Paleka, Florian Tramèr: Evaluating Superhuman Models with Consistency Checks. CoRR abs/2306.09983 (2023)
- [i50] Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramèr, Ludwig Schmidt: Are aligned neural networks adversarially aligned? CoRR abs/2306.15447 (2023)
- [i49] Nikhil Kandpal, Matthew Jagielski, Florian Tramèr, Nicholas Carlini: Backdoor Attacks for In-Context Learning with Language Models. CoRR abs/2307.14692 (2023)
- [i48] Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A. Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, Florian Tramèr: Privacy Side Channels in Machine Learning Systems. CoRR abs/2309.05610 (2023)
- [i47] Javier Rando, Florian Tramèr: Universal Jailbreak Backdoors from Poisoned Human Feedback. CoRR abs/2311.14455 (2023)
- [i46] Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, Katherine Lee: Scalable Extraction of Training Data from (Production) Language Models. CoRR abs/2311.17035 (2023)
- 2022
- [c33] Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini: Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CCS 2022: 2779-2792
- [c32] Ambra Demontis, Xinyun Chen, Florian Tramèr: AISec '22: 15th ACM Workshop on Artificial Intelligence and Security. CCS 2022: 3549-3551
- [c31] Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr: What Does it Mean for a Language Model to Preserve Privacy? FAccT 2022: 2280-2292
- [c30] Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto: Large Language Models Can Be Strong Differentially Private Learners. ICLR 2022
- [c29] Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr: Data Poisoning Won't Save You From Facial Recognition. ICLR 2022
- [c28] Florian Tramèr: Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. ICML 2022: 21692-21702
- [c27] Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr: The Privacy Onion Effect: Memorization is Relative. NeurIPS 2022
- [c26] Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini: Increasing Confidence in Adversarial Robustness Evaluations. NeurIPS 2022
- [c25] Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr: Membership Inference Attacks From First Principles. SP 2022: 1897-1914
- [e1] Ambra Demontis, Xinyun Chen, Florian Tramèr: Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, AISec 2022, Los Angeles, CA, USA, 11 November 2022. ACM 2022, ISBN 978-1-4503-9880-0 [contents]
- [i45] Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr: What Does it Mean for a Language Model to Preserve Privacy? CoRR abs/2202.05520 (2022)
- [i44] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang: Quantifying Memorization Across Neural Language Models. CoRR abs/2202.07646 (2022)
- [i43] Florian Tramèr, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini: Debugging Differential Privacy: A Case Study for Privacy Auditing. CoRR abs/2202.12219 (2022)
- [i42] Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini: Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CoRR abs/2204.00032 (2022)
- [i41] Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr: The Privacy Onion Effect: Memorization is Relative. CoRR abs/2206.10469 (2022)
- [i40] Nicholas Carlini, Florian Tramèr, Krishnamurthy Dvijotham, J. Zico Kolter: (Certified!!) Adversarial Robustness for Free! CoRR abs/2206.10550 (2022)
- [i39] Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini: Increasing Confidence in Adversarial Robustness Evaluations. CoRR abs/2206.13991 (2022)
- [i38] Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang: Measuring Forgetting of Memorized Training Examples. CoRR abs/2207.00099 (2022)
- [i37] Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman: SNAP: Efficient Extraction of Private Properties with Poisoning. CoRR abs/2208.12348 (2022)
- [i36] Chawin Sitawarin, Florian Tramèr, Nicholas Carlini: Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. CoRR abs/2210.03297 (2022)
- [i35] Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, Florian Tramèr: Red-Teaming the Stable Diffusion Safety Filter. CoRR abs/2210.04610 (2022)
- [i34] Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy. CoRR abs/2210.17546 (2022)
- [i33] Florian Tramèr, Gautam Kamath, Nicholas Carlini: Considerations for Differentially Private Learning with Large-Scale Public Pretraining. CoRR abs/2212.06470 (2022)
- 2021
- [b1] Florian Tramèr: Measuring and enhancing the security of machine learning. Stanford University, USA, 2021
- [j5] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao: Advances and Open Problems in Federated Learning. Found. Trends Mach. Learn. 14(1-2): 1-210 (2021)
- [c24] Hui Xu, Guanpeng Li, Homa Alemzadeh, Rakesh Bobba, Varun Chandrasekaran, David E. Evans, Nicolas Papernot, Karthik Pattabiraman, Florian Tramèr: Fourth International Workshop on Dependable and Secure Machine Learning - DSML 2021. DSN Workshops 2021: xvi
- [c23] Florian Tramèr, Dan Boneh: Differentially Private Learning Needs Better Features (or Much More Data). ICLR 2021
- [c22] Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot: Label-Only Membership Inference Attacks. ICML 2021: 1964-1974
- [c21] Charlie Hou, Mingxun Zhou, Yan Ji, Phil Daian, Florian Tramèr, Giulia Fanti, Ari Juels: SquirRL: Automating Attack Analysis on Blockchain Incentive Mechanisms with Deep Reinforcement Learning. NDSS 2021
- [c20] Mani Malek Esmaeili, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramèr: Antipodes of Label Differential Privacy: PATE and ALIBI. NeurIPS 2021: 6934-6945
- [c19] Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta, Florian Tramèr: Is Private Learning Possible with Instance Encoding? SP 2021: 410-427
- [c18] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel: Extracting Training Data from Large Language Models. USENIX Security Symposium 2021: 2633-2650
- [i32] Mani Malek, Ilya Mironov, Karthik Prasad, Igor Shilov, Florian Tramèr: Antipodes of Label Differential Privacy: PATE and ALIBI. CoRR abs/2106.03408 (2021)
- [i31] Evani Radiya-Dixit, Florian Tramèr: Data Poisoning Won't Save You From Facial Recognition. CoRR abs/2106.14851 (2021)
- [i30] Florian Tramèr: Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. CoRR abs/2107.11630 (2021)
- [i29] Nicholas Carlini, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Florian Tramèr: NeuraCrypt is not private. CoRR abs/2108.07256 (2021)
- [i28] Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto: Large Language Models Can Be Strong Differentially Private Learners. CoRR abs/2110.05679 (2021)
- [i27] Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr: Membership Inference Attacks From First Principles. CoRR abs/2112.03570 (2021)
- [i26] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini: Counterfactual Memorization in Neural Language Models. CoRR abs/2112.12938 (2021)
- 2020
- [c17] Homa Alemzadeh, Rakesh Bobba, Varun Chandrasekaran, David E. Evans, Nicolas Papernot, Karthik Pattabiraman, Florian Tramèr: Third International Workshop on Dependable and Secure Machine Learning - DSML 2020. DSN Workshops 2020: x
- [c16] Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. ICML 2020: 9561-9571
- [c15] Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry: On Adaptive Attacks to Adversarial Example Defenses. NeurIPS 2020
- [c14] Edward Chou, Florian Tramèr, Giancarlo Pellegrino: SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems. SP (Workshops) 2020: 48-54
- [c13] Florian Tramèr, Dan Boneh, Kenny Paterson: Remote Side-Channel Attacks on Anonymous Transactions. USENIX Security Symposium 2020: 2739-2756
- [i25] Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. CoRR abs/2002.04599 (2020)
- [i24] Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry: On Adaptive Attacks to Adversarial Example Defenses. CoRR abs/2002.08347 (2020)
- [i23] Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot: Label-Only Membership Inference Attacks. CoRR abs/2007.14321 (2020)
- [i22] Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, Florian Tramèr: An Attack on InstaHide: Is Private Learning Possible with Instance Encoding? CoRR abs/2011.05315 (2020)
- [i21] Florian Tramèr, Dan Boneh: Differentially Private Learning Needs Better Features (or Much More Data). CoRR abs/2011.11660 (2020)
- [i20] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel: Extracting Training Data from Large Language Models. CoRR abs/2012.07805 (2020)
- [i19] Florian Tramèr, Dan Boneh, Kenneth G. Paterson: Remote Side-Channel Attacks on Anonymous Transactions. IACR Cryptol. ePrint Arch. 2020: 220 (2020)
2010 – 2019
- 2019
- [j4] Lorenz Breidenbach, Philip Daian, Florian Tramèr, Ari Juels: The Hydra Framework for Principled, Automated Bug Bounties. IEEE Secur. Priv. 17(4): 53-61 (2019)
- [c12] Florian Tramèr, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, Dan Boneh: AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning. CCS 2019: 2005-2021
- [c11] Florian Tramèr, Dan Boneh: Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware. ICLR 2019
- [c10] Florian Tramèr, Dan Boneh: Adversarial Training and Robustness for Multiple Perturbations. NeurIPS 2019: 5858-5868
- [i18] Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot: Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness. CoRR abs/1903.10484 (2019)
- [i17] Florian Tramèr, Dan Boneh: Adversarial Training and Robustness for Multiple Perturbations. CoRR abs/1904.13000 (2019)
- [i16] Charlie Hou, Mingxun Zhou, Yan Ji, Phil Daian, Florian Tramèr, Giulia Fanti, Ari Juels: SquirRL: Automating Attack Discovery on Blockchain Incentive Mechanisms with Deep Reinforcement Learning. CoRR abs/1912.01798 (2019)
- [i15] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao: Advances and Open Problems in Federated Learning. CoRR abs/1912.04977 (2019)
- 2018
- [c9] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, Patrick D. McDaniel: Ensemble Adversarial Training: Attacks and Defenses. ICLR (Poster) 2018
- [c8] Lorenz Breidenbach, Philip Daian, Florian Tramèr, Ari Juels: Enter the Hydra: Towards Principled Bug Bounties and Exploit-Resistant Smart Contracts. USENIX Security Symposium 2018: 1335-1352
- [c7] Dawn Song, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramèr, Atul Prakash, Tadayoshi Kohno: Physical Adversarial Examples for Object Detectors. WOOT @ USENIX Security Symposium 2018
- [i14] Florian Tramèr, Dan Boneh: Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware. CoRR abs/1806.03287 (2018)
- [i13] Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramèr, Atul Prakash, Tadayoshi Kohno, Dawn Song: Physical Adversarial Examples for Object Detectors. CoRR abs/1807.07769 (2018)
- [i12] Florian Tramèr, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, Dan Boneh: Ad-versarial: Defeating Perceptual Ad-Blocking. CoRR abs/1811.03194 (2018)
- [i11] Edward Chou, Florian Tramèr, Giancarlo Pellegrino, Dan Boneh: SentiNet: Detecting Physical Attacks Against Deep Learning Systems. CoRR abs/1812.00292 (2018)
- 2017
- [j3] Jean Louis Raisaro, Florian Tramèr,