Nicholas Carlini
Overall Influence Rank: #142,166 (Most Influential People Now)
Nicholas Carlini's AcademicInfluence.com Rankings
Rankings by discipline (world and historical):
- Computer Science: World Rank #6,888; Historical Rank #7,255
- Machine Learning: World Rank #2,380; Historical Rank #2,410
- Artificial Intelligence: World Rank #2,663; Historical Rank #2,704
- Database: World Rank #3,968; Historical Rank #4,127

Nicholas Carlini's Degrees
- PhD in Computer Science, University of California, Berkeley
- Bachelor's in Computer Science, University of California, Berkeley
Why Is Nicholas Carlini Influential?
Judging from the publication record below, Carlini is best known for his research on adversarial machine learning and on the security and privacy of neural networks, including attacks on adversarial-example defenses and studies of training-data memorization and extraction in large language models.
Nicholas Carlini's Published Works
[Chart: number of citations in a given year to any of this author's works]
[Chart: total citations to the works the author published in a given year, highlighting when the author's most important work appeared]
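For illustration only, here is a minimal Python sketch of how the second chart's metric could be computed, assuming each work is reduced to a (publication year, citation count) pair as in the list below; the sample data is a small hypothetical subset of that list:

    from collections import defaultdict

    # Hypothetical sample of (publication_year, citations) pairs taken from
    # the list below; the full chart would aggregate every published work.
    works = [(2016, 5940), (2018, 2499), (2019, 1810), (2020, 1562), (2017, 1522)]

    # Sum citations over the works the author published in each year.
    citations_by_pub_year = defaultdict(int)
    for year, citations in works:
        citations_by_pub_year[year] += citations

    for year in sorted(citations_by_pub_year):
        print(year, citations_by_pub_year[year])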
Published Works (each entry lists publication year, then citation count)
- Towards Evaluating the Robustness of Neural Networks (2016) (5940)
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples (2018) (2499)
- MixMatch: A Holistic Approach to Semi-Supervised Learning (2019) (1810)
- FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence (2020) (1562)
- Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods (2017) (1522)
- Audio Adversarial Examples: Targeted Attacks on Speech-to-Text (2018) (872)
- On Evaluating Adversarial Robustness (2019) (583)
- The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks (2018) (576)
- On Adaptive Attacks to Adversarial Example Defenses (2020) (539)
- Extracting Training Data from Large Language Models (2020) (516)
- Hidden Voice Commands (2016) (514)
- ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring (2019) (410)
- Technical Report on the CleverHans v2.1.0 Adversarial Examples Library (2016) (370)
- Control-Flow Bending: On the Effectiveness of Control-Flow Integrity (2015) (356)
- Adversarial Example Defense: Ensembles of Weak Defenses are not Strong (2017) (337)
- ROP is Still Dangerous: Breaking Modern Defenses (2014) (331)
- Defensive Distillation is Not Robust to Adversarial Examples (2016) (298)
- Measuring Robustness to Natural Distribution Shifts in Image Classification (2020) (288)
- Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition (2019) (282)
- Adversarial Examples Are a Natural Consequence of Test Error in Noise (2019) (268)
- ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring (2020) (256)
- MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples (2017) (211)
- Label-Only Membership Inference Attacks (2020) (190)
- High Accuracy and High Fidelity Extraction of Neural Networks (2019) (189)
- The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets (2018) (164)
- On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses (2018) (143)
- Deduplicating Training Data Makes Language Models Better (2021) (123)
- Membership Inference Attacks From First Principles (2021) (104)
- An Evaluation of the Google Chrome Extension Security Architecture (2012) (103)
- Provably Minimally-Distorted Adversarial Examples (2017) (98)
- Quantifying Memorization Across Neural Language Models (2022) (95)
- Evading Deepfake-Image Detectors with White- and Black-Box Attacks (2020) (94)
- Unsolved Problems in ML Safety (2021) (91)
- Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning (2021) (89)
- Unrestricted Adversarial Examples (2018) (83)
- Cryptanalytic Extraction of Neural Network Models (2020) (82)
- Ground-Truth Adversarial Examples (2017) (76)
- Stateful Detection of Black-Box Adversarial Attacks (2019) (64)
- Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations (2020) (64)
- Poisoning and Backdooring Contrastive Learning (2021) (55)
- AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation (2021) (43)
- Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness (2019) (39)
- Extracting Training Data from Diffusion Models (2023) (37)
- An Attack on InstaHide: Is Private Learning Possible with Instance Encoding? (2020) (35)
- High-Fidelity Extraction of Neural Network Models (2019) (32)
- Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples? (2019) (31)
- Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications (2019) (27)
- Poisoning the Unlabeled Dataset of Semi-Supervised Learning (2021) (27)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets (2022) (25)
- Counterfactual Memorization in Neural Language Models (2021) (23)
- Is Private Learning Possible with Instance Encoding? (2021) (22)
- MLSys: The New Frontier of Machine Learning Systems (2019) (21)
- Handcrafted Backdoors in Deep Neural Networks (2021) (20)
- SysML: The New Frontier of Machine Learning Systems (2019) (20)
- Prototypical Examples in Deep Learning: Metrics, Characteristics, and Utility (2018) (18)
- When Robustness Doesn’t Promote Robustness: Synthetic vs. Natural Distribution Shifts on ImageNet (2019) (15)
- Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent (2021) (15)
- (Certified!!) Adversarial Robustness for Free! (2022) (15)
- Measuring Forgetting of Memorized Training Examples (2022) (14)
- Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2022) (14)
- Considerations for Differentially Private Learning with Large-Scale Public Pretraining (2022) (10)
- The Privacy Onion Effect: Memorization is Relative (2022) (9)
- Debugging Differential Privacy: A Case Study for Privacy Auditing (2022) (9)
- NeuraCrypt is not private (2021) (8)
- A critique of the DeepSec Platform for Security Analysis of Deep Learning Models (2019) (8)
- A Partial Break of the Honeypots Defense to Catch Adversarial Attacks (2020) (7)
- Operator-Assisted Tabulation of Optical Scan Ballots (2012) (6)
- Adversarial Forces of Physical Models (2020) (5)
- Poisoning Web-Scale Training Datasets is Practical (2023) (5)
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples (2021) (5)
- Increasing Confidence in Adversarial Robustness Evaluations (2022) (5)
- Evaluation and Design of Robust Neural Network Defenses (2018) (5)
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" (2022) (4)
- Improved Support for Machine-assisted Ballot-level Audits (2013) (4)
- Lessons Learned from Evaluating the Robustness of Defenses to Adversarial Examples (2019) (3)
- Part-Based Models Improve Adversarial Robustness (2022) (2)
- Session details: Session 1: Adversarial Machine Learning (2020) (0)
- Students Parrot Their Teachers: Membership Inference on Model Distillation (2023) (0)
- Security of Machine Learning (2019) (0)
- Session details: Session 2A: Machine Learning for Cybersecurity (2021) (0)
- Data Poisoning Won't Save You From Facial Recognition (2022) (0)
- Analysis of the Russian Presidential Election 2012 Using Experiments (2013) (0)
- AISec'19: 12th ACM Workshop on Artificial Intelligence and Security (2019) (0)
- ControlFlag: A Self-supervised Idiosyncratic Pattern Detection System for Software Control Structures (2020) (0)
- Tight Auditing of Differentially Private Machine Learning (2023) (0)
- AISec'20: 13th Workshop on Artificial Intelligence and Security (2020) (0)
- Supplementary Material for Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition (2019) (0)
- Language-based isolation for cloud computing: An analysis of Google App Engine (2011) (0)
- Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators (2023) (0)
- Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems (2022) (0)
- Deep Learning and Security Workshop (DLS 2020) (2020) (0)
- Session details: Session 1: Adversarial Machine Learning (2021) (0)
- Publishing Efficient On-device Models Increases Adversarial Vulnerability (2022) (0)
- Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning (2020) (0)
- Verifying a Binary Micro-Hypervisor Intercept Handler (2013) (0)
- Effective Robustness against Natural Distribution Shifts for Models with Different Training Data (2023) (0)
- Security of Machine Learning (Dagstuhl Seminar 22281) (2022) (0)
- Anatomically Constrained ResNets Exhibit Opponent Receptive Fields; So What? (2020) (0)
- Poster: A critique of the DEEPSEC Platform for Security Analysis of Deep Learning Models (2019) (0)