Himabindu Lakkaraju
Indian-American computer scientist
Himabindu Lakkaraju's Degrees
- PhD, Computer Science, Stanford University
- Master's, Computer Science, Stanford University
Why Is Himabindu Lakkaraju Influential?
According to Wikipedia, Himabindu "Hima" Lakkaraju is an Indian-American computer scientist who works on machine learning, artificial intelligence, algorithmic bias, and AI accountability. She is currently an Assistant Professor at the Harvard Business School and is also affiliated with the Department of Computer Science at Harvard University. Lakkaraju is known for her work on explainable machine learning. More broadly, her research focuses on developing machine learning models and algorithms that are interpretable, transparent, fair, and reliable. She also investigates the practical and ethical implications of deploying machine learning models in domains involving high-stakes decisions such as healthcare, criminal justice, business, and education. Lakkaraju was named one of the world's top Innovators Under 35 by both Vanity Fair and the MIT Technology Review.
Himabindu Lakkaraju's Published Works
- Human Decisions and Machine Predictions (2017) (697)
- Interpretable Decision Sets: A Joint Framework for Description and Prediction (2016) (590)
- Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods (2019) (391)
- Interpretable & Explorable Approximations of Black Box Models (2017) (207)
- Faithful and Customizable Explanations of Black Box Models (2019) (198)
- "How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations (2019) (156)
- Mining big data to extract patterns and predict real-life outcomes. (2016) (142)
- Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration (2016) (142)
- What's in a Name? Understanding the Interplay between Titles, Content, and Communities in Social Media (2013) (134)
- A Machine Learning Framework to Identify Students at Risk of Adverse Academic Outcomes (2015) (133)
- Exploiting Coherence for the Simultaneous Discovery of Latent Facets and associated Sentiments (2011) (117)
- The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables (2017) (111)
- Aspect Specific Sentiment Analysis Using Hierarchical Deep Learning (2014) (94)
- Learning Cost-Effective and Interpretable Treatment Regimes (2017) (73)
- Towards a Unified Framework for Fair and Stable Graph Representation Learning (2021) (64)
- The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective (2022) (62)
- How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods (2019) (58)
- Reliable Post hoc Explanations: Modeling Uncertainty in Explainability (2020) (55)
- Who, when, and why: a machine learning approach to prioritizing students at risk of not graduating high school on time (2015) (54)
- Counterfactual Explanations Can Be Manipulated (2021) (48)
- Robust and Stable Black Box Explanations (2020) (45)
- Attention prediction on social media brand pages (2011) (43)
- Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses (2020) (41)
- Towards Robust and Reliable Algorithmic Recourse (2021) (35)
- Towards the Unification and Robustness of Perturbation and Gradient Based Explanations (2021) (26)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective (2022) (23)
- Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring (2020) (22)
- Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis (2021) (22)
- A Bayesian Framework for Modeling Human Evaluations (2015) (21)
- Fair Influence Maximization: A Welfare Optimization Approach (2020) (21)
- Can I Still Trust You?: Understanding the Impact of Distribution Shifts on Algorithmic Recourses (2020) (19)
- Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods (2021) (17)
- Algorithmic Recourse in the Wild: Understanding the Impact of Data and Model Shifts (2020) (17)
- How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations (2020) (16)
- Dynamic Multi-relational Chinese Restaurant Process for Analyzing Influences on Users in Social Media (2012) (15)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (2022) (14)
- Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations (2022) (13)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making (2020) (13)
- Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations (2022) (12)
- Discovering Blind Spots of Predictive Models: Representations and Policies for Guided Exploration (2016) (11)
- Interpretable and Interactive Summaries of Actionable Recourses (2020) (10)
- Confusions over Time: An Interpretable Bayesian Model to Characterize Trends in Decision Making (2016) (10)
- On the Connections between Counterfactual Explanations and Adversarial Examples (2021) (8)
- Learning Models for Actionable Recourse (2020) (8)
- Algorithmic Recourse in the Face of Noisy Human Responses (2022) (7)
- Incorporating Interpretable Output Constraints in Bayesian Neural Networks (2020) (7)
- What will it take to generate fairness-preserving explanations? (2021) (7)
- Smart news feeds for social networks using scalable joint latent factor models (2011) (7)
- Rethinking Stability for Attribution-based Explanations (2022) (7)
- Evaluating explainability for graph neural networks (2022) (6)
- Learning Cost-Effective Treatment Regimes using Markov Decision Processes (2016) (6)
- Learning Cost-Effective and Interpretable Regimes for Treatment Recommendation (2016) (5)
- Discovering Unknown Unknowns of Predictive Models (2016) (4)
- Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations (2021) (4)
- Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse (2022) (4)
- On the Privacy Risks of Algorithmic Recourse (2022) (3)
- TalkToModel: Understanding Machine Learning Models With Open Ended Dialogues (2022) (3)
- Efficient Training of Low-Curvature Neural Networks (2022) (2)
- Model Monitoring in Practice: Lessons Learned and Open Challenges (2022) (2)
- Feature Attributions and Counterfactual Explanations Can Be Manipulated (2021) (2)
- Learning Under Adversarial and Interventional Shifts (2021) (2)
- TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations (2022) (2)
- Learning Cost-Effective and Interpretable Treatment Regimes for Judicial Bail Decisions (2016) (2)
- Robust Black Box Explanations Under Distribution Shift (2020) (2)
- General Co-Chairs (2022) (1)
- Towards Reliable and Practicable Algorithmic Recourse (2021) (1)
- TEM: a novel perspective to modeling content on microblogs (2012) (1)
- A Human-Centric Take on Model Monitoring (2022) (1)
- Human-centric machine learning: enabling machine learning for high-stakes decision-making (2018) (1)
- Psycho-Demographic Analysis of the Facebook Rainbow Campaign (2016) (1)
- Ensuring Actionable Recourse via Adversarial Training (2020) (1)
- A Non Parametric Theme Event Topic Model for Characterizing Microblogs (2011) (1)
- Unified Modeling of User Activities on Social Networking Sites (2011) (1)
- Towards Robust Off-Policy Evaluation via Human Inputs (2022) (1)
- Data poisoning attacks on off-policy policy evaluation methods (2022) (0)
- The Recon Approach: A New Direction for Machine Learning in Criminal Law (2022) (0)
- Tutorials at The Web Conference 2023 (2023) (0)
- OpenXAI: Towards a Transparent Evaluation of Post hoc Model Explanations (2022) (0)
- Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten (2023) (0)
- A Human-Centric Perspective on Model Monitoring (2022) (0)
- Rethinking Stability for Attribution-based Explanations (2022) (0)
- Efficiently Training Low-Curvature Neural Networks (2022) (0)
- Let Users Decide: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse (2022) (0)
- Flatten the Curve: Efficiently Training Low-Curvature Neural Networks (2022) (0)
What Schools Are Affiliated With Himabindu Lakkaraju?
Himabindu Lakkaraju is affiliated with the following schools:
- Stanford University
- Harvard University