Author Information
Krishnaram Kenthapadi (Fiddler AI)
Hima Lakkaraju (Harvard)
Nazneen Rajani (Hugging Face)

Nazneen is a Research Lead at Hugging Face, a startup with a mission to democratize ML, where she leads the direction on LLMs trained using RLHF. Before Hugging Face, she worked at Salesforce Research with Richard Socher, leading a team of researchers focused on building robust natural language generation systems based on LLMs. Her expertise lies in training and evaluating LLMs, with a focus on interpretability, robustness, factuality, and commonsense reasoning. She completed her Ph.D. in CS at UT-Austin. Nazneen has published over 50 papers at ACL, EMNLP, NAACL, NeurIPS, and ICLR, and her research has been covered by Quanta Magazine, VentureBeat, SiliconAngle, ZDNet, and Datanami. More details about her work can be found at https://www.nazneenrajani.com/.
More from the Same Authors
- 2021 : Towards the Unification and Robustness of Perturbation and Gradient Based Explanations »
  Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- 2021 : On the Connections between Counterfactual Explanations and Adversarial Examples »
  Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju
- 2021 : Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations »
  Chirag Agarwal · Marinka Zitnik · Hima Lakkaraju
- 2021 : What will it take to generate fairness-preserving explanations? »
  Jessica Dai · Sohini Upadhyay · Hima Lakkaraju
- 2021 : Feature Attributions and Counterfactual Explanations Can Be Manipulated »
  Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- 2021 : Towards Robust and Reliable Algorithmic Recourse »
  Sohini Upadhyay · Shalmali Joshi · Hima Lakkaraju
- 2021 : Reliable Post hoc Explanations: Modeling Uncertainty in Explainability »
  Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- 2021 : Towards a Unified Framework for Fair and Stable Graph Representation Learning »
  Chirag Agarwal · Hima Lakkaraju · Marinka Zitnik
- 2023 : Fair Machine Unlearning: Data Removal while Mitigating Disparities »
  Alex Oesterling · Jiaqi Ma · Flavio Calmon · Hima Lakkaraju
- 2023 : Evaluating the Causal Reasoning Abilities of Large Language Models »
  Isha Puri · Hima Lakkaraju
- 2023 : Himabindu Lakkaraju - Regulating Explainable AI: Technical Challenges and Opportunities »
  Hima Lakkaraju
- 2023 : Efficient Estimation of Local Robustness of Machine Learning Models »
  Tessa Han · Suraj Srinivas · Hima Lakkaraju
- 2022 Workshop: New Frontiers in Adversarial Machine Learning »
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo
- 2022 Poster: Generating Distributional Adversarial Examples to Evade Statistical Detectors »
  Yigitcan Kaya · Muhammad Bilal Zafar · Sergul Aydore · Nathalie Rauschmayr · Krishnaram Kenthapadi
- 2022 Spotlight: Generating Distributional Adversarial Examples to Evade Statistical Detectors »
  Yigitcan Kaya · Muhammad Bilal Zafar · Sergul Aydore · Nathalie Rauschmayr · Krishnaram Kenthapadi
- 2021 Workshop: ICML Workshop on Algorithmic Recourse »
  Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez-Rodriguez · Isabel Valera · Hima Lakkaraju
- 2021 : Towards Robust and Reliable Model Explanations for Healthcare »
  Hima Lakkaraju
- 2021 Poster: Differentially Private Query Release Through Adaptive Projection »
  Sergul Aydore · William Brown · Michael Kearns · Krishnaram Kenthapadi · Luca Melis · Aaron Roth · Ankit Siva
- 2021 Oral: Differentially Private Query Release Through Adaptive Projection »
  Sergul Aydore · William Brown · Michael Kearns · Krishnaram Kenthapadi · Luca Melis · Aaron Roth · Ankit Siva
- 2021 Poster: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations »
  Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- 2021 Spotlight: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations »
  Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- 2021 : Key Takeaways, Conclusion, and Discussion (including Q&A) »
  Krishnaram Kenthapadi
- 2021 : Responsible AI Case Studies at Amazon »
  Krishnaram Kenthapadi
- 2021 : Responsible AI Case Studies at LinkedIn »
  Krishnaram Kenthapadi
- 2021 : Introduction and Brief Overview of Responsible AI »
  Krishnaram Kenthapadi
- 2021 Tutorial: Responsible AI in Industry: Practical Challenges and Lessons Learned »
  Krishnaram Kenthapadi · Ben Packer · Mehrnoosh Sameki · Nashlie Sephus
- 2021 : Opening remarks »
  Krishnaram Kenthapadi
- 2020 Poster: Robust and Stable Black Box Explanations »
  Hima Lakkaraju · Nino Arsov · Osbert Bastani