Aadirupa Saha
2020 – today

2024
- [c38] Aadirupa Saha, Vitaly Feldman, Yishay Mansour, Tomer Koren: Faster Convergence with MultiWay Preferences. AISTATS 2024: 433-441
- [c37] Avrim Blum, Princewill Okoroafor, Aadirupa Saha, Kevin M. Stangl: On the Vulnerability of Fairness Constrained Learning to Malicious Noise. AISTATS 2024: 4096-4104
- [c36] Rohan Deb, Aadirupa Saha, Arindam Banerjee: Think Before You Duel: Understanding Complexities of Preference Learning under Constrained Resources. AISTATS 2024: 4546-4554
- [c35] Avrim Blum, Meghal Gupta, Gene Li, Naren Sarayu Manoj, Aadirupa Saha, Yuanyuan Yang: Dueling Optimization with a Monotone Adversary. ALT 2024: 221-243
- [c34] Thomas Kleine Buening, Aadirupa Saha, Christos Dimitrakakis, Haifeng Xu: Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation. ICLR 2024
- [c33] Aadirupa Saha, Branislav Kveton: Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling. ICLR 2024
- [i35] Aadirupa Saha, Pierre Gaillard: Stop Relying on No-Choice and Do not Repeat the Moves: Optimal, Efficient and Practical Algorithms for Assortment Optimization. CoRR abs/2402.18917 (2024)
- [i34] Aadirupa Saha, Hilal Asi: DP-Dueling: Learning from Preference Feedback without Compromising User Privacy. CoRR abs/2403.15045 (2024)
- [i33] Thomas Kleine Buening, Aadirupa Saha, Christos Dimitrakakis, Haifeng Xu: Strategic Linear Contextual Bandits. CoRR abs/2406.00551 (2024)

2023
- [c32] Thomas Kleine Buening, Aadirupa Saha: ANACONDA: An Improved Dynamic Regret Algorithm for Adaptive Non-Stationary Dueling Bandits. AISTATS 2023: 3854-3878
- [c31] Aadirupa Saha, Aldo Pacchiano, Jonathan Lee: Dueling RL: Reinforcement Learning with Trajectory Preferences. AISTATS 2023: 6263-6289
- [c30] Pierre Gaillard, Aadirupa Saha, Soham Dan: One Arrow, Two Kills: A Unified Framework for Achieving Optimal Regret Guarantees in Sleeping Bandits. AISTATS 2023: 7755-7773
- [c29] Kumar Kshitij Patel, Lingxiao Wang, Aadirupa Saha, Nathan Srebro: Federated Online and Bandit Convex Optimization. ICML 2023: 27439-27460
- [c28] Han Shao, Lee Cohen, Avrim Blum, Yishay Mansour, Aadirupa Saha, Matthew R. Walter: Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback. NeurIPS 2023
- [i32] Han Shao, Lee Cohen, Avrim Blum, Yishay Mansour, Aadirupa Saha, Matthew R. Walter: Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback. CoRR abs/2302.03805 (2023)
- [i31] Aadirupa Saha, Branislav Kveton: Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling. CoRR abs/2303.09033 (2023)
- [i30] Avrim Blum, Princewill Okoroafor, Aadirupa Saha, Kevin Stangl: On the Vulnerability of Fairness Constrained Learning to Malicious Noise. CoRR abs/2307.11892 (2023)
- [i29] Avrim Blum, Meghal Gupta, Gene Li, Naren Sarayu Manoj, Aadirupa Saha, Yuanyuan Yang: Dueling Optimization with a Monotone Adversary. CoRR abs/2311.11185 (2023)
- [i28] Thomas Kleine Buening, Aadirupa Saha, Christos Dimitrakakis, Haifeng Xu: Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation. CoRR abs/2311.15647 (2023)
- [i27] Kumar Kshitij Patel, Lingxiao Wang, Aadirupa Saha, Nathan Srebro: Federated Online and Bandit Convex Optimization. CoRR abs/2311.17586 (2023)
- [i26] Aadirupa Saha, Vitaly Feldman, Tomer Koren, Yishay Mansour: Faster Convergence with Multiway Preferences. CoRR abs/2312.11788 (2023)
- [i25] Rohan Deb, Aadirupa Saha: Think Before You Duel: Understanding Complexities of Preference Learning under Constrained Resources. CoRR abs/2312.17229 (2023)

2022
- [c27] Aadirupa Saha, Suprovat Ghoshal: Exploiting Correlation to Achieve Faster Learning Rates in Low-Rank Preference Bandits. AISTATS 2022: 456-482
- [c26] Aadirupa Saha, Akshay Krishnamurthy: Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability. ALT 2022: 968-994
- [c25] Viktor Bengs, Aadirupa Saha, Eyke Hüllermeier: Stochastic Contextual Dueling Bandits under Linear Stochastic Transitivity Models. ICML 2022: 1764-1786
- [c24] Aadirupa Saha, Pierre Gaillard: Versatile Dueling Bandits: Best-of-both World Analyses for Learning from Relative Preferences. ICML 2022: 19011-19026
- [c23] Aadirupa Saha, Shubham Gupta: Optimal and Efficient Dynamic Regret Algorithms for Non-Stationary Dueling Bandits. ICML 2022: 19027-19049
- [i24] Viktor Bengs, Aadirupa Saha, Eyke Hüllermeier: Stochastic Contextual Dueling Bandits under Linear Stochastic Transitivity Models. CoRR abs/2202.04593 (2022)
- [i23] Aadirupa Saha, Pierre Gaillard: Versatile Dueling Bandits: Best-of-both-World Analyses for Online Learning from Preferences. CoRR abs/2202.06694 (2022)
- [i22] Suprovat Ghoshal, Aadirupa Saha: Exploiting Correlation to Achieve Faster Learning Rates in Low-Rank Preference Bandits. CoRR abs/2202.11795 (2022)
- [i21] Aadirupa Saha, Tomer Koren, Yishay Mansour: Dueling Convex Optimization with General Preferences. CoRR abs/2210.02562 (2022)
- [i20] Thomas Kleine Buening, Aadirupa Saha: ANACONDA: An Improved Dynamic Regret Algorithm for Adaptive Non-Stationary Dueling Bandits. CoRR abs/2210.14322 (2022)
- [i19] Pierre Gaillard, Aadirupa Saha, Soham Dan: One Arrow, Two Kills: An Unified Framework for Achieving Optimal Regret Guarantees in Sleeping Bandits. CoRR abs/2210.14998 (2022)

2021
- [c22] Yonathan Efroni, Nadav Merlis, Aadirupa Saha, Shie Mannor: Confidence-Budget Matching for Sequential Budgeted Learning. ICML 2021: 2937-2947
- [c21] Aadirupa Saha, Tomer Koren, Yishay Mansour: Adversarial Dueling Bandits. ICML 2021: 9235-9244
- [c20] Aadirupa Saha, Tomer Koren, Yishay Mansour: Dueling Convex Optimization. ICML 2021: 9245-9254
- [c19] Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain: Optimal regret algorithm for Pseudo-1d Bandit Convex Optimization. ICML 2021: 9255-9264
- [c18] Aadirupa Saha, Pierre Gaillard: Dueling Bandits with Adversarial Sleeping. NeurIPS 2021: 27761-27771
- [c17] Aadirupa Saha: Optimal Algorithms for Stochastic Contextual Preference Bandits. NeurIPS 2021: 30050-30062
- [c16] Robert Tyler Loftin, Aadirupa Saha, Sam Devlin, Katja Hofmann: Strategically efficient exploration in competitive multi-agent reinforcement learning. UAI 2021: 1587-1596
- [i18] Yonathan Efroni, Nadav Merlis, Aadirupa Saha, Shie Mannor: Confidence-Budget Matching for Sequential Budgeted Learning. CoRR abs/2102.03400 (2021)
- [i17] Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain: Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization. CoRR abs/2102.07387 (2021)
- [i16] Shubham Gupta, Aadirupa Saha, Sumeet Katariya: Pure Exploration with Structured Preference Feedback. CoRR abs/2104.05294 (2021)
- [i15] Aadirupa Saha, Pierre Gaillard: Dueling Bandits with Adversarial Sleeping. CoRR abs/2107.02274 (2021)
- [i14] Robert Tyler Loftin, Aadirupa Saha, Sam Devlin, Katja Hofmann: Strategically Efficient Exploration in Competitive Multi-agent Reinforcement Learning. CoRR abs/2107.14698 (2021)
- [i13] Shubham Gupta, Aadirupa Saha: Optimal and Efficient Dynamic Regret Algorithms for Non-Stationary Dueling Bandits. CoRR abs/2111.03917 (2021)
- [i12] Aldo Pacchiano, Aadirupa Saha, Jonathan Lee: Dueling RL: Reinforcement Learning with Trajectory Preferences. CoRR abs/2111.04850 (2021)
- [i11] Aadirupa Saha, Akshay Krishnamurthy: Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability. CoRR abs/2111.12306 (2021)
- [i10] Prateek Chanda, Aadirupa Saha: A Sketch Based Game Theoretic Approach to Detect Anomalous Dense Sub-Communities in Large Data Streams. CoRR abs/2111.15525 (2021)

2020
- [c15] Aadirupa Saha: Polytime Decomposition of Generalized Submodular Base Polytopes with Efficient Sampling. ACML 2020: 625-640
- [c14] Aadirupa Saha, Aditya Gopalan: Best-item Learning in Random Utility Models with Subset Choices. AISTATS 2020: 4281-4291
- [c13] Aadirupa Saha, Pierre Gaillard, Michal Valko: Improved Sleeping Bandits with Stochastic Action Sets and Adversarial Rewards. ICML 2020: 8357-8366
- [c12] Aadirupa Saha, Aditya Gopalan: From PAC to Instance-Optimal Sample Complexity in the Plackett-Luce Model. ICML 2020: 8367-8376
- [i9] Aadirupa Saha, Aditya Gopalan: Best-item Learning in Random Utility Models with Subset Choices. CoRR abs/2002.07994 (2020)
- [i8] Aadirupa Saha, Pierre Gaillard, Michal Valko: Improved Sleeping Bandits with Stochastic Actions Sets and Adversarial Rewards. CoRR abs/2004.06248 (2020)
- [i7] Aadirupa Saha, Tomer Koren, Yishay Mansour: Adversarial Dueling Bandits. CoRR abs/2010.14563 (2020)
2010 – 2019

2019
- [c11] Aadirupa Saha, Rakesh Shivanna, Chiranjib Bhattacharyya: How Many Pairwise Preferences Do We Need to Rank a Graph Consistently? AAAI 2019: 4830-4837
- [c10] Aadirupa Saha, Aditya Gopalan: Active Ranking with Subset-wise Preferences. AISTATS 2019: 3312-3321
- [c9] Aadirupa Saha, Aditya Gopalan: PAC Battling Bandits in the Plackett-Luce Model. ALT 2019: 700-737
- [c8] Aadirupa Saha, Aditya Gopalan: Combinatorial Bandits with Relative Feedback. NeurIPS 2019: 983-993
- [c7] Aadirupa Saha, Shreyas Sheshadri, Chiranjib Bhattacharyya: Be Greedy: How Chromatic Number meets Regret Minimization in Graph Bandits. UAI 2019: 595-605
- [i6] Aadirupa Saha, Aditya Gopalan: Regret Minimisation in Multinomial Logit Bandits. CoRR abs/1903.00543 (2019)
- [i5] Aadirupa Saha, Aditya Gopalan: From PAC to Instance-Optimal Sample Complexity in the Plackett-Luce Model. CoRR abs/1903.00558 (2019)

2018
- [c6] Siddharth Barman, Aditya Gopalan, Aadirupa Saha: Online Learning for Structured Loss Spaces. AAAI 2018: 2696-2703
- [c5] Aadirupa Saha, Aditya Gopalan: Battle of Bandits. UAI 2018: 805-814
- [i4] Aditya Gopalan, Aadirupa Saha: PAC-Battling Bandits with Plackett-Luce: Tradeoff between Sample Complexity and Subset Size. CoRR abs/1808.04008 (2018)
- [i3] Aadirupa Saha, Aditya Gopalan: Active Ranking with Subset-wise Preferences. CoRR abs/1810.10321 (2018)
- [i2] Aadirupa Saha, Rakesh Shivanna, Chiranjib Bhattacharyya: How Many Pairwise Preferences Do We Need to Rank A Graph Consistently? CoRR abs/1811.02161 (2018)

2017
- [i1] Siddharth Barman, Aditya Gopalan, Aadirupa Saha: Online Learning for Structured Loss Spaces. CoRR abs/1706.04125 (2017)

2015
- [c4] Harikrishna Narasimhan, Harish G. Ramaswamy, Aadirupa Saha, Shivani Agarwal: Consistent Multiclass Algorithms for Complex Performance Measures. ICML 2015: 2398-2407

2014
- [c3] Aadirupa Saha, Chandrahas Dewangan, Harikrishna Narasimhan, Sriram Sampath, Shivani Agarwal: Learning Score Systems for Patient Mortality Prediction in Intensive Care Units via Orthogonal Matching Pursuit. ICMLA 2014: 93-98

2013
- [c2] Amrita Ghosal, Aadirupa Saha, Sipra Das Bit: Energy Saving Replay Attack Prevention in Clustered Wireless Sensor Networks. PAISI 2013: 82-96

2011
- [c1] Subir Halder, Amrita Ghosal, Aadirupa Saha, Sipra Das Bit: Energy-Balancing and Lifetime Enhancement of Wireless Sensor Network with Archimedes Spiral. UIC 2011: 420-434
last updated on 2024-10-07 22:08 CEST by the dblp team
all metadata released as open data under CC0 1.0 license