Andrea Tirinzoni
2020 – today
2024
- [c24] Matteo Pirotta, Andrea Tirinzoni, Ahmed Touati, Alessandro Lazaric, Yann Ollivier: Fast Imitation via Behavior Foundation Models. ICLR 2024
- [c23] Edoardo Cetin, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric, Yann Ollivier, Ahmed Touati: Simple Ingredients for Offline Reinforcement Learning. ICML 2024
- [i21] Edoardo Cetin, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric, Yann Ollivier, Ahmed Touati: Simple Ingredients for Offline Reinforcement Learning. CoRR abs/2403.13097 (2024)

2023
- [c22] Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: On the Complexity of Representation Learning in Contextual Linear Bandits. AISTATS 2023: 7871-7896
- [c21] Liyu Chen, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic Shortest Path. ALT 2023: 310-357
- [c20] Andrea Tirinzoni, Aymen Al Marjani, Emilie Kaufmann: Optimistic PAC Reinforcement Learning: the Instance-Dependent View. ALT 2023: 1460-1480
- [c19] Aymen Al Marjani, Andrea Tirinzoni, Emilie Kaufmann: Active Coverage for PAC Reinforcement Learning. COLT 2023: 5044-5109
- [c18] Liyu Chen, Andrea Tirinzoni, Alessandro Lazaric, Matteo Pirotta: Layered State Discovery for Incremental Autonomous Exploration. ICML 2023: 4953-5001
- [i20] Liyu Chen, Andrea Tirinzoni, Alessandro Lazaric, Matteo Pirotta: Layered State Discovery for Incremental Autonomous Exploration. CoRR abs/2302.03789 (2023)
- [i19] Aymen Al Marjani, Andrea Tirinzoni, Emilie Kaufmann: Active Coverage for PAC Reinforcement Learning. CoRR abs/2306.13601 (2023)
- [i18] Aymen Al Marjani, Andrea Tirinzoni, Emilie Kaufmann: Towards Instance-Optimality in Online PAC Reinforcement Learning. CoRR abs/2311.05638 (2023)

2022
- [j3] Lorenzo Bisi, Davide Santambrogio, Federico Sandrelli, Andrea Tirinzoni, Brian D. Ziebart, Marcello Restelli: Risk-averse policy optimization via risk-neutral policy optimization. Artif. Intell. 311: 103765 (2022)
- [c17] Andrea Tirinzoni, Rémy Degenne: On Elimination Strategies for Bandit Fixed-Confidence Identification. NeurIPS 2022
- [c16] Andrea Tirinzoni, Aymen Al Marjani, Emilie Kaufmann: Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs. NeurIPS 2022
- [c15] Andrea Tirinzoni, Matteo Papini, Ahmed Touati, Alessandro Lazaric, Matteo Pirotta: Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees. NeurIPS 2022
- [i17] Andrea Tirinzoni, Aymen Al Marjani, Emilie Kaufmann: Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs. CoRR abs/2203.09251 (2022)
- [i16] Andrea Tirinzoni, Rémy Degenne: On Elimination Strategies for Bandit Fixed-Confidence Identification. CoRR abs/2205.10936 (2022)
- [i15] Andrea Tirinzoni, Aymen Al Marjani, Emilie Kaufmann: Optimistic PAC Reinforcement Learning: the Instance-Dependent View. CoRR abs/2207.05852 (2022)
- [i14] Liyu Chen, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic Shortest Path. CoRR abs/2210.04946 (2022)
- [i13] Andrea Tirinzoni, Matteo Papini, Ahmed Touati, Alessandro Lazaric, Matteo Pirotta: Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees. CoRR abs/2210.13083 (2022)
- [i12] Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: On the Complexity of Representation Learning in Contextual Linear Bandits. CoRR abs/2212.09429 (2022)

2021
- [b1] Andrea Tirinzoni: Exploiting structure for transfer in reinforcement learning. Polytechnic University of Milan, Italy, 2021
- [j2] Amarildo Likmeta, Alberto Maria Metelli, Giorgia Ramponi, Andrea Tirinzoni, Matteo Giuliani, Marcello Restelli: Dealing with multiple experts and non-stationarity in inverse reinforcement learning: an application to real-life problems. Mach. Learn. 110(9): 2541-2576 (2021)
- [c14] Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Leveraging Good Representations in Linear Contextual Bandits. ICML 2021: 8371-8380
- [c13] Riccardo Poiani, Andrea Tirinzoni, Marcello Restelli: Meta-Reinforcement Learning by Tracking Task Non-stationarity. IJCAI 2021: 2899-2905
- [c12] Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection. NeurIPS 2021: 16371-16383
- [c11] Clémence Réda, Andrea Tirinzoni, Rémy Degenne: Dealing With Misspecification In Fixed-Confidence Linear Top-m Identification. NeurIPS 2021: 25489-25501
- [i11] Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Leveraging Good Representations in Linear Contextual Bandits. CoRR abs/2104.03781 (2021)
- [i10] Riccardo Poiani, Andrea Tirinzoni, Marcello Restelli: Meta-Reinforcement Learning by Tracking Task Non-stationarity. CoRR abs/2105.08834 (2021)
- [i9] Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric: A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs. CoRR abs/2106.13013 (2021)
- [i8] Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection. CoRR abs/2110.14798 (2021)
- [i7] Clémence Réda, Andrea Tirinzoni, Rémy Degenne: Dealing With Misspecification In Fixed-Confidence Linear Top-m Identification. CoRR abs/2111.01479 (2021)

2020
- [j1] Amarildo Likmeta, Alberto Maria Metelli, Andrea Tirinzoni, Riccardo Giol, Marcello Restelli, Danilo Romano: Combining reinforcement learning with rule-based controllers for transparent and general decision-making in autonomous driving. Robotics Auton. Syst. 131: 103568 (2020)
- [c10] Pierluca D'Oro, Alberto Maria Metelli, Andrea Tirinzoni, Matteo Papini, Marcello Restelli: Gradient-Aware Model-Based Policy Search. AAAI 2020: 3801-3808
- [c9] Giorgia Ramponi, Amarildo Likmeta, Alberto Maria Metelli, Andrea Tirinzoni, Marcello Restelli: Truly Batch Model-Free Inverse Reinforcement Learning about Multiple Intentions. AISTATS 2020: 2359-2369
- [c8] Andrea Tirinzoni, Alessandro Lazaric, Marcello Restelli: A Novel Confidence-Based Algorithm for Structured Bandits. AISTATS 2020: 3175-3185
- [c7] Andrea Tirinzoni, Riccardo Poiani, Marcello Restelli: Sequential Transfer in Reinforcement Learning with a Generative Model. ICML 2020: 9481-9492
- [c6] Andrea Tirinzoni, Matteo Pirotta, Marcello Restelli, Alessandro Lazaric: An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits. NeurIPS 2020
- [i6] Andrea Tirinzoni, Alessandro Lazaric, Marcello Restelli: A Novel Confidence-Based Algorithm for Structured Bandits. CoRR abs/2005.11593 (2020)
- [i5] Andrea Tirinzoni, Riccardo Poiani, Marcello Restelli: Sequential Transfer in Reinforcement Learning with a Generative Model. CoRR abs/2007.00722 (2020)
- [i4] Andrea Tirinzoni, Matteo Pirotta, Marcello Restelli, Alessandro Lazaric: An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits. CoRR abs/2010.12247 (2020)
2010 – 2019
2019
- [c5] Andrea Tirinzoni, Mattia Salvini, Marcello Restelli: Transfer of Samples in Policy Search via Multiple Importance Sampling. ICML 2019: 6264-6274
- [c4] Mario Beraha, Alberto Maria Metelli, Matteo Papini, Andrea Tirinzoni, Marcello Restelli: Feature Selection via Mutual Information: New Theoretical Insights. IJCNN 2019: 1-9
- [i3] Mario Beraha, Alberto Maria Metelli, Matteo Papini, Andrea Tirinzoni, Marcello Restelli: Feature Selection via Mutual Information: New Theoretical Insights. CoRR abs/1907.07384 (2019)
- [i2] Pierluca D'Oro, Alberto Maria Metelli, Andrea Tirinzoni, Matteo Papini, Marcello Restelli: Gradient-Aware Model-based Policy Search. CoRR abs/1909.04115 (2019)

2018
- [c3] Andrea Tirinzoni, Andrea Sessa, Matteo Pirotta, Marcello Restelli: Importance Weighted Transfer of Samples in Reinforcement Learning. ICML 2018: 4943-4952
- [c2] Andrea Tirinzoni, Rafael Rodríguez-Sánchez, Marcello Restelli: Transfer of Value Functions via Variational Methods. NeurIPS 2018: 6182-6192
- [c1] Andrea Tirinzoni, Marek Petrik, Xiangli Chen, Brian D. Ziebart: Policy-Conditioned Uncertainty Sets for Robust Markov Decision Processes. NeurIPS 2018: 8953-8963
- [i1] Andrea Tirinzoni, Andrea Sessa, Matteo Pirotta, Marcello Restelli: Importance Weighted Transfer of Samples in Reinforcement Learning. CoRR abs/1805.10886 (2018)