Matthew E. Taylor
2020 – today
- 2024
- [j43] Carl Orge Retzlaff, Srijita Das, Christabel Wayllace, Payam Mousavi, Mohammad Afshari, Tianpei Yang, Anna Saranti, Alessa Angerschmid, Matthew E. Taylor, Andreas Holzinger: Human-in-the-Loop Reinforcement Learning: A Survey and Position on Requirements, Challenges, and Opportunities. J. Artif. Intell. Res. 79: 359-415 (2024)
- [j42] Brittany Davis Pierson, Dustin Arendt, John Miller, Matthew E. Taylor: Comparing explanations in RL. Neural Comput. Appl. 36(1): 505-516 (2024)
- [c132] Jizhou Wu, Jianye Hao, Tianpei Yang, Xiaotian Hao, Yan Zheng, Weixun Wang, Matthew E. Taylor: PORTAL: Automatic Curricula Generation for Multiagent Reinforcement Learning. AAAI 2024: 15934-15942
- [c131] Tianpei Yang, Heng You, Jianye Hao, Yan Zheng, Matthew E. Taylor: A Transfer Approach Using Graph Neural Networks in Deep Reinforcement Learning. AAAI 2024: 16352-16360
- [c130] Bram Grooten, Tristan Tomilin, Gautham Vasan, Matthew E. Taylor, A. Rupam Mahmood, Meng Fang, Mykola Pechenizkiy, Decebal Constantin Mocanu: MaDi: Learning to Mask Distractions for Generalization in Visual Deep Reinforcement Learning. AAMAS 2024: 733-742
- [c129] Simone Parisi, Montaser Mohammedalamen, Alireza Kazemipour, Matthew E. Taylor, Michael Bowling: Monitored Markov Decision Processes. AAMAS 2024: 1549-1557
- [c128] Chaitanya Kharyal, Sai Krishna Gottipati, Tanmay Kumar Sinha, Srijita Das, Matthew E. Taylor: GLIDE-RL: Grounded Language Instruction through DEmonstration in RL. AAMAS 2024: 2333-2335
- [c127] Calarina Muslimani, Matthew E. Taylor: Leveraging Sub-Optimal Data for Human-in-the-Loop Reinforcement Learning. AAMAS 2024: 2399-2401
- [c126] Hao Zhang, Tianpei Yang, Yan Zheng, Jianye Hao, Matthew E. Taylor: PADDLE: Logic Program Guided Policy Reuse in Deep Reinforcement Learning. AAMAS 2024: 2585-2587
- [c125] Raechel Walker, Olivia Dias, Matthew E. Taylor, Cynthia Breazeal: Alleviating the Danger Of A Single Story Through Liberatory Computing Education. RESPECT 2024: 169-178
- [i65] Qianxi Li, Yingyue Cao, Jikun Kang, Tianpei Yang, Xi Chen, Jun Jin, Matthew E. Taylor: LaFFi: Leveraging Hybrid Natural Language Feedback for Fine-tuning Language Models. CoRR abs/2401.00907 (2024)
- [i64] Chaitanya Kharyal, Sai Krishna Gottipati, Tanmay Kumar Sinha, Srijita Das, Matthew E. Taylor: GLIDE-RL: Grounded Language Instruction through DEmonstration in RL. CoRR abs/2401.02991 (2024)
- [i63] Simone Parisi, Montaser Mohammedalamen, Alireza Kazemipour, Matthew E. Taylor, Michael Bowling: Monitored Markov Decision Processes. CoRR abs/2402.06819 (2024)
- [i62] Shang Wang, Deepak Ranganatha Sastry Mamillapalli, Tianpei Yang, Matthew E. Taylor: FPGA Divide-and-Conquer Placement using Deep Reinforcement Learning. CoRR abs/2404.13061 (2024)
- [i61] Calarina Muslimani, Matthew E. Taylor: Leveraging Sub-Optimal Data for Human-in-the-Loop Reinforcement Learning. CoRR abs/2405.00746 (2024)
- [i60] Calarina Muslimani, Bram Grooten, Deepak Ranganatha Sastry Mamillapalli, Mykola Pechenizkiy, Decebal Constantin Mocanu, Matthew E. Taylor: Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity. CoRR abs/2406.06495 (2024)
- [i59] Atefeh Shahroudnejad, Payam Mousavi, Oleksii Perepelytsia, Sahir, David Staszak, Matthew E. Taylor, Brent Bawel: A Novel Framework for Automated Warehouse Layout Generation. CoRR abs/2407.08633 (2024)
- [i58] Manan Tomar, Philippe Hansen-Estruch, Philip Bachman, Alex Lamb, John Langford, Matthew E. Taylor, Sergey Levine: Video Occupancy Models. CoRR abs/2407.09533 (2024)
- [i57] Matan Shamir, Osher Elhadad, Matthew E. Taylor, Reuth Mirsky: ODGR: Online Dynamic Goal Recognition. CoRR abs/2407.16220 (2024)
- [i56] Yuxuan Li, Srijita Das, Matthew E. Taylor: CANDERE-COACH: Reinforcement Learning from Noisy Feedback. CoRR abs/2409.15521 (2024)
- 2023
- [j41] Tianpei Yang, Weixun Wang, Jianye Hao, Matthew E. Taylor, Yong Liu, Xiaotian Hao, Yujing Hu, Yingfeng Chen, Changjie Fan, Chunxu Ren, Ye Huang, Jiangcheng Zhu, Yang Gao: ASN: action semantics network for multiagent reinforcement learning. Auton. Agents Multi Agent Syst. 37(2): 45 (2023)
- [j40] Adam Bignold, Francisco Cruz, Matthew E. Taylor, Tim Brys, Richard Dazeley, Peter Vamplew, Cameron Foale: A conceptual framework for externally-influenced agents: an assisted reinforcement learning review. J. Ambient Intell. Humaniz. Comput. 14(4): 3621-3644 (2023)
- [j39] Matthew E. Taylor, Nicholas Nissen, Yuan Wang, Neda Navidi: Improving reinforcement learning with human assistance: an argument for human subject studies with HIPPO Gym. Neural Comput. Appl. 35(32): 23429-23439 (2023)
- [j38] Calarina Muslimani, Alex Lewandowski, Dale Schuurmans, Matthew E. Taylor, Jun Luo: Reinforcement Teaching. Trans. Mach. Learn. Res. 2023 (2023)
- [j37] Manan Tomar, Utkarsh A. Mishra, Amy Zhang, Matthew E. Taylor: Learning Representations for Pixel-based Control: What Matters and Why? Trans. Mach. Learn. Res. 2023 (2023)
- [j36] Su Zhang, Srijita Das, Sriram Ganapathi Subramanian, Matthew E. Taylor: Two-Level Actor-Critic Using Multiple Teachers. Trans. Mach. Learn. Res. 2023 (2023)
- [c124] David Mguni, Taher Jafferjee, Jianhong Wang, Nicolas Perez Nieves, Wenbin Song, Feifei Tong, Matthew E. Taylor, Tianpei Yang, Zipeng Dai, Hui Chen, Jiangcheng Zhu, Kun Shao, Jun Wang, Yaodong Yang: Learning to Shape Rewards Using a Game of Two Partners. AAAI 2023: 11604-11612
- [c123] Todd W. Neller, Raechel Walker, Olivia Dias, Zeynep Yalçin, Cynthia Breazeal, Matthew E. Taylor, Michele Donini, Erin J. Talvitie, Charlie Pilgrim, Paolo Turrini, James Maher, Matthew Boutell, Justin Wilson, Narges Norouzi, Jonathan Scott: Model AI Assignments 2023. AAAI 2023: 16104-16105
- [c122] Michael Guevarra, Srijita Das, Christabel Wayllace, Carrie Demmans Epp, Matthew E. Taylor, Alan Tay: Augmenting Flight Training with AI to Efficiently Train Pilots. AAAI 2023: 16437-16439
- [c121] Calarina Muslimani, Saba Gul, Matthew E. Taylor, Carrie Demmans Epp, Christabel Wayllace: C2Tutor: Helping People Learn to Avoid Present Bias During Decision Making. AIED 2023: 733-738
- [c120] Sriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson, Mark Crowley: Learning from Multiple Independent Advisors in Multi-agent Reinforcement Learning. AAMAS 2023: 1144-1153
- [c119] Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew E. Taylor, Mykola Pechenizkiy, Decebal Constantin Mocanu: Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning. AAMAS 2023: 1932-1941
- [c118] Chaitanya Kharyal, Tanmay Kumar Sinha, Sai Krishna Gottipati, Fatemeh Abdollahi, Srijita Das, Matthew E. Taylor: Do As You Teach: A Multi-Teacher Approach to Self-Play in Deep Reinforcement Learning. AAMAS 2023: 2457-2459
- [c117] Jizhou Wu, Tianpei Yang, Xiaotian Hao, Jianye Hao, Yan Zheng, Weixun Wang, Matthew E. Taylor: PORTAL: Automatic Curricula Generation for Multiagent Reinforcement Learning. AAMAS 2023: 2460-2462
- [c116] Su Zhang, Srijita Das, Sriram Ganapathi Subramanian, Matthew E. Taylor: Two-Level Actor-Critic Using Multiple Teachers. AAMAS 2023: 2589-2591
- [c115] Mara Cairo, Bevin Eldaphonse, Payam Mousavi, Sahir, Sheikh Jubair, Matthew E. Taylor, Graham Doerksen, Nikolai Kummer, Jordan Maretzki, Gupreet Mohhar, Sean Murphy, Johannes Günther, Laura Petrich, Talat Syed: Multi-Robot Warehouse Optimization: Leveraging Machine Learning for Improved Performance. AAMAS 2023: 3047-3049
- [c114] Sai Krishna Gottipati, Luong-Ha Nguyen, Clodéric Mars, Matthew E. Taylor: Hiking up that HILL with Cogment-Verse: Train & Operate Multi-agent Systems Learning from Humans. AAMAS 2023: 3065-3067
- [c113] Xiaoxue Du, Sharifa Alghowinem, Matthew E. Taylor, Kate Darling, Cynthia Breazeal: Innovating AI Leadership Education. FIE 2023: 1-8
- [c112] Upma Gandhi, Erfan Aghaeekiasaraee, Ismail S. K. Bustany, Payam Mousavi, Matthew E. Taylor, Laleh Behjat: RL-Ripper: A Framework for Global Routing Using Reinforcement Learning and Smart Net Ripping Techniques. ACM Great Lakes Symposium on VLSI 2023: 197-201
- [c111] Matthew E. Taylor: Reinforcement Learning Requires Human-in-the-Loop Framing and Approaches. HHAI 2023: 351-360
- [c110] Fatemeh Abdollahi, Saqib Ameen, Matthew E. Taylor, Levi H. S. Lelis: Can You Improve My Code? Optimizing Programs with Local Search. IJCAI 2023: 2940-2948
- [c109] Sriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson, Mark Crowley: Multi-Agent Advisor Q-Learning (Extended Abstract). IJCAI 2023: 6884-6889
- [c108] Manan Tomar, Riashat Islam, Matthew E. Taylor, Sergey Levine, Philip Bachman: Ignorance is Bliss: Robust Control via Information Gating. NeurIPS 2023
- [i55] Sriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson, Mark Crowley: Learning from Multiple Independent Advisors in Multi-agent Reinforcement Learning. CoRR abs/2301.11153 (2023)
- [i54] Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew E. Taylor, Mykola Pechenizkiy, Decebal Constantin Mocanu: Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning. CoRR abs/2302.06548 (2023)
- [i53] Fatemeh Abdollahi, Saqib Ameen, Matthew E. Taylor, Levi H. S. Lelis: Can You Improve My Code? Optimizing Programs with Local Search. CoRR abs/2307.05603 (2023)
- [i52] Afia Abedin, Abdul Bais, Cody Buntain, Laura Courchesne, Brian McQuinn, Matthew E. Taylor, Muhib Ullah: A Call to Arms: AI Should be Critical for Social Media Analysis of Conflict Zones. CoRR abs/2311.00810 (2023)
- [i51] Laila El Moujtahid, Sai Krishna Gottipati, Clodéric Mars, Matthew E. Taylor: Human-Machine Teaming for UAVs: An Experimentation Platform. CoRR abs/2312.11718 (2023)
- [i50] Rupali Bhati, Sai Krishna Gottipati, Clodéric Mars, Matthew E. Taylor: Curriculum Learning for Cooperation in Multi-Agent Reinforcement Learning. CoRR abs/2312.11768 (2023)
- [i49] Md Saiful Islam, Srijita Das, Sai Krishna Gottipati, William Duguay, Clodéric Mars, Jalal Arabneydi, Antoine Fagette, Matthew Guzdial, Matthew E. Taylor: Human-AI Collaboration in Real-World Complex Environment with Reinforcement Learning. CoRR abs/2312.15160 (2023)
- [i48] Bram Grooten, Tristan Tomilin, Gautham Vasan, Matthew E. Taylor, A. Rupam Mahmood, Meng Fang, Mykola Pechenizkiy, Decebal Constantin Mocanu: MaDi: Learning to Mask Distractions for Generalization in Visual Deep Reinforcement Learning. CoRR abs/2312.15339 (2023)
- 2022
- [j35] Sriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson, Mark Crowley: Multi-Agent Advisor Q-Learning. J. Artif. Intell. Res. 74: 1-74 (2022)
- [j34] Paniz Behboudian, Yash Satsangi, Matthew E. Taylor, Anna Harutyunyan, Michael Bowling: Policy invariant explicit shaping: an efficient alternative to reward shaping. Neural Comput. Appl. 34(3): 1673-1686 (2022)
- [j33] Yunshu Du, Garrett Warnell, Assefaw H. Gebremedhin, Peter Stone, Matthew E. Taylor: Lucid dreaming for experience replay: refreshing past states with the current policy. Neural Comput. Appl. 34(3): 1687-1712 (2022)
- [c107] Sriram Ganapathi Subramanian, Matthew E. Taylor, Mark Crowley, Pascal Poupart: Decentralized Mean Field Games. AAAI 2022: 9439-9447
- [c106] Tianyu Zhang, Aakash Krishna G. S, Mohammad Afshari, Petr Musílek, Matthew E. Taylor, Omid Ardakanian: Diversity for transfer in learning-based control of buildings. e-Energy 2022: 556-564
- [c105] Pengyi Li, Hongyao Tang, Tianpei Yang, Xiaotian Hao, Tong Sang, Yan Zheng, Jianye Hao, Matthew E. Taylor, Wenyuan Tao, Zhen Wang: PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration. ICML 2022: 12979-12997
- [c104] Wenhan Huang, Kai Li, Kun Shao, Tianze Zhou, Matthew E. Taylor, Jun Luo, Dongge Wang, Hangyu Mao, Jianye Hao, Jun Wang, Xiaotie Deng: Multiagent Q-learning with Sub-Team Coordination. NeurIPS 2022
- [c103] Heng You, Tianpei Yang, Yan Zheng, Jianye Hao, Matthew E. Taylor: Cross-domain adaptive transfer reinforcement learning based on state-action correspondence. UAI 2022: 2299-2309
- [e4] Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, Matthew E. Taylor: 21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, Auckland, New Zealand, May 9-13, 2022. International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) 2022, ISBN 978-1-4503-9213-6 [contents]
- [i47] Pengyi Li, Hongyao Tang, Tianpei Yang, Xiaotian Hao, Tong Sang, Yan Zheng, Jianye Hao, Matthew E. Taylor, Zhen Wang: PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration. CoRR abs/2203.08553 (2022)
- [i46] Sahir, Ercüment Ilhan, Srijita Das, Matthew E. Taylor: Methodical Advice Collection and Reuse in Deep Reinforcement Learning. CoRR abs/2204.07254 (2022)
- [i45] Alex Lewandowski, Calarina Muslimani, Matthew E. Taylor, Jun Luo, Dale Schuurmans: Reinforcement Teaching. CoRR abs/2204.11897 (2022)
- [i44] Taher Jafferjee, Juliusz Krysztof Ziomek, Tianpei Yang, Zipeng Dai, Jianhong Wang, Matthew E. Taylor, Kun Shao, Jun Wang, David Mguni: Semi-Centralised Multi-Agent Reinforcement Learning with Policy-Embedded Training. CoRR abs/2209.01054 (2022)
- [i43] Michael Guevarra, Srijita Das, Christabel Wayllace, Carrie Demmans Epp, Matthew E. Taylor, Alan Tay: Augmenting Flight Training with AI to Efficiently Train Pilots. CoRR abs/2210.06683 (2022)
- [i42] Amir Rasouli, Randy Goebel, Matthew E. Taylor, Iuliia Kotseruba, Soheil Alizadeh, Tianpei Yang, Montgomery Alban, Florian Shkurti, Yuzheng Zhuang, Adam Scibior, Kasra Rezaee, Animesh Garg, David Meger, Jun Luo, Liam Paull, Weinan Zhang, Xinyu Wang, Xi Chen: NeurIPS 2022 Competition: Driving SMARTS. CoRR abs/2211.07545 (2022)
- [i41] Hager Radi, Josiah P. Hanna, Peter Stone, Matthew E. Taylor: Safe Evaluation For Offline Learning: Are We Ready To Deploy? CoRR abs/2212.08302 (2022)
- 2021
- [c102] Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor, Sarath Chandar: Towered Actor Critic For Handling Multiple Action Types In Reinforcement Learning For Drug Discovery. AAAI 2021: 142-150
- [c101] Yaodong Yang, Jun Luo, Ying Wen, Oliver Slumbers, Daniel Graves, Haitham Bou-Ammar, Jun Wang, Matthew E. Taylor: Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems. AAMAS 2021: 51-56
- [c100] Sriram Ganapathi Subramanian, Matthew E. Taylor, Mark Crowley, Pascal Poupart: Partially Observable Mean Field Reinforcement Learning. AAMAS 2021: 537-545
- [c99] Matthew E. Taylor: Reinforcement Learning for Electronic Design Automation: Successes and Opportunities. ISPD 2021: 3
- [c98] Amir Rasouli, Soheil Alizadeh, Iuliia Kotseruba, Yi Ma, Hebin Liang, Yuan Tian, Zhiyu Huang, Haochen Liu, Jingda Wu, Randy Goebel, Tianpei Yang, Matthew E. Taylor, Liam Paull, Xi Chen: Driving SMARTS Competition at NeurIPS 2022: Insights and Outcome. NeurIPS (Competition and Demos) 2021: 73-84
- [i40] Nikunj Gupta, G. Srinivasaraghavan, Swarup Kumar Mohalik, Matthew E. Taylor: HAMMER: Multi-Level Coordination of Reinforcement Learning Agents via Learned Messaging. CoRR abs/2102.00824 (2021)
- [i39] Matthew E. Taylor, Nicholas Nissen, Yuan Wang, Neda Navidi: Improving Reinforcement Learning with Human Assistance: An Argument for Human Subject Studies with HIPPO Gym. CoRR abs/2102.02639 (2021)
- [i38] Yaodong Yang, Jun Luo, Ying Wen, Oliver Slumbers, Daniel Graves, Haitham Bou-Ammar, Jun Wang, Matthew E. Taylor: Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems. CoRR abs/2102.07659 (2021)
- [i37] Manan Tomar, Amy Zhang, Roberto Calandra, Matthew E. Taylor, Joelle Pineau: Model-Invariant State Abstractions for Model-Based Reinforcement Learning. CoRR abs/2102.09850 (2021)
- [i36] Volodymyr Tkachuk, Sriram Ganapathi Subramanian, Matthew E. Taylor: The Effect of Q-function Reuse on the Total Regret of Tabular, Model-Free, Reinforcement Learning. CoRR abs/2103.04416 (2021)
- [i35] Brittany Davis Pierson, Justine Ventura, Matthew E. Taylor: The Atari Data Scraper. CoRR abs/2104.04893 (2021)
- [i34] Sriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson, Mark Crowley: Multi-Agent Advisor Q-Learning. CoRR abs/2111.00345 (2021)
- [i33] Manan Tomar, Utkarsh A. Mishra, Amy Zhang, Matthew E. Taylor: Learning Representations for Pixel-based Control: What Matters and Why? CoRR abs/2111.07775 (2021)
- [i32] Sriram Ganapathi Subramanian, Matthew E. Taylor, Mark Crowley, Pascal Poupart: Decentralized Mean Field Games. CoRR abs/2112.09099 (2021)
- 2020
- [j32] Behzad Ghazanfari, Fatemeh Afghah, Matthew E. Taylor: Sequential Association Rule Mining for Autonomously Extracting Hierarchical Task Structures in Reinforcement Learning. IEEE Access 8: 11782-11799 (2020)
- [j31] Yang Hu, Rachel Min Wong, Olusola O. Adesope, Matthew E. Taylor: Effects of a computer-based learning environment that teaches older adults how to install a smart home system. Comput. Educ. 149: 103816 (2020)
- [j30] Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E. Taylor, Peter Stone: Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey. J. Mach. Learn. Res. 21: 181:1-181:50 (2020)
- [j29] Yang Hu, Diane J. Cook, Matthew E. Taylor: Study of Effectiveness of Prior Knowledge for Smart Home Kit Installation. Sensors 20(21): 6145 (2020)
- [c97] Felipe Leno da Silva, Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor: Uncertainty-Aware Action Advising for Deep Reinforcement Learning Agents. AAAI 2020: 5792-5799
- [c96] Felipe Leno da Silva, Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor: Providing Uncertainty-Based Advice for Deep Reinforcement Learning Agents (Student Abstract). AAAI 2020: 13913-13914
- [c95] Sriram Ganapathi Subramanian, Pascal Poupart, Matthew E. Taylor, Nidhi Hegde: Multi Type Mean Field Reinforcement Learning. AAMAS 2020: 411-419
- [c94] Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor: A Very Condensed Survey and Critique of Multiagent Deep Reinforcement Learning. AAMAS 2020: 2146-2148
- [e3] Matthew E. Taylor, Yang Yu, Edith Elkind, Yang Gao: Distributed Artificial Intelligence - Second International Conference, DAI 2020, Nanjing, China, October 24-27, 2020, Proceedings. Lecture Notes in Computer Science 12547, Springer 2020, ISBN 978-3-030-64095-8 [contents]
- [i31] Sriram Ganapathi Subramanian, Pascal Poupart, Matthew E. Taylor, Nidhi Hegde: Multi Type Mean Field Reinforcement Learning. CoRR abs/2002.02513 (2020)
- [i30] Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E. Taylor, Peter Stone: Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey. CoRR abs/2003.04960 (2020)
- [i29] Craig Sherstan, Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor: Work in Progress: Temporally Extended Auxiliary Tasks. CoRR abs/2004.00600 (2020)
- [i28] Adam Bignold, Francisco Cruz, Matthew E. Taylor, Tim Brys, Richard Dazeley, Peter Vamplew, Cameron Foale: A Conceptual Framework for Externally-influenced Agents: An Assisted Reinforcement Learning Review. CoRR abs/2007.01544 (2020)
- [i27] Yunshu Du, Garrett Warnell, Assefaw Hadish Gebremedhin, Peter Stone, Matthew E. Taylor: Lucid Dreaming for Experience Replay: Refreshing Past States with the Current Policy. CoRR abs/2009.13736 (2020)
- [i26] Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor, Sarath Chandar: Maximum Reward Formulation In Reinforcement Learning. CoRR abs/2010.03744 (2020)
- [i25] Paniz Behboudian, Yash Satsangi, Matthew E. Taylor, Anna Harutyunyan, Michael Bowling: Useful Policy Invariant Shaping from Arbitrary Advice. CoRR abs/2011.01297 (2020)
- [i24] Sriram Ganapathi Subramanian, Matthew E. Taylor, Mark Crowley, Pascal Poupart: Partially Observable Mean Field Reinforcement Learning. CoRR abs/2012.15791 (2020)
2010 – 2019
- 2019
- [j28] Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor: A survey and critique of multiagent deep reinforcement learning. Auton. Agents Multi Agent Syst. 33(6): 750-797 (2019)
- [j27] Garrett Wilson, Christopher Pereyda, Nisha Raghunath, Gabriel Victor de la Cruz, Shivam Goel, Sepehr Nesaei, Bryan David Minor, Maureen Schmitter-Edgecombe, Matthew E. Taylor, Diane J. Cook: Robot-enabled support of daily activities in smart home environments. Cogn. Syst. Res. 54: 258-272 (2019)
- [j26] Gabriel Victor de la Cruz, Yunshu Du, Matthew E. Taylor: Pre-training with non-expert human demonstration for deep reinforcement learning. Knowl. Eng. Rev. 34: e10 (2019)
- [j25] Bikramjit Banerjee, Syamala Vittanala, Matthew Edmund Taylor: Team learning from human demonstration with coordination confidence. Knowl. Eng. Rev. 34: e12 (2019)
- [j24] Anestis Fachantidis, Matthew E. Taylor, Ioannis P. Vlahavas: Learning to Teach Reinforcement Learning Agents. Mach. Learn. Knowl. Extr. 1(1): 21-42 (2019)
- [j23] Yunshu Du, Assefaw H. Gebremedhin, Matthew E. Taylor: Analysis of University Fitness Center Data Uncovers Interesting Patterns, Enables Prediction. IEEE Trans. Knowl. Data Eng. 31(8): 1478-1490 (2019)
- [c93] Chao Gao, Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor: On Hard Exploration for Reinforcement Learning: A Case Study in Pommerman. AIIDE 2019: 24-30
- [c92] Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor: Agent Modeling as Auxiliary Task for Deep Reinforcement Learning. AIIDE 2019: 31-37
- [c91] Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor: Terminal Prediction as an Auxiliary Task for Deep Reinforcement Learning. AIIDE 2019: 38-44
- [c90] Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor: Action Guidance with MCTS for Deep Reinforcement Learning. AIIDE 2019: 153-159
- [c89] Weixun Wang, Jianye Hao, Yixi Wang, Matthew E. Taylor: Achieving cooperation through deep multiagent reinforcement learning in sequential prisoner's dilemmas. DAI 2019: 11:1-11:7
- [c88] Zhaodong Wang, Matthew E. Taylor: Interactive Reinforcement Learning with Dynamic Reuse of Prior Knowledge from Human and Agent Demonstrations. IJCAI 2019: 3820-3827
- [c87] Kenny Young, Baoxiang Wang, Matthew E. Taylor: Metatrace Actor-Critic: Online Step-Size Tuning by Meta-gradient Descent for Reinforcement Learning Control. IJCAI 2019: 4185-4191
- [c86] Nathan Douglas, Dianna Yim, Bilal Kartal, Pablo Hernandez-Leal, Frank Maurer, Matthew E. Taylor: Towers of Saliency: A Reinforcement Learning Visualization Using Immersive Environments. ISS 2019: 339-342
- [e2] Edith Elkind, Manuela Veloso, Noa Agmon, Matthew E. Taylor: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '19, Montreal, QC, Canada, May 13-17, 2019. International Foundation for Autonomous Agents and Multiagent Systems 2019, ISBN 978-1-4503-6309-9 [contents]
- [i23] Gabriel Victor de la Cruz, Yunshu Du, Matthew E. Taylor: Jointly Pre-training with Supervised, Autoencoder, and Value Losses for Deep Reinforcement Learning. CoRR abs/1904.02206 (2019)
- [i22] Bilal Kartal, Pablo Hernandez-Leal, Chao Gao, Matthew E. Taylor: Safer Deep RL with Shallow MCTS: A Case Study in Pommerman. CoRR abs/1904.05759 (2019)
- [i21] Chao Gao, Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor: Skynet: A Top Deep RL Agent in the Inaugural Pommerman Team Competition. CoRR abs/1905.01360 (2019)
- [i20] Robert T. Loftin, Bei Peng, Matthew E. Taylor, Michael L. Littman, David L. Roberts: Interactive Learning of Environment Dynamics for Sequential Tasks. CoRR abs/1907.08478 (2019)
- [i19] Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor: Agent Modeling as Auxiliary Task for Deep Reinforcement Learning. CoRR abs/1907.09597 (2019)
- [i18] Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor: Terminal Prediction as an Auxiliary Task for Deep Reinforcement Learning. CoRR abs/1907.10827 (2019)
- [i17] Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor: Action Guidance with MCTS for Deep Reinforcement Learning. CoRR abs/1907.11703 (2019)
- [i16] Chao Gao, Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor: On Hard Exploration for Reinforcement Learning: a Case Study in Pommerman. CoRR abs/1907.11788 (2019)
- 2018
- [j22] Ariel Rosenfeld, Moshe Cohen, Matthew E. Taylor, Sarit Kraus: Leveraging human knowledge in tabular reinforcement learning: a study of human subjects. Knowl. Eng. Rev. 33: e14 (2018)
- [j21] Bei Peng, James MacGlashan, Robert Tyler Loftin, Michael L. Littman, David L. Roberts, Matthew E. Taylor: Curriculum Design for Machine Learners in Sequential Decision Tasks. IEEE Trans. Emerg. Top. Comput. Intell. 2(4): 268-277 (2018)
- [c85] Felipe Leno da Silva, Matthew E. Taylor, Anna Helena Reali Costa: Autonomously Reusing Knowledge in Multiagent Reinforcement Learning. IJCAI 2018: 5487-5493
- [c84] Matthew E. Taylor: Improving Reinforcement Learning with Human Input. IJCAI 2018: 5724-5728
- [i15] Weixun Wang, Jianye Hao, Yixi Wang, Matthew E. Taylor: Towards Cooperation in Sequential Prisoner's Dilemmas: a Deep Multiagent Reinforcement Learning Approach. CoRR abs/1803.00162 (2018)
- [i14] Zhaodong Wang, Matthew E. Taylor: Interactive Reinforcement Learning with Dynamic Reuse of Prior Knowledge from Human/Agent's Demonstration. CoRR abs/1805.04493 (2018)
- [i13] Kenny Young, Baoxiang Wang, Matthew E. Taylor: Metatrace: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control. CoRR abs/1805.04514 (2018)
- [i12] Ariel Rosenfeld, Moshe Cohen, Matthew E. Taylor, Sarit Kraus: Leveraging human knowledge in tabular reinforcement learning: A study of human subjects. CoRR abs/1805.05769 (2018)
- [i11] Pablo Hernandez-Leal, Bilal Kartal, Matthew E. Taylor: Is multiagent deep reinforcement learning the answer or the question? A brief survey. CoRR abs/1810.05587 (2018)
- [i10] Behzad Ghazanfari, Fatemeh Afghah, Matthew E. Taylor: Autonomous Extraction of a Hierarchical Structure of Tasks in Reinforcement Learning, A Sequential Associate Rule Mining Approach. CoRR abs/1811.08275 (2018)
- [i9] Bilal Kartal, Pablo Hernandez-Leal, Matthew E. Taylor: Using Monte Carlo Tree Search as a Demonstrator within Asynchronous Deep RL. CoRR abs/1812.00045 (2018)
- [i8] Gabriel Victor de la Cruz, Yunshu Du, Matthew E. Taylor: Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning. CoRR abs/1812.08904 (2018)
- 2017
- [j20] Pablo Hernandez-Leal, Yusen Zhan, Matthew E. Taylor, Luis Enrique Sucar, Enrique Munoz de Cote: Efficiently detecting switches against non-stationary opponents. Auton. Agents Multi Agent Syst. 31(4): 767-789 (2017)
- [j19] Pablo Hernandez-Leal, Yusen Zhan, Matthew E. Taylor, Luis Enrique Sucar, Enrique Munoz de Cote: An exploration strategy for non-stationary opponents. Auton. Agents Multi Agent Syst. 31(5): 971-1002 (2017)
- [j18] Tim Brys, Anna Harutyunyan, Peter Vrancx, Ann Nowé, Matthew E. Taylor: Multi-objectivization and ensembles of shapings in reinforcement learning. Neurocomputing 263: 48-59 (2017)
- [j17] Yusen Zhan, Haitham Bou-Ammar, Matthew E. Taylor: Nonconvex Policy Search Using Variational Inequalities. Neural Comput. 29(10): 2800-2824 (2017)
- [j16] Yusen Zhan, Haitham Bou-Ammar, Matthew E. Taylor: Scalable lifelong reinforcement learning. Pattern Recognit. 72: 407-418 (2017)
- [j15] Yunxiang Ye, Zhaodong Wang, Dylan Jones, Long He, Matthew E. Taylor, Geoffrey A. Hollinger, Qin Zhang: Bin-Dog: A Robotic Platform for Bin Management in Orchards. Robotics 6(2): 12 (2017)
- [c83] Salam El Bsat, Haitham Bou-Ammar, Matthew E. Taylor: Scalable Multitask Policy Gradient Reinforcement Learning. AAAI 2017: 1847-1853
- [c82] Matthew E. Taylor, Sakire Arslan Ay: AI Projects for Computer Science Capstone Classes (Extended Abstract). AAAI 2017: 4819-4821
- [c81] Amanda Leah Zulas, Kaitlyn I. Franz, Darrin Griechen, Matthew E. Taylor: Solar Decathlon Competition: Towards a Solar-Powered Smart Home. AAAI Workshops 2017
- [c80] Pablo Hernandez-Leal, Yusen Zhan, Matthew E. Taylor, Luis Enrique Sucar, Enrique Munoz de Cote: Detecting Switches Against Non-Stationary Opponents. AAMAS 2017: 920-921
- [c79] Pablo Hernandez-Leal, Yusen Zhan, Matthew E. Taylor, Luis Enrique Sucar, Enrique Munoz de Cote: An Exploration Strategy Facing Non-Stationary Agents. AAMAS 2017: 922-923
- [c78] Bei Peng, James MacGlashan, Robert T. Loftin, Michael L. Littman, David L. Roberts, Matthew E. Taylor: Curriculum Design for Machine Learners in Sequential Decision Tasks. AAMAS 2017: 1682-1684
- [c77] Ariel Rosenfeld, Matthew E. Taylor, Sarit Kraus: Speeding up Tabular Reinforcement Learning Using State-Action Similarities. AAMAS 2017: 1722-1724
- [c76] James MacGlashan, Mark K. Ho, Robert Tyler Loftin, Bei Peng, Guan Wang, David L. Roberts, Matthew E. Taylor, Michael L. Littman: Interactive Learning from Policy-Dependent Human Feedback. ICML 2017: 2285-2294
- [c75] Zhaodong Wang, Matthew E. Taylor: Improving Reinforcement Learning with Confidence-Based Demonstrations. IJCAI 2017: 3027-3033
- [c74] Ariel Rosenfeld, Matthew E. Taylor, Sarit Kraus: Leveraging Human Knowledge in Tabular Reinforcement Learning: A Study of Human Subjects. IJCAI 2017: 3823-3830
- [i7] James MacGlashan, Mark K. Ho, Robert Tyler Loftin, Bei Peng, David L. Roberts, Matthew E. Taylor, Michael L. Littman: Interactive Learning from Policy-Dependent Human Feedback. CoRR abs/1701.06049 (2017)
- [i6] Anestis Fachantidis, Matthew E. Taylor, Ioannis P. Vlahavas: Learning to Teach Reinforcement Learning Agents. CoRR abs/1707.09079 (2017)
- [i5] Gabriel Victor de la Cruz, Yunshu Du, Matthew E. Taylor: Pre-training Neural Networks with Human Demonstrations for Deep Reinforcement Learning. CoRR abs/1709.04083 (2017)
- [i4] Behzad Ghazanfari, Matthew E. Taylor: Autonomous Extracting a Hierarchical Structure of Tasks in Reinforcement Learning and Multi-task Reinforcement Learning. CoRR abs/1709.04579 (2017)
- 2016
- [j14] Robert T. Loftin, Bei Peng, James MacGlashan, Michael L. Littman, Matthew E. Taylor, Jeff Huang, David L. Roberts: Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning. Auton. Agents Multi Agent Syst. 30(1): 30-59 (2016)
- [c73] Pablo Hernandez-Leal, Matthew E. Taylor, Benjamin Rosman, Luis Enrique Sucar, Enrique Munoz de Cote: Identifying and Tracking Switching, Non-Stationary Opponents: A Bayesian Approach. AAAI Workshop: Multiagent Interaction without Prior Coordination 2016
- [c72] William Curran, Tim Brys, David W. Aha, Matthew E. Taylor, William D. Smart: Dimensionality Reduced Reinforcement Learning for Assistive Robots. AAAI Fall Symposia 2016
- [c71] Robert Tyler Loftin, James MacGlashan, Bei Peng, Matthew E. Taylor, Michael L. Littman, David L. Roberts: Towards Behavior-Aware Model Learning from Human-Generated Trajectories. AAAI Fall Symposia 2016
- [c70] Zhaodong Wang, Matthew E. Taylor: Effective Transfer via Demonstrations in Reinforcement Learning: A Preliminary Study. AAAI Spring Symposia 2016
- [c69] Halit Bener Suay, Tim Brys, Matthew E. Taylor, Sonia Chernova: Learning from Demonstration for Shaping through Inverse Reinforcement Learning. AAMAS 2016: 429-437
- [c68] Bei Peng, James MacGlashan, Robert Tyler Loftin, Michael L. Littman, David L. Roberts, Matthew E. Taylor: A Need for Speed: Adapting Agent Action Speed to Improve Task Learning from Non-Expert Humans. AAMAS 2016: 957-965
- [c67] Pablo Hernandez-Leal, Benjamin Rosman, Matthew E. Taylor, Luis Enrique Sucar, Enrique Munoz de Cote: A Bayesian Approach for Learning and Tracking Switching, Non-Stationary Opponents (Extended Abstract). AAMAS 2016: 1315-1316
- [c66] Yusen Zhan, Haitham Bou-Ammar, Matthew E. Taylor: Theoretically-Grounded Policy Advice from Multiple Teachers in Reinforcement Learning Settings with Applications to Negative Transfer. IJCAI 2016: 2315-2321
- [c65] David Isele, José-Marcio Luna, Eric Eaton, Gabriel Victor de la Cruz, James Irwin, Brandon Kallaher, Matthew E. Taylor: Lifelong learning for disturbance rejection on mobile robots. IROS 2016: 3993-3998
- [i3] Yusen Zhan, Haitham Bou-Ammar, Matthew E. Taylor: Theoretically-Grounded Policy Advice from Multiple Teachers in Reinforcement Learning Settings with Applications to Negative Transfer. CoRR abs/1604.03986 (2016)
- 2015
- [j13] Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, Ioannis P. Vlahavas: Transfer learning with probabilistic mapping selection. Adapt. Behav. 23(1): 3-19 (2015)
- [c64] Haitham Bou-Ammar, Eric Eaton, Paul Ruvolo, Matthew E. Taylor: Unsupervised Cross-Domain Transfer in Policy Gradient Reinforcement Learning via Manifold Alignment. AAAI 2015: 2504-2510
- [c63] Gabriel Victor de la Cruz, Bei Peng, Walter Stephen Lasecki, Matthew Edmund Taylor: Generating Real-Time Crowd Advice to Improve Reinforcement Learning Agents. AAAI Workshop: Learning for General Competency in Video Games 2015
- [c62] Yusen Zhan, Matthew E. Taylor: Online Transfer Learning in Reinforcement Learning Domains. AAAI Fall Symposia 2015: 97-
- [c61] Mitchell Scott, Bei Peng, Madeline Chili, Tanay Nigam, Francis G. Pascual, Cynthia Matuszek, Matthew E. Taylor: On the Ability to Provide Demonstrations on a UAS: Observing 90 Untrained Participants Abusing a Flying Robot. AAAI Fall Symposia 2015: 117-121
- [c60] Tim Brys, Anna Harutyunyan, Matthew E. Taylor, Ann Nowé: Policy Transfer using Reward Shaping. AAMAS 2015: 181-188
- [c59] Pablo Hernandez-Leal, Matthew E. Taylor, Enrique Munoz de Cote, Luis Enrique Sucar: Bidding in Non-Stationary Energy Markets. AAMAS 2015: 1709-1710
- [c58] Tim Brys, Anna Harutyunyan, Halit Bener Suay, Sonia Chernova, Matthew E. Taylor, Ann Nowé: Reinforcement Learning from Demonstration through Shaping. IJCAI 2015: 3352-3358
- [c57] Gabriel Victor de la Cruz, Bei Peng, Walter S. Lasecki, Matthew E. Taylor: Towards Integrating Real-Time Crowd Advice with Reinforcement Learning. IUI Companion 2015: 17-20
- [i2] William Curran, Tim Brys, Matthew E. Taylor, William D. Smart: Using PCA to Efficiently Represent State Spaces. CoRR abs/1505.00322 (2015)
- [i1] Yusen Zhan, Matthew E. Taylor: Online Transfer Learning in Reinforcement Learning Domains. CoRR abs/1507.00436 (2015)
- 2014
- [j12] Matthew E. Taylor, Nicholas Carboni, Anestis Fachantidis, Ioannis P. Vlahavas, Lisa Torrey: Reinforcement learning agents providing advice in complex video games. Connect. Sci. 26(1): 45-63 (2014)
- [j11] Tim Brys, Tong T. Pham, Matthew E. Taylor: Distributed learning and multi-objectivity in traffic light control. Connect. Sci. 26(1): 65-83 (2014)
- [c56] Robert Tyler Loftin, James MacGlashan, Bei Peng, Matthew E. Taylor, Michael L. Littman, Jeff Huang, David L. Roberts: A Strategy-Aware Technique for Learning Behaviors from Discrete Human Feedback. AAAI 2014: 937-943
- [c55] Tim Brys, Ann Nowé, Daniel Kudenko, Matthew E. Taylor: Combining Multiple Correlated Reward and Shaping Signals by Measuring Confidence. AAAI 2014: 1687-1693
- [c54] Tim Brys, Kristof Van Moffaert, Ann Nowé, Matthew E. Taylor: Adaptive objective selection for correlated objectives in multi-objective reinforcement learning. AAMAS 2014: 1349-1350
- [c53] Chris HolmesParker, Matthew E. Taylor, Adrian K. Agogino, Kagan Tumer: CLEANing the reward: counterfactual actions to remove exploratory action noise in multiagent learning (extended abstract). AAMAS 2014: 1353-1354
- [c52] Tim Brys, Matthew E. Taylor, Ann Nowé: Using Ensemble Techniques and Multi-Objectivization to Solve Reinforcement Learning Problems. ECAI 2014: 981-982
- [c51] Haitham Bou-Ammar, Eric Eaton, Paul Ruvolo, Matthew E. Taylor: Online Multi-Task Learning for Policy Gradient Methods. ICML 2014: 1206-1214
- [c50] Tim Brys, Anna Harutyunyan, Peter Vrancx, Matthew E. Taylor, Daniel Kudenko, Ann Nowé: Multi-objectivization of reinforcement learning problems by reward shaping. IJCNN 2014: 2315-2322
- [c49] Matthew E. Taylor, Lisa Torrey: Agents Teaching Agents in Reinforcement Learning (Nectar Abstract). ECML/PKDD (3) 2014: 524-528
- [c48] Robert Tyler Loftin, Bei Peng, James MacGlashan, Michael L. Littman, Matthew E. Taylor, Jeff Huang, David L. Roberts: Learning something from nothing: Leveraging implicit human feedback strategies. RO-MAN 2014: 607-612
- [c47] Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, Ioannis P. Vlahavas: An Autonomous Transfer Learning Algorithm for TD-Learners. SETN 2014: 57-70
- [c46] Chris HolmesParker, Matthew E. Taylor, Adrian K. Agogino, Kagan Tumer: CLEAN Rewards to Improve Coordination by Removing Exploratory Action Noise. WI-IAT (3) 2014: 127-134
- 2013
- [j10] Marcos Augusto M. Vieira, Matthew E. Taylor, Prateek Tandon, Manish Jain, Ramesh Govindan, Gaurav S. Sukhatme, Milind Tambe: Mitigating multi-path fading in a mobile mesh network. Ad Hoc Networks 11(4): 1510-1521 (2013)
- [c45] Ravi Balasubramanian, Matthew E. Taylor: Learning for Mobile-Robot Error Recovery (Extended Abstract). AAAI Spring Symposium: Designing Intelligent Robots 2013
- [c44] Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, Ioannis P. Vlahavas: Autonomous Selection of Inter-Task Mappings in Transfer Learning (extended abstract). AAAI Spring Symposium: Lifelong Machine Learning 2013
- [c43] Lisa Torrey, Matthew E. Taylor: Teaching on a budget: agents advising agents in reinforcement learning. AAMAS 2013: 1053-1060
- [c42] Haitham Bou-Ammar, Decebal Constantin Mocanu, Matthew E. Taylor, Kurt Driessens, Karl Tuyls, Gerhard Weiss: Automatically Mapped Transfer between Reinforcement Learning Tasks via Three-Way Restricted Boltzmann Machines. ECML/PKDD (2) 2013: 449-464
- 2012
- [c41] Haitham Bou-Ammar, Karl Tuyls, Matthew E. Taylor, Kurt Driessens, Gerhard Weiss: Reinforcement learning transfer via sparse coding. AAMAS 2012: 383-390
- [c40] Lisa Torrey, Matthew E. Taylor: Towards student/teacher learning in sequential decision tasks. AAMAS 2012: 1383-1384
- 2011
- [j9] Frank Schweitzer, Matthew E. Taylor: Editorial: Agents and Multi-Agent Systems. Adv. Complex Syst. 14(2) (2011)
- [j8] Matthew E. Taylor, Manish Jain, Prateek Tandon, Makoto Yokoo, Milind Tambe: Distributed on-Line Multi-Agent Optimization under Uncertainty: Balancing Exploration and Exploitation. Adv. Complex Syst. 14(3): 471-528 (2011)
- [j7] Matthew E. Taylor, Peter Stone: An Introduction to Intertask Transfer for Reinforcement Learning. AI Mag. 32(1): 15-34 (2011)
- [c39] Matthew Edmund Taylor, Halit Bener Suay, Sonia Chernova: Using Human Demonstrations to Improve Reinforcement Learning. AAAI Spring Symposium: Help Me Help You: Bridging the Gaps in Human-Agent Collaboration 2011
- [c38] Shimon Whiteson, Brian Tanner, Matthew E. Taylor, Peter Stone: Protecting against evaluation overfitting in empirical reinforcement learning. ADPRL 2011: 120-127
- [c37] Haitham Bou-Ammar, Matthew E. Taylor: Reinforcement Learning Transfer via Common Subspaces. ALA 2011: 21-36
- [c36] Paul Scerri, Balajee Kannan, Prasanna Velagapudi, Kate Macarthur, Peter Stone, Matthew E. Taylor, John Dolan, Alessandro Farinelli, Archie C. Chapman, Bernadine Dias, George Kantor: Flood Disaster Mitigation: A Real-World Challenge Problem for Multi-agent Unmanned Surface Vehicles. AAMAS Workshops 2011: 252-269
- [c35] Jason Tsai, Natalie Fridman, Emma Bowring, Matthew Brown, Shira Epstein, Gal A. Kaminka, Stacy Marsella, Andrew Ogden, Inbal Rika, Ankur Sheel, Matthew E. Taylor, Xuezhi Wang, Avishay Zilka, Milind Tambe: ESCAPES: evacuation simulation with children, authorities, parents, emotions, and social comparison. AAMAS 2011: 457-464
- [c34] Matthew E. Taylor, Halit Bener Suay, Sonia Chernova: Integrating reinforcement learning with human demonstrations of varying ability. AAMAS 2011: 617-624
- [c33] Matthew E. Taylor, Brian Kulis, Fei Sha: Metric learning for reinforcement learning agents. AAMAS 2011: 777-784
- [c32] Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor, Milind Tambe: Teamwork in distributed POMDPs: execution-time coordination under model uncertainty. AAMAS 2011: 1261-1262
- [c31] Matthew Edmund Taylor: Teaching Reinforcement Learning with Mario: An Argument and Case Study. EAAI 2011: 1737-1742
- [c30] Todd W. Neller, Marie desJardins, Tim Oates, Matthew E. Taylor: Model AI Assignments 2011. EAAI 2011: 1746
- [c29] Haitham Bou-Ammar, Matthew E. Taylor, Karl Tuyls, Gerhard Weiss: Reinforcement Learning Transfer Using a Sparse Coded Inter-task Mapping. EUMAS 2011: 1-16
- [c28] Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, Ioannis P. Vlahavas: Transfer Learning via Multiple Inter-task Mappings. EWRL 2011: 225-236
- [c27] Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor, Milind Tambe: Towards Addressing Model Uncertainty: Robust Execution-Time Coordination for Teamwork. IAT 2011: 204-207
- 2010
- [j6] Shimon Whiteson, Matthew E. Taylor, Peter Stone: Critical factors in the empirical performance of temporal difference and evolutionary methods for reinforcement learning. Auton. Agents Multi Agent Syst. 21(1): 1-35 (2010)
- [j5] Matthew E. Taylor, Christopher Kiekintveld, Craig Western, Milind Tambe: A Framework for Evaluating Deployed Security Systems: Is There a Chink in your ARMOR? Informatica (Slovenia) 34(2): 129-140 (2010)
- [c26] Jun-young Kwak, Rong Yang, Zhengyu Yin, Matthew E. Taylor, Milind Tambe: Teamwork and Coordination under Model Uncertainty in DEC-POMDPs. Interactive Decision Theory and Game Theory 2010
- [c25] Matthew E. Taylor, Katherine E. Coons, Behnam Robatmili, Bertrand A. Maher, Doug Burger, Kathryn S. McKinley: Evolving Compiler Heuristics to Manage Communication and Contention. AAAI 2010: 1690-1693
- [c24] Matthew E. Taylor, Manish Jain, Yanquin Jin, Makoto Yokoo, Milind Tambe: When should there be a "Me" in "Team"?: distributed multi-agent optimization under uncertainty. AAMAS 2010: 109-116
- [e1] Matthew E. Taylor, Karl Tuyls: Adaptive and Learning Agents, Second Workshop, ALA 2009, Held as Part of the AAMAS 2009 Conference in Budapest, Hungary, May 12, 2009, Revised Selected Papers. Lecture Notes in Computer Science 5924, Springer 2010, ISBN 978-3-642-11813-5 [contents]
2000 – 2009
- 2009
- [b1] Matthew E. Taylor: Transfer in Reinforcement Learning Domains. Studies in Computational Intelligence 216, Springer 2009, ISBN 978-3-642-01881-7, pp. 1-218 [contents]
- [j4] Razvan C. Bunescu, Vitor R. Carvalho, Jan Chomicki, Vincent Conitzer, Michael T. Cox, Virginia Dignum, Zachary Dodds, Mark Dredze, David Furcy, Evgeniy Gabrilovich, Mehmet H. Göker, Hans W. Guesgen, Haym Hirsh, Dietmar Jannach, Ulrich Junker, Wolfgang Ketter, Alfred Kobsa, Sven Koenig, Tessa A. Lau, Lundy Lewis, Eric T. Matson, Ted Metzler, Rada Mihalcea, Bamshad Mobasher, Joelle Pineau, Pascal Poupart, Anita Raja, Wheeler Ruml, Norman M. Sadeh, Guy Shani, Daniel G. Shapiro, Sarabjot Singh Anand, Matthew E. Taylor, Kiri Wagstaff, Trey Smith, William E. Walsh, Rong Zhou: AAAI 2008 Workshop Reports. AI Mag. 30(1): 108-118 (2009)
- [j3] Matthew E. Taylor, Peter Stone: Transfer Learning for Reinforcement Learning Domains: A Survey. J. Mach. Learn. Res. 10: 1633-1685 (2009)
- [c23] Matthew E. Taylor: Assisting Transfer-Enabled Machine Learning Algorithms: Leveraging Human Knowledge for Curriculum Design. AAAI Spring Symposium: Agents that Learn from Human Teachers 2009: 141-143
- [c22] Pradeep Varakantham, Jun-young Kwak, Matthew E. Taylor, Janusz Marecki, Paul Scerri, Milind Tambe: Exploiting Coordination Locales in Distributed POMDPs via Social Model Shaping. ICAPS 2009
- [c21] Marc J. V. Ponsen, Matthew E. Taylor, Karl Tuyls: Abstraction and Generalization in Reinforcement Learning: A Summary and Framework. ALA 2009: 1-32
- [c20] Manish Jain, Matthew E. Taylor, Milind Tambe, Makoto Yokoo: DCOPs Meet the Real World: Exploring Unknown Reward Matrices with Applications to Mobile Sensor Networks. IJCAI 2009: 181-186
- 2008
- [c19] Katherine E. Coons, Behnam Robatmili, Matthew E. Taylor, Bertrand A. Maher, Doug Burger, Kathryn S. McKinley: Feature selection and policy optimization for distributed instruction placement using reinforcement learning. PACT 2008: 32-42
- [c18] Matthew E. Taylor, Gregory Kuhlmann, Peter Stone: Transfer Learning and Intelligence: an Argument and Approach. AGI 2008: 326-337
- [c17] Matthew E. Taylor, Gregory Kuhlmann, Peter Stone: Autonomous transfer for reinforcement learning. AAMAS (1) 2008: 283-290
- [c16] Matthew E. Taylor, Nicholas K. Jong, Peter Stone: Transferring Instances for Model-Based Reinforcement Learning. ECML/PKDD (2) 2008: 488-505
- 2007
- [j2] Shimon Whiteson, Matthew E. Taylor, Peter Stone: Empirical Studies in Action Selection with Reinforcement Learning. Adapt. Behav. 15(1): 33-50 (2007)
- [j1] Matthew E. Taylor, Peter Stone, Yaxin Liu: Transfer Learning via Inter-Task Mappings for Temporal Difference Learning. J. Mach. Learn. Res. 8: 2125-2167 (2007)
- [c15] Matthew E. Taylor, Shimon Whiteson, Peter Stone: Temporal Difference and Policy Search Methods for Reinforcement Learning: An Empirical Comparison. AAAI 2007: 1675-1678
- [c14] Matthew E. Taylor, Peter Stone: Representation Transfer via Elaboration. AAAI 2007: 1906-1907
- [c13] Matthew E. Taylor: Autonomous Inter-Task Transfer in Reinforcement Learning Domains. AAAI 2007: 1951-1952
- [c12] Matthew E. Taylor, Peter Stone: Representation Transfer for Reinforcement Learning. AAAI Fall Symposium: Computational Approaches to Representation Change during Learning and Development 2007: 78-85
- [c11] Matthew E. Taylor, Shimon Whiteson, Peter Stone: Transfer via inter-task mappings in policy search reinforcement learning. AAMAS 2007: 37
- [c10] Matthew E. Taylor, Peter Stone: Towards reinforcement learning representation transfer. AAMAS 2007: 100
- [c9] Mazda Ahmadi, Matthew E. Taylor, Peter Stone: IFSA: incremental feature-set augmentation for reinforcement learning tasks. AAMAS 2007: 186
- [c8] Matthew E. Taylor, Cynthia Matuszek, Bryan Klimt, Michael Witbrock: Autonomous Classification of Knowledge into an Ontology. FLAIRS 2007: 140-145
- [c7] Matthew E. Taylor, Cynthia Matuszek, Pace Reagan Smith, Michael Witbrock: Guiding Inference with Policy Search Reinforcement Learning. FLAIRS 2007: 146-151
- [c6] Matthew E. Taylor, Peter Stone: Cross-domain transfer for reinforcement learning. ICML 2007: 879-886
- 2006
- [c5] Matthew E. Taylor, Peter Stone: Inter-Task Action Correlation for Reinforcement Learning Tasks. AAAI 2006: 1901-1903
- [c4] Matthew E. Taylor, Shimon Whiteson, Peter Stone: Comparing evolutionary and temporal difference methods in a reinforcement learning domain. GECCO 2006: 1321-1328
- 2005
- [c3] Matthew E. Taylor, Peter Stone, Yaxin Liu: Value Functions for RL-Based Behavior Transfer: A Comparative Study. AAAI 2005: 880-885
- [c2] Matthew E. Taylor, Peter Stone: Behavior transfer for value-function-based reinforcement learning. AAMAS 2005: 53-59
- [c1] Peter Stone, Gregory Kuhlmann, Matthew E. Taylor, Yaxin Liu: Keepaway Soccer: From Machine Learning Testbed to Benchmark. RoboCup 2005: 93-105