21st AAMAS 2022: Auckland, New Zealand
- Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, Matthew E. Taylor:
21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, Auckland, New Zealand, May 9-13, 2022. International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) 2022, ISBN 978-1-4503-9213-6
Main Track
- Mitsuteru Abe, Fabio Henrique Kiyoiti dos Santos Tanaka, Jair Pereira Junior, Anna Bogdanova, Tetsuya Sakurai, Claus Aranha:
Using Agent-Based Simulator to Assess Interventions Against COVID-19 in a Small Community Generated from Map Data. 1-8
- Mridul Agarwal, Vaneet Aggarwal, Tian Lan:
Multi-Objective Reinforcement Learning with Non-Linear Scalarization. 9-17
- Parand Alizadeh Alamdari, Toryn Q. Klassen, Rodrigo Toro Icarte, Sheila A. McIlraith:
Be Considerate: Avoiding Negative Side Effects in Reinforcement Learning. 18-26
- Ashay Aswale, Antonio López, Aukkawut Ammartayakun, Carlo Pinciroli:
Hacking the Colony: On the Disruptive Effect of Misleading Pheromone and How to Defend against It. 27-34
- Pranav Atreya, Joydeep Biswas:
State Supervised Steering Function for Sampling-based Kinodynamic Planning. 35-43
- Andrea Baisero, Christopher Amato:
Unbiased Asymmetric Reinforcement Learning under Partial Observability. 44-52
- Adrian Simon Bauer, Anne Köpken, Daniel Leidner:
Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation. 53-61
- Francesco Belardinelli, Wojtek Jamroga, Vadim Malvone, Munyque Mittelmann, Aniello Murano, Laurent Perrussel:
Reasoning about Human-Friendly Strategies in Repeated Keyword Auctions. 62-71
- Amine Benamara, Jean-Claude Martin, Elise Prigent, Laurence Chaby, Mohamed Chetouani, Jean Zagdoun, Hélène Vanderstichel, Sébastien Dacunha, Brian Ravenet:
COPALZ: A Computational Model of Pathological Appraisal Biases for an Interactive Virtual Alzheimer Patient. 72-81
- Márton Benedek, Péter Biró, Walter Kern, Daniël Paulusma:
Computing Balanced Solutions for Large International Kidney Exchange Schemes. 82-90
- Ziyad Benomar, Chaima Ghribi, Elie Cali, Alexander Hinsen, Benedikt Jahnel:
Agent-based Modeling and Simulation for Malware Spreading in D2D Networks. 91-99
- Jamal Bentahar, Nagat Drawel, Abdeladim Sadiki:
Quantitative Group Trust: A Two-Stage Verification Approach. 100-108
- Petra Berenbrink, Martin Hoefer, Dominik Kaaser, Pascal Lenzner, Malin Rau, Daniel Schmand:
Asynchronous Opinion Dynamics in Social Networks. 109-117
- Tom Bewley, Freddy Lécué:
Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions. 118-126
- Niclas Boehmer, Robert Bredereck, Klaus Heeger, Dusan Knop, Junjie Luo:
Multivariate Algorithmics for Eliminating Envy by Donating Goods. 127-135
- Niclas Boehmer, Markus Brill, Ulrike Schmidt-Kraepelin:
Proportional Representation in Matching Markets: Selecting Multiple Matchings under Dichotomous Preferences. 136-144
- Kenneth D. Bogert, Prashant Doshi:
A Hierarchical Bayesian Process for Inverse RL in Partially-Controlled Environments. 145-153
- Allan Borodin, Omer Lev, Nisarg Shah, Tyrone Strangway:
Little House (Seat) on the Prairie: Compactness, Gerrymandering, and Population Distribution. 154-162
- Yasser Bourahla, Manuel Atencia, Jérôme Euzenat:
Knowledge Transmission and Improvement Across Generations do not Need Strong Selection. 163-171
- Martim Brandao, Masoumeh Mansouri, Areeb Mohammed, Paul Luff, Amanda Jane Coles:
Explainability in Multi-Agent Path/Motion Planning: User-study-driven Taxonomy and Requirements. 172-180
- Felix Brandt, Patrick Lederer, René Romen:
Relaxed Notions of Condorcet-Consistency and Efficiency for Strategyproof Social Decision Schemes. 181-189
- Angelina Brilliantova, Hadi Hosseini:
Fair Stable Matching Meets Correlated Preferences. 190-198
- Axel Browne, Andrew Forney:
Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments. 199-207
- Ioannis Caragiannis, Vasilis Gkatzelis, Alexandros Psomas, Daniel Schoepflin:
Beyond Cake Cutting: Allocating Homogeneous Divisible Goods. 208-216
- Yaniel Carreno, Jun Hao Alvin Ng, Yvan R. Petillot, Ron P. A. Petrick:
Planning, Execution, and Adaptation for Multi-Robot Systems using Probabilistic and Temporal Planning. 217-225
- Matteo Castiglioni, Alberto Marchesi, Nicola Gatti:
Bayesian Persuasion Meets Mechanism Design: Going Beyond Intractability with Type Reporting. 226-234
- Mustafa Mert Çelikok, Frans A. Oliehoek, Samuel Kaski:
Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs. 235-243
- Zi-Xuan Chen, Xin-Qiang Cai, Yuan Jiang, Zhi-Hua Zhou:
Anomaly Guided Policy Learning from Imperfect Demonstrations. 244-252
- Yang Chen, Libo Zhang, Jiamou Liu, Shuyue Hu:
Individual-Level Inverse Reinforcement Learning for Mean Field Games. 253-262
- Julian Chingoma, Ulle Endriss, Ronald de Haan:
Simulating Multiwinner Voting Rules in Judgment Aggregation. 263-271
- Shushman Choudhury, Kiril Solovey, Mykel J. Kochenderfer, Marco Pavone:
Coordinated Multi-Agent Pathfinding for Drones and Trucks over Road Networks. 272-280
- Samuel H. Christie V., Amit K. Chopra, Munindar P. Singh:
Pippi: Practical Protocol Instantiation. 281-289
- Saar Cohen, Noa Agmon:
Optimizing Multi-Agent Coordination via Hierarchical Graph Probabilistic Recursive Reasoning. 290-299
- Ágnes Cseh, Tobias Friedrich, Jannik Peters:
Pareto Optimal and Popular House Allocation with Lower and Upper Quotas. 300-308
- Ágnes Cseh, Jannik Peters:
Three-Dimensional Popular Matching with Cyclic Preferences. 309-317
- Aleksander Czechowski, Georgios Piliouras:
Poincaré-Bendixson Limit Sets in Multi-Agent Learning. 318-326
- Panayiotis Danassis, Aleksei Triastcyn, Boi Faltings:
A Distributed Differentially Private Algorithm for Resource Allocation in Unboundedly Large Settings. 327-335
- Gianlorenzo D'Angelo, Esmaeil Delfaraz, Hugo Gilbert:
Computation and Bribery of Voting Power in Delegative Simple Games. 336-344
- Debojit Das, Shweta Jain, Sujit Gujar:
Budgeted Combinatorial Multi-Armed Bandits. 345-353
- Ilias Diakonikolas, Chrystalla Pavlou, John Peebles, Alistair Stewart:
Efficient Approximation Algorithms for the Inverse Semivalue Problem. 354-362
- Louise Dupuis de Tarlé, Elise Bonzon, Nicolas Maudet:
Multiagent Dynamics of Gradual Argumentation Semantics. 363-371
- Soroush Ebadian, Dominik Peters, Nisarg Shah:
How to Fairly Allocate Easy and Difficult Chores. 372-380
- Vladimir Egorov, Alexey Shpilman:
Scalable Multi-Agent Model-Based Reinforcement Learning. 381-390
- Edith Elkind, Minming Li, Houyu Zhou:
Facility Location With Approval Preferences: Strategyproofness and Fairness. 391-399
- Eric Ewing, Jingyao Ren, Dhvani Kansara, Vikraman Sathiyanarayanan, Nora Ayanian:
Betweenness Centrality in Multi-Agent Path Finding. 400-408
- Roy Fairstein, Dan Vilenchik, Reshef Meir, Kobi Gal:
Welfare vs. Representation in Participatory Budgeting. 409-417
- Hélène Fargier, Paul Jourdan, Régis Sabbadin:
A Path-following Polynomial Equations Systems Approach for Computing Nash Equilibria. 418-426
- Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer:
Ensemble and Incremental Learning for Norm Violation Detection. 427-435
- Robin Fritsch, Roger Wattenhofer:
The Price of Majority Support. 436-444
- Sébastien Gamblin, Alexandre Niveau, Maroua Bouzid:
A Symbolic Representation for Probabilistic Dynamic Epistemic Logic. 445-453
- Deepeka Garg, Maria Chli, George Vogiatzis:
Fully-Autonomous, Vision-based Traffic Signal Control: From Simulation to Reality. 454-462
- Jugal Garg, Thorben Tröbst, Vijay V. Vazirani:
One-Sided Matching Markets with Endowments: Equilibria and Algorithms. 463-471
- Anna Gautier, Alex Stephens, Bruno Lacerda, Nick Hawes, Michael J. Wooldridge:
Negotiated Path Planning for Non-Cooperative Multi-Robot Systems. 472-480
- Tzvika Geft, Dan Halperin:
Refined Hardness of Distance-Optimal Multi-Agent Path Finding. 481-488
- Matthieu Geist, Julien Pérolat, Mathieu Laurière, Romuald Elie, Sarah Perrin, Olivier Bachem, Rémi Munos, Olivier Pietquin:
Concave Utility Reinforcement Learning: The Mean-field Game Viewpoint. 489-497
- Ian Gemp, Kevin R. McKee, Richard Everett, Edgar A. Duéñez-Guzmán, Yoram Bachrach, David Balduzzi, Andrea Tacchetti:
D3C: Reducing the Price of Anarchy in Multi-Agent Learning. 498-506
- Ian Gemp, Rahul Savani, Marc Lanctot, Yoram Bachrach, Thomas W. Anthony, Richard Everett, Andrea Tacchetti, Tom Eccles, János Kramár:
Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent. 507-515
- Athina Georgara, Juan A. Rodríguez-Aguilar, Carles Sierra:
Building Contrastive Explanations for Multi-Agent Team Formation. 516-524
- Ganesh Ghalme, Vineet Nair, Vishakha Patil, Yilun Zhou:
Long-Term Resource Allocation Fairness in Average Markov Decision Process (AMDP) Environment. 525-533
- Hiromichi Goko, Ayumi Igarashi, Yasushi Kawase, Kazuhisa Makino, Hanna Sumita, Akihisa Tamura, Yu Yokoi, Makoto Yokoo:
Fair and Truthful Mechanism with Limited Subsidy. 534-542
- Denizalp Goktas, Jiayi Zhao, Amy Greenwald:
Robust No-Regret Learning in Min-Max Stackelberg Games. 543-552
- Niko A. Grupen, Daniel D. Lee, Bart Selman:
Multi-Agent Curricula and Emergent Implicit Signaling. 553-561
- Himanshu Gupta, Bradley Hayes, Zachary Sunberg:
Intention-Aware Navigation in Crowds with Extended-Space POMDP Planning. 562-570
- Dongge Han, Chris Xiaoxuan Lu, Tomasz P. Michalak, Michael J. Wooldridge:
Multiagent Model-based Credit Assignment for Continuous Control. 571-579
- Jiang Hao, Pradeep Varakantham:
Hierarchical Value Decomposition for Effective On-demand Ride-Pooling. 580-587
- Paul Harrenstein, Paolo Turrini:
Computing Nash Equilibria for District-based Nominations. 588-596
- Hadi Hosseini, Andrew Searns, Erel Segal-Halevi:
Ordinal Maximin Share Approximation for Chores. 597-605
- Vincent Hsiao, Dana S. Nau:
A Mean Field Game Model of Spatial Evolutionary Games. 606-614
- Shuyue Hu, Chin-Wing Leung, Ho-fung Leung, Harold Soh:
The Dynamics of Q-learning in Population Games: A Physics-inspired Continuity Equation Model. 615-623
- Matej Husár, Jirí Svancara, Philipp Obermeier, Roman Barták, Torsten Schaub:
Reduction-based Solving of Multi-agent Pathfinding on Large Maps Using Graph Pruning. 624-632
- Aya Hussein, Eleni Petraki, Sondoss Elsawah, Hussein A. Abbass:
Autonomous Swarm Shepherding Using Curriculum-Based Reinforcement Learning. 633-641
- Mohammad T. Irfan, Kim Hancock, Laura M. Friel:
Cascades and Overexposure in Social Networks: The Budgeted Case. 642-650
- Gabriel Istrate, Cosmin Bonchis:
Being Central on the Cheap: Stability in Heterogeneous Multiagent Centrality Games. 651-659
- Saïd Jabbour, Nizar Mhadhbi, Badran Raddaoui, Lakhdar Sais:
A Declarative Framework for Maximal k-plex Enumeration Problems. 660-668
- Alexis Jacq, Johan Ferret, Olivier Pietquin, Matthieu Geist:
Lazy-MDPs: Towards Interpretable RL by Learning When to Act. 669-677
- Devansh Jalota, Kiril Solovey, Matthew Tsao, Stephen Zoepf, Marco Pavone:
Balancing Fairness and Efficiency in Traffic Routing via Interpolated Traffic Assignment. 678-686
- Jatin Jindal, Jérôme Lang, Katarína Cechlárová, Julien Lesca:
Selecting PhD Students and Projects with Limited Funding. 687-695
- Santhini K. A., Govind S. Sankar, Meghana Nasre:
Optimal Matchings with One-Sided Preferences: Fixed and Cost-Based Quotas. 696-704
- Mustafa O. Karabag, Cyrus Neary, Ufuk Topcu:
Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss. 705-713
- Neel Karia, Faraaz Mallick, Palash Dey:
How Hard is Safe Bribery? 714-722
- Sammie Katt, Hai Nguyen, Frans A. Oliehoek, Christopher Amato:
BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs. 723-731
- Milad Kazemi, Mateo Perez, Fabio Somenzi, Sadegh Soudjani, Ashutosh Trivedi, Alvaro Velasquez:
Translating Omega-Regular Specifications to Average Objectives for Model-Free Reinforcement Learning. 732-741
- Tarik Kelestemur, Robert Platt, Taskin Padir:
Tactile Pose Estimation and Policy Learning for Unknown Object Manipulation. 742-750
- Seung Hyun Kim, Neale Van Stralen, Girish Chowdhary, Huy T. Tran:
Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning. 751-760
- Luca Kreisel, Niclas Boehmer, Vincent Froese, Rolf Niedermeier:
Equilibria in Schelling Games: Computational Hardness and Robustness. 761-769
- Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, Gustav Eje Henter:
Multimodal Analysis of the Predictability of Hand-gesture Properties. 770-779
- Roger Lera-Leri, Filippo Bistaffa, Marc Serramia, Maite López-Sánchez, Juan A. Rodríguez-Aguilar:
Towards Pluralistic Value Alignment: Aggregating Value Systems Through lp-Regression. 780-788
- George Z. Li, Ann Li, Madhav V. Marathe, Aravind Srinivasan, Leonidas Tsepenekas, Anil Vullikanti:
Deploying Vaccine Distribution Sites for Improved Accessibility and Equity to Support Pandemic Response. 789-797
- Yongheng Liang, Hejun Wu, Haitao Wang:
ASM-PPO: Asynchronous and Scalable Multi-Agent PPO for Cooperative Charging. 798-806
- Grzegorz Lisowski, M. S. Ramanujan, Paolo Turrini:
Equilibrium Computation For Knockout Tournaments Played By Groups. 807-815
- Wencong Liu, Jiamou Liu, Zijian Zhang, Yiwei Liu, Liehuang Zhu:
Residual Entropy-based Graph Generative Algorithms. 816-824
- Buhong Liu, Maria Polukarov, Carmine Ventre, Lingbo Li, Leslie Kanthan, Fan Wu, Michail Basios:
The Spoofing Resistance of Frequent Call Markets. 825-832
- Emiliano Lorini, Éloan Rapion:
Logical Theories of Collective Attitudes and the Belief Base Perspective. 833-841
- Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz, Jakob N. Foerster:
Lyapunov Exponents for Diversity in Differentiable Games. 842-852
- Keane Lucas, Ross E. Allen:
Any-Play: An Intrinsic Augmentation for Zero-Shot Coordination. 853-861
- Roberto Lucchetti, Stefano Moretti, Tommaso Rea:
Coalition Formation Games and Social Ranking Solutions. 862-870
- Arnab Maiti, Palash Dey:
On Parameterized Complexity of Binary Networked Public Goods Game. 871-879
- Aditya S. Mate, Arpita Biswas, Christoph Siebenbrunner, Susobhan Ghosh, Milind Tambe:
Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems. 880-888
- Joe McCalmon, Thai Le, Sarra M. Alqahtani, Dongwon Lee:
CAPS: Comprehensible Abstract Policy Summaries for Explaining Reinforcement Learning Agents. 889-897
- Kevin R. McKee, Xuechunzi Bai, Susan T. Fiske:
Warmth and Competence in Human-Agent Cooperation. 898-907
- Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Mohamed Chetouani, Francisco C. Santos:
Cooperation and Learning Dynamics under Risk Diversity and Financial Incentives. 908-916
- Mostafa Mohajeri Parizi, Giovanni Sileno, Tom M. van Engers:
Preference-Based Goal Refinement in BDI Agents. 917-925
- Paul Muller, Mark Rowland, Romuald Elie, Georgios Piliouras, Julien Pérolat, Mathieu Laurière, Raphaël Marinier, Olivier Pietquin, Karl Tuyls:
Learning Equilibria in Mean-Field Games: Introducing Mean-Field PSRO. 926-934
- Oliviero Nardi, Arthur Boixel, Ulle Endriss:
A Graph-Based Algorithm for the Automated Justification of Collective Decisions. 935-943
- Grigory Neustroev, Sytze P. E. Andringa, Remco A. Verzijlbergh, Mathijs Michiel de Weerdt:
Deep Reinforcement Learning for Active Wake Control. 944-953
- Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh, Truyen Tran:
Learning Theory of Mind via Dynamic Traits Attribution. 954-962
- Dung Nguyen, Phuoc Nguyen, Svetha Venkatesh, Truyen Tran:
Learning to Transfer Role Assignment Across Team Sizes. 963-971
- Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki:
CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces. 972-981
- Liubove Orlov-Savko, Abhinav Jain, Gregory M. Gremillion, Catherine E. Neubauer, Jonroy D. Canady, Vaibhav V. Unhelkar:
Factorial Agent Markov Model: Modeling Other Agents' Behavior in presence of Dynamic Latent Decision Factors. 982-1000
- Han-Ching Ou, Christoph Siebenbrunner, Jackson A. Killian, Meredith B. Brooks, David Kempe, Yevgeniy Vorobeychik, Milind Tambe:
Networked Restless Multi-Armed Bandits for Mobile Interventions. 1001-1009
- Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Mingyan Liu, Bo Li, Dawn Song:
Characterizing Attacks on Deep Reinforcement Learning. 1010-1018
- Stipe Pandzic, Jan M. Broersen, Henk Aarts:
BOID*: Autonomous Goal Deliberation through Abduction. 1019-1027
- Julien Pérolat, Sarah Perrin, Romuald Elie, Mathieu Laurière, Georgios Piliouras, Matthieu Geist, Karl Tuyls, Olivier Pietquin:
Scaling Mean Field Games by Online Mirror Descent. 1028-1037
- Markus Peschl, Arkady Zgonnikov, Frans A. Oliehoek, Luciano Cavalcante Siebert:
MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning. 1038-1046
- Thomy Phan, Felix Sommer, Philipp Altmann, Fabian Ritz, Lenz Belzner, Claudia Linnhoff-Popien:
Emergent Cooperation from Mutual Acknowledgment Exchange. 1047-1055
- Gauthier Picard:
Auction-based and Distributed Optimization Approaches for Scheduling Observations in Satellite Constellations with Exclusive Orbit Portions. 1056-1064
- Gauthier Picard:
Trajectory Coordination based on Distributed Constraint Optimization Techniques in Unmanned Air Traffic Management. 1065-1073
- Fredrik Präntare, Herman Appelgren, Mattias Tiger, David Bergström, Fredrik Heintz:
Learning Heuristics for Combinatorial Assignment by Optimally Solving Subproblems. 1074-1082
- Peizhu Qian, Vaibhav V. Unhelkar:
Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents. 1083-1091
- Dezhi Ran, Weiqiang Zheng, Yunqi Li, Kaigui Bian, Jie Zhang, Xiaotie Deng:
Revenue and User Traffic Maximization in Mobile Short-Video Advertising. 1092-1100
- Bram M. Renting, Holger H. Hoos, Catholijn M. Jonker:
Automated Configuration and Usage of Strategy Portfolios for Mixed-Motive Bargaining. 1101-1109
- Mathieu Reymond, Eugenio Bargiacchi, Ann Nowé:
Pareto Conditioned Networks. 1110-1118
- Sebastian Rodriguez, John Thangarajah, Michael Winikoff, Dhirendra Singh:
Testing Requirements via User and System Stories in Agent Systems. 1119-1127
- Jingqing Ruan, Yali Du, Xuantang Xiong, Dengpeng Xing, Xiyun Li, Linghui Meng, Haifeng Zhang, Jun Wang, Bo Xu:
GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement Learning. 1128-1136
- Heechang Ryu, Hayong Shin, Jinkyoo Park:
REMAX: Relational Representation for Multi-Agent Exploration. 1137-1145
- Lukas Schäfer, Filippos Christianos, Josiah P. Hanna, Stefano V. Albrecht:
Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration. 1146-1154
- Candice Schumann, Zhi Lang, Nicholas Mattei, John P. Dickerson:
Group Fairness in Bandits with Biased Feedback. 1155-1163
- Manisha Senadeera, Thommen George Karimpanal, Sunil Gupta, Santu Rana:
Sympathy-based Reinforcement Learning Agents. 1164-1172
- Esmaeil Seraj, Zheyuan Wang, Rohan R. Paleja, Daniel Martin, Matthew Sklar, Anirudh Patel, Matthew C. Gombolay:
Learning Efficient Diverse Communication for Cooperative Heterogeneous Teaming. 1173-1182
- Naman Shah, Siddharth Srivastava:
Using Deep Learning to Bootstrap Abstractions for Hierarchical Robot Planning. 1183-1191
- Yash Shukla, Christopher Thierauf, Ramtin Hosseini, Gyan Tatiya, Jivko Sinapov:
ACuTE: Automatic Curriculum Transfer from Simple to Complex Environments. 1192-1200
- Sujoy Sikdar, Sikai Ruan, Qishen Han, Paween Pitimanaaree, Jeremy Blackthorne, Bülent Yener, Lirong Xia:
Anti-Malware Sandbox Games. 1201-1209
- Sean Sirur, Tim Muller:
Properties of Reputation Lag Attack Strategies. 1210-1218
- Aravind Srinivasan, Pan Xu:
The Generalized Magician Problem under Unknown Distributions and Related Applications. 1219-1227
- Charlie Street, Bruno Lacerda, Michal Staniaszek, Manuel Mühlig, Nick Hawes:
Context-Aware Modelling for Multi-Robot Systems Under Uncertainty. 1228-1236
- Karush Suri:
Off-Policy Evolutionary Reinforcement Learning with Maximum Mutations. 1237-1245
- Sharadhi Alape Suryanarayana, David Sarne, Sarit Kraus:
Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction. 1246-1255
- Aaquib Tabrez, Matthew B. Luebbers, Bradley Hayes:
Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming. 1256-1264
- Liangde Tao, Lin Chen, Lei Xu, Weidong Shi, Ahmed Sunny, Md Mahabub Uz Zaman:
How Hard is Bribery in Elections with Randomly Selected Voters. 1265-1273
- Julius Taylor, Eleni Nisioti, Clément Moulin-Frier:
Socially Supervised Representation Learning: The Role of Subjectivity in Learning Efficient Representations. 1274-1282
- Andries van Beek, Ruben Brokkelkamp, Guido Schäfer:
Corruption in Auctions: Social Welfare Loss in Hybrid Multi-Unit Auctions. 1283-1291
- Jules Vandeputte, Antoine Cornuéjols, Nicolas Darcel, Fabien Delaere, Christine Martin:
Coaching Agent: Making Recommendations for Behavior Change. A Case Study on Improving Eating Habits. 1292-1300
- Miguel Vasco, Hang Yin, Francisco S. Melo, Ana Paiva:
How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents. 1301-1309
- Alvaro Velasquez, Ismail Alkhouri, Andre Beckus, Ashutosh Trivedi, George K. Atia:
Controller Synthesis for Omega-Regular and Steady-State Specifications. 1310-1318
- Srdjan Vesic, Bruno Yun, Predrag Teovanovic:
Graphical Representation Enhances Human Compliance with Principles for Graded Argumentation Semantics. 1319-1327
- Michael J. Vezina, Babak Esfandiari:
Epistemic Reasoning in Jason. 1328-1336
- Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Craig Innes, Subramanian Ramamoorthy, Adrian Weller:
Robust Learning from Observation with Model Misspecification. 1337-1345
- Yongzhao Wang, Gary Qiurui Ma, Michael P. Wellman:
Evaluating Strategy Exploration in Empirical Game-Theoretic Analysis. 1346-1354
- Yutong Wang, Guillaume Sartoretti:
FCMNet: Full Communication Memory Net for Team-Level Cooperation in Multi-Agent Systems. 1355-1363
- Wanyuan Wang, Gerong Wu, Weiwei Wu, Yichuan Jiang, Bo An:
Online Collective Multiagent Planning by Offline Policy Reuse with Applications to City-Scale Mobility-on-Demand Systems. 1364-1372
- Yinghui Wen, Aizhong Zhou, Jiong Guo:
Position-Based Matching with Multi-Modal Preferences. 1373-1381
- Alexander Wich, Holger Schultheis, Michael Beetz:
Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities. 1382-1390
- Baicen Xiao, Bhaskar Ramasubramanian, Radha Poovendran:
Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement Learning. 1391-1399
- Zhiwei Xu, Yunpeng Bai, Dapeng Li, Bin Zhang, Guoliang Fan:
SIDE: State Inference for Partially Observable Cooperative Multi-Agent Reinforcement Learning. 1400-1408
- Hang Xu, Xinghua Qu, Zinovi Rabinovich:
Spiking Pitch Black: Poisoning an Unknown Environment to Attack Unknown Reinforcement Learners. 1409-1417
- Wanqi Xue, Wei Qiu, Bo An, Zinovi Rabinovich, Svetlana Obraztsova, Chai Kiat Yeo:
Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning. 1418-1426
- Tomoki Yamauchi, Yuki Miyashita, Toshiharu Sugawara:
Standby-Based Deadlock Avoidance Method for Multi-Agent Pickup and Delivery Tasks. 1427-1435
- Jiachen Yang, Ethan Wang, Rakshit Trivedi, Tuo Zhao, Hongyuan Zha:
Adaptive Incentive Design with Multi-Agent Meta-Gradient Reinforcement Learning. 1436-1445
- Bo You, Ludwig Dierks, Taiki Todo, Minming Li, Makoto Yokoo:
Strategy-Proof House Allocation with Existing Tenants over Social Networks. 1446-1454
- D. Kai Zhang, Alexander Carver:
Segregation in Social Networks of Heterogeneous Agents Acting under Incomplete Information. 1455-1463
- Han Zhang, Jingkai Chen, Jiaoyang Li, Brian C. Williams, Sven Koenig:
Multi-Agent Path Finding for Precedence-Constrained Goal Sequences. 1464-1472
- Keyang Zhang, Jose Javier Escribano Macias, Dario Paccagnan, Panagiotis Angeloudis:
The Competition and Inefficiency in Urban Road Last-Mile Delivery. 1473-1481
- Yuzhe Zhang, Davide Grossi:
Tracking Truth by Weighting Proxies in Liquid Democracy. 1482-1490
- Shangtong Zhang, Romain Laroche, Harm van Seijen, Shimon Whiteson, Remi Tachet des Combes:
A Deeper Look at Discounting Mismatch in Actor-Critic Algorithms. 1491-1499
- Qizhen Zhang, Christopher Lu, Animesh Garg, Jakob N. Foerster:
Centralized Model and Exploration Policy for Multi-Agent RL. 1500-1508
- Yao Zhang, Dengji Zhao:
Incentives to Invite Others to Form Larger Coalitions. 1509-1517
Extended Abstracts
- Yehia Abd Alrahman, Shaun Azzopardi, Nir Piterman:
R-CHECK: A Model Checker for Verifying Reconfigurable MAS. 1518-1520
- Samuel Arseneault, David Vielfaure, Giovanni Beltrame:
RASS: Risk-Aware Swarm Storage. 1521-1523
- Raphaël Avalos, Mathieu Reymond, Ann Nowé, Diederik M. Roijers:
Local Advantage Networks for Cooperative Multi-Agent Reinforcement Learning. 1524-1526
- Aviram Aviv, Yaniv Oshrat, Samuel A. Assefa, Tobi Mustapha, Daniel Borrajo, Manuela Veloso, Sarit Kraus:
Advising Agent for Service-Providing Live-Chat Operators. 1527-1529
- Pinkesh Badjatiya, Mausoom Sarkar, Nikaash Puri, Jayakumar Subramanian, Abhishek Sinha, Siddharth Singh, Balaji Krishnamurthy:
Status-quo Policy Gradient in Multi-Agent Reinforcement Learning. 1530-1532
- Pallavi Bagga, Nicola Paoletti, Kostas Stathis:
Deep Learnable Strategy Templates for Multi-Issue Bilateral Negotiation. 1533-1535
- Flavia Barsotti, Rüya Gökhan Koçer, Fernando P. Santos:
Can Algorithms be Explained Without Compromising Efficiency? The Benefits of Detection and Imitation in Strategic Classification. 1536-1538
- Jad Bassil, Benoît Piranda, Abdallah Makhoul, Julien Bourgeois:
A New Porous Structure for Modular Robots. 1539-1541
- Dorothea Baumeister, Tobias Hogrebe:
On the Average-Case Complexity of Predicting Round-Robin Tournaments. 1542-1544
- Martino Bernasconi, Federico Cacciamani, Simone Fioravanti, Nicola Gatti, Francesco Trovò:
The Evolutionary Dynamics of Soft-Max Policy Gradient in Multi-Agent Settings. 1545-1547
- Niclas Boehmer, Tomohiro Koana, Rolf Niedermeier:
A Refined Complexity Analysis of Fair Districting over Graphs. 1548-1550
- AnneMarie Borg, Floris Bex:
Contrastive Explanations for Argumentation-Based Conclusions. 1551-1553
- Ulrik Brandes, Christian Laußmann, Jörg Rothe:
Voting for Centrality. 1554-1556
- Theophile Cabannes, Mathieu Laurière, Julien Pérolat, Raphaël Marinier, Sertan Girgin, Sarah Perrin, Olivier Pietquin, Alexandre M. Bayen, Eric Goubault, Romuald Elie:
Solving N-Player Dynamic Routing Games with Congestion: A Mean-Field Approach. 1557-1559
- Pierre Cardi, Laurent Gourvès, Julien Lesca:
On Fair and Efficient Solutions for Budget Apportionment. 1560-1562
- Darshan Chakrabarti, Jie Gao, Aditya Saraf, Grant Schoenebeck, Fang-Yi Yu:
Optimal Local Bayesian Differential Privacy over Markov Chains. 1563-1565
- Kishan Chandan, Jack Albertson, Shiqi Zhang:
Augmented Reality Visualizations using Imitation Learning for Collaborative Warehouse Robots. 1566-1568
- Sanjay Chandlekar, Easwar Subramanian, Sanjay P. Bhat, Praveen Paruchuri, Sujit Gujar:
Multi-unit Double Auctions: Equilibrium Analysis and Bidding Strategy using DDPG in Smart-grids. 1569-1571
- Jiayu Chen, Jingdi Chen, Tian Lan, Vaneet Aggarwal:
Multi-agent Covering Option Discovery through Kronecker Product of Factor Graphs. 1572-1574
- Palash Dey:
Priced Gerrymandering. 1575-1577
- Gaurav Dixit, Kagan Tumer:
Behavior Exploration and Team Balancing for Heterogeneous Multiagent Coordination. 1578-1579
- Juncheng Dong, Suya Wu, Mohammadreza Soltani, Vahid Tarokh:
Multi-Agent Adversarial Attacks for Multi-Channel Communications. 1580-1582
- Seyed A. Esmaeili, Sharmila Duppala, Vedant Nanda, Aravind Srinivasan, John P. Dickerson:
Rawlsian Fairness in Online Bipartite Matching: Two-sided, Group, and Individual. 1583-1585
- Markus Ewert, Stefan Heidekrüger, Martin Bichler:
Approaching the Overbidding Puzzle in All-Pay Auctions: Explaining Human Behavior through Bayesian Optimization and Equilibrium Learning. 1586-1588
- Angelo Ferrando, Rafael C. Cardoso:
Safety Shields, an Automated Failure Handling Mechanism for BDI Agents. 1589-1591
- Junsong Gao, Ziyu Chen, Dingding Chen, Wenxin Zhang:
Beyond Uninformed Search: Improving Branch-and-bound Based Acceleration Algorithms for Belief Propagation via Heuristic Strategies. 1592-1594
- Felipe Garrido-Lucero, Rida Laraki:
Stable Matching Games. 1595-1597
- Athina Georgara, Juan A. Rodríguez-Aguilar, Carles Sierra, Ornella Mich, Raman Kazhamiakin, Alessio Palmero Aprosio, Jean-Christophe R. Pazzaglia:
An Anytime Heuristic Algorithm for Allocating Many Teams to Many Tasks. 1598-1600
- Everardo Gonzalez, Lucie Houel, Radhika Nagpal, Melinda J. D. Malley:
Influencing Emergent Self-Assembled Structures in Robotic Collectives Through Traffic Control. 1601-1603
- Sriram Gopalakrishnan, Subbarao Kambhampati:
Minimizing Robot Navigation Graph for Position-Based Predictability by Humans. 1604-1606
- Alvaro Gunawan, Ji Ruan, Xiaowei Huang:
A Graph Neural Network Reasoner for Game Description Language. 1607-1609
- Enwei Guo, Xiumin Wang, Weiwei Wu:
Adaptive Aggregation Weight Assignment for Federated Learning: A Deep Reinforcement Learning Approach. 1610-1612
- Önder Gürcan:
Proof-of-Work as a Stigmergic Consensus Algorithm. 1613-1615
- Tesshu Hanaka, Toshiyuki Hirose, Hirotaka Ono:
Capacitated Network Design Games on a Generalized Fair Allocation Model. 1616-1617
- Helen Harman, Elizabeth I. Sklar:
Multi-agent Task Allocation for Fruit Picker Team Formation. 1618-1620
- Conor F. Hayes, Diederik M. Roijers, Enda Howley, Patrick Mannion:
Decision-Theoretic Planning for the Expected Scalarised Returns. 1621-1623
- Masanori Hirano, Kiyoshi Izumi, Hiroki Sakaji:
Implementation of Actual Data for Artificial Market Simulation. 1624-1626
- Diyi Hu, Chi Zhang, Viktor K. Prasanna, Bhaskar Krishnamachari:
Intelligent Communication over Realistic Wireless Networks in Multi-Agent Cooperative Games. 1627-1629
- Wenhan Huang, Kai Li, Kun Shao, Tianze Zhou, Jun Luo, Dongge Wang, Hangyu Mao, Jianye Hao, Jun Wang, Xiaotie Deng:
Multiagent Q-learning with Sub-Team Coordination. 1630-1632
- Halvard Hummel, Magnus Lie Hetland:
Guaranteeing Half-Maximin Shares Under Cardinality Constraints. 1633-1635
- Benjamin Irwin, Antonio Rago, Francesca Toni:
Argumentative Forecasting. 1636-1638
- Kazi Ashik Islam, Madhav V. Marathe, Henning S. Mortveit, Samarth Swarup, Anil Vullikanti:
Data-driven Agent-based Models for Optimal Evacuation of Large Metropolitan Areas for Improved Disaster Planning. 1639-1641
- Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer, Nihar B. Shah:
Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design. 1642-1644
- Yue Jin, Shuangqing Wei, Jian Yuan, Xudong Zhang:
Learning to Advise and Learning from Advice in Cooperative Multiagent Reinforcement Learning. 1645-1647
- Samhita Kanaparthy, Sankarshan Damle, Sujit Gujar:
REFORM: Reputation Based Fair and Temporal Reward Framework for Crowdsourcing. 1648-1650
- Panagiotis Kanellopoulos, Maria Kyropoulou, Hao Zhou:
Forgiving Debt in Financial Network Games. 1651-1653
- Ilias Kazantzidis, Timothy J. Norman, Yali Du, Christopher T. Freeman:
How to Train Your Agent: Active Learning from Human Preferences and Justifications in Safety-critical Environments. 1654-1656
- Anna Maria Kerkmann, Jörg Rothe:
Popularity and Strict Popularity in Altruistic Hedonic Games and Minimum-Based Altruistic Hedonic Games. 1657-1659
- David Klaska, Antonín Kucera, Vít Musil, Vojtech Rehák:
Minimizing Expected Intrusion Detection Time in Adversarial Patrolling. 1660-1662
- Abdul Rahman Kreidieh, Yibo Zhao, Samyak Parajuli, Alexandre M. Bayen:
Learning Generalizable Multi-Lane Mixed-Autonomy Behaviors in Single Lane Representations of Traffic. 1663-1665
- Jennifer Leaf, Julie A. Adams:
Measuring Resilience in Collective Robotic Algorithms. 1666-1668
- Wilkins Leong, Julie Porteous, John Thangarajah:
Automated Story Sifting Using Story Arcs. 1669-1671
- George Z. Li, Arash Haddadan, Ann Li, Madhav V. Marathe, Aravind Srinivasan, Anil Vullikanti, Zeyu Zhao:
Theoretical Models and Preliminary Results for Contact Tracing and Isolation. 1672-1674
- Guan-Ting Liu, Guan-Yu Lin, Pu-Jen Cheng:
Improving Generalization with Cross-State Behavior Matching in Deep Reinforcement Learning. 1675-1677
- Vasilis Livanos, Ruta Mehta, Aniket Murhekar:
(Almost) Envy-Free, Proportional and Efficient Allocations of an Indivisible Mixed Manna. 1678-1680
- Jieting Luo, Mehdi Dastani:
Modeling Affective Reaction in Multi-agent Systems. 1681-1683
- Jinming Ma, Yingfeng Chen, Feng Wu, Xianpeng Ji, Yu Ding:
Multimodal Reinforcement Learning with Effective State Representation Learning. 1684-1686
- Will Ma, Pan Xu, Yifan Xu:
Group-level Fairness Maximization in Online Bipartite Matching. 1687-1689
- Rafid Ameer Mahmud, Fahim Faisal, Saaduddin Mahmud, Md. Mosaddek Khan:
A Simulation Based Online Planning Algorithm for Multi-Agent Cooperative Environments. 1690-1692
- Arnab Maiti, Palash Dey:
Parameterized Algorithms for Kidney Exchange. 1693-1695
- Giulio Mazzi, Alberto Castellini, Alessandro Farinelli:
Active Generation of Logical Rules for POMCP Shielding. 1696-1698
- Henri Meess, Jeremias Gerner, Daniel Hein, Stefanie Schmidtner, Gordon Elger:
Reinforcement Learning for Traffic Signal Control Optimization: A Concept for Real-World Implementation. 1699-1701
- Lukasz Mikulski, Wojciech Jamroga, Damian Kurpiewski:
Towards Assume-Guarantee Verification of Strategic Ability. 1702-1704
- Shivika Narang, Arpita Biswas, Yadati Narahari:
On Achieving Leximin Fairness and Stability in Many-to-One Matchings. 1705-1707
- Alison R. Panisson, Peter McBurney, Rafael H. Bordini:
Towards an Enthymeme-Based Communication Framework. 1708-1710
- Justin Payan, Yair Zick:
I Will Have Order! Optimizing Orders for Fair Reviewer Assignment. 1711-1713 - Fredrik Präntare, George Osipov, Leif Eriksson:
Concise Representations and Complexity of Combinatorial Assignment Problems. 1714-1716 - Aldo Iván Ramírez Abarca, Jan M. Broersen:
A Stit Logic of Responsibility. 1717-1719 - Diogo Rato, Marta Couto, Rui Prada:
Behavior vs Appearance: What Type of Adaptations are More Socially Motivated? 1720-1722 - Jennifer She, Jayesh K. Gupta, Mykel J. Kochenderfer:
Agent-Time Attention for Sparse Rewards Multi-Agent Reinforcement Learning. 1723-1725 - Isaac S. Sheidlower, Elaine Schaertl Short, Allison Moore:
Environment Guided Interactive Reinforcement Learning: Learning from Binary Feedback in High-Dimensional Robot Task Environments. 1726-1728 - Ishika Singh, Gargi Singh, Ashutosh Modi:
Pre-trained Language Models as Prior Knowledge for Playing Text-based Games. 1729-1731 - Anusha Srikanthan, Harish Ravichandar:
Resource-Aware Adaptation of Heterogeneous Strategies for Coalition Formation. 1732-1734 - Miguel Suau, Jinke He, Matthijs T. J. Spaan, Frans A. Oliehoek:
Speeding up Deep Reinforcement Learning through Influence-Augmented Local Simulators. 1735-1737 - Yohai Trabelsi, Abhijin Adiga, Sarit Kraus, S. S. Ravi:
Maximizing Resource Allocation Likelihood with Minimum Compromise. 1738-1740 - Dimitrios Troullinos, Georgios Chalkiadakis, Vasilis Samoladas, Markos Papageorgiou:
Max-sum with Quadtrees for Continuous DCOPs with Application to Lane-Free Autonomous Driving. 1741-1743 - Paul Tylkin, Tsun-Hsuan Wang, Tim Seyde, Kyle Palko, Ross E. Allen, Alexander Amini, Daniela Rus:
Autonomous Flight Arcade Challenge: Single- and Multi-Agent Learning Environments for Aerial Vehicles. 1744-1746 - Christos K. Verginis, Zhe Xu, Ufuk Topcu:
Non-Parametric Neuro-Adaptive Coordination of Multi-Agent Systems. 1747-1749 - Vignesh Viswanathan, Megha Bose, Praveen Paruchuri:
Moving Target Defense under Uncertainty for Web Applications. 1750-1752 - Ravi Vythilingam, Deborah Richards, Paul Formosa:
The Ethical Acceptability of Artificial Social Agents. 1753-1755 - Shang Wang, Mathieu Reymond, Athirai A. Irissappane, Diederik M. Roijers:
Near On-Policy Experience Sampling in Multi-Objective Reinforcement Learning. 1756-1758 - Francis Rhys Ward, Francesca Toni, Francesco Belardinelli:
On Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios. 1759-1761 - Erik Wijmans, Irfan Essa, Dhruv Batra:
How to Train PointGoal Navigation Agents on a (Sample and Compute) Budget. 1762-1764 - Ziyi Xu, Xue Cheng, Yangbo He:
Performance of Deep Reinforcement Learning for High Frequency Market Making on Actual Tick Data. 1765-1767 - Yongjie Yang:
On the Complexity of Controlling Amendment and Successive Winners. 1768-1770 - Jaleh Zand, Jack Parker-Holder, Stephen J. Roberts:
On-the-fly Strategy Adaptation for ad-hoc Agent Coordination. 1771-1773 - Michal Zawalski, Blazej Osinski, Henryk Michalewski, Piotr Milos:
Off-Policy Correction For Multi-Agent Reinforcement Learning. 1774-1776 - Xiaoyan Zhang, Graham Coates, Sarah Dunn, Jean Hall:
An Agent-based Model for Emergency Evacuation from a Multi-floor Building. 1777-1779 - Yuanzi Zhu, Carmine Ventre:
Irrational Behaviour and Globalisation. 1780-1782
Blue Sky Ideas Track
- Rika Antonova, Ankur Handa:
Robots Teaching Humans: A New Communication Paradigm via Reverse Teleoperation. 1783-1787 - Davide Grossi:
Social Choice Around the Block: On the Computational Social Choice of Blockchain. 1788-1793 - Rafik Hadfi, Takayuki Ito:
Augmented Democratic Deliberation: Can Conversational Agents Boost Deliberation in Social Media? 1794-1798 - Robert Müller, Steffen Illium, Thomy Phan, Tom Haider, Claudia Linnhoff-Popien:
Towards Anomaly Detection in Reinforcement Learning. 1799-1803 - Amanda Prorok, Jan Blumenkamp, Qingbiao Li, Ryan Kortvelesy, Zhe Liu, Ethan Stump:
The Holy Grail of Multi-Robot Planning: Learning to Generate Online-Scalable Solutions from Offline-Optimal Experts. 1804-1808 - Alessandro Ricci:
"Go to the Children": Rethinking Intelligent Agent Design and Programming in a Developmental Learning Perspective. 1809-1813 - Ehud Shapiro, Nimrod Talmon:
Foundations for Grassroots Democratic Metaverse. 1814-1818 - Tomas Trescak, Roger Lera-Leri, Filippo Bistaffa, Juan A. Rodríguez-Aguilar:
Agent-Assisted Life-Long Education and Learning. 1819-1823 - Jessica Woodgate, Nirav Ajmeri:
Macro Ethics for Governing Equitable Sociotechnical Systems. 1824-1828
Doctoral Consortium
- Raphaël Avalos:
Exploration and Communication for Partially Observable Collaborative Multi-Agent Reinforcement Learning. 1829-1832 - Nicholas Bishop:
Manipulation of Machine Learning Algorithms. 1833-1835 - Filippos Christianos:
Collaborative Training of Multiple Autonomous Agents. 1836-1838 - Kevin Delcourt:
Towards Multi-Agent Interactive Reinforcement Learning for Opportunistic Software Composition in Ambient Environments. 1839-1840 - Le Cong Dinh:
Online Learning against Strategic Adversary. 1841-1842 - Anna Gautier:
Non-Cooperative Multi-Robot Planning Under Shared Resources. 1843-1845 - Devansh Jalota:
Incentive Design for Equitable Resource Allocation: Artificial Currencies and Allocation Constraints. 1846-1848 - Piotr Januszewski:
Model-free and Model-based Reinforcement Learning, the Intersection of Learning and Planning. 1849-1851 - Milad Kazemi:
Data-driven Approaches for Formal Synthesis of Dynamical Systems. 1852-1853 - Xiang Liu:
Budget Feasible Mechanisms in Auction Markets: Truthfulness, Diffusion and Fairness. 1854-1856 - Justin Payan:
Fair Allocation Problems in Reviewer Assignment. 1857-1859 - Simon Rey:
Designing Mechanisms for Participatory Budgeting. 1860-1862 - Lukas Schäfer:
Task Generalisation in Multi-Agent Reinforcement Learning. 1863-1865 - Manisha Senadeera:
Empathetic Reinforcement Learning Agents. 1866-1868 - Esmaeil Seraj:
Embodied Team Intelligence in Multi-Robot Systems. 1869-1871 - Sean Sirur:
The Reputation Lag Attack. 1872-1874 - Marcio Fernando Stabile Jr.:
Using Multi-objective Optimization to Generate Timely Responsive BDI Agents. 1875-1877 - Sz-Ting Tzeng:
Engineering Normative and Cognitive Agents with Emotions and Values. 1878-1880 - Jules Vandeputte:
The Coaching Scenario: Recommender Systems with a Long Term Goal. A Case Study in Changing Dietary Habits. 1881-1883 - Hang Xu:
Transferable Environment Poisoning: Training-time Attack on Reinforcement Learner with Limited Prior Knowledge. 1884-1886
Demonstration Track
- Al-Hussein Abutaleb, Bruno Yun:
Chameleon - A Framework for Developing Conversational Agents for Medical Training Purposes. 1887-1889 - Jan Bürmann, Dimitar Georgiev, Enrico H. Gerding, Lewis Hill, Obaid Malik, Alexandru Pop, Matthew Pun, Sarvapali D. Ramchurn, Elliot Salisbury, Ivan Stojanovic:
An Agent-Based Simulator for Maritime Transport Decarbonisation. 1890-1892 - Matheus Aparecido do Carmo Alves, Amokh Varma, Yehia Elkhatib, Leandro Soriano Marcolino:
AdLeap-MAS: An Open-source Multi-Agent Simulator for Ad-hoc Reasoning. 1893-1895 - Bruno Fernandes, André Diogo, Fábio Silva, José Neves, Cesar Analide:
KnowLedger - A Multi-Agent System Blockchain for Smart Cities Data. 1896-1898 - Bruno Fernandes, Paulo Novais, Cesar Analide:
A Multi-Agent System for Automated Machine Learning. 1899-1901 - Arno Hartholt, Ed Fast, Andrew Leeds, Kevin Kim, Andrew Gordon, Kyle McCullough, Volkan Ustun, Sharon Mozgai:
Demonstrating the Rapid Integration & Development Environment (RIDE): Embodied Conversational Agent (ECA) and Multiagent Capabilities. 1902-1904 - John Harwell, London Lowmanstone, Maria L. Gini:
SIERRA: A Modular Framework for Research Automation. 1905-1907 - Hala Khodr, Barbara Bruno, Aditi Kothiyal, Pierre Dillenbourg:
Cellulan World: Interactive Platform to Learn Swarm Behaviors. 1908-1910 - Biyang Ma, Yinghui Pan, Yifeng Zeng, Zhong Ming:
Ev-IDID: Enhancing Solutions to Interactive Dynamic Influence Diagrams through Evolutionary Algorithms. 1911-1913 - Yinghui Pan, Junhan Chen, Yifeng Zeng, Zhangrui Yao, Qianwen Li, Biyang Ma, Yi Ji, Zhong Ming:
LBfT: Learning Bayesian Network Structures from Text in Autonomous Typhoon Response Systems. 1914-1916 - Naman Shah, Pulkit Verma, Trevor Angle, Siddharth Srivastava:
JEDAI: A System for Skill-Aligned Explainable Robot Planning. 1917-1919
JAAMAS Track
- Marina Bannikova, Lihi Dery, Svetlana Obraztsova, Zinovi Rabinovich, Jeffrey S. Rosenschein:
Reaching Consensus Under a Deadline. 1920-1922 - Nicolas Bougie, Ryutaro Ichise:
Goal-Driven Active Learning. 1923-1925 - Nils Bulling, Valentin Goranko:
Combining Quantitative and Qualitative Reasoning in Concurrent Multi-player Games. 1926-1928 - Cristina Cornelio, Michele Donini, Andrea Loreggia, Maria Silvia Pini, Francesca Rossi:
Voting with Random Classifiers (VORACE): Theoretical and Experimental Analysis. 1929-1931 - Stephen Cranefield:
Enabling BDI Group Plans with Coordination Middleware: Semantics and Implementation. 1932-1934 - Dave de Jonge, Dongmo Zhang:
GDL as a Unifying Domain Description Language for Declarative Automated Negotiation. 1935-1937 - Xiaoxi Guo, Sujoy Sikdar, Haibin Wang, Lirong Xia, Yongzhi Cao, Hanpin Wang:
Designing Efficient and Fair Mechanisms for Multi-Type Resource Allocation. 1938-1940 - Dongjun Kim, Tae-Sub Yun, Il-Chul Moon, Jang Won Bae:
Automatic Calibration Framework of Agent-based Models for Dynamic and Heterogeneous Parameters. 1941-1943 - Esther S. Kox, Jose H. Kerstholt, T. F. Hueting, P. W. de Vries:
Trust Repair in Human-Agent Teams: The Effectiveness of Explanations and Expressing Regret. 1944-1946 - Yasser Mohammad, Shinji Nakadai:
Concurrent Negotiations with Global Utility Functions. 1947-1949 - Itshak Tkach, Sofia Amador Nelke:
Towards Addressing Dynamic Multi-agent Task Allocation in Law Enforcement. 1950-1951