21st AAMAS 2022: Auckland, New Zealand
- Piotr Faliszewski, Viviana Mascardi, Catherine Pelachaud, Matthew E. Taylor: 21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, Auckland, New Zealand, May 9-13, 2022. International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) 2022, ISBN 978-1-4503-9213-6
Main Track
- Mitsuteru Abe, Fabio Henrique Kiyoiti dos Santos Tanaka, Jair Pereira Junior, Anna Bogdanova, Tetsuya Sakurai, Claus Aranha: Using Agent-Based Simulator to Assess Interventions Against COVID-19 in a Small Community Generated from Map Data. 1-8
- Mridul Agarwal, Vaneet Aggarwal, Tian Lan: Multi-Objective Reinforcement Learning with Non-Linear Scalarization. 9-17
- Parand Alizadeh Alamdari, Toryn Q. Klassen, Rodrigo Toro Icarte, Sheila A. McIlraith: Be Considerate: Avoiding Negative Side Effects in Reinforcement Learning. 18-26
- Ashay Aswale, Antonio López, Aukkawut Ammartayakun, Carlo Pinciroli: Hacking the Colony: On the Disruptive Effect of Misleading Pheromone and How to Defend against It. 27-34
- Pranav Atreya, Joydeep Biswas: State Supervised Steering Function for Sampling-based Kinodynamic Planning. 35-43
- Andrea Baisero, Christopher Amato: Unbiased Asymmetric Reinforcement Learning under Partial Observability. 44-52
- Adrian Simon Bauer, Anne Köpken, Daniel Leidner: Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation. 53-61
- Francesco Belardinelli, Wojtek Jamroga, Vadim Malvone, Munyque Mittelmann, Aniello Murano, Laurent Perrussel: Reasoning about Human-Friendly Strategies in Repeated Keyword Auctions. 62-71
- Amine Benamara, Jean-Claude Martin, Elise Prigent, Laurence Chaby, Mohamed Chetouani, Jean Zagdoun, Hélène Vanderstichel, Sébastien Dacunha, Brian Ravenet: COPALZ: A Computational Model of Pathological Appraisal Biases for an Interactive Virtual Alzheimer Patient. 72-81
- Márton Benedek, Péter Biró, Walter Kern, Daniël Paulusma: Computing Balanced Solutions for Large International Kidney Exchange Schemes. 82-90
- Ziyad Benomar, Chaima Ghribi, Elie Cali, Alexander Hinsen, Benedikt Jahnel: Agent-based Modeling and Simulation for Malware Spreading in D2D Networks. 91-99
- Jamal Bentahar, Nagat Drawel, Abdeladim Sadiki: Quantitative Group Trust: A Two-Stage Verification Approach. 100-108
- Petra Berenbrink, Martin Hoefer, Dominik Kaaser, Pascal Lenzner, Malin Rau, Daniel Schmand: Asynchronous Opinion Dynamics in Social Networks. 109-117
- Tom Bewley, Freddy Lécué: Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions. 118-126
- Niclas Boehmer, Robert Bredereck, Klaus Heeger, Dusan Knop, Junjie Luo: Multivariate Algorithmics for Eliminating Envy by Donating Goods. 127-135
- Niclas Boehmer, Markus Brill, Ulrike Schmidt-Kraepelin: Proportional Representation in Matching Markets: Selecting Multiple Matchings under Dichotomous Preferences. 136-144
- Kenneth D. Bogert, Prashant Doshi: A Hierarchical Bayesian Process for Inverse RL in Partially-Controlled Environments. 145-153
- Allan Borodin, Omer Lev, Nisarg Shah, Tyrone Strangway: Little House (Seat) on the Prairie: Compactness, Gerrymandering, and Population Distribution. 154-162
- Yasser Bourahla, Manuel Atencia, Jérôme Euzenat: Knowledge Transmission and Improvement Across Generations do not Need Strong Selection. 163-171
- Martim Brandao, Masoumeh Mansouri, Areeb Mohammed, Paul Luff, Amanda Jane Coles: Explainability in Multi-Agent Path/Motion Planning: User-study-driven Taxonomy and Requirements. 172-180
- Felix Brandt, Patrick Lederer, René Romen: Relaxed Notions of Condorcet-Consistency and Efficiency for Strategyproof Social Decision Schemes. 181-189
- Angelina Brilliantova, Hadi Hosseini: Fair Stable Matching Meets Correlated Preferences. 190-198
- Axel Browne, Andrew Forney: Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments. 199-207
- Ioannis Caragiannis, Vasilis Gkatzelis, Alexandros Psomas, Daniel Schoepflin: Beyond Cake Cutting: Allocating Homogeneous Divisible Goods. 208-216
- Yaniel Carreno, Jun Hao Alvin Ng, Yvan R. Petillot, Ron P. A. Petrick: Planning, Execution, and Adaptation for Multi-Robot Systems using Probabilistic and Temporal Planning. 217-225
- Matteo Castiglioni, Alberto Marchesi, Nicola Gatti: Bayesian Persuasion Meets Mechanism Design: Going Beyond Intractability with Type Reporting. 226-234
- Mustafa Mert Çelikok, Frans A. Oliehoek, Samuel Kaski: Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs. 235-243
- Zi-Xuan Chen, Xin-Qiang Cai, Yuan Jiang, Zhi-Hua Zhou: Anomaly Guided Policy Learning from Imperfect Demonstrations. 244-252
- Yang Chen, Libo Zhang, Jiamou Liu, Shuyue Hu: Individual-Level Inverse Reinforcement Learning for Mean Field Games. 253-262
- Julian Chingoma, Ulle Endriss, Ronald de Haan: Simulating Multiwinner Voting Rules in Judgment Aggregation. 263-271
- Shushman Choudhury, Kiril Solovey, Mykel J. Kochenderfer, Marco Pavone: Coordinated Multi-Agent Pathfinding for Drones and Trucks over Road Networks. 272-280
- Samuel H. Christie V., Amit K. Chopra, Munindar P. Singh: Pippi: Practical Protocol Instantiation. 281-289
- Saar Cohen, Noa Agmon: Optimizing Multi-Agent Coordination via Hierarchical Graph Probabilistic Recursive Reasoning. 290-299
- Ágnes Cseh, Tobias Friedrich, Jannik Peters: Pareto Optimal and Popular House Allocation with Lower and Upper Quotas. 300-308
- Ágnes Cseh, Jannik Peters: Three-Dimensional Popular Matching with Cyclic Preferences. 309-317
- Aleksander Czechowski, Georgios Piliouras: Poincaré-Bendixson Limit Sets in Multi-Agent Learning. 318-326
- Panayiotis Danassis, Aleksei Triastcyn, Boi Faltings: A Distributed Differentially Private Algorithm for Resource Allocation in Unboundedly Large Settings. 327-335
- Gianlorenzo D'Angelo, Esmaeil Delfaraz, Hugo Gilbert: Computation and Bribery of Voting Power in Delegative Simple Games. 336-344
- Debojit Das, Shweta Jain, Sujit Gujar: Budgeted Combinatorial Multi-Armed Bandits. 345-353
- Ilias Diakonikolas, Chrystalla Pavlou, John Peebles, Alistair Stewart: Efficient Approximation Algorithms for the Inverse Semivalue Problem. 354-362
- Louise Dupuis de Tarlé, Elise Bonzon, Nicolas Maudet: Multiagent Dynamics of Gradual Argumentation Semantics. 363-371
- Soroush Ebadian, Dominik Peters, Nisarg Shah: How to Fairly Allocate Easy and Difficult Chores. 372-380
- Vladimir Egorov, Alexey Shpilman: Scalable Multi-Agent Model-Based Reinforcement Learning. 381-390
- Edith Elkind, Minming Li, Houyu Zhou: Facility Location With Approval Preferences: Strategyproofness and Fairness. 391-399
- Eric Ewing, Jingyao Ren, Dhvani Kansara, Vikraman Sathiyanarayanan, Nora Ayanian: Betweenness Centrality in Multi-Agent Path Finding. 400-408
- Roy Fairstein, Dan Vilenchik, Reshef Meir, Kobi Gal: Welfare vs. Representation in Participatory Budgeting. 409-417
- Hélène Fargier, Paul Jourdan, Régis Sabbadin: A Path-following Polynomial Equations Systems Approach for Computing Nash Equilibria. 418-426
- Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer: Ensemble and Incremental Learning for Norm Violation Detection. 427-435
- Robin Fritsch, Roger Wattenhofer: The Price of Majority Support. 436-444
- Sébastien Gamblin, Alexandre Niveau, Maroua Bouzid: A Symbolic Representation for Probabilistic Dynamic Epistemic Logic. 445-453
- Deepeka Garg, Maria Chli, George Vogiatzis: Fully-Autonomous, Vision-based Traffic Signal Control: From Simulation to Reality. 454-462
- Jugal Garg, Thorben Tröbst, Vijay V. Vazirani: One-Sided Matching Markets with Endowments: Equilibria and Algorithms. 463-471
- Anna Gautier, Alex Stephens, Bruno Lacerda, Nick Hawes, Michael J. Wooldridge: Negotiated Path Planning for Non-Cooperative Multi-Robot Systems. 472-480
- Tzvika Geft, Dan Halperin: Refined Hardness of Distance-Optimal Multi-Agent Path Finding. 481-488
- Matthieu Geist, Julien Pérolat, Mathieu Laurière, Romuald Elie, Sarah Perrin, Olivier Bachem, Rémi Munos, Olivier Pietquin: Concave Utility Reinforcement Learning: The Mean-field Game Viewpoint. 489-497
- Ian M. Gemp, Kevin R. McKee, Richard Everett, Edgar A. Duéñez-Guzmán, Yoram Bachrach, David Balduzzi, Andrea Tacchetti: D3C: Reducing the Price of Anarchy in Multi-Agent Learning. 498-506
- Ian M. Gemp, Rahul Savani, Marc Lanctot, Yoram Bachrach, Thomas W. Anthony, Richard Everett, Andrea Tacchetti, Tom Eccles, János Kramár: Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent. 507-515
- Athina Georgara, Juan A. Rodríguez-Aguilar, Carles Sierra: Building Contrastive Explanations for Multi-Agent Team Formation. 516-524
- Ganesh Ghalme, Vineet Nair, Vishakha Patil, Yilun Zhou: Long-Term Resource Allocation Fairness in Average Markov Decision Process (AMDP) Environment. 525-533
- Hiromichi Goko, Ayumi Igarashi, Yasushi Kawase, Kazuhisa Makino, Hanna Sumita, Akihisa Tamura, Yu Yokoi, Makoto Yokoo: Fair and Truthful Mechanism with Limited Subsidy. 534-542
- Denizalp Goktas, Jiayi Zhao, Amy Greenwald: Robust No-Regret Learning in Min-Max Stackelberg Games. 543-552
- Niko A. Grupen, Daniel D. Lee, Bart Selman: Multi-Agent Curricula and Emergent Implicit Signaling. 553-561
- Himanshu Gupta, Bradley Hayes, Zachary Sunberg: Intention-Aware Navigation in Crowds with Extended-Space POMDP Planning. 562-570
- Dongge Han, Chris Xiaoxuan Lu, Tomasz P. Michalak, Michael J. Wooldridge: Multiagent Model-based Credit Assignment for Continuous Control. 571-579
- Jiang Hao, Pradeep Varakantham: Hierarchical Value Decomposition for Effective On-demand Ride-Pooling. 580-587
- Paul Harrenstein, Paolo Turrini: Computing Nash Equilibria for District-based Nominations. 588-596
- Hadi Hosseini, Andrew Searns, Erel Segal-Halevi: Ordinal Maximin Share Approximation for Chores. 597-605
- Vincent Hsiao, Dana S. Nau: A Mean Field Game Model of Spatial Evolutionary Games. 606-614
- Shuyue Hu, Chin-Wing Leung, Ho-fung Leung, Harold Soh: The Dynamics of Q-learning in Population Games: A Physics-inspired Continuity Equation Model. 615-623
- Matej Husár, Jirí Svancara, Philipp Obermeier, Roman Barták, Torsten Schaub: Reduction-based Solving of Multi-agent Pathfinding on Large Maps Using Graph Pruning. 624-632
- Aya Hussein, Eleni Petraki, Sondoss Elsawah, Hussein A. Abbass: Autonomous Swarm Shepherding Using Curriculum-Based Reinforcement Learning. 633-641
- Mohammad T. Irfan, Kim Hancock, Laura M. Friel: Cascades and Overexposure in Social Networks: The Budgeted Case. 642-650
- Gabriel Istrate, Cosmin Bonchis: Being Central on the Cheap: Stability in Heterogeneous Multiagent Centrality Games. 651-659
- Saïd Jabbour, Nizar Mhadhbi, Badran Raddaoui, Lakhdar Sais: A Declarative Framework for Maximal k-plex Enumeration Problems. 660-668
- Alexis Jacq, Johan Ferret, Olivier Pietquin, Matthieu Geist: Lazy-MDPs: Towards Interpretable RL by Learning When to Act. 669-677
- Devansh Jalota, Kiril Solovey, Matthew Tsao, Stephen Zoepf, Marco Pavone: Balancing Fairness and Efficiency in Traffic Routing via Interpolated Traffic Assignment. 678-686
- Jatin Jindal, Jérôme Lang, Katarína Cechlárová, Julien Lesca: Selecting PhD Students and Projects with Limited Funding. 687-695
- Santhini K. A., Govind S. Sankar, Meghana Nasre: Optimal Matchings with One-Sided Preferences: Fixed and Cost-Based Quotas. 696-704
- Mustafa O. Karabag, Cyrus Neary, Ufuk Topcu: Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss. 705-713
- Neel Karia, Faraaz Mallick, Palash Dey: How Hard is Safe Bribery? 714-722
- Sammie Katt, Hai Nguyen, Frans A. Oliehoek, Christopher Amato: BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs. 723-731
- Milad Kazemi, Mateo Perez, Fabio Somenzi, Sadegh Soudjani, Ashutosh Trivedi, Alvaro Velasquez: Translating Omega-Regular Specifications to Average Objectives for Model-Free Reinforcement Learning. 732-741
- Tarik Kelestemur, Robert Platt, Taskin Padir: Tactile Pose Estimation and Policy Learning for Unknown Object Manipulation. 742-750
- Seung Hyun Kim, Neale Van Stralen, Girish Chowdhary, Huy T. Tran: Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning. 751-760
- Luca Kreisel, Niclas Boehmer, Vincent Froese, Rolf Niedermeier: Equilibria in Schelling Games: Computational Hardness and Robustness. 761-769
- Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, Gustav Eje Henter: Multimodal Analysis of the Predictability of Hand-gesture Properties. 770-779
- Roger Lera-Leri, Filippo Bistaffa, Marc Serramia, Maite López-Sánchez, Juan A. Rodríguez-Aguilar: Towards Pluralistic Value Alignment: Aggregating Value Systems Through lp-Regression. 780-788
- George Z. Li, Ann Li, Madhav V. Marathe, Aravind Srinivasan, Leonidas Tsepenekas, Anil Vullikanti: Deploying Vaccine Distribution Sites for Improved Accessibility and Equity to Support Pandemic Response. 789-797
- Yongheng Liang, Hejun Wu, Haitao Wang: ASM-PPO: Asynchronous and Scalable Multi-Agent PPO for Cooperative Charging. 798-806
- Grzegorz Lisowski, M. S. Ramanujan, Paolo Turrini: Equilibrium Computation For Knockout Tournaments Played By Groups. 807-815
- Wencong Liu, Jiamou Liu, Zijian Zhang, Yiwei Liu, Liehuang Zhu: Residual Entropy-based Graph Generative Algorithms. 816-824
- Buhong Liu, Maria Polukarov, Carmine Ventre, Lingbo Li, Leslie Kanthan, Fan Wu, Michail Basios: The Spoofing Resistance of Frequent Call Markets. 825-832
- Emiliano Lorini, Éloan Rapion: Logical Theories of Collective Attitudes and the Belief Base Perspective. 833-841
- Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz, Jakob N. Foerster: Lyapunov Exponents for Diversity in Differentiable Games. 842-852
- Keane Lucas, Ross E. Allen: Any-Play: An Intrinsic Augmentation for Zero-Shot Coordination. 853-861
- Roberto Lucchetti, Stefano Moretti, Tommaso Rea: Coalition Formation Games and Social Ranking Solutions. 862-870
- Arnab Maiti, Palash Dey: On Parameterized Complexity of Binary Networked Public Goods Game. 871-879
- Aditya S. Mate, Arpita Biswas, Christoph Siebenbrunner, Susobhan Ghosh, Milind Tambe: Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems. 880-888
- Joe McCalmon, Thai Le, Sarra Alqahtani, Dongwon Lee: CAPS: Comprehensible Abstract Policy Summaries for Explaining Reinforcement Learning Agents. 889-897
- Kevin R. McKee, Xuechunzi Bai, Susan T. Fiske: Warmth and Competence in Human-Agent Cooperation. 898-907
- Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Mohamed Chetouani, Francisco C. Santos: Cooperation and Learning Dynamics under Risk Diversity and Financial Incentives. 908-916
- Mostafa Mohajeri Parizi, Giovanni Sileno, Tom M. van Engers: Preference-Based Goal Refinement in BDI Agents. 917-925
- Paul Muller, Mark Rowland, Romuald Elie, Georgios Piliouras, Julien Pérolat, Mathieu Laurière, Raphaël Marinier, Olivier Pietquin, Karl Tuyls: Learning Equilibria in Mean-Field Games: Introducing Mean-Field PSRO. 926-934
- Oliviero Nardi, Arthur Boixel, Ulle Endriss: A Graph-Based Algorithm for the Automated Justification of Collective Decisions. 935-943
- Grigory Neustroev, Sytze P. E. Andringa, Remco A. Verzijlbergh, Mathijs Michiel de Weerdt: Deep Reinforcement Learning for Active Wake Control. 944-953
- Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh, Truyen Tran: Learning Theory of Mind via Dynamic Traits Attribution. 954-962
- Dung Nguyen, Phuoc Nguyen, Svetha Venkatesh, Truyen Tran: Learning to Transfer Role Assignment Across Team Sizes. 963-971
- Keisuke Okumura, Ryo Yonetani, Mai Nishimura, Asako Kanezaki: CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces. 972-981
- Liubove Orlov-Savko, Abhinav Jain, Gregory M. Gremillion, Catherine E. Neubauer, Jonroy D. Canady, Vaibhav V. Unhelkar: Factorial Agent Markov Model: Modeling Other Agents' Behavior in presence of Dynamic Latent Decision Factors. 982-1000
- Han-Ching Ou, Christoph Siebenbrunner, Jackson A. Killian, Meredith B. Brooks, David Kempe, Yevgeniy Vorobeychik, Milind Tambe: Networked Restless Multi-Armed Bandits for Mobile Interventions. 1001-1009
- Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Mingyan Liu, Bo Li, Dawn Song: Characterizing Attacks on Deep Reinforcement Learning. 1010-1018
- Stipe Pandzic, Jan M. Broersen, Henk Aarts: BOID*: Autonomous Goal Deliberation through Abduction. 1019-1027
- Julien Pérolat, Sarah Perrin, Romuald Elie, Mathieu Laurière, Georgios Piliouras, Matthieu Geist, Karl Tuyls, Olivier Pietquin: Scaling Mean Field Games by Online Mirror Descent. 1028-1037
- Markus Peschl, Arkady Zgonnikov, Frans A. Oliehoek, Luciano Cavalcante Siebert: MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning. 1038-1046
- Thomy Phan, Felix Sommer, Philipp Altmann, Fabian Ritz, Lenz Belzner, Claudia Linnhoff-Popien: Emergent Cooperation from Mutual Acknowledgment Exchange. 1047-1055
- Gauthier Picard: Auction-based and Distributed Optimization Approaches for Scheduling Observations in Satellite Constellations with Exclusive Orbit Portions. 1056-1064
- Gauthier Picard: Trajectory Coordination based on Distributed Constraint Optimization Techniques in Unmanned Air Traffic Management. 1065-1073