4th CoRL 2020: Virtual Event / Cambridge, MA, USA
- Jens Kober, Fabio Ramos, Claire J. Tomlin: 4th Conference on Robot Learning, CoRL 2020, 16-18 November 2020, Virtual Event / Cambridge, MA, USA. Proceedings of Machine Learning Research 155, PMLR 2020
- Junning Huang, Sirui Xie, Jiankai Sun, Gary Qiurui Ma, Chunxiao Liu, Dahua Lin, Bolei Zhou: Learning a Decision Module by Imitating Driver's Control Behaviors. 1-10
- Xinshuo Weng, Jianren Wang, Sergey Levine, Kris Kitani, Nicholas Rhinehart: Inverting the Pose Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting. 11-20
- Jiankai Sun, Hao Sun, Tian Han, Bolei Zhou: Neuro-Symbolic Program Search for Autonomous Driving Decision Module Design. 21-30
- Meet Shah, Zhiling Huang, Ankit Laddha, Matthew Langford, Blake Barber, Sida Zhang, Carlos Vallespi-Gonzalez, Raquel Urtasun: LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion. 31-48
- Chiho Choi, Srikanth Malla, Abhishek Patil, Joon Hee Choi: DROGON: A Trajectory Prediction Model based on Intention-Conditioned Behavior Reasoning. 49-63
- Rohan Chitnis, Tom Silver, Beomjoon Kim, Leslie Pack Kaelbling, Tomás Lozano-Pérez: CAMPs: Learning Context-Specific Abstractions for Efficient Planning in Factored MDPs. 64-79
- Rohit Jena, Changliu Liu, Katia P. Sycara: Augmenting GAIL with BC for sample efficient imitation learning. 80-90
- Deepali Jain, Ken Caluwaerts, Atil Iscen: From pixels to legs: Hierarchical learning of quadruped locomotion. 91-102
- Huy Ha, Jingxi Xu, Shuran Song: Learning a Decentralized Multi-Arm Motion Planner. 103-114
- Yan Xu, Zhaoyang Huang, Kwan-Yee Lin, Xinge Zhu, Jianping Shi, Hujun Bao, Guofeng Zhang, Hongsheng Li: SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks. 115-125
- Zhenjia Xu, Zhanpeng He, Jiajun Wu, Shuran Song: Learning 3D Dynamic Scene Representations for Robot Manipulation. 126-142
- Hengli Wang, Rui Fan, Ming Liu: CoT-AMFlow: Adaptive Modulation Network with Co-Teaching Strategy for Unsupervised Optical Flow Estimation. 143-155
- Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto: SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning. 156-175
- Huy Ha, Shubham Agrawal, Shuran Song: Fit2Form: 3D Generative Model for Robot Gripper Form Design. 176-187
- Karl Pertsch, Youngwoon Lee, Joseph J. Lim: Accelerating Reinforcement Learning with Learned Skill Priors. 188-204
- Danfei Xu, Misha Denil: Positive-Unlabeled Reward Learning. 205-219
- Kuang-Yu Jeng, Yueh-Cheng Liu, Zhe Yu Liu, Jen-Wei Wang, Ya-Liang Chang, Hung-Ting Su, Winston H. Hsu: GDN: A Coarse-To-Fine (C2F) Representation for End-To-End 6-DoF Grasp Detection. 220-231
- Yi Xiao, Felipe Codevilla, Christopher J. Pal, Antonio M. López: Action-based Representation Learning for Autonomous Driving. 232-246
- Konrad Zolna, Scott E. Reed, Alexander Novikov, Sergio Gómez Colmenarejo, David Budden, Serkan Cabi, Misha Denil, Nando de Freitas, Ziyu Wang: Task-Relevant Adversarial Imitation Learning. 247-263
- Ming Zhou, Jun Luo, Julian Villela, Yaodong Yang, David Rusu, Jiayu Miao, Weinan Zhang, Montgomery Alban, Iman Fadakar, Zheng Chen, Chongxi Huang, Ying Wen, Kimia Hassanzadeh, Daniel Graves, Zhengbang Zhu, Yihan Ni, Nhat M. Nguyen, Mohamed Elsayed, Haitham Ammar, Alexander I. Cowen-Rivers, Sanjeevan Ahilan, Zheng Tian, Daniel Palenicek, Kasra Rezaee, Peyman Yadmellat, Kun Shao, Dong Chen, Baokuan Zhang, Hongbo Zhang, Jianye Hao, Wulong Liu, Jun Wang: SMARTS: An Open-Source Scalable Multi-Agent RL Training School for Autonomous Driving. 264-285
- Tai Wang, Xinge Zhu, Dahua Lin: Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds. 286-295
- Vladimír Petrík, Makarand Tapaswi, Ivan Laptev, Josef Sivic: Learning Object Manipulation Skills via Approximate State Estimation from Real Videos. 296-312
- Samyak Datta, Oleksandr Maksymets, Judy Hoffman, Stefan Lee, Dhruv Batra, Devi Parikh: Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents. 313-328
- Thibault Buhet, Émilie Wirbel, Andrei Bursuc, Xavier Perrotton: PLOP: Probabilistic Polynomial Objects trajectory Prediction for autonomous driving. 329-338
- Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine, Chelsea Finn: Reinforcement Learning with Videos: Combining Offline Observations with Interaction. 339-354
- Robin Strudel, Ricardo Garcia Pinel, Justin Carpentier, Jean-Paul Laumond, Ivan Laptev, Cordelia Schmid: Learning Obstacle Representations for Neural Motion Planning. 355-364
- Jianren Wang, Yujie Lu, Hang Zhao: CLOUD: Contrastive Learning of Unsupervised Dynamics. 365-376
- Michael Danielczuk, Ashwin Balakrishna, Daniel S. Brown, Ken Goldberg: Exploratory Grasping: Asymptotically Optimal Algorithms for Grasping Challenging Polyhedral Objects. 377-393
- Sasha Salter, Dushyant Rao, Markus Wulfmeier, Raia Hadsell, Ingmar Posner: Attention-Privileged Reinforcement Learning. 394-408
- John Houston, Guido Zuidhof, Luca Bergamini, Yawei Ye, Long Chen, Ashesh Jain, Sammy Omari, Vladimir Iglovikov, Peter Ondruska: One Thousand and One Hours: Self-driving Motion Prediction Dataset. 409-418
- Ze Yang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wei-Chiu Ma, Raquel Urtasun: Recovering and Simulating Pedestrians in the Wild. 419-431
- Xingyu Lin, Yufei Wang, Jake Olkin, David Held: SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation. 432-448
- Mel Vecerík, Jean-Baptiste Regli, Oleg Sushkov, David Barker, Rugile Pevceviciute, Thomas Rothörl, Raia Hadsell, Lourdes Agapito, Jonathan Scholz: S3K: Self-Supervised Semantic Keypoints for Robotic Manipulation via Multi-View Consistency. 449-460
- Yu Xiang, Christopher Xie, Arsalan Mousavian, Dieter Fox: Learning RGB-D Feature Embeddings for Unseen Object Instance Segmentation. 461-470
- Sören Pirk, Karol Hausman, Alexander Toshev, Mohi Khansari: Modeling Long-horizon Tasks as Sequential Interaction Landscapes. 471-484
- Prasoon Goyal, Scott Niekum, Raymond J. Mooney: PixL2R: Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards. 485-497
- Joel Ye, Dhruv Batra, Erik Wijmans, Abhishek Das: Auxiliary Tasks Speed Up Learning Point Goal Navigation. 498-516
- Anwesan Pal, Yiding Qiu, Henrik I. Christensen: Learning hierarchical relationships for object-goal navigation. 517-528
- Tianwei Ni, Harshit S. Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, Ben Eysenbach: f-IRL: Inverse Reinforcement Learning via State Marginal Matching. 529-551
- Ignat Georgiev, Christoforos Chatzikomis, Timo Völkl, Joshua Smith, Michael N. Mistry: Iterative Semi-parametric Dynamics Model Learning For Autonomous Racing. 552-563
- Wilson Yan, Ashwin Vangipuram, Pieter Abbeel, Lerrel Pinto: Learning Predictive Representations for Deformable Objects Using Contrastive Estimation. 564-574
- Annie Xie, Dylan P. Losey, Ryan Tolsma, Chelsea Finn, Dorsa Sadigh: Learning Latent Representations to Influence Multi-Agent Interaction. 575-588
- Jun Yamada, Youngwoon Lee, Gautam Salhotra, Karl Pertsch, Max Pflueger, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert: Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments. 589-603
- Yuchen Cui, Qiping Zhang, W. Bradley Knox, Alessandro Allievi, Peter Stone, Scott Niekum: The EMPATHIC Framework for Task Learning from Implicit Human Feedback. 604-626
- Alex Bewley, Pei Sun, Thomas Mensink, Dragomir Anguelov, Cristian Sminchisescu: Range Conditioned Dilated Convolutions for Scale Invariant 3D Object Detection. 627-641
- Kai Ploeger, Michael Lutter, Jan Peters: High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards. 642-653
- Sarah Dean, Andrew J. Taylor, Ryan K. Cosner, Benjamin Recht, Aaron D. Ames: Guaranteeing Safety of Learned Perception Modules via Measurement-Robust Control Barrier Functions. 654-670
- Peter Anderson, Ayush Shrivastava, Joanne Truong, Arjun Majumdar, Devi Parikh, Dhruv Batra, Stefan Lee: Sim-to-Real Transfer for Vision-and-Language Navigation. 671-681
- Snehal Jauhri, Carlos Celemin, Jens Kober: Interactive Imitation Learning in State-Space. 682-692
- Lucas Manuelli, Yunzhu Li, Peter R. Florence, Russ Tedrake: Keypoints into the Future: Self-Supervised Correspondence in Model-Based Reinforcement Learning. 693-710
- Rose E. Wang, J. Chase Kew, Dennis Lee, Tsang-Wei Edward Lee, Tingnan Zhang, Brian Ichter, Jie Tan, Aleksandra Faust: Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous. 711-725
- Andy Zeng, Pete Florence, Jonathan Tompson, Stefan Welker, Jonathan Chien, Maria Attarian, Travis Armstrong, Ivan Krasin, Dan Duong, Vikas Sindhwani, Johnny Lee: Transporter Networks: Rearranging the Visual World for Robotic Manipulation. 726-747
- Siddharth Reddy, Sergey Levine, Anca D. Dragan: Assisted Perception: Optimizing Observations to Communicate State. 748-764
- Vaisakh Shaj, Philipp Becker, Dieter Büchler, Harit Pandya, Niels van Duijkeren, C. James Taylor, Marc Hanheide, Gerhard Neumann: Action-Conditional Recurrent Kalman Networks For Forward and Inverse Dynamics Learning. 765-781
- Jennifer Grannen, Priya Sundaresan, Brijen Thananjeyan, Jeffrey Ichnowski, Ashwin Balakrishna, Vainavi Viswanath, Michael Laskey, Joseph Gonzalez, Ken Goldberg: Untangling Dense Knots by Learning Task-Relevant Keypoints. 782-800
- Yinlam Chow, Ofir Nachum, Aleksandra Faust, Edgar A. Duéñez-Guzmán, Mohammad Ghavamzadeh: Safe Policy Learning for Continuous Control. 801-821
- Mohit Sharma, Jacky Liang, Jialiang Zhao, Alex LaGrassa, Oliver Kroemer: Learning to Compose Hierarchical Object-Centric Controllers for Robotic Manipulation. 822-844
- Mohit Sharma, Oliver Kroemer: Relational Learning for Skill Preconditions. 845-861
- Bruno Ferreira de Brito, Hai Zhu, Wei Pan, Javier Alonso-Mora: Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians. 862-872
- Liuyue Xie, Tomotake Furuhata, Kenji Shimada: MuGNet: Multi-Resolution Graph Neural Network for Segmenting Large-Scale Pointclouds. 873-882
- Xingye Da, Zhaoming Xie, David Hoeller, Byron Boots, Anima Anandkumar, Yuke Zhu, Buck Babich, Animesh Garg: Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion. 883-894
- Hang Zhao, Jiyang Gao, Tian Lan, Chen Sun, Benjamin Sapp, Balakrishnan Varadarajan, Yue Shen, Yi Shen, Yuning Chai, Cordelia Schmid, Congcong Li, Dragomir Anguelov: TNT: Target-driven Trajectory Prediction. 895-904
- Yutao Han, Jacopo Banfi, Mark Campbell: Planning Paths Through Unknown Space by Imagining What Lies Therein. 905-914
- Rae Jeong, Jost Tobias Springenberg, Jackie Kay, Daniel Zheng, Alexandre Galashov, Nicolas Heess, Francesco Nori: Learning Dexterous Manipulation from Suboptimal Experts. 915-934
- Carolyn Matl, Yashraj S. Narang, Dieter Fox, Ruzena Bajcsy, Fabio Ramos: STReSSD: Sim-To-Real from Sound for Stochastic Dynamics. 935-958
- Xiao Ma, Siwei Chen, David Hsu, Wee Sun Lee: Contrastive Variational Reinforcement Learning for Complex Observations. 959-972
- Sean Segal, Eric Kee, Wenjie Luo, Abbas Sadat, Ersin Yumer, Raquel Urtasun: Universal Embeddings for Spatio-Temporal Tagging of Self-Driving Logs. 973-983
- Girish Joshi, Jasvir Virdi, Girish Chowdhary: Asynchronous Deep Model Reference Adaptive Control. 984-1000
- Sushant Veer, Anirudha Majumdar: Probably Approximately Correct Vision-Based Planning using Motion Primitives. 1001-1014
- Maria Bauzá Villalonga, Alberto Rodriguez, Bryan Lim, Eric Valls, Theo Sechopoulos: Tactile Object Pose Estimation from the First Touch with Geometric Contact Rendering. 1015-1029
- Yufei Wang, Gautham Narayan Narasimhan, Xingyu Lin, Brian Okorn, David Held: ROLL: Visual Self-Supervised Reinforcement Learning with Object Reasoning. 1030-1048
- Cristina Pinneri, Shambhuraj Sawant, Sebastian Blaes, Jan Achterhold, Joerg Stueckler, Michal Rolínek, Georg Martius: Sample-efficient Cross-Entropy Method for Real-time Planning. 1049-1065
- Sandeep Singh Sandha, Luis Garcia, Bharathan Balaji, Fatima M. Anwar, Mani B. Srivastava: Sim2Real Transfer for Deep Reinforcement Learning with Stochastic State Transition Delays. 1066-1083
- Roland Hafner, Tim Hertweck, Philipp Klöppner, Michael Bloesch, Michael Neunert, Markus Wulfmeier, Saran Tunyasuvunakool, Nicolas Heess, Martin A. Riedmiller: Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion. 1084-1099
- Sadegh Rabiee, Joydeep Biswas: IV-SLAM: Introspective Vision for Simultaneous Localization and Mapping. 1100-1109
- Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan: Learning to Walk in the Real World with Minimal Human Effort. 1110-1120
- Nicolás Cruz, Javier Ruiz-del-Solar: Generation of Realistic Images for Learning in Simulation using FeatureGAN. 1121-1136
- Felix Schiel, Annette Hagengruber, Jörn Vogel, Rudolph Triebel: Incremental learning of EMG-based control commands using Gaussian Processes. 1137-1146
- Yunlong Song, Selim Naji, Elia Kaufmann, Antonio Loquercio, Davide Scaramuzza: Flightmare: A Flexible Quadrotor Simulator. 1147-1157
- Tianjian Chen, Zhanpeng He, Matei T. Ciocarlie: Hardware as Policy: Mechanical and Computational Co-Optimization using Deep Reinforcement Learning. 1158-1173
- Davi Frossard, Shun Da Suo, Sergio Casas, James Tu, Raquel Urtasun: StrObe: Streaming Object Detection from LiDAR Packets. 1174-1183
- Jialiang Zhao, Daniel Troniak, Oliver Kroemer: Towards Robotic Assembly by Predicting Robust, Precise and Task-oriented Grasps. 1184-1194
- Nicholas Vadivelu, Mengye Ren, James Tu, Jingkang Wang, Raquel Urtasun: Learning to Communicate and Correct Pose Errors. 1195-1210
- Muhammad Haris, Mathias Franzius, Ute Bauer-Wersing, Sai Krishna Kaushik Karanam: Visual Localization and Mapping with Hybrid SFA. 1211-1220
- Takayuki Murooka, Masashi Hamaya, Felix von Drigalski, Kazutoshi Tanaka, Yoshihisa Ijiri: EXI-Net: EXplicitly/Implicitly Conditioned Network for Multiple Environment Sim-to-Real Transfer. 1221-1230
- Wout Boerdijk, Martin Sundermeyer, Maximilian Durner, Rudolph Triebel: Self-Supervised Object-in-Gripper Segmentation from Robotic Motions. 1231-1245
- Zihao Zhao, Anusha Nagabandi, Kate Rakelly, Chelsea Finn, Sergey Levine: MELD: Meta-Reinforcement Learning from Images via Latent State Models. 1246-1261
- Letian Chen, Rohan R. Paleja, Matthew C. Gombolay: Learning from Suboptimal Demonstration via Self-Supervised Reward Regression. 1262-1277
- Alexander Lambert, Fabio Ramos, Byron Boots, Dieter Fox, Adam Fishman: Stein Variational Model Predictive Control. 1278-1297
- Giovanni Franzese, Carlos Celemin, Jens Kober: Learning Interactively to Resolve Ambiguity in Reference Frame Selection. 1298-1311
- Le Chen, Yunke Ao, Florian Tschopp, Andrei Cramariuc, Michel Breyer, Jen Jen Chung, Roland Siegwart, César Cadena: Learning Trajectories for Visual-Inertial System Calibration via Model-based Heuristic Deep Reinforcement Learning. 1312-1325
- Bhairav Mehta, Ankur Handa, Dieter Fox, Fabio Ramos: A User's Guide to Calibrating Robotic Simulators. 1326-1340
- Nicholas M. Boffi, Stephen Tu, Nikolai Matni, Jean-Jacques E. Slotine, Vikas Sindhwani: Learning Stability Certificates from Data. 1341-1350
- Lars Lindemann, Haimin Hu, Alexander Robey, Hanwen Zhang, Dimos V. Dimarogonas, Stephen Tu, Nikolai Matni: Learning Hybrid Control Barrier Functions from Data. 1351-1370
- Lingyao Zhang, Po-Hsun Su, Jerrick Hoang, Galen Clark Haynes, Micol Marchetti-Bowick: Map-Adaptive Goal-Based Trajectory Prediction. 1371-1383
- Shurjo Banerjee, Jesse Thomason, Jason J. Corso: The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation. 1384-1393
- Jan Blumenkamp, Amanda Prorok: The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning. 1394-1414
- Martina Zambelli, Yusuf Aytar, Francesco Visin, Yuxiang Zhou, Raia Hadsell: Learning rich touch representations through cross-modal self-supervision. 1415-1425
- Allen Z. Ren, Sushant Veer, Anirudha Majumdar: Generalization Guarantees for Imitation Learning. 1426-1442
- Tianchen Ji, Sri Theja Vuppala, Girish Chowdhary, Katherine Rose Driggs-Campbell: Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments. 1443-1455
- Emmanuel Pignat, Hakan Girgin, Sylvain Calinon: Generative adversarial training of product of policies for robust and adaptive movement primitives. 1456-1470
- Jarrett Holtz, Arjun Guha, Joydeep Biswas: Robot Action Selection Learning via Layered Dimension Informed Program Synthesis. 1471-1480
- Dian Wang, Colin Kohler, Robert Platt Jr.: Policy learning in SE(3) action spaces. 1481-1497
- William Agnew, Christopher Xie, Aaron Walsman, Octavian Murad, Yubo Wang, Pedro Domingos, Siddhartha S. Srinivasa: Amodal 3D Reconstruction for Robotic Manipulation via Stability and Connectivity. 1498-1508
- David Surovik, Oliwier Melon, Mathieu Geisert, Maurice F. Fallon, Ioannis Havoutis: Learning an Expert Skill-Space for Replanning Dynamic Quadruped Locomotion over Obstacles. 1509-1518
- Dawei Sun, Susmit Jha, Chuchu Fan: Learning Certified Control Using Contraction Metric. 1519-1539
- Adithyavairavan Murali, Weiyu Liu, Kenneth Marino, Sonia Chernova, Abhinav Gupta: Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping. 1540-1557
- Ajay Kumar Tanwani: DIRL: Domain-Invariant Representation Learning for Sim-to-Real Transfer. 1558-1571
- Kevin Chen, Nithin Shrivatsav Srikanth, David Kent, Harish Ravichandar, Sonia Chernova: Learning Hierarchical Task Networks with Preferences from Unannotated Demonstrations. 1572-1581
- Anthony Simeonov, Yilun Du, Beomjoon Kim, Francois Robert Hogan, Joshua B. Tenenbaum, Pulkit Agrawal, Alberto Rodriguez: A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects. 1582-1601
- Michel Breyer, Jen Jen Chung, Lionel Ott, Roland Siegwart, Juan I. Nieto: Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter. 1602-1611
- Glen Chou, Dmitry Berenson, Necmiye Ozay: Uncertainty-Aware Constraint Learning for Adaptive Safe Motion Planning from Demonstrations. 1612-1639
- Hai Nguyen, Brett Daley, Xinchao Song, Christopher Amato, Robert Platt: Belief-Grounded Networks for Accelerated Robot Learning under Partial Observability. 1640-1653
- Paul Duckworth, Bruno Lacerda, Nick Hawes: Time-Bounded Mission Planning in Time-Varying Domains with Semi-MDPs and Gaussian Processes. 1654-1668
- Hsiao-Yu Tung, Zhou Xian, Mihir Prabhudesai, Shamit Lal, Katerina Fragkiadaki: 3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators. 1669-1683
- Ayzaan Wahid, Austin Stone, Kevin Chen, Brian Ichter, Alexander Toshev: Learning Object-conditioned Exploration using Distributed Soft Actor Critic. 1684-1695
- Rasmus Laurvig Haugaard, Jeppe Langaa, Christoffer Sloth, Anders Glent Buch: Fast robust peg-in-hole insertion with continuous visual servoing. 1696-1705
- Christopher Wang, Candace Ross, Yen-Ling Kuo, Boris Katz, Andrei Barbu: Learning a natural-language to LTL executable semantic parser for grounded robotics. 1706-1718
- Wenxuan Zhou, Sujay Bajracharya, David Held: PLAS: Latent Action Space for Offline Reinforcement Learning. 1719-1735
- Mike Kasper, Fernando Nobre, Christoffer Heckman, Nima Keivan: Unsupervised Metric Relocalization Using Transform Consistency Loss. 1736-1745
- Florian Achermann, Andrey Kolobov, Debadeepta Dey, Timo Hinzmann, Jen Jen Chung, Roland Siegwart, Nicholas R. J. Lawrance: MultiPoint: Cross-spectral registration of thermal and optical aerial imagery. 1746-1760
- Wenshan Wang, Yaoyu Hu, Sebastian A. Scherer: TartanVO: A Generalizable Learning-based VO. 1761-1772
- Yitong Deng, Yaorui Zhang, Xingzhe He, Shuqi Yang, Yunjin Tong, Michael Zhang, Daniel M. DiPietro, Bo Zhu: Soft Multicopter Control Using Neural Dynamics Identification. 1773-1782