


Transactions on Machine Learning Research, Volume 2025
- Benjamin Cohen-Wang, Joshua Vendrow, Aleksander Madry:
Ask Your Distribution Shift if Pre-Training is Right for You. - Shubhankar Gupta, Saksham Sharma, Suresh Sundaram:
Reward-based Autonomous Online Learning Framework for Resilient Cooperative Target Monitoring using a Swarm of Robots. - Wenhao Lu, Xufeng Zhao, Josua Spisak, Jae Hee Lee, Stefan Wermter:
Mental Modelling of Reinforcement Learning Agents by Language Models. - Debarshi Brahma, Anuska Roy, Soma Biswas:
Prompt Tuning Vision Language Models with Margin Regularizer for Few-Shot Learning under Distribution Shifts. - Myeongho Jeon, Suhwan Choi, Hyoje Lee, Teresa Yeo:
An Analysis of Model Robustness across Concurrent Distribution Shifts. - Madison Cooley, Varun Shankar, Mike Kirby, Shandian Zhe:
Fourier PINNs: From Strong Boundary Conditions to Adaptive Fourier Bases. - Weijian Luo:
Diff-Instruct++: Training One-step Text-to-image Generator Model to Align with Human Preferences. - David Chiang:
Transformers in Uniform TC⁰. - Steven Jecmen, Nihar B. Shah, Fei Fang, Leman Akoglu:
On the Detection of Reviewer-Author Collusion Rings From Paper Bidding. - Yuan Zang, Tian Yun, Hao Tan, Trung Bui, Chen Sun:
Pre-trained Vision-Language Models Learn Discoverable Visual Concepts. - Peihong Yu, Manav Mishra, Alec Koppel, Carl E. Busart, Priya Narayan, Dinesh Manocha, Amrit Singh Bedi, Pratap Tokekar:
Beyond Joint Demonstrations: Personalized Expert Guidance for Efficient Multi-Agent Reinforcement Learning. - Tim Z. Xiao, Johannes Zenn, Robert Bamler:
A Note on Generalization in Variational Autoencoders: How Effective Is Synthetic Data and Overparameterization? - Dominik Fay, Sebastian Mair, Jens Sjölund:
Personalized Privacy Amplification via Importance Sampling. - Alexander Larionov, Niall M. Adams, Kevin N. Webster:
Investigating the impact of missing value handling on Boosted trees and Deep learning for Tabular data: A Claim Reserving case study. - Franka Bause, Fabian Jogl, Patrick Indri, Tamara Drucks, David Penz, Nils Morten Kriege, Thomas Gärtner, Pascal Welke, Maximilian Thiessen:
Maximally Expressive GNNs for Outerplanar Graphs. - Yihang Gao, Chuanyang Zheng, Enze Xie, Han Shi, Tianyang Hu, Yu Li, Michael Ng, Zhenguo Li, Zhaoqiang Liu:
AlgoFormer: An Efficient Transformer Framework with Algorithmic Structures. - Yulei Qin, Yuncheng Yang, Pengcheng Guo, Gang Li, Hang Shao, Yuchen Shi, Zihan Xu, Yun Gu, Ke Li, Xing Sun:
Unleashing the Power of Data Tsunami: A Comprehensive Survey on Data Assessment and Selection for Instruction Tuning of Language Models. - Dinghuai Zhang, Yizhe Zhang, Jiatao Gu, Ruixiang Zhang, Joshua M. Susskind, Navdeep Jaitly, Shuangfei Zhai:
Improving GFlowNets for Text-to-Image Diffusion Alignment. - Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav Mohanty Das, Chirag Shah, John P. Dickerson, Pin-Yu Chen, Jeff Bilmes:
Effective Backdoor Mitigation in Vision-Language Models Depends on the Pre-training Objective. - Manu Gaur, Darshan Singh S, Makarand Tapaswi:
No Detail Left Behind: Revisiting Self-Retrieval for Fine-Grained Image Captioning. - Miles Everett, Mingjun Zhong, Georgios Leontidis:
Masked Capsule Autoencoders. - Suryam Arnav Kalra, Arindam Biswas, Pabitra Mitra, Biswajit Basu:
Sparse Neural Architectures via Deterministic Ramanujan Graphs. - Chloe Loughridge, Qinyi Sun, Seth Ahrenbach, Federico Cassano, Chuyue Sun, Ying Sheng, Anish Mudide, Md Rakib Hossain Misu, Nada Amin, Max Tegmark:
DafnyBench: A Benchmark for Formal Software Verification. - Clément Bonet, Kimia Nadjahi, Thibault Séjourné, Kilian Fatras, Nicolas Courty:
Slicing Unbalanced Optimal Transport. - Amitangshu Mukherjee, Timur Ibrayev, Kaushik Roy:
On Inherent Adversarial Robustness of Active Vision Systems. - Marco Paul E. Apolinario, Kaushik Roy:
S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks. - Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci:
Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers. - Yifei He, Yuzheng Hu, Yong Lin, Tong Zhang, Han Zhao:
Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic. - Peter Matthew Jacobs, Lekha Patel, Anirban Bhattacharya, Debdeep Pati:
Minimax Posterior Contraction Rates for Unconstrained Distribution Estimation on [0, 1]^d under Wasserstein Distance. - Kangfu Mei, Zhengzhong Tu, Mauricio Delbracio, Hossein Talebi, Vishal M. Patel, Peyman Milanfar:
Bigger is not Always Better: Scaling Properties of Latent Diffusion Models. - Bingxin Zhou, Outongyi Lv, Jing Wang, Xiang Xiao, Weishu Zhao:
ODNet: Opinion Dynamics-Inspired Neural Message Passing for Graphs and Hypergraphs. - Seth Neel:
PRIMO: Private Regression in Multiple Outcomes. - Tobias Fuchs, Florian Kalinke, Klemens Böhm:
Partial-Label Learning with a Reject Option. - Stefano Peluchetti:
BM2: Coupled Schrödinger Bridge Matching. - Vidhi Lalchand, Anna-Christina Eilers:
Shared Stochastic Gaussian Process Latent Variable Models: A Multi-modal Generative model for Quasar spectra. - Pedro Cisneros-Velarde, Zhijie Chen, Sanmi Koyejo, Arindam Banerjee:
Optimization and Generalization Guarantees for Weight Normalization. - Eduardo Fernandes Montesuma, Fred Maurice Ngolè Mboula, Antoine Souloumiac:
Optimal Transport for Domain Adaptation through Gaussian Mixture Models. - Zidu Yin, Zhen Zhang, Dong Gong, Stefano V. Albrecht, Javen Qinfeng Shi:
Highway Graph to Accelerate Reinforcement Learning. - Saeideh Ghanbari Azar, Lorenzo Tronchin, Attila Simkó, Tufve Nyholm, Tommy Löfstedt:
From Promise to Practice: A Study of Common Pitfalls Behind the Generalization Gap in Machine Learning. - Arman Rahbar, Niklas Åkerblom, Morteza Haghir Chehreghani:
Cost-Efficient Online Decision Making: A Combinatorial Multi-Armed Bandit Approach. - Yikai Zhang, Jiahe Lin, Fengpei Li, Songzhu Zheng, Anant Raj, Anderson Schneider, Yuriy Nevmyvaka:
Reweighting Improves Conditional Risk Bounds. - Lei Zhao, Lin Cai, Wu-Sheng Lu:
Federated Learning with Efficient Local Adaptation for Realized Volatility Prediction. - Dominik Baumann, Erfaun Noorani, James Price, Ole Peters, Colm Connaughton, Thomas B. Schön:
Reinforcement learning with non-ergodic reward increments: robustness via ergodicity transformations. - Marc T. Law, Karsten Kreis, Haggai Maron:
Directed Graph Generation with Heat Kernels. - Shuai Zhao, Meihuizi Jia, Zhongliang Guo, Leilei Gan, Xiaoyu Xu, Xiaobao Wu, Jie Fu, Yichao Feng, Fengjun Pan, Anh Tuan Luu:
A Survey of Recent Backdoor Attacks and Defenses in Large Language Models. - Nimrod Berman, Eitan Kosman, Dotan Di Castro, Omri Azencot:
Reviving Life on the Edge: Joint Score-Based Graph Generation of Rich Edge Attributes. - Nayoung Kim, Minsu Kim, Sungsoo Ahn, Jinkyoo Park:
Decoupled Sequence and Structure Generation for Realistic Antibody Design. - Nicolas Boizard, Kevin El Haddad, Céline Hudelot, Pierre Colombo:
Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs. - Adarsh Kappiyath, Anmol Garg, Ramya Hebbalaguppe, Prathosh AP:
Lifelong Learning in StyleGAN through Latent Subspaces. - Leah Bar, Boaz Lerner, Nir Darshan, Rami Ben-Ari:
Active Learning via Classifier Impact and Greedy Selection for Interactive Image Retrieval. - Alejandro Guerra-Manzanares, Farah Shamout:
MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks. - Lorenzo Perini, Maja Rudolph, Sabrina Schmedding, Chen Qiu:
Uncertainty-aware Evaluation of Auxiliary Anomalies with the Expected Anomaly Posterior. - Guiliang Liu, Sheng Xu, Shicheng Liu, Ashish Gaurav, Sriram Ganapathi Subramanian, Pascal Poupart:
A Comprehensive Survey on Inverse Constrained Reinforcement Learning: Definitions, Progress and Challenges. - Subba Reddy Oota, Zijiao Chen, Manish Gupta, Bapi Raju Surampudi, Gaël Jobard, Frédéric Alexandre, Xavier Hinaut:
Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey). - Eugene A. Golikov:
A Generalization Bound for Nearly-Linear Networks. - Weicheng Zhu, Sheng Liu, Carlos Fernandez-Granda, Narges Razavian:
Making Self-supervised Learning Robust to Spurious Correlation via Learning-speed Aware Sampling. - Hiroyuki Sakai, Hideaki Iiduka:
A general framework of Riemannian adaptive optimization methods with a convergence analysis. - Tal Reiss, Yedid Hoshen:
An Attribute-based Method for Video Anomaly Detection. - Chun-Yin Huang, Ruinan Jin, Can Zhao, Daguang Xu, Xiaoxiao Li:
Federated Learning on Virtual Heterogeneous Data with Local-Global Dataset Distillation. - Oskar Nordenfors, Fredrik Ohlsson, Axel Flinth:
Optimization Dynamics of Equivariant and Augmented Neural Networks. - Paul Brunzema, Alexander von Rohr, Friedrich Solowjow, Sebastian Trimpe:
Event-Triggered Time-Varying Bayesian Optimization. - Thomas Pethick, Parameswaran Raman, Lenon Minorics, Mingyi Hong, Shoham Sabach, Volkan Cevher:
νSAM: Memory-Efficient Sharpness-Aware Minimization via Nuclear Norm Constraints. - Netta Ollikka, Amro Abbas, Andrea Perin, Markku Kilpeläinen, Stéphane Deny:
A comparison between humans and AI at recognizing objects in unusual poses. - David Mueller, Mark Dredze, Nicholas Andrews:
Can Optimization Trajectories Explain Multi-Task Transfer? - Cen-You Li, Olaf Dünnbier, Marc Toussaint, Barbara Rakitsch, Christoph Zimmer:
Global Safe Sequential Learning via Efficient Knowledge Transfer. - Stanislas Strasman, Antonio Ocello, Claire Boyer, Sylvain Le Corff, Vincent Lemaire:
An analysis of the noise schedule for score-based generative models. - Pawel Czyz, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx:
On the Properties and Estimation of Pointwise Mutual Information Profiles. - Luciana Ferrer, Daniel Ramos:
Evaluating Posterior Probabilities: Decision Theory, Proper Scoring Rules, and Calibration. - Koki Okajima, Tomoyuki Obuchi:
Transfer Learning in ℓ1 Regularized Regression: Hyperparameter Selection Strategy based on Sharp Asymptotic Analysis. - Ruisu Zhang, Yicong Chen, Kangwook Lee:
Improving CLIP Counting Accuracy via Parameter-Efficient Fine-Tuning. - Chuanhui Liu, Xiao Wang:
Doubly Robust Conditional VAE via Decoder Calibration: An Implicit KL Annealing Approach. - Luca Simi:
A Scalable Approach for Mapper via Efficient Spatial Search. - Alexey Kravets, Vinay P. Namboodiri:
Zero-shot CLIP Class Forgetting via Text-image Space Adaptation. - Kunwoong Kim, Insung Kong, Jongjin Lee, Minwoo Chae, Sangchul Park, Yongdai Kim:
Fairness Through Matching. - Sai Saketh Rambhatla, Ishan Misra:
SelfEval: Leveraging discriminative nature of generative models for evaluation. - Zhiyu Guo, Hidetaka Kamigaito, Taro Watanabe:
Dependency-Aware Semi-Structured Sparsity of GLU Variants in Large Language Models. - Prithviraj Tarale, Edward A. Rietman, Hava T. Siegelmann:
Distributed Multi-Agent Lifelong Learning. - Yu Wang, Chi Han, Tongtong Wu, Xiaoxin He, Wangchunshu Zhou, Nafis Sadeq, Xiusi Chen, Zexue He, Wei Wang, Gholamreza Haffari, Heng Ji, Julian J. McAuley:
Towards LifeSpan Cognitive Systems. - Zhuoran Yu, Chenchen Zhu, Sean Culatana, Raghuraman Krishnamoorthi, Fanyi Xiao, Yong Jae Lee:
Diversify, Don't Fine-Tune: Scaling Up Visual Recognition Training with Synthetic Images. - Tanguy Bosser, Souhaib Ben Taieb:
Preventing Conflicting Gradients in Neural Marked Temporal Point Processes. - Lokesh Nagalapatti, Pranava Singhal, Avishek Ghosh, Sunita Sarawagi:
Leveraging a Simulator for Learning Causal Representations from Post-Treatment Covariates for CATE. - Chao-Kai Chiang, Masashi Sugiyama:
Unified Risk Analysis for Weakly Supervised Learning. - Dun Zeng, Zenglin Xu, Yu Pan, Xu Luo, Qifan Wang, Xiaoying Tang:
Enhanced Federated Optimization: Adaptive Unbiased Client Sampling with Reduced Variance. - Riccardo Majellaro, Jonathan Collu, Aske Plaat, Thomas M. Moerland:
Explicitly Disentangled Representations in Object-Centric Learning. - Nicholas Krämer:
Numerically Robust Fixed-Point Smoothing Without State Augmentation. - Joseph Paul Cohen, Louis Blankemeier, Akshay S. Chaudhari:
Identifying Spurious Correlations using Counterfactual Alignment. - Hari Chandana Kuchibhotla, Sai Srinivas Kancheti, Abbavaram Gowtham Reddy, Vineeth N. Balasubramanian:
Semantic Alignment for Prompt-Tuning in Vision Language Models. - Georgios Vlassis, David Belius, Volodymyr Fomichov:
A thorough reproduction and evaluation of µP. - Zhuo Zhi, Yuxuan Sun, Qiangqiang Wu, Ziquan Liu, Miguel R. D. Rodrigues:
Wasserstein Modality Alignment Makes Your Multimodal Transformer More Robust. - Ilana Sebag, Muni Sreenivas Pydi, Jean-Yves Franceschi, Alain Rakotomamonjy, Mike Gartrell, Jamal Atif, Alexandre Allauzen:
Differentially Private Gradient Flow based on the Sliced Wasserstein Distance. - Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, Soheil Feizi:
Can AI-Generated Text be Reliably Detected? Stress Testing AI Text Detectors Under Various Attacks. - Nauman Ahad, Mark A. Davenport, Eva L. Dyer:
Time Series Domain Adaptation via Channel-Selective Representation Alignment. - Naveen Karunanayake, Suranga Seneviratne, Sanjay Chawla:
ExCeL: Combined Extreme and Collective Logit Information for Out-of-Distribution Detection. - Meher Chaitanya, Kshitijaa Jaglan, Ulrik Brandes:
Adjacency Search Embeddings. - Michal Derezinski:
Stochastic Variance-Reduced Newton: Accelerating Finite-Sum Minimization with Large Batches. - Bolian Li, Ruqi Zhang:
Making Reliable and Flexible Decisions in Long-tailed Classification. - Georgios Sidiropoulos, Samarth Bhargav, Panagiotis Eustratiadis, Evangelos Kanoulas:
Multivariate Dense Retrieval: A Reproducibility Study under a Memory-limited Setup. - Shayan Mohajer Hamidi, Linfeng Ye:
Distributed Quasi-Newton Method for Fair and Fast Federated Learning. - Spandan Madan, Tomotake Sasaki, Hanspeter Pfister, Tzu-Mao Li, Xavier Boix:
In-distribution adversarial attacks on object recognition models using gradient-free search. - Hongyi Ling, Zhimeng Jiang, Na Zou, Shuiwang Ji:
Counterfactual Fairness on Graphs: Augmentations, Hidden Confounders, and Identifiability. - Anastasis Kratsios, Haitz Sáez de Ocáriz Borde, Takashi Furuya, Marc T. Law:
Approximation Rates and VC-Dimension Bounds for (P)ReLU MLP Mixture of Experts. - Zhepeng Cen, Yao Liu, Siliang Zeng, Pratik Chaudhari, Huzefa Rangwala, George Karypis, Rasool Fakoor:
Bridging the Training-Inference Gap in LLMs by Leveraging Self-Generated Tokens. - Jieru Mei, Liang-Chieh Chen, Alan L. Yuille, Cihang Xie:
SPFormer: Enhancing Vision Transformer with Superpixel Representation. - Yuzhu Mao, Zihao Zhao, Siqi Ping, Yang Liu, Wenbo Ding:
Enhancing Parameter Efficiency and Generalization in Large Models: A Regularized and Masked Low-Rank Adaptation Approach. - Carlos Mougan, Klaus Broelemann, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab:
Explanation Shift: How Did the Distribution Shift Impact the Model? - Masih Eskandar, Tooba Imtiaz, Zifeng Wang, Jennifer G. Dy:
ADAPT to Robustify Prompt Tuning Vision Transformers. - Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal:
Private Fine-tuning of Large Language Models with Zeroth-order Optimization. - Chao Zhou, Huishuai Zhang, Jiang Bian, Weiming Zhang, Nenghai Yu:
©Plug-in Authorization for Human Copyright Protection in Text-to-Image Model. - Boyi Li, Philipp Wu, Pieter Abbeel, Jitendra Malik:
Interactive Task Planning with Language Models. - Edvin Listo Zec, Tom Hagander, Eric Ihre-Thomason, Sarunas Girdzijauskas:
On the effects of similarity metrics in decentralized deep learning under distribution shift. - Gabriel Dubé, Mario Marchand:
Shapley Values of Structured Additive Regression Models and Application to RKHS Weightings of Functions. - Florian Kalinke, Marco Heyden, Georg Gntuni, Edouard Fouché, Klemens Böhm:
Maximum Mean Discrepancy on Exponential Windows for Online Change Detection. - Michele Miranda, Elena Sofia Ruzzetti, Andrea Santilli, Fabio Massimo Zanzotto, Sébastien Bratières, Emanuele Rodolà:
Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions. - Muhammed Fatih Balin, Dominique LaSalle, Ümit V. Çatalyürek:
Cooperative Minibatching in Graph Neural Networks. - Nicholas Bai, Rahul A. Iyer, Tuomas P. Oikarinen, Akshay R. Kulkarni, Tsui-Wei Weng:
Interpreting Neurons in Deep Vision Networks with Language Models. - Noureddine Henka, Mohamad Assaad, Sami Tazi:
Mixture Degree-Corrected Stochastic Block Model for Multi-Group Community Detection in Multiplex Graphs. - Sebastian Wankerl, Jan Pfister, Andrzej Dulny, Gerhard Götz, Andreas Hotho:
Identifying Axiomatic Mathematical Transformation Steps using Tree-Structured Pointer Networks. - Konstantin Mishchenko, Rustem Islamov, Eduard Gorbunov, Samuel Horváth:
Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity. - Ahmad-Reza Ehyaei, Golnoosh Farnadi, Samira Samadi:
Bridging Causality, Individual Fairness, and Adversarial Robustness in the Absence of Structural Causal Model. - Michele Caprio, David Stutz, Shuo Li, Arnaud Doucet:
Conformalized Credal Regions for Classification with Ambiguous Ground Truth. - Geri Skenderi, Hang Li, Jiliang Tang, Marco Cristani:
Graph-level Representation Learning with Joint-Embedding Predictive Architectures. - Zidan Wang, Rui Shen, Bradly C. Stadie:
Wonderful Team: Zero-Shot Physical Task Planning with Visual LLMs. - Bas van der Heijden, Jens Kober, Robert Babuska, Laura Ferranti:
REX: GPU-Accelerated Sim2Real Framework with Delay and Dynamics Estimation. - Travis E. Gibson, Sawal Acharya, Anjali Parashar, Joseph E. Gaudio, Anuradha Annaswamy:
On the stability of gradient descent with second order dynamics for time-varying cost functions. - Motasem Alfarra, Alvaro H. C. Correia, Bernard Ghanem, Christos Louizos:
Test-Time Adaptation with Source Based Auxiliary Tasks. - Saptarshi Chakraborty:
Minimax Lower Bounds for Estimating Distributions on Low-dimensional Spaces. - Saleh Gholam Zadeh, Vaisakh Shaj, Patrick Jahnke, Gerhard Neumann, Tim Breitenbach:
Towards Measuring Predictability: To which extent data-driven approaches can extract deterministic relations from data exemplified with time series prediction and classification. - Théo Vincent, Daniel Palenicek, Boris Belousov, Jan Peters, Carlo D'Eramo:
Iterated Q-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning. - Chandramouli Shama Sastry, Mahdi Gilany, Kry Yik-Chau Lui, Martin Magill, Alexander Pashevich:
DeepRRTime: Robust Time-series Forecasting with a Regularized INR Basis. - Jackson Petty, Sjoerd van Steenkiste, Tal Linzen:
How Does Code Pretraining Affect Language Model Task Performance? - Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Israfil Bahceci, Akram Bin Sediq, Hamza Umit Sokun, Nicolas Papernot:
Selective Prediction via Training Dynamics. - Arash Behboodi, Gabriele Cesa:
On the Sample Complexity of One Hidden Layer Networks with Equivariance, Locality and Weight Sharing. - Minguk Jang, Hye Won Chung:
Label Distribution Shift-Aware Prediction Refinement for Test-Time Adaptation. - Hikari Otsuka, Daiki Chijiwa, Ángel López García-Arias, Yasuyuki Okoshi, Kazushi Kawamura, Thiem Van Chu, Daichi Fujiki, Susumu Takeuchi, Masato Motomura:
Partially Frozen Random Networks Contain Compact Strong Lottery Tickets. - Jiazheng Li, Jundong Li, Chuxu Zhang:
Instance-Aware Graph Prompt Learning. - Savvas Melidonis, Yiming Xi, Konstantinos C. Zygalakis, Yoann Altmann, Marcelo Pereyra:
Score-Based Denoising Diffusion Models for Photon-Starved Image Restoration Problems. - Haonan Wang, Qian Liu, Chao Du, Tongyao Zhu, Cunxiao Du, Kenji Kawaguchi, Tianyu Pang:
When Precision Meets Position: BFloat16 Breaks Down RoPE in Long-Context Training. - Pengyun Wang, Yadi Cao, Chris Russell, Yanxin Shen, Junyu Luo, Ming Zhang, Siyu Heng, Xiao Luo:
DELTA: Dual Consistency Delving with Topological Uncertainty for Active Graph Domain Adaptation. - Ibrahim Serouis, Florence Sèdes:
Towards context and domain-aware algorithms for scene analysis. - Luca Butera, Giovanni de Felice, Andrea Cini, Cesare Alippi:
On the Regularization of Learnable Embeddings for Time Series Forecasting. - Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, Chunyuan Li:
LLaVA-OneVision: Easy Visual Task Transfer. - Zhi Chen, Yufan Ren, Tong Zhang, Zheng Dang, Wenbing Tao, Sabine Süsstrunk, Mathieu Salzmann:
Adaptive Multi-step Refinement Network for Robust Point Cloud Registration. - Liran Nochumsohn, Omri Azencot:
Data Augmentation Policy Search for Long-Term Forecasting. - Ya Song, Laurens Bliek, Yaoxin Wu, Yingqian Zhang:
Enhancing Remaining Useful Life Prediction with Ensemble Multi-Term Fourier Graph Neural Networks. - Hussein Mozannar, Valerie Chen, Mohammed Alsobay, Subhro Das, Sebastian Zhao, Dennis Wei, Manish Nagireddy, Prasanna Sattigeri, Ameet Talwalkar, David A. Sontag:
The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers. - Anna Hedström, Philine Lou Bommer, Thomas F. Burns, Sebastian Lapuschkin, Wojciech Samek, Marina M.-C. Höhne:
Evaluating Interpretable Methods via Geometric Alignment of Functional Distortions. - Giovanni Luca Marchetti, Gabriele Cesa, Kumar Pratik, Arash Behboodi:
Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach. - Neil Ashtekar, Jingxi Zhu, Vasant G. Honavar:
Class Incremental Learning from First Principles: A Review. - Aymene Mohammed Bouayed, Samuel Deslauriers-Gauthier, Adrian Iacovelli, David Naccache:
CNN Interpretability with Multivector Tucker Saliency Maps for Self-Supervised Models. - Thibault de Surrel, Sylvain Chevallier, Fabien Lotte, Florian Yger:
Geometry-Aware visualization of high dimensional Symmetric Positive Definite matrices. - Sharmita Dey, Benjamin Paassen, Sarath Ravindran Nair, Sabri Boughorbel, Arndt F. Schilling:
Continual Learning from Simulated Interactions via Multitask Prospective Rehearsal for Bionic Limb Behavior Modeling. - Ali Shirali, Moritz Hardt:
What Makes ImageNet Look Unlike LAION. - Angus Nicolson, Lisa Schut, J. Alison Noble, Yarin Gal:
Explaining Explainability: Recommendations for Effective Use of Concept Activation Vectors. - Antonios Valkanas, Yuening Wang, Yingxue Zhang, Mark Coates:
Personalized Negative Reservoir for Incremental Learning in Recommender Systems. - Lorenzo Loconte, Antonio Mari, Gennaro Gala, Robert Peharz, Cassio de Campos, Erik Quaeghebeur, Gennaro Vessio, Antonio Vergari:
What is the Relationship between Tensor Factorizations and Circuits (and How Can We Exploit it)? - Cullen Anderson, Jeff M. Phillips:
Robust High-Dimensional Mean Estimation With Low Data Size, an Empirical Study. - Viraj Shah, Svetlana Lazebnik, Julien Philip:
JoIN: Joint GANs Inversion for Intrinsic Image Decomposition. - Hikaru Umeda, Hideaki Iiduka:
Increasing Both Batch Size and Learning Rate Accelerates Stochastic Gradient Descent. - Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernárdez, Olga Zaghen, Simone Scardapane, Pietro Lio:
Hypergraph Neural Networks through the Lens of Message Passing: A Common Perspective to Homophily and Architecture Design. - Yancheng Wang, Changyu Liu, Yingzhen Yang:
Diffusion on Graph: Augmentation of Graph Structure for Node Classification. - Haoyun Yin, Yixuan Qiu, Xiao Wang:
Wasserstein Coreset via Sinkhorn Loss. - Mahrokh Ghoddousi Boroujeni, Andreas Krause, Giancarlo Ferrari-Trecate:
Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian Approach. - Simon Dufort-Labbé, Pierluca D'Oro, Evgenii Nikishin, Irina Rish, Pierre-Luc Bacon, Razvan Pascanu, Aristide Baratin:
Maxwell's Demon at Work: Efficient Pruning by Leveraging Saturation of Neurons. - Yuki Takezawa, Ryoma Sato, Han Bao, Kenta Niwa, Makoto Yamada:
Necessary and Sufficient Watermark for Large Language Models. - Sara Venturini, Marianna De Santis, Jordan Patracone, Martin Schmidt, Francesco Rinaldi, Saverio Salzo:
Relax and penalize: a new bilevel approach to mixed-binary hyperparameter optimization. - Stefano Bruno, Ying Zhang, Dongyoung Lim, Ömer Deniz Akyildiz, Sotirios Sabanis:
On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates. - Joel Jonsson, Bevan Leslie Cheeseman, Ivo F. Sbalzarini:
APR-CNN: Convolutional Neural Networks for the Adaptive Particle Representation of Large Microscopy Images. - Eric Tang, Bangding Yang, Xingyou Song:
Understanding LLM Embeddings for Regression. - Rundong Luo, Hong-Xing Yu, Jiajun Wu:
Unsupervised Discovery of Object-Centric Neural Fields. - Yuki Ichihara, Yuu Jinnai, Tetsuro Morimura, Kenshi Abe, Kaito Ariu, Mitsuki Sakamoto, Eiji Uchibe:
Evaluation of Best-of-N Sampling Strategies for Language Model Alignment. - Harsh Raj, Vipul Gupta, Domenic Rosati, Subhabrata Majumdar:
Improving Consistency in Large Language Models through Chain of Guidance. - Michal Lewandowski, Hamid Eghbalzadeh, Bernhard Heinzl, Raphael Pisoni, Bernhard Alois Moser:
On Space Folds of ReLU Neural Networks. - Astrit Tola, Jack Myrick, Baris Coskunuzer:
PROXI: Challenging the GNNs for Link Prediction. - Philippe Formont, Hugo Jeannin, Pablo Piantanida, Ismail Ben Ayed:
A Strong Baseline for Molecular Few-Shot Learning. - Xiangru Jian, Xinjian Zhao, Wei Pang, Chaolong Ying, Yimu Wang, Yaoyao Xu, Tianshu Yu:
Rethinking Spectral Augmentation for Contrast-based Graph Self-Supervised Learning. - Margherita Mele, Roberto Menichetti, Alessandro Ingrosso, Raffaello Potestio:
Density of states in neural networks: an in-depth exploration of learning in parameter space. - Peter Shaw, James Cohan, Jacob Eisenstein, Kenton Lee, Jonathan Berant, Kristina Toutanova:
ALTA: Compiler-Based Analysis of Transformers. - David Brandfonbrener, Nikhil Anand, Nikhil Vyas, Eran Malach, Sham M. Kakade:
Loss-to-Loss Prediction: Scaling Laws for All Datasets. - Fang Wu, Stan Z. Li:
Dynamics-inspired Structure Hallucination for Protein-protein Interaction Modeling. - Rishi Bommasani, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Nestor Maslej, Betty Xiong, Daniel Zhang, Percy Liang:
The 2023 Foundation Model Transparency Index. - Sucheng Ren, Hongru Zhu, Chen Wei, Yijiang Li, Alan L. Yuille, Cihang Xie:
ARVideo: Autoregressive Pretraining for Self-Supervised Video Representation Learning. - Christopher Bockel-Rickermann, Toon Vanderschueren, Jeroen Berrevoets, Tim Verdonck, Wouter Verbeke:
Using representation balancing to learn conditional-average dose responses from clustered data. - Sheng Cheng, Deqian Kong, Jianwen Xie, Kookjin Lee, Ying Nian Wu, Yezhou Yang:
Latent Space Energy-based Neural ODEs. - Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters:
Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability. - Francois Caron, Fadhel Ayed, Paul Jung, Hoil Lee, Juho Lee, Hongseok Yang:
Over-parameterised Shallow Neural Networks with Asymmetrical Node Scaling: Global Convergence Guarantees and Feature Learning. - Weizhi Lu, Zhongzheng Li, Mingrui Chen, Weiyu Li:
The Sparse Matrix-Based Random Projection: A Study of Binary and Ternary Quantization. - Xiangming Gu, Chao Du, Tianyu Pang, Chongxuan Li, Min Lin, Ye Wang:
On Memorization in Diffusion Models. - Piyush Tiwary, Atri Guha, Subhodip Panda, Prathosh AP:
Adapt then Unlearn: Exploring Parameter Space Semantics for Unlearning in Generative Adversarial Networks. - Kieran A. Murphy, Sam Dillavou, Danielle S. Bassett:
Comparing the information content of probabilistic representation spaces. - Martha Lewis, Melanie Mitchell:
Evaluating the Robustness of Analogical Reasoning in Large Language Models. - Eliav Mor, Yair Carmon:
An Analytical Model for Overparameterized Learning Under Class Imbalance. - Song Wang, Zhen Tan, Yaochen Zhu, Chuxu Zhang, Jundong Li:
Generative Risk Minimization for Out-of-Distribution Generalization on Graphs. - Rudi Coppola, Manuel Mazo Espinosa:
On Training-Conditional Conformal Prediction and Binomial Proportion Confidence Intervals. - Leyla Naz Candogan, Yongtao Wu, Elías Abad-Rocamora, Grigorios Chrysos, Volkan Cevher:
Single-pass Detection of Jailbreaking Input in Large Language Models. - Amer Essakine, Yanqi Cheng, Chun-Wun Cheng, Lipei Zhang, Zhongying Deng, Lei Zhu, Carola-Bibiane Schönlieb, Angelica I. Avilés-Rivero:
Where Do We Stand with Implicit Neural Representations? A Technical and Performance Survey. - Haoyu Wang, Guozheng Ma, Cong Yu, Ning Gui, Linrui Zhang, Zhiqi Huang, Suwei Ma, Yongzhe Chang, Sen Zhang, Li Shen, Xueqian Wang, Peilin Zhao, Dacheng Tao:
Are Large Language Models Really Robust to Word-Level Perturbations? - Julius Ott, Huawei Sun, Enrico Rinaldi, Gianfranco Mauro, Lorenzo Servadei, Robert Wille:
Exploiting Benford's Law for Weight Regularization of Deep Neural Networks. - Georgi Ganev, Meenatchi Sundaram Muthu Selva Annamalai, Emiliano De Cristofaro:
The Elusive Pursuit of Reproducing PATE-GAN: Benchmarking, Auditing, Debugging. - Duy-Kien Nguyen, Martin R. Oswald, Cees G. M. Snoek:
SimPLR: A Simple and Plain Transformer for Efficient Object Detection and Segmentation. - Adam Fisch, Jacob Eisenstein, Vicky Zayats, Alekh Agarwal, Ahmad Beirami, Chirag Nagpal, Peter Shaw, Jonathan Berant:
Robust Preference Optimization through Reward Model Distillation. - Adrian Remonda, Cole Corbitt Terrell, Eduardo E. Veas, Marc Masana:
Uncertainty-Based Experience Replay for Task-Agnostic Continual Reinforcement Learning. - Shachar Schnapp, Sivan Sabato:
Differentially Private Source-Target Clustering. - Cristian A. Galvis-Florez, Ahmad Farooq, Simo Särkkä:
Provable Quantum Algorithm Advantage for Gaussian Process Quadrature. - Cristina Garbacea, Qiaozhu Mei:
Why is constrained neural language generation particularly challenging? - Houssam Zenati, Alberto Bietti, Matthieu Martin, Eustache Diemert, Pierre Gaillard, Julien Mairal:
Counterfactual Learning of Stochastic Policies with Continuous Actions. - Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu:
Verbalized Machine Learning: Revisiting Machine Learning with Language Models. - Olivier Teytaud, Mariia Zameshina, Tom Sander, Pierre Fernandez, Furong Ye, Laurent Najman, Thomas Bäck, Ismail Labiad:
Lognormal Mutations and their Use in Detecting Surreptitious Fake Images. - Omer Rochman Sharabi, Sacha Lewin, Gilles Louppe:
A Neural Material Point Method for Particle-based Emulation. - Zeyu Yang, Han Yu, Peikun Guo, Khadija Zanna, Xiaoxue Yang, Akane Sano:
Balanced Mixed-Type Tabular Data Synthesis with Diffusion Models. - Krishna Acharya, Juba Ziani, Jingyan Wang, Varun Vangala:
Producers Equilibria and Dynamics in Engagement-Driven Recommender Systems. - Prabhu Babu, Petre Stoica, Astha Saini:
Fair principal component analysis (PCA): minorization-maximization algorithms for Fair PCA, Fair Robust PCA and Fair Sparse PCA. - Sanjeev Raja, Ishan Amin, Fabian Pedregosa, Aditi S. Krishnapriyan:
Stability-Aware Training of Machine Learning Force Fields with Differentiable Boltzmann Estimators. - Zach Nussbaum, John Xavier Morris, Andriy Mulyar, Brandon Duderstadt:
Nomic Embed: Training a Reproducible Long Context Text Embedder. - Lan V. Truong:
Global Convergence Rate of Deep Equilibrium Models with General Activations. - Denis Kuznedelev, Soroush Tabesh, Kimia Noorbakhsh, Elias Frantar, Sara Beery, Eldar Kurtic, Dan Alistarh:
TACO: Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression. - Haozhe Liu, Wentian Zhang, Jinheng Xie, Francesco Faccio, Mengmeng Xu, Tao Xiang, Mike Zheng Shou, Juan-Manuel Pérez-Rúa, Jürgen Schmidhuber:
Faster Diffusion Through Temporal Attention Decomposition. - Nikita Malik, Konda Reddy Mopuri:
FaAlGrad: Fairness through Alignment of Gradients across Different Subpopulations. - Jiaqi Wang, Yuhang Zhou, Zhixiong Zhang, Qiguang Chen, Yongqiang Chen, James Cheng:
DivIL: Unveiling and Addressing Over-Invariance for Out-of-Distribution Generalization. - Minttu Alakuijala, Reginald McLean, Isaac Woungang, Nariman Farsad, Samuel Kaski, Pekka Marttinen, Kai Yuan:
Video-Language Critic: Transferable Reward Functions for Language-Conditioned Robotics. - Zhong Chuang, Yusuke Tanaka, Tomoharu Iwata:
Meta-Learning for Graphs with Heterogeneous Node Attribute Spaces for Few-Shot Edge Predictions. - Kazuki Irie, Róbert Csordás, Jürgen Schmidhuber:
Metalearning Continual Learning Algorithms. - Zijun Wang, Haoqin Tu, Jieru Mei, Bingchen Zhao, Yisen Wang, Cihang Xie:
AttnGCG: Enhancing Jailbreaking Attacks on LLMs with Attention Manipulation. - Seyed Moslem Shokrolahi, Il-Min Kim:
Combating Inter-Task Confusion and Catastrophic Forgetting by Metric Learning and Re-Using a Past Trained Model. - Yilun Kong, Hangyu Mao, Qi Zhao, Bin Zhang, Jingqing Ruan, Li Shen, Yongzhe Chang, Xueqian Wang, Rui Zhao, Dacheng Tao:
QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning. - Amadou S. Sangare, Nicolas Dunou, Jhony H. Giraldo, Fragkiskos D. Malliaros:
A Fused Gromov-Wasserstein Approach to Subgraph Contrastive Learning. - Tian Xie, Jifan Zhang, Haoyue Bai, Robert D. Nowak:
Deep Active Learning in the Open World. - Roozbeh Yousefzadeh, Xuenan Cao:
A Lean Dataset for International Math Olympiad: Small Steps towards Writing Math Proofs for Hard Problems. - Mohsen Tabejamaat, Farzaneh Etminani, Mattias Ohlsson:
Cycle Conditioning for Robust Representation Learning from Categorical Data. - Isay Katsman, Anna Gilbert:
Shedding Light on Problems with Hyperbolic Graph Learning. - Weiguo Gao, Ming Li:
Evolution of Discriminator and Generator Gradients in GAN Training: From Fitting to Collapse. - Jiacheng You, Xinyang Chen, Yu Sun, Weili Guan, Liqiang Nie:
Long Short-Term Imputer: Handling Consecutive Missing Values in Time Series. - Francesco Ferrini, Antonio Longa, Andrea Passerini, Manfred Jaeger:
A Self-Explainable Heterogeneous GNN for Relational Deep Learning. - Hejia Geng, Peng Li:
HoSNNs: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds. - Qi Zhang, Yi Zhou, Shaofeng Zou:
Convergence Guarantees for RMSProp and Adam in Generalized-smooth Non-convex Optimization with Affine Noise Variance. - Zhao Yang, Thomas M. Moerland, Mike Preuss, Aske Plaat, Edward S. Hu:
Reset-free Reinforcement Learning with World Models. - Sebastian Gregor Gruber, Francis R. Bach:
Optimizing Estimators of Squared Calibration Errors in Classification. - Tejumade Afonja, Hui-Po Wang, Raouf Kerkouche, Mario Fritz:
DP-2Stage: Adapting Language Models as Differentially Private Tabular Data Generators. - Pavel Rumiantsev, Mark Coates:
Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation. - Xingmei Lou, Yu Hu, Xiaodong Li:
Learning Linear Polytree Structural Equation Model. - Tobias Bernecker, Ghalia Rehawi, Francesco Paolo Casale, Janine Knauer-Arloth, Annalisa Marsico:
Random Walk Diffusion for Efficient Large-Scale Graph Generation. - Kelly Ramsay, Aukosh Jagannath, Shoja'eddin Chenouri:
An elementary concentration bound for Gibbs measures arising in statistical learning theory. - Giuseppe Serra, Ben Werner, Florian Buettner:
How to Leverage Predictive Uncertainty Estimates for Reducing Catastrophic Forgetting in Online Continual Learning. - Rishi Bommasani, Kevin Klyman, Sayash Kapoor, Shayne Longpre, Betty Xiong, Nestor Maslej, Percy Liang:
The 2024 Foundation Model Transparency Index. - Shenghong Dai, Jy-yong Sohn, Yicong Chen, S. M. Iftekharul Alam, Ravikumar Balakrishnan, Suman Banerjee, Nageen Himayat, Kangwook Lee:
Buffer-based Gradient Projection for Continual Federated Learning. - Roman Bresson, Giannis Nikolentzos, George Panagopoulos, Michail Chatzianastasis, Jun Pang, Michalis Vazirgiannis:
KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning. - Daisuke Hatano, Satoshi Hara, Hiromi Arai:
Path-Specific Counterfactual Fairness via Dividend Correction. - Shenao Zhang, Donghan Yu, Hiteshi Sharma, Han Zhong, Zhihan Liu, Ziyi Yang, Shuohang Wang, Hany Hassan Awadalla, Zhaoran Wang:
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment. - Ali Bahri, Moslem Yazdanpanah, Mehrdad Noori, Milad Cheraghalikhani, Gustavo Adolfo Vargas Hakim, David Osowiechi, Farzad Beizaee, Ismail Ben Ayed, Christian Desrosiers:
GeoMask3D: Geometrically Informed Mask Selection for Self-Supervised Point Cloud Learning in 3D. - Qinxun Bai, Steven Rosenberg, Wei Xu:
Generalized Tangent Kernel: A Unified Geometric Foundation for Natural Gradient and Standard Gradient. - Luciana Ferrer:
No Need for Ad-hoc Substitutes: The Expected Cost is a Principled All-purpose Classification Metric. - Phuong Quynh Le, Jörg Schlötterer, Christin Seifert:
Out of Spuriousity: Improving Robustness to Spurious Correlations without Group Annotations. - Vincent Abbott, Gioele Zardini:
FlashAttention on a Napkin: A Diagrammatic Approach to Deep Learning IO-Awareness. - Thomas De Min, Massimiliano Mancini, Stéphane Lathuilière, Subhankar Roy, Elisa Ricci:
Unlearning Personal Data from a Single Image. - Billy Joe Franks, Moshe Eliasof, Semih Cantürk, Guy Wolf, Carola-Bibiane Schönlieb, Sophie Fellenz, Marius Kloft:
Towards Graph Foundation Models: A Study on the Generalization of Positional and Structural Encodings. - Vinoth Nandakumar, Qiang Qu, Peng Mi, Tongliang Liu:
State space models can express n-gram languages. - Charles Marx, Volodymyr Kuleshov, Stefano Ermon:
Calibrated Probabilistic Forecasts for Arbitrary Sequences. - Thibault Le Sellier de Chezelles, Maxime Gasse, Alexandre Lacoste, Massimo Caccia, Alexandre Drouin, Léo Boisvert, Megh Thakkar, Tom Marty, Rim Assouel, Sahar Omidi Shayegan, Lawrence Keunho Jang, Xing Han Lù, Ori Yoran, Dehan Kong, Frank F. Xu, Siva Reddy, Graham Neubig, Quentin Cappart, Russ Salakhutdinov, Nicolas Chapados:
The BrowserGym Ecosystem for Web Agent Research. - Kristian Schwethelm, Johannes Kaiser, Moritz Knolle, Sarah Lockfisch, Daniel Rueckert, Alexander Ziller:
Visual Privacy Auditing with Diffusion Models. - Zihao Liang, Tianyu Zhou, Zehui Lu, Shaoshuai Mou:
Online Control-Informed Learning. - Pihe Hu, Shaolong Li, Xun Wang, Longbo Huang:
Mixed Sparsity Training: Achieving 4× FLOP Reduction for Transformer Pretraining. - Oisín Nolan, Tristan S. W. Stevens, Wessel L. van Nierop, Ruud van Sloun:
Active Diffusion Subsampling. - Martin Bichler, Davide Legacci, Panayotis Mertikopoulos, Matthias Oberlechner, Bary S. R. Pradelski:
Characterizing the Convergence of Game Dynamics via Potentialness. - Jonas Brusokas, Seshu Tirupathi, Dalin Zhang, Torben Bach Pedersen:
The Time-Energy Model: Selective Time-Series Forecasting Using Energy-Based Models. - Arash Mari Oriyad, Mohammadali Banayeeanzade, Reza Abbasi, Mohammad Hossein Rohban, Mahdieh Soleymani Baghshah:
Attention Overlap Is Responsible for The Entity Missing Problem in Text-to-image Diffusion Models! - Wenjing Chang, Kay Liu, Philip S. Yu, Jianjun Yu:
Enhancing Fairness in Unsupervised Graph Anomaly Detection through Disentanglement. - Muheng Li, Ruqi Zhang:
Reheated Gradient-based Discrete Sampling for Combinatorial Optimization. - Manuel Faysse, Patrick Fernandes, Nuno Miguel Guerreiro, António Loison, Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei, Pedro Henrique Martins, Antoni Bigata Casademunt, François Yvon, André F. T. Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo:
CroissantLLM: A Truly Bilingual French-English Language Model. - Tao Daniel Alter, Raz Lapid, Moshe Sipper:
On the Robustness of Kolmogorov-Arnold Networks: An Adversarial Perspective. - Ashka Shah, Adela Frances DePavia, Nathaniel C. Hudson, Ian T. Foster, Rick Stevens:
Causal Discovery over High-Dimensional Structured Hypothesis Spaces with Causal Graph Partitioning. - Danil Provodin, Bram van den Akker, Christina Katsimerou, Maurits Clemens Kaptein, Mykola Pechenizkiy:
Rethinking Knowledge Transfer in Learning Using Privileged Information. - Hanyang Wang, Juergen Branke, Matthias Poloczek:
Respecting the limit: Bayesian optimization with a bound on the optimal value. - Yousef El-Laham, Zhongchang Sun, Haibei Zhu, Tucker Balch, Svitlana Vyetrenko:
Variational Neural Stochastic Differential Equations with Change Points. - Gerardo Duran-Martin, Leandro Sánchez-Betancourt, Alexander Y. Shestopaloff, Kevin Patrick Murphy:
A unifying framework for generalised Bayesian online learning in non-stationary environments. - Garweet Sresth, Satish Mulleti, Ajit Rajwade:
Unlabelled Compressive Sensing under Sparse Permutation and Prior Information. - Akshay Kumar, Jarvis D. Haupt:
Early Directional Convergence in Deep Homogeneous Neural Networks for Small Initializations. - Justin Kay, Timm Haucke, Suzanne Stathatos, Siqi Deng, Erik Young, Pietro Perona, Sara Beery, Grant Van Horn:
Align and Distill: Unifying and Improving Domain Adaptive Object Detection. - Hana Yahia, Bruno Figliuzzi, Florent Di Meglio, Laurent Gerbaud, Stephane Menand, Mohamed Mahjoub:
Domain Generalization for Time Series: Enhancing Drilling Regression Models for Stick-Slip Index Prediction. - Sai Srinivas Kancheti, Rahul Vigneswaran, Bamdev Mishra, Vineeth N. Balasubramanian:
HARE: Human-in-the-Loop Algorithmic Recourse. - Ramansh Sharma, Varun Shankar:
Ensemble and Mixture-of-Experts DeepONets For Operator Learning. - Shwai He, Daize Dong, Liang Ding, Ang Li:
Towards Efficient Mixture of Experts: A Holistic Study of Compression Techniques. - Yaochen Hu, Mai Zeng, Ge Zhang, Pavel Rumiantsev, Liheng Ma, Yingxue Zhang, Mark Coates:
Sparse Decomposition of Graph Neural Networks. - Charles-Étienne Joseph, Benjamin Thérien, Abhinav Moudgil, Boris Knyazev, Eugene Belilovsky:
Meta-learning Optimizers for Communication-Efficient Learning. - Yun Jin Park, Didong Li:
Lower Ricci Curvature for Efficient Community Detection. - Nicolas Drapier, Aladine Chetouani, Aurélien Chateigner:
Enhancing Maritime Trajectory Forecasting via H3 Index and Causal Language Modelling (CLM). - Pranav Jeevan, Amit Sethi:
Which Backbone to Use: A Resource-efficient Domain Specific Comparison for Computer Vision. - Mikko A. Heikkilä:
On Using Secure Aggregation in Differentially Private Federated Learning with Multiple Local Steps. - Asaf Shul, Eliahu Horwitz, Yedid Hoshen:
Distilling Datasets Into Less Than One Image. - José I. Segovia-Martín, Santiago Mazuelas, Anqi Liu:
A Unified View of Double-Weighting for Marginal Distribution Shift. - Brian Matejek, Ashish Gehani, Nathaniel D. Bastian, Daniel J. Clouse, Bradford J. Kline, Susmit Jha:
SAFE-NID: Self-Attention with Normalizing-Flow Encodings for Network Intrusion Detection. - Yiling Liu, Juncheng Dong, Ziyang Jiang, Ahmed Aloui, Keyu Li, Michael Hunter Klein, Vahid Tarokh, David E. Carlson:
Understanding and Robustifying Sub-domain Alignment for Domain Adaptation. - Michael Hagmann, Michael Staniek, Stefan Riezler:
Compositionality in Time Series: A Proof of Concept using Symbolic Dynamics and Compositional Data Augmentation. - Moussa Kassem Sbeyti, Nadja Klein, Azarm Nowzad, Fikret Sivrikaya, Sahin Albayrak:
Building Blocks for Robust and Effective Semi-Supervised Real-World Object Detection. - Nolan Simran Dey, J. Eric Taylor, Alexander Wong, Bryan P. Tripp, Graham W. Taylor:
Neuron-based explanations of neural networks sacrifice completeness and interpretability. - Dan Kushnir, Sandeep Silwal:
Cluster Tree for Nearest Neighbor Search. - Christian Dietrich Weilbach, Frank Wood:
Daphne: Multi-Pass Compilation of Probabilistic Programs into Graphical Models and Neural Networks. - Yuki Tsukada, Hideaki Iiduka:
Relationship between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent using Armijo-Line-Search Learning Rate. - William Chang, Yuanhao Lu:
Multiplayer Information Asymmetric Contextual Bandits. - Weixin Liang, Lili Yu, Liang Luo, Srini Iyer, Ning Dong, Chunting Zhou, Gargi Ghosh, Mike Lewis, Wen-tau Yih, Luke Zettlemoyer, Xi Victoria Lin:
Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. - Alessandro De Palma, Serge Durand, Zakaria Chihani, François Terrier, Caterina Urban:
On Using Certified Training towards Empirical Robustness. - Tristan S. W. Stevens, Hans Van Gorp, Faik C. Meral, Jun Seob Shin, Jason Yu, Jean-Luc Robert, Ruud van Sloun:
Removing Structured Noise using Diffusion Models. - Woomin Song, Jihoon Tack, Sangwoo Mo, Seunghyuk Oh, Jinwoo Shin:
Sparsified State-Space Models are Efficient Highway Networks. - Seongyoon Kim, Minchan Jeong, Sungnyun Kim, Sungwoo Cho, Sumyeong Ahn, Se-Young Yun:
FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning. - Mohammadamin Banayeeanzade, Mahdi Soltanolkotabi, Mohammad Rostami:
Theoretical Insights into Overparameterized Models in Multi-Task and Replay-Based Continual Learning. - Liyao Jiang, Negar Hassanpour, Mohammad Salameh, Mohan Sai Singamsetti, Fengyu Sun, Wei Lui, Di Niu:
FRAP: Faithful and Realistic Text-to-Image Generation with Adaptive Prompt Weighting. - Hiroaki Ito, Jiale Yan, Hikari Otsuka, Kazushi Kawamura, Masato Motomura, Thiem Van Chu, Daichi Fujiki:
Uncovering Strong Lottery Tickets in Graph Transformers: A Path to Memory Efficient and Robust Graph Learning. - Chenguang Wang, Zhang-Hua Fu, Pinyan Lu, Tianshu Yu:
Efficient Training of Multi-task Neural Solver for Combinatorial Optimization. - Dahyun Kang, Ahmet Iscen, Eunchan Jo, Sua Choi, Minsu Cho, Cordelia Schmid:
Memory-Modular Classification: Learning to Generalize with Memory Replacement. - Feng Chen, Xinwei Chen, Rong-Jun Qin, Cong Guan, Lei Yuan, Zongzhang Zhang, Yang Yu:
Efficient Multi-Agent Cooperation Learning through Teammate Lookahead. - Aida Mohammadshahi, Yani Ioannou:
What's Left After Distillation? How Knowledge Transfer Impacts Fairness and Bias. - Samuel Stevens, Emily Wenger, Cathy Yuanchen Li, Niklas Nolte, Eshika Saxena, François Charton, Kristin E. Lauter:
Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors. - Brieuc Pinon, Raphaël M. Jungers, Jean-Charles Delvenne:
A limitation on black-box dynamics approaches to Reinforcement Learning. - Arnaud Robert, Aldo A. Faisal, Ciara Pike-Burke:
Posterior Sampling for Reinforcement Learning on Graphs. - Johannes Hog, Raghu Rajan, André Biedenkapp, Noor H. Awad, Frank Hutter, Vu Nguyen:
Meta-learning Population-based Methods for Reinforcement Learning. - Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, Wenhu Chen:
Long-context LLMs Struggle with Long In-context Learning. - Ingvar M. Ziemann:
A Vector Bernstein Inequality for Self-Normalized Martingales. - Shivi Dixit, Rishabh Gupta, Qi Zhang:
Decision-Focused Surrogate Modeling for Mixed-Integer Linear Optimization. - Christopher Bülte, Philipp Scholl, Gitta Kutyniok:
Probabilistic neural operators for functional uncertainty quantification. - Niccolò Avogaro, Thomas Frick, Mattia Rigotti, Andrea Bartezzaghi, Filip Janicki, A. Cristiano I. Malossi, Konrad Schindler, Roy Assaf:
Show or Tell? Effectively prompting Vision-Language Models for semantic segmentation. - Ramzi Dakhmouche, Ivan Lunati, M. Hossein Gorji:
Robust Symbolic Regression for Dynamical System Identification. - Niccolò Tosato, Lorenzo Basile, Emanuele Ballarin, Giuseppe de Alteriis, Alberto Cazzaniga, Alessio Ansuini:
Emergent representations in networks trained with the Forward-Forward algorithm. - Jack Foster, Kyle Fogarty, Stefan Schoepf, Zack Dugue, Cengiz Öztireli, Alexandra Brintrup:
An Information Theoretic Approach to Machine Unlearning. - Elena Congeduti, Roberto Rocchetta, Frans A. Oliehoek:
Influence Learning in Complex Systems. - Haoxiang Ma, Shuo Han, Ahmed Hemida, Charles A. Kamhoua, Jie Fu:
Adaptive Incentive Design for Markov Decision Processes with Unknown Rewards. - Christopher Watson, Arjun Krishna, Rajeev Alur, Dinesh Jayaraman:
Illustrated Landmark Graphs for Long-horizon Policy Learning. - Yifei Xiong, Nianqiao P. Ju, Sanguo Zhang:
Simulation-based Bayesian Inference from Privacy Protected Data. - Wenxian Shi, Menghua Wu, Regina Barzilay:
Predicting sub-population specific viral evolution. - Joshua Engels, Logan Riggs Smith, Max Tegmark:
Decomposing The Dark Matter of Sparse Autoencoders. - Bhavya Vasudeva, Puneesh Deora, Christos Thrampoulidis:
Implicit Bias and Fast Convergence Rates for Self-attention. - Siheng Li, Cheng Yang, Taiqiang Wu, Chufan Shi, Yuji Zhang, Xinyu Zhu, Zesen Cheng, Deng Cai, Mo Yu, Lemao Liu, Jie Zhou, Yujiu Yang, Ngai Wong, Xixin Wu, Wai Lam:
A Survey on the Honesty of Large Language Models. - William Chen, Oier Mees, Aviral Kumar, Sergey Levine:
Vision-Language Models Provide Promptable Representations for Reinforcement Learning. - Kefan Su, Zongqing Lu:
f-Divergence Policy Optimization in Fully Decentralized Cooperative MARL. - Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton van den Hengel, Kun Zhang, Javen Qinfeng Shi:
Latent Covariate Shift: Unlocking Partial Identifiability for Multi-Source Domain Adaptation. - Yuxuan Shu, Vasileios Lampos:
DeformTime: capturing variable dependencies with deformable attention for time series forecasting. - Shubham Agarwal, Gaurav Sahu, Abhay Puri, Issam H. Laradji, Krishnamurthy Dj Dvijotham, Jason Stanley, Laurent Charlin, Christopher Pal:
LitLLMs, LLMs for Literature Review: Are we there yet? - Shuvendu Roy, Elham Dolatabadi, Arash Afkanpour, Ali Etemad:
Consistency-Guided Asynchronous Contrastive Tuning for Few-Shot Class-Incremental Tuning of Foundation Models. - Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring:
Almost Sure Convergence of Stochastic Gradient Methods under Gradient Domination. - Ding Zhu, Zhiqun Zuo, Mohammad Mahdi Khalili:
An Efficient Training Algorithm for Models with Block-wise Sparsity. - Paul-Ruben Schlumbom, Eibe Frank:
Revisiting Deep Hybrid Models for Out-of-Distribution Detection. - Shirsha Bose, Mainak Singha, Ankit Jha, Souradeep Mukhopadhyay, Biplab Banerjee:
Meta-Learning to Teach Semantic Prompts for Open Domain Generalization in Vision-Language Models. - Gang Li, Qihang Lin, Ayush Ghosh, Tianbao Yang:
Multi-Output Distributional Fairness via Post-Processing. - Dongyue Xie:
Empirical Bayes Trend Filtering Through a Variational Inference Framework. - Max Wasserman, Gonzalo Mateos:
Stabilizing the Kumaraswamy Distribution. - Inwon Kang, Parikshit Ram, Yi Zhou, Horst Samulowitz, Oshani Seneviratne:
On Learning Representations for Tabular Data Distillation. - Inkyu Shin, Qihang Yu, Xiaohui Shen, In So Kweon, Kuk-Jin Yoon, Liang-Chieh Chen:
Enhancing Temporal Consistency in Video Editing by Reconstructing Videos with 3D Gaussian Splatting. - Nisha Lakshmana Raichur, Lucas Heublein, Tobias Feigl, Alexander Rügamer, Christopher Mutschler, Felix Ott:
Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning. - Aditya Hemant Shahane, Prathosh AP, Sandeep Kumar:
GOTHAM: Graph Class Incremental Learning Framework under Weak Supervision. - Brian Godwin Lim, Galvin Brice Sy Lim, Renzo Roel Tan, Kazushi Ikeda:
Contextualized Messages Boost Graph Representations. - Aditya Challa, Sravan Danda, Laurent Najman, Snehanshu Saha:
Quantile Activation: Correcting a failure mode of traditional ML models. - Anh Quang Dang, Reza Babanezhad Harikandeh, Sharan Vaswani:
(Accelerated) Noise-adaptive Stochastic Heavy-Ball Momentum. - Teresa Yeo, Andrei Atanov, Harold Benoit, Aleksandr Alekseev, Ruchira Ray, Pooya Esmaeil Akhoondi, Amir Zamir:
Controlled Training Data Generation with Diffusion Models. - Hoang Anh Dung, Cuong C. Nguyen, Vasileios Belagiannis, Thanh-Toan Do, Gustavo Carneiro:
Maximising the Utility of Validation Sets for Imbalanced Noisy-label Meta-learning. - Akiyoshi Sannai, Yasunari Hikima, Ken Kobayashi, Akinori Tanaka, Naoki Hamada:
Bézier Flow: a Surface-wise Gradient Descent Method for Multi-objective Optimization. - Ameesh Shah, Cameron Voloshin, Chenxi Yang, Abhinav Verma, Swarat Chaudhuri, Sanjit A. Seshia:
LTL-Constrained Policy Optimization with Cycle Experience Replay. - Francesco Di Salvo, Sebastian Doerrich, Ines Rieger, Christian Ledig:
An Embedding is Worth a Thousand Noisy Labels. - Arjhun Swaminathan, Mete Akgün:
Distributed and Secure Kernel-Based Quantum Machine Learning. - Shuhao Fu, Andrew Jun Lee, Yixin Anna Wang, Ida Momennejad, Trevor Bihl, Hongjing Lu, Taylor Whittington Webb:
Evaluating Compositional Scene Understanding in Multimodal Generative Models. - Karthik Valmeekam, Kaya Stechly, Atharva Gundawar, Subbarao Kambhampati:
A Systematic Evaluation of the Planning and Scheduling Abilities of the Reasoning Model o1. - Letian Fu, Long Lian, Renhao Wang, Baifeng Shi, Xudong Wang, Adam Yala, Trevor Darrell, Alexei A. Efros, Ken Goldberg:
Rethinking Patch Dependence for Masked Autoencoders. - Yueming Lyu, Xiaowei Zhou, Xingrui Yu, Ivor W. Tsang:
Graph Potential Field Neural Network for Massive Agents Group-wise Path Planning. - Xin Ma, Yaohui Wang, Xinyuan Chen, Gengyun Jia, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao:
Latte: Latent Diffusion Transformer for Video Generation. - Han Guo, Ramtin Hosseini, Ruiyi Zhang, Sai Ashish Somayajula, Ranak Roy Chowdhury, Rajesh K. Gupta, Pengtao Xie:
Downstream Task Guided Masking Learning in Masked Autoencoders Using Multi-Level Optimization. - Marc A. Tunnell, Zachary J. DeBruine, Erin Carrier:
Rank Suggestion in Non-negative Matrix Factorization: Residual Sensitivity to Initial Conditions (RSIC). - Daniele Bracale, Moulinath Banerjee, Yuekai Sun, Salam Turki, Kevin Stoll:
Dynamic Pricing in the Linear Valuation Model using Shape Constraints. - Lorenzo Basile, Valentino Maiorca, Luca Bortolussi, Emanuele Rodolà, Francesco Locatello:
ResiDual Transformer Alignment with Spectral Decomposition. - Kishan Gurumurthy, Himanshu Pal, Charu Sharma:
Federated Spectral Graph Transformers Meet Neural Ordinary Differential Equations for Non-IID Graphs. - Bao Duong, Hung Le, Biwei Huang, Thin Nguyen:
Reinforcement Learning for Causal Discovery without Acyclicity Constraints. - Clement Nyanhongo, Bruno Miranda Henrique, Eugene Santos:
Reward Distance Comparisons Under Transition Sparsity. - Anka Reuel, Benjamin Bucknall, Stephen Casper, Timothy Fist, Lisa Soder, Onni Aarne, Lewis Hammond, Lujain Ibrahim, Alan Chan, Peter Wills, Markus Anderljung, Ben Garfinkel, Lennart Heim, Andrew Trask, Gabriel Mukobi, Rylan Schaeffer, Mauricio Baker, Sara Hooker, Irene Solaiman, Sasha Luccioni, Nitarshan Rajkumar, Nicolas Moës, Jeffrey Ladish, David Bau, Paul Bricman, Neel Guha, Jessica Newman, Yoshua Bengio, Tobin South, Alex Pentland, Sanmi Koyejo, Mykel J. Kochenderfer, Robert Trager:
Open Problems in Technical AI Governance. - Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas:
SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks. - Diogo S. Carvalho, Pedro A. Santos, Francisco S. Melo:
Multi-Bellman operator for convergence of Q-learning with linear function approximation. - Arwin Gansekoele, Sandjai Bhulai, Mark Hoogendoorn, Rob van der Mei:
Relative Phase Equivariant Deep Neural Systems for Physical Layer Communications. - Haiqing Hao, Wenhui Wang:
Bayesian Transferability Assessment for Spiking Neural Networks. - Jakub Lucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian Tramèr, Javier Rando:
An Adversarial Perspective on Machine Unlearning for AI Safety. - Tobias Ladner, Michael Eichelbeck, Matthias Althoff:
Formal Verification of Graph Convolutional Networks with Uncertain Node Features and Uncertain Graph Structure. - Nabarun Goswami, Hanqin Wang, Tatsuya Harada:
EDM-TTS: Efficient Dual-Stage Masked Modeling for Alignment-Free Text-to-Speech Synthesis. - Sahra Ghalebikesabi, Eugene Bagdasarian, Ren Yi, Itay Yona, Ilia Shumailov, Aneesh Pappu, Chongyang Shi, Laura Weidinger, Robert Stanforth, Leonard Berrada, Pushmeet Kohli, Po-Sen Huang, Borja Balle:
Privacy Awareness for Information-Sharing Assistants: A Case-study on Form-filling with Contextual Integrity. - Yangyi Chen, Binxuan Huang, Yifan Gao, Zhengyang Wang, Jingfeng Yang, Heng Ji:
Scaling Laws for Predicting Downstream Performance in LLMs. - Renchunzi Xie, Ambroise Odonnat, Vasilii Feofanov, Ievgen Redko, Jianfeng Zhang, Bo An:
Leveraging Gradients for Unsupervised Accuracy Estimation under Distribution Shift. - Paul-Hieu V. Nguyen, Ryan Yee, Sameer K. Deshpande:
Oblique Bayesian Additive Regression Trees. - Hongfei Wu, Lijun Wu, Guoqing Liu, Zhirong Liu, Bin Shao, Zun Wang:
SE3Set: Harnessing Equivariant Hypergraph Neural Networks for Molecular Representation Learning. - Makoto Takamoto, Daniel Oñoro-Rubio, Wiem Ben Rim, Takashi Maruyama, Bhushan Kotnis:
Optimal Embedding Guided Negative Sample Generation for Knowledge Graph Link Prediction. - Lukas Tatzel, Jonathan Wenger, Frank Schneider, Philipp Hennig:
Accelerating Non-Conjugate Gaussian Processes By Trading Off Computation For Uncertainty. - Ali Modarressi, Abdullatif Köksal, Ayyoob Imani, Mohsen Fayyaz, Hinrich Schütze:
MemLLM: Finetuning LLMs to Use Explicit Read-Write Memory. - Matéo Mahaut, Roberto Dessì, Francesca Franzon, Marco Baroni:
Referential communication in heterogeneous communities of pre-trained visual deep networks. - Ye Yuan, Youyuan Zhang, Can Chen, Haolun Wu, Melody Zixuan Li, Jianmo Li, James J. Clark, Xue Liu:
Design Editing for Offline Model-based Optimization. - Edgar Torres, Mathias Niepert:
Adaptive Physics-informed Neural Networks: A Survey. - Yik Lun Kei, Jialiang Li, Hangjian Li, Yanzhen Chen, Oscar Hernan Madrid Padilla:
Change Point Detection in Dynamic Graphs with Decoder-only Latent Space Model. - Jose L. Garcia, Karolina Hajkova, Maria Marchenko, Carlos Miguel Patiño:
Reproducibility Study of "Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation". - Daniel P. Jeong, Zachary Chase Lipton, Pradeep Kumar Ravikumar:
LLM-Select: Feature Selection with Large Language Models. - Alexis Thual, Yohann Benchetrit, Felix Geilert, Jérémy Rapin, Iurii Makarov, Stanislas Dehaene, Bertrand Thirion, Hubert J. Banville, Jean-Remi King:
Sample-efficient decoding of visual stimuli from fMRI through inter-individual functional alignment. - Gobinda Saha, Kaushik Roy:
Amphibian: A Meta-Learning Framework for Rehearsal-Free, Fast Online Continual Learning. - Futoon M. Abushaqra, Hao Xue, Yongli Ren, Flora D. Salim:
ODEStream: A Buffer-Free Online Learning Framework with ODE-based Adaptor for Streaming Time Series Forecasting. - Tatyana Benko, Martin Buck, Ilya Amburg, Stephen J. Young, Sinan G. Aksoy:
HyperMagNet: A Magnetic Laplacian based Hypergraph Neural Network. - Tomoyuki Obuchi, Toshiyuki Tanaka:
When resampling/reweighting improves feature learning in imbalanced classification? A toy-model study. - Jiaxi Wang, Yaosen Min, Miao Li, Ji Wu:
FragFormer: A Fragment-based Representation Learning Framework for Molecular Property Prediction. - Sungwon Han, Seungeon Lee, Meeyoung Cha, Sercan Ö. Arik, Jinsung Yoon:
LLM-Guided Self-Supervised Tabular Learning With Task-Specific Pre-text Tasks. - Emmanouil Kariotakis, Nicholas D. Sidiropoulos, Aritra Konar:
Fairness-Aware Dense Subgraph Discovery. - Sofiane Ennadir, Gabriela Zarzar Gandler, Filip Cornell, Lele Cao, Oleg Smirnov, Tianze Wang, Levente Zólyomi, Björn Brinne, Sahar Asadi:
Expressivity of Representation Learning on Continuous-Time Dynamic Graphs: An Information-Flow Centric Review. - Jing Xiong, Gongye Liu, Lun Huang, Chengyue Wu, Taiqiang Wu, Yao Mu, Yuan Yao, Hui Shen, Zhongwei Wan, Jinfa Huang, Chaofan Tao, Shen Yan, Huaxiu Yao, Lingpeng Kong, Hongxia Yang, Mi Zhang, Guillermo Sapiro, Jiebo Luo, Ping Luo, Ngai Wong:
Autoregressive Models in Vision: A Survey. - Anuj Singh, Sayak Mukherjee, Ahmad Beirami, Hadi Jamali Rad:
CoDe: Blockwise Control for Denoising Diffusion Models. - Wenjian Hao, Devesh Upadhyay, Shaoshuai Mou:
Deep Koopman Learning using Noisy Data. - Alexander Kolesnikov, André Susano Pinto, Michael Tschannen:
Jet: A Modern Transformer-Based Normalizing Flow. - Nam Hyeon-Woo, Moon Ye-Bin, Wonseok Choi, Lee Hyun, Tae-Hyun Oh:
VLM's Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models. - Markus Lange-Hegermann, Christoph Zimmer:
Future-aware Safe Active Learning of Time Varying Systems using Gaussian Processes. - Ankur Nath, Alan Kuhnle:
MaxCutBench: Revisiting and Benchmarking Graph Neural Networks for Maximum Cut. - Prateek Yadav, Leshem Choshen, Colin Raffel, Mohit Bansal:
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization. - Vignesh Kothapalli, Tom Tirer:
Can Kernel Methods Explain How the Data Affects Neural Collapse? - Sheng Yang, Peihan Liu, Cengiz Pehlevan:
Convex Relaxation for Solving Large-Margin Classifiers in Hyperbolic Space. - Ricardo Baptista, Michael Brennan, Youssef Marzouk:
Dimension reduction via score ratio matching. - Andres Fernandez, Frank Schneider, Maren Mahsereci, Philipp Hennig:
Connecting Parameter Magnitudes and Hessian Eigenspaces at Scale using Sketched Methods. - Xingyuan Zhang, Philip Becker-Ehmck, Patrick van der Smagt, Maximilian Karl:
Overcoming Knowledge Barriers: Online Imitation Learning from Visual Observation with Pretrained World Models. - Yichi Zhang, Zhihao Duan, Yuning Huang, Fengqing Zhu:
Accelerating Learned Image Compression Through Modeling Neural Training Dynamics. - Yedi Zhang, Andrew M. Saxe, Peter E. Latham:
When Are Bias-Free ReLU Networks Effectively Linear Networks? - Sungmin Cha, Kyunghyun Cho:
Hyperparameters in Continual Learning: A Reality Check. - Mingqi Yuan, Roger Creus Castanyer, Bo Li, Xin Jin, Wenjun Zeng, Glen Berseth:
RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning. - Jeremy Wohlwend, Mateo Reveiz, Matt McPartlon, Axel Feldmann, Wengong Jin, Regina Barzilay:
MiniFold: Simple, Fast, and Accurate Protein Structure Prediction. - Lazar Atanackovic, Emmanuel Bengio:
Investigating Generalization Behaviours of Generative Flow Networks. - Prateek Yadav, Colin Raffel, Mohammed Muqeeth, Lucas Caccia, Haokun Liu, Tianlong Chen, Mohit Bansal, Leshem Choshen, Alessandro Sordoni:
A Survey on Model MoErging: Recycling and Routing Among Specialized Experts for Collaborative Learning. - Haoyue Bai, Xuefeng Du, Katie Rainey, Shibin Parameswaran, Yixuan Li:
Out-of-Distribution Learning with Human Feedback. - Amitai Yacobi, Ofir Lindenbaum, Uri Shaham:
Generalizable and Robust Spectral Method for Multi-view Representation Learning. - Jiahe Lin, Yikai Zhang, George Michailidis:
Covariate-dependent Graphical Model Estimation via Neural Networks with Statistical Guarantees. - Can Chen, Gabriel L. Oliveira, Hossein Sharifi-Noghabi, Tristan Sylvain:
LLM-TS Integrator: Integrating LLM for Enhanced Time Series Modeling. - Akihiro Kubo, Paavo Parmas, Shin Ishii:
Double Horizon Model-Based Policy Optimization. - William St-Arnaud, Margarida Carvalho, Golnoosh Farnadi:
A Learning-Based Framework for Fair and Scalable Solution Generation in Kidney Exchange Problems. - Menghua Wu, Yujia Bao, Regina Barzilay, Tommi S. Jaakkola:
Sample, estimate, aggregate: A recipe for causal discovery foundation models. - Abrar Zahin, Rajasekhar Anguluri, Lalitha Sankar, Oliver Kosut, Gautam Dasarathy:
Robust Model Selection of Gaussian Graphical Models. - Haishan Wang, Arno Solin, Vikas K. Garg:
Heterophily-informed Message Passing. - Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang:
ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs. - Pierre Wolinski, Julyan Arbel:
Gaussian Pre-Activations in Neural Networks: Myth or Reality? - Peter Ochieng:
Speech Synthesis By Unrolling Diffusion Process using Neural Network Layers. - Yinhan He, Wendy Zheng, Yaochen Zhu, Jing Ma, Saumitra Mishra, Natraj Raman, Ninghao Liu, Jundong Li:
Global Graph Counterfactual Explanation: A Subgraph Mapping Approach. - Vahan Martirosyan, Jhony H. Giraldo, Fragkiskos D. Malliaros:
Piecewise Constant Spectral Graph Neural Network. - Payman Behnam, Uday Kamal, Sanjana Vijay Ganesh, Zhaoyi Li, Michael Jurado, Alind Khare, Igor Fedorov, Gaowen Liu, Alexey Tumanov:
∇QDARTS: Quantization as an Elastic Dimension to Differentiable NAS. - Bruno Després:
A functional framework for nonsmooth autodiff with maxpooling functions. - Aref Miri Rekavandi, Olga Ohrimenko, Benjamin I. P. Rubinstein:
RS-Reg: Probabilistic and Robust Certified Regression through Randomized Smoothing. - Yuting Tang, Xin-Qiang Cai, Yao-Xiang Ding, Qiyu Wu, Guoqing Liu, Masashi Sugiyama:
Reinforcement Learning from Bagged Reward. - Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, John A. Doucette, David Rabinowitz, Leslie Barrett, Tom Ault, Hai Phan:
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs). - Chun Tao, Timur Ibrayev, Kaushik Roy:
Semantic-Syntactic Discrepancy in Images (SSDI): Learning Meaning and Order of Features from Natural Images. - Weiqin Chen, Santiago Paternain:
Random Policy Enables In-Context Reinforcement Learning within Trust Horizons. - Lijie Hu, Tianhao Huang, Lu Yu, Wanyu Lin, Tianhang Zheng, Di Wang:
Faithful Interpretation for Graph Neural Networks. - Tiancheng Lao, Xudong Guo, Mengge Liu, Junjie Yu, Yi Liu, Wenhui Fan:
Efficient Exploration in Multi-Agent Reinforcement Learning via Farsighted Self-Direction. - Ivan Stelmakh, John Frederick Wieting, Yang Xi, Graham Neubig, Nihar B. Shah:
A Gold Standard Dataset for the Reviewer Assignment Problem. - Çagatay Yildiz, Nishaanth Kanna Ravichandran, Nitin Sharma, Matthias Bethge, Beyza Ermis:
Investigating Continual Pretraining in Large Language Models: Insights and Implications. - Syrine Belakaria, Alaleh Ahmadianshalchi, Barbara E. Engelhardt, Stefano Ermon, Jana Doppa:
Non-Myopic Multi-Objective Bayesian Optimization. - Anna Kuzina, Haotian Chen, Babak Esmaeili, Jakub M. Tomczak:
Variational Stochastic Gradient Descent for Deep Neural Networks. - Yen-Ru Lai, Fu-Chieh Chang, Pei-Yuan Wu:
Leveraging Unlabeled Data Sharing through Kernel Function Approximation in Offline Reinforcement Learning. - Izzeddin Teeti, Aniket Thomas, Munish Monga, Sachin Kumar Giroh, Uddeshya Singh, Andrew Bradley, Biplab Banerjee, Fabio Cuzzolin:
ASTRA: A Scene-aware Transformer-based Model for Trajectory Prediction. - Gourav Datta, Zeyu Liu, James Diffenderfer, Bhavya Kailkhura, Peter Anthony Beerel:
When SNN meets ANN: Error-Free ANN-to-SNN Conversion for Extreme Edge Efficiency. - Nathan Sun, Constantin-Daniel Nicolae, Sara Sameer, Karena Yan:
Optimizing Cycle Life Prediction of Lithium-ion Batteries via a Physics-Informed Model. - Nabarun Goswami, Yusuke Mukuta, Tatsuya Harada:
HyperVQ: MLR-based Vector Quantization in Hyperbolic Space. - Zhouyang Liu, Ning Liu, Yixin Chen, Ziqing Wen, Jiezhong He, Dongsheng Li:
Graph Theory-Based Deep Graph Similarity Learning: A Unified Survey of Pipeline, Techniques, and Challenges. - Kartik Sharma, Vineeth Rakesh, Yingtong Dou, Srijan Kumar, Mahashweta Das:
Personalized Layer Selection for Graph Neural Networks. - Zhouliang Yu, Yuhuan Yuan, Tim Z. Xiao, Fuxiang Frank Xia, Jie Fu, Ge Zhang, Ge Lin, Weiyang Liu:
Generating Symbolic World Models via Test-time Scaling of Large Language Models. - Jongmin Lee, Amin Rakhsha, Ernest K. Ryu, Amir-massoud Farahmand:
Deflated Dynamics Value Iteration. - André Hottung, Paula Wong-Chung, Kevin Tierney:
Neural Deconstruction Search for Vehicle Routing Problems. - Xinzhe Li:
A Survey on LLM Test-Time Compute via Search: Tasks, LLM Profiling, Search Algorithms, and Relevant Frameworks. - Jixiang Qing, Rebecca D. Langdon, Robert M. Lee, Behrang Shafei, Mark van der Wilk, Calvin Tsay, Ruth Misener:
System-Aware Neural ODE Processes for Few-Shot Bayesian Optimization. - Soroush Abbasi Koohpayegani, Anuj Singh, Navaneet K. L., Hamed Pirsiavash, Hadi Jamali Rad:
GeNIe: Generative Hard Negative Images Through Diffusion. - Salma Abdel Magid, Weiwei Pan, Simon Warchol, Grace Guo, Junsik Kim, Mahia Rahman, Hanspeter Pfister:
Is What You Ask For What You Get? Investigating Concept Associations in Text-to-Image Models. - Zhou Wang, Xingye Qiao:
Generalized Prediction Set with Bandit Feedback. - Haoyan Xu, Kay Liu, Zhengtao Yao, Philip S. Yu, Mengyuan Li, Kaize Ding, Yue Zhao:
LEGO-Learn: Label-Efficient Graph Open-Set Learning. - Robin Ghyselinck, Valentin Delchevalerie, Bruno Dumas, Benoît Frénay:
On the effectiveness of Rotation-Equivariance in U-Net: A Benchmark for Image Segmentation. - Tyme Chatupanyachotikul, Leonard Horns, Matei Nastase:
[RE] GNNBoundary: Towards Explaining Graph Neural Networks through the Lens of Decision Boundaries. - Naganand Yadati:
LocalFormer: Mitigating Over-Globalising in Transformers on Graphs with Localised Training. - Carl R. Richardson, Matthew C. Turner, Steve R. Gunn:
Lurie Networks with Robust Convergent Dynamics. - Li Guo, George Andriopoulos, Zifan Zhao, Zixuan Dong, Shuyang Ling, Keith W. Ross:
Cross Entropy versus Label Smoothing: A Neural Collapse Perspective. - Oliver Schulte, Pascal Poupart:
When Should Reinforcement Learning Use Causal Reasoning? - Bhishma Dedhia, Niraj K. Jha:
Neural Slot Interpreters: Grounding Object Semantics in Emergent Slot Representations. - Domonkos Nagy, Lohithsai Yadala Chanchu, Krystof Bobek, Xin Zhou, Jacobus Smit:
Remembering to Be Fair Again: Reproducing Non-Markovian Fairness in Sequential Decision Making. - Junn Yong Loo, Fang Yu Leong, Michelle Adeline, Julia Kaiwen Lau, Hwa Hui Tew, Arghya Pal, Vishnu Monn Baskaran, Chee-Ming Ting, Raphaël C.-W. Phan:
Learning Energy-Based Generative Models via Potential Flow: A Variational Principle Approach to Probability Density Homotopy Matching. - Xiachong Feng, Longxu Dou, Minzhi Li, Qinghao Wang, Yu Guo, Haochuan Wang, Chang Ma, Lingpeng Kong:
A Survey on Large Language Model-Based Social Agents in Game-Theoretic Scenarios. - Zhenhan Huang, Kavitha Srinivas, Horst Samulowitz, Niharika S. D'Souza, Charu C. Aggarwal, Pin-Yu Chen, Jianxi Gao:
Language Models Are Good Tabular Learners. - Gouki Minegishi, Yusuke Iwasawa, Yutaka Matsuo:
Bridging Lottery Ticket and Grokking: Understanding Grokking from Inner Structure of Networks. - Manogna Sreenivas, Soma Biswas:
Efficient Open Set Single Image Test Time Adaptation of Vision Language Models. - Masayuki Takayama, Tadahisa Okuda, Thong Pham, Tatsuyoshi Ikenoue, Shingo Fukuma, Shohei Shimizu, Akiyoshi Sannai:
Integrating Large Language Models in Causal Discovery: A Statistical Causal Approach. - Aniq Ur Rahman, Justin P. Coon:
Node Feature Forecasting in Temporal Graphs: an Interpretable Online Algorithm. - Ge Ya Luo, Zhi Hao Luo, Anthony Gosselin, Alexia Jolicoeur-Martineau, Christopher Pal:
Ctrl-V: Higher Fidelity Autonomous Vehicle Video Generation with Bounding-Box Controlled Object Motion. - Shahzad Ahmad, Sukalpa Chanda, Yogesh S. Rawat:
T2L: Efficient Zero-Shot Action Recognition with Temporal Token Learning. - Yuan Pu, Yazhe Niu, Zhenjie Yang, Jiyuan Ren, Hongsheng Li, Yu Liu:
UniZero: Generalized and Efficient Planning with Scalable Latent World Models. - Jiachen Yao, Lingjie Yi, Mayank Goswami, Chao Chen:
A Theoretical Study of Neural Network Expressive Power via Manifold Topology. - Shang Liu, Zhongze Cai, Guanting Chen, Xiaocheng Li:
Towards Better Understanding of In-Context Learning Ability from In-Context Uncertainty Quantification. - Ling-Qi Zhang, Zahra Kadkhodaie, Eero P. Simoncelli, David H. Brainard:
Generalized Compressed Sensing for Image Reconstruction with Diffusion Probabilistic Models. - Kuangyu Ding, Nachuan Xiao, Kim-Chuan Toh:
Adam-family Methods with Decoupled Weight Decay in Deep Learning. - Raul Astudillo, Kejun Li, Maegan Tucker, Chu Xin Cheng, Aaron D. Ames, Yisong Yue:
Preferential Multi-Objective Bayesian Optimization. - Yukun Li, Sijia Wang, Lifu Huang, Liping Liu:
Graph-based Confidence Calibration for Large Language Models. - Saumyaranjan Mohanty, Chimata Anudeep, Konda Reddy Mopuri:
Noise-free Loss Gradients: A Surprisingly Effective Baseline for Coreset Selection. - Liqiang Jing, Xinya Du:
FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback. - Nahush Lele, Arnav Chavan, Aryamaan Thakur, Deepak K. Gupta:
Rethinking the Value of Training-Free Structured Pruning of LLMs. - Udita Ghosh, Dripta S. Raychaudhuri, Jiachen Li, Konstantinos Karydis, Amit Roy-Chowdhury:
Robust Offline Imitation Learning from Diverse Auxiliary Data. - Hiroki Naganuma, Ryuichiro Hataya, Kotaro Yoshida, Ioannis Mitliagkas:
An Empirical Study of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration. - Amin Heyrani Nobari, Lyle Regenwetter, Giorgio Giannone, Faez Ahmed:
NITO: Neural Implicit Fields for Resolution-free and Domain-Adaptable Topology Optimization. - Mahdi Beitollahi, Alex Bie, Sobhan Hemati, Leo Maxime Brunswic, Xu Li, Xi Chen, Guojun Zhang:
Foundation Models Meet Federated Learning: A One-shot Feature-sharing Method with Privacy and Performance Guarantees. - Riccardo Cappuzzo, Aimee Coelho, Félix Lefebvre, Paolo Papotti, Gaël Varoquaux:
Retrieve, Merge, Predict: Augmenting Tables with Data Lakes. - Sourya Basu, Suhas Lohit, Matthew Brand:
G-RepsNet: A Lightweight Construction of Equivariant Networks for Arbitrary Matrix Groups. - Jeffrey Wen, Rizwan Ahmad, Philip Schniter:
Conformal Bounds on Full-Reference Image Quality for Imaging Inverse Problems. - Alan Chan, Kevin Wei, Sihao Huang, Nitarshan Rajkumar, Elija Perrier, Seth Lazar, Gillian K. Hadfield, Markus Anderljung:
Infrastructure for AI Agents. - Ruijie Jiang, Thuan Nguyen, Shuchin Aeron, Prakash Ishwar:
Hard-Negative Sampling for Contrastive Learning: Optimal Representation Geometry and Neural- vs Dimensional-Collapse. - Yang Zhang, Chenjia Bai, Bin Zhao, Junchi Yan, Xiu Li, Xuelong Li:
Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models. - Uday Bhaskar Kuchipudi, Jayadratha Gayen, Charu Sharma, Naresh Manwani:
Node Classification With Reject Option. - Matthieu Jonckheere, Chiara Mignacco, Gilles Stoltz:
Policy Optimization via Adv2: Adversarial Learning on Advantage Functions. - Christopher Scarvelis, Haitz Sáez de Ocáriz Borde, Justin Solomon:
Closed-Form Diffusion Models. - Jeremiah Birrell:
Statistical Error Bounds for GANs with Nonlinear Objective Functionals. - Camila Kolling, Till Speicher, Vedant Nanda, Mariya Toneva, Krishna P. Gummadi:
Investigating the Effects of Fairness Interventions Using Pointwise Representational Similarity. - Haoyang Li, Yiming Li, Anxin Tian, Tianhao Tang, Zhanchao Xu, Xuejia Chen, Nicole Hu, Wei Dong, Qing Li, Lei Chen:
A Survey on Large Language Model Acceleration based on KV Cache Management. - Numair Sani, Daniel Malinsky, Ilya Shpitser:
Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning. - Charles K. Assaad:
Towards identifiability of micro total effects in summary causal graphs with latent confounding: extension of the front-door criterion. - Zifeng Ding, Yifeng Li, Yuan He, Antonio Norelli, Jingcheng Wu, Volker Tresp, Michael M. Bronstein, Yunpu Ma:
DyGMamba: Efficiently Modeling Long-Term Temporal Dependency on Continuous-Time Dynamic Graphs with State Space Models. - Jun Han, Zixiang Chen, Yongqian Li, Yiwen Kou, Eran Halperin, Robert E. Tillman, Quanquan Gu:
Guided Discrete Diffusion for Electronic Health Record Generation. - Raja Kumar, Raghav Singhal, Pranamya Prashant Kulkarni, Deval Mehta, Kshitij Sharad Jadhav:
M3CoL: Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification. - Karim Kassab, Antoine Schnepf, Jean-Yves Franceschi, Laurent Caraffa, Jérémie Mary, Valérie Gouet-Brunet:
RefinedFields: Radiance Fields Refinement for Planar Scene Representations. - Pengcheng Xu, Li Yi, Gezheng Xu, Xi Chen, A. Ian McLeod, Charles Ling, Boyu Wang:
Uniform Noise Distribution and Compact Clusters: Unveiling the Success of Self-Supervised Learning in Label Noise. - Joe Watson, Chen Song, Oliver Weeger, Theo Gruner, An Thai Le, Kay Hansel, Ahmed Hendawy, Oleg Arenz, Will Trojak, Miles D. Cranmer, Carlo D'Eramo, Fabian Bülow, Tanmay Goyal, Jan Peters, Martin W. Hoffmann:
Machine Learning with Physics Knowledge for Prediction: A Survey. - Antoine Siraudin, Fragkiskos D. Malliaros, Christopher Morris:
Cometh: A continuous-time discrete-state graph diffusion model. - Hongyu Wang, Eibe Frank, Bernhard Pfahringer, Geoff Holmes:
Pruning Feature Extractor Stacking for Cross-domain Few-shot Learning. - Ziqing Xu, Hancheng Min, Salma Tarmoun, Enrique Mallada, René Vidal:
A Local Polyak-Łojasiewicz and Descent Lemma of Gradient Descent For Overparametrized Linear Models. - Davide Carbone:
Hitchhiker's guide on the relation of Energy-Based Models with other generative models, sampling and statistical physics: a comprehensive review. - Priya Kasimbeg, Vincent Roulet, Naman Agarwal, Sourabh Medapati, Fabian Pedregosa, Atish Agarwala, George E. Dahl:
How far away are truly hyperparameter-free learning algorithms? - Ben Chugg, Hongjian Wang, Aaditya Ramdas:
Time-Uniform Confidence Spheres for Means of Random Vectors. - Gaurav Chaudhary, Washim Uddin Mondal, Laxmidhar Behera:
MOORL: A Framework for Integrating Offline-Online Reinforcement Learning. - Weigao Sun, Zhen Qin, Dong Li, Xuyang Shen, Yu Qiao, Yiran Zhong:
LASP: Linear Attention Sequence Parallelism. - Reabetswe M. Nkhumise, Debabrota Basu, Tony J. Prescott, Aditya Gilra:
Studying Exploration in RL: An Optimal Transport Analysis of Occupancy Measure Trajectories. - Wei-Hsiang Liao, Yuhta Takida, Yukara Ikemiya, Zhi Zhong, Chieh-Hsin Lai, Giorgio Fabbro, Kazuki Shimada, Keisuke Toyama, Kin Wai Cheuk, Marco A. Martínez Ramírez, Shusuke Takahashi, Stefan Uhlich, Taketo Akama, Woosung Choi, Yuichiro Koyama, Yuki Mitsufuji:
Music Foundation Model as Generic Booster for Music Downstream Tasks. - Ehsan Futuhi, Shayan Karimi, Chao Gao, Martin Müller:
ETGL-DDPG: A Deep Deterministic Policy Gradient Algorithm for Sparse Reward Continuous Control. - Felix Divo, Eric Endress, Kevin Endler, Kristian Kersting, Devendra Singh Dhami:
Forecasting Company Fundamentals. - Rickard Brüel Gabrielsson, Tongzhou Wang, Manel Baradad, Justin Solomon:
Deep Augmentation: Dropout as Augmentation for Self-Supervised Learning. - Jinhao Li, Sarah Monazam Erfani, Lei Feng, James Bailey, Feng Liu:
Exploring Weak-to-Strong Generalization for CLIP-based Classification. - Zheyuan Zhan, Defang Chen, Jian-Ping Mei, Zhenghe Zhao, Jiawei Chen, Chun Chen, Siwei Lyu, Can Wang:
Conditional Image Synthesis with Diffusion Models: A Survey. - Huzaifa Arif, Pin-Yu Chen, Keerthiram Murugesan, Alex Gittens:
Group Fair Federated Learning via Stochastic Kernel Regularization. - Priscilla Ong, Manuel Haussmann, Otto Lönnroth, Harri Lähdesmäki:
Latent mixed-effect models for high-dimensional longitudinal data. - Md. Ibrahim Ibne Alam, Parikshit Ram, Soham Dan, Horst Samulowitz, Koushik Kar:
On the Utility of Existing Fine-Tuned Models on Data-Scarce Domains. - Chenhui Zhao, Liyue Shen:
Part-aware Prompted Segment Anything Model for Adaptive Segmentation. - Praveen Srinivasa Varadhan, Amogh Gulati, Ashwin Sankar, Srija Anand, Anirudh Gupta, Anirudh Mukherjee, Shiva Kumar Marepally, Ankur Bhatia, Saloni Jaju, Suvrat Bhooshan, Mitesh M. Khapra:
Rethinking MUSHRA: Addressing Modern Challenges in Text-to-Speech Evaluation. - James Y. Huang, Wenxuan Zhou, Fei Wang, Fred Morstatter, Sheng Zhang, Hoifung Poon, Muhao Chen:
Offset Unlearning for Large Language Models. - Pranav Maneriker, Aditya T. Vadlamani, Anutam Srinivasan, Yuntian He, Ali Payani, Srinivasan Parthasarathy:
Conformal Prediction: A Theoretical Note and Benchmarking Transductive Node Classification in Graphs. - Zhuoqun Chen, Xiu Yuan, Tongzhou Mu, Hao Su:
Responsive Noise-Relaying Diffusion Policy: Responsive and Efficient Visuomotor Control. - Haozhe Liu, Shikun Liu, Zijian Zhou, Mengmeng Xu, Yanping Xie, Xiao Han, Juan Camilo Pérez, Ding Liu, Kumara Kahatapitiya, Menglin Jia, Jui-Chieh Wu, Sen He, Tao Xiang, Jürgen Schmidhuber, Juan-Manuel Pérez-Rúa:
MarDini: Masked Auto-regressive Diffusion for Video Generation at Scale. - Keivan Rezaei, Khyathi Raghavi Chandu, Soheil Feizi, Yejin Choi, Faeze Brahman, Abhilasha Ravichander:
RESTOR: Knowledge Recovery in Machine Unlearning. - Hui Shen, Jingxuan Zhang, Boning Xiong, Rui Hu, Shoufa Chen, Zhongwei Wan, Xin Wang, Yu Zhang, Zixuan Gong, Guangyin Bao, Chaofan Tao, Yongfeng Huang, Ye Yuan, Mi Zhang:
Efficient Diffusion Models: A Survey. - Thang D. Bui, Matthew Ashman, Richard E. Turner:
Tighter sparse variational Gaussian processes. - Jihun Kim, Javad Lavaei:
Online Bandit Nonlinear Control with Dynamic Batch Length and Adaptive Learning Rate. - Monika Wysoczanska, Antonín Vobecký, Amaia Cardiel, Tomasz Trzcinski, Renaud Marlet, Andrei Bursuc, Oriane Siméoni:
Test-time Contrastive Concepts for Open-world Semantic Segmentation with Vision-Language Models. - Anthony Frion, Lucas Drumetz, Mauro Dalla Mura, Guillaume Tochon, Abdeldjalil Aïssa-El-Bey:
Augmented Invertible Koopman Autoencoder for long-term time series forecasting. - Youssef Mroueh, Apoorva Nitsure:
Information Theoretic Guarantees For Policy Alignment In Large Language Models. - Wenhui Cui, Haleh Akrami, Anand A. Joshi, Richard M. Leahy:
Generalizable Representation Learning for fMRI-based Neurological Disorder Identification. - Maria-Florina Balcan, Anh Tuan Nguyen, Dravyansh Sharma:
Algorithm Configuration for Structured Pfaffian Settings. - Eduardo Fernandes Montesuma, Adel El Habazi, Fred Maurice Ngolè Mboula:
Unsupervised Anomaly Detection through Mass Repulsing Optimal Transport. - Zhenhailong Wang, Joy Hsu, Xingyao Wang, Kuan-Hao Huang, Manling Li, Jiajun Wu, Heng Ji:
Visually Descriptive Language Model for Vector Graphics Reasoning. - Andreas Kirsch:
(Implicit) Ensembles of Ensembles: Epistemic Uncertainty Collapse in Large Models. - Sayan Mukherjee, Vorapong Suppakitpaisarn:
Local Differential Privacy-Preserving Spectral Clustering for General Graphs. - Taraneh Younesian, Daniel Daza, Emile van Krieken, Thiviyan Thanapalasingam, Peter Bloem:
GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks. - Ben Anson, Edward Milsom, Laurence Aitchison:
Flexible Infinite-Width Graph Convolutional Neural Networks. - Zehao Wang, Han Zhou, Matthew B. Blaschko, Tinne Tuytelaars, Minye Wu:
Diversity-Driven View Subset Selection for Indoor Novel View Synthesis. - Zohair Shafi, Ayan Chatterjee, Tina Eliassi-Rad:
Explaining Node Embeddings. - Lorenzo Dall'Amico, Enrico Maria Belliardo:
Learning distributed representations with efficient SoftMax normalization. - Keziah Naggita, Matthew R. Walter, Avrim Blum:
Learning Actionable Counterfactual Explanations in Large State Spaces. - Bernard Spiegl, Andrea Perin, Stéphane Deny, Alexander Ilin:
ViewFusion: Learning Composable Diffusion Models for Novel View Synthesis. - Hsiu-Hsuan Wang, Tan-Ha Mai, Nai-Xuan Ye, Wei-I Lin, Hsuan-Tien Lin:
CLImage: Human-Annotated Datasets for Complementary-Label Learning. - Yannick Assogba, Donghao Ren:
Evaluating Long Range Dependency Handling in Code Generation LLMs. - Mohammad Reza Rezaei, Adji Bousso Dieng:
Alternators For Sequence Modeling. - Manuel Dileo, Matteo Zignani, Sabrina Gaito:
Evaluating explainability techniques on discrete-time graph neural networks. - Caleb Cranney, Jesse G. Meyer:
AttentionSmithy: A Modular Framework for Rapid Transformer Development. - Ashutosh Baheti, Debanjana Chakraborty, Faeze Brahman, Ronan Le Bras, Ximing Lu, Nouha Dziri, Yejin Choi, Mark O. Riedl, Maarten Sap:
Multi-Attribute Constraint Satisfaction via Language Model Rewriting. - Siddhant Bhambri, Mudit Verma, Subbarao Kambhampati:
Do Think Tags Really Help LLMs Plan? A Critical Evaluation of ReAct-Style Prompting. - Guangyao Zhou, Sivaramakrishnan Swaminathan, Rajkumar Vasudeva Raju, J. Swaroop Guntupalli, Wolfgang Lehrach, Joseph Ortiz, Antoine Dedieu, Miguel Lázaro-Gredilla, Kevin Patrick Murphy:
Diffusion Model Predictive Control. - Liangliang Zhang, Haoran Bao, Yao Ma:
Extending Graph Condensation to Multi-Label Datasets: A Benchmark Study. - Po-Yi Lu, Yi-Jie Cheng, Chun-Liang Li, Hsuan-Tien Lin:
An Expanded Benchmark that Rediscovers and Affirms the Edge of Uncertainty Sampling for Active Learning in Tabular Datasets. - Zhehao Zhang, Ryan A. Rossi, Branislav Kveton, Yijia Shao, Diyi Yang, Hamed Zamani, Franck Dernoncourt, Joe Barrow, Tong Yu, Sungchul Kim, Ruiyi Zhang, Jiuxiang Gu, Tyler Derr, Hongjie Chen, Junda Wu, Xiang Chen, Zichao Wang, Subrata Mitra, Nedim Lipka, Nesreen K. Ahmed, Yu Wang:
Personalization of Large Language Models: A Survey. - Reda Bensaid, Vincent Gripon, François Leduc-Primeau, Lukas Mauch, Ghouthi Boukli Hacene, Fabien Cardinaux:
A Novel Benchmark for Few-Shot Semantic Segmentation in the Era of Foundation Models. - Hongkai Zheng, Wenda Chu, Austin Wang, Nikola Borislavov Kovachki, Ricardo Baptista, Yisong Yue:
Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse Problems. - Alexander Chebykin, Tanja Alderliesten, Peter A. N. Bosman:
To Be Greedy, or Not to Be - That Is the Question for Population Based Training Variants. - Sébastien Foulle:
Mathematical Characterization of Better-than-Random Multiclass Models. - Ju-Seung Byun, Andrew Perrault:
Normality-Guided Distributional Reinforcement Learning for Continuous Control. - Wenhan Gao, Jian Luo, Ruichen Xu, Yi Liu:
Dynamic Schwartz-Fourier Neural Operator for Enhanced Expressive Power. - Yixiang Yao, Weizhao Jin, Srivatsan Ravi:
Labeling without Seeing? Blind Annotation for Privacy-Preserving Entity Resolution. - Mengzhao Jia, Wenhao Yu, Kaixin Ma, Tianqing Fang, Zhihan Zhang, Siru Ouyang, Hongming Zhang, Dong Yu, Meng Jiang:
Leopard: A Vision Language Model for Text-Rich Multi-Image Tasks. - Marlon Tobaben, Mohamed Ali Souibgui, Rubèn Tito, Khanh Nguyen, Raouf Kerkouche, Kangsoo Jung, Joonas Jälkö, Lei Kang, Andrey Barsky, Vincent Poulain D'Andecy, Aurélie Joseph, Aashiq Muhamed, Kevin Kuo, Virginia Smith, Yusuke Yamasaki, Takumi Fukami, Kenta Niwa, Iifan Tyou, Hiro Ishii, Rio Yokota, Ragul N, Rintu Kutum, Josep Lladós, Ernest Valveny, Antti Honkela, Mario Fritz, Dimosthenis Karatzas:
NeurIPS 2023 Competition: Privacy Preserving Federated Learning Document VQA. - Chunsan Hong, Tae-Hyun Oh, Minhyuk Sung:
MemBench: Memorized Image Trigger Prompt Dataset for Diffusion Models. - Kai Yi, Laurent Condat, Peter Richtárik:
Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning. - Lola Le Breton, Quentin Fournier, John Xavier Morris, Mariam El Mezouar, Sarath Chandar:
NeoBERT: A Next Generation BERT. - Nicholas Matthew Boffi, Michael Samuel Albergo, Eric Vanden-Eijnden:
Flow map matching with stochastic interpolants: A mathematical framework for consistency models. - Jared Fernandez, Luca Wehrstedt, Leonid Shamis, Mostafa Elhoushi, Kalyan Saladi, Yonatan Bisk, Emma Strubell, Jacob Kahn:
Efficient Hardware Scaling and Diminishing Returns in Large-Scale Training of Language Models. - Mete Kemertas, Allan Douglas Jepson, Amir-massoud Farahmand:
Efficient and Accurate Optimal Transport with Mirror Descent and Conjugate Gradients. - Daniel Jarne Ornia, Giannis Delimpaltadakis, Jens Kober, Javier Alonso-Mora:
Predictable Reinforcement Learning Dynamics through Entropy Rate Minimization. - Xingyue Huang, Miguel A. Romero Orth, Pablo Barceló, Michael M. Bronstein, Ismail Ilkan Ceylan:
Link Prediction with Relational Hypergraphs. - Zhexiao Xiong, Xin Xing, Scott Workman, Subash Khanal, Nathan Jacobs:
Mixed-View Panorama Synthesis using Geospatially Guided Diffusion. - Ismail Nejjar, Hao Dong, Olga Fink:
Recall and Refine: A Simple but Effective Source-free Open-set Domain Adaptation Framework. - Akifumi Yamada, Tomohiro Shiraishi, Shuichi Nishino, Teruyuki Katsuoka, Kouichi Taji, Ichiro Takeuchi:
Change Point Detection in the Frequency Domain with Statistical Reliability. - David Chen, Xinwei Li, Eui-Jin Kim, Prateek Bansal, David J. Nott:
Multi-objective Bayesian optimization for Likelihood-Free inference in sequential sampling models of decision making. - Aritra Bandyopadhyay, Chiranjeev Bindra, Roan van Blanken, Arijit Ghosh:
Reproducibility Study of 'SLICE: Stabilized LIME for Consistent Explanations for Image Classification'. - Aditya Somasundaram, Pushkal Mishra, Ayon Borthakur:
Learning Using a Single Forward Pass. - Ioannis Athanasiadis, Fredrik Lindsten, Michael Felsberg:
Prior Learning in Introspective VAEs. - Ciwan Ceylan, Kambiz Ghoorchian, Danica Kragic:
Full-Rank Unsupervised Node Embeddings for Directed Graphs via Message Aggregation. - Mohammed Baharoon, Jonathan Klein, Dominik L. Michels:
Harmony: A Joint Self-Supervised and Weakly-Supervised Framework for Learning General Purpose Visual Representations. - Adrian Hill, Guillaume Dalle:
Sparser, Better, Faster, Stronger: Sparsity Detection for Efficient Automatic Differentiation. - Bum Jun Kim, Yoshinobu Kawahara, Sang Woo Kim:
Disappearance of Timestep Embedding: A Case Study on Neural ODE and Diffusion Models. - Leitian Tao, Xiang Chen, Tong Yu, Tung Mai, Ryan A. Rossi, Yixuan Li, Saayan Mitra:
CodeLutra: Boosting LLM Code Generation via Preference-Guided Refinement. - Xiaoyu Jiang, Sokratia Georgaka, Magnus Rattray, Mauricio A. Álvarez:
Scalable Multi-Output Gaussian Processes with Stochastic Variational Inference. - Shubhankar Agarwal, Hamzah Khan, Sandeep P. Chinchali, David Fridovich-Keil:
A Framework for Finding Local Saddle Points in Two-Player Zero-Sum Black-Box Games. - Xuelian Jiang, Tongtian Zhu, Yingxiang Xu, Can Wang, Yeyu Zhang, Fengxiang He:
Lie Symmetry Net: Preserving Conservation Laws in Modelling Financial Market Dynamics via Differential Equations. - Yu Sun, Vijja Wichitwechkarn, Ronald Clark, Mirko Kovac, Basaran Bahadir Kocer:
Metamorphic Forward Adaptation Network: Dynamically Adaptive and Modular Multi-layer Learning. - Giacomo Spigler:
Proximal Policy Distillation. - Michael J. Zellinger, Matt Thomson:
Rational Tuning of LLM Cascades via Probabilistic Modeling. - Abulikemu Abuduweili, Chenyang Yuan, Changliu Liu, Frank Permenter:
Enhancing Sample Generation of Diffusion Models using Noise Level Correction. - Izabela Kurek, Wojciech Trejter, Stipe Frkovic, Andro Erdelez:
[Re] Improving Interpretation Faithfulness for Vision Transformers. - Anirudhan Badrinath, Prabhat Agarwal, Jiajing Xu:
Unified Preference Optimization: Language Model Alignment Beyond the Preference Frontier. - Tsunehiko Tanaka, Kenshi Abe, Kaito Ariu, Tetsuro Morimura, Edgar Simo-Serra:
Return-Aligned Decision Transformer. - Jan Henrik Bertrand, Lukas Bierling, Ina Klaric, Aron Wezenberg:
[RE] GNNBoundary: Finding Boundaries and Going Beyond Them. - Zhihao Liu, Xianliang Yang, Zichuan Liu, Yifan Xia, Wei Jiang, Yuanyu Zhang, Lijuan Li, Guoliang Fan, Lei Song, Jiang Bian:
Knowing What Not to Do: Leverage Language Model Insights for Action Space Pruning in Multi-agent Reinforcement Learning. - Shivanshu Shekhar, Shreyas Singh, Tong Zhang:
SEE-DPO: Self Entropy Enhanced Direct Preference Optimization. - Simon Schrodi, Julian Schur, Max Argus, Thomas Brox:
Selective Concept Bottleneck Models Without Predefined Concepts. - Amine El Hattami, Nicolas Chapados, Christopher Pal:
Spaced Scheduling for Large Language Model Training. - Senmiao Wang, Yupeng Chen, Yushun Zhang, Ruoyu Sun, Tian Ding:
Exploring and Improving Initialization for Deep Graph Neural Networks: A Signal Propagation Perspective. - Ian Davidson, Nicolás Kennedy, S. S. Ravi:
CXAD: Contrastive Explanations for Anomaly Detection: Algorithms, Complexity Results and Experiments. - Leander Weber, Jim Berend, Moritz Weckbecker, Alexander Binder, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin:
Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation. - Ziyan Wang, Yali Du, Yudi Zhang, Meng Fang, Biwei Huang:
MACCA: Offline Multi-agent Reinforcement Learning with Causal Credit Assignment. - Supriya Manna, Niladri Sett:
Reconciling Privacy and Explainability in High-Stakes: A Systematic Inquiry. - Arto Maranjyan, Mher Safaryan, Peter Richtárik:
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity. - Chaouki Ben Issaid, Praneeth Vepakomma, Mehdi Bennis:
Tackling Feature and Sample Heterogeneity in Decentralized Multi-Task Learning: A Sheaf-Theoretic Approach. - Wenhao Li, Yudong Xu, Scott Sanner, Elias Boutros Khalil:
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects. - Antoine Godichon-Baggioni, Pierre Tarrago:
Non asymptotic analysis of Adaptive stochastic gradient algorithms and applications. - Arash Tavakoli, Sina Ghiassian, Nemanja Rakicevic:
Learning in complex action spaces without policy gradients. - Charles A. Hepburn, Yue Jin, Giovanni Montana:
State-Constrained Offline Reinforcement Learning. - Kyoichi Iwasaki, Hideitsu Hino:
Dynamics of the accelerated t-SNE. - Yeruru Asrar Ahmed, Anurag Mittal:
End-to-end Training for Text-to-Image Synthesis using Dual-Text Embeddings. - Freek Byrman, Emma Kasteleyn, Bart Kuipers, Daniel Uyterlinde:
Revisiting Discover-then-Name Concept Bottleneck Models: A Reproducibility Study. - Chaoyun Zhang, Shilin He, Jiaxu Qian, Bowen Li, Liqun Li, Si Qin, Yu Kang, Minghua Ma, Guyue Liu, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang:
Large Language Model-Brained GUI Agents: A Survey. - Sabine Muzellec, Andrea Alamia, Thomas Serre, Rufin VanRullen:
Enhancing deep neural networks through complex-valued representations and Kuramoto synchronization dynamics. - Ruiyi Zhang, Sai Ashish Somayajula, Pengtao Xie:
TapWeight: Reweighting Pretraining Objectives for Task-Adaptive Pretraining. - Jayanta Dey, Haoyin Xu, Ashwin De Silva, Joshua T. Vogelstein:
Simple Calibration via Geodesic Kernels. - Navish Kumar, Thomas Möllenhoff, Mohammad Emtiyaz Khan, Aurélien Lucchi:
Optimization Guarantees for Square-Root Natural-Gradient Variational Inference. - Michael M. Jerge, David Evans:
Pitfalls in Evaluating Inference-time Methods for Improving LLM Reliability. - Sofía Pérez Casulo, Marcelo Fiori, Federico Larroca, Gonzalo Mateos:
LASE: Learned Adjacency Spectral Embeddings. - Sayash Kapoor, Benedikt Stroebl, Zachary S. Siegel, Nitya Nadgir, Arvind Narayanan:
AI Agents That Matter. - Seyed Mahdi B. Azad, Zahra Padar, Gabriel Kalweit, Joschka Boedecker:
SR-Reward: Taking The Path More Traveled. - Ryan Chen, Ziteng Pang, Bradly C. Stadie:
Thoughts and Lessons on Using Visual Foundation Models for Manipulation. - Tomonari Tanaka, Hiroyuki Hanada, Hanting Yang, Tatsuya Aoyama, Yu Inatsu, Satoshi Akahane, Yoshito Okura, Noriaki Hashimoto, Taro Murayama, Hanju Lee, Shinya Kojima, Ichiro Takeuchi:
Distributionally Robust Coreset Selection under Covariate Shift. - Linfeng Ye, Shayan Mohajer Hamidi, En-Hui Yang:
Towards Undistillable Models by Minimizing Conditional Mutual Information. - Weizhe Chen, Sven Koenig, Bistra Dilkina:
Solving Multi-agent Path Finding as an LLM Benchmark: How, How Good and Why. - Shubham Ugare, Tarun Suresh, Hangoo Kang, Sasa Misailovic, Gagandeep Singh:
SynCode: LLM Generation with Grammar Augmentation. - Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann:
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks. - Sander Honig, Elyanne Oey, Lisanne Wallaard, Sharanda Suttorp, Clara Rus:
A reproducibility study of "User-item fairness tradeoffs in recommendations". - Hao Zhang, Di Chang, Fang Li, Mohammad Soleymani, Narendra Ahuja:
MagicPose4D: Crafting Articulated Models with Appearance and Motion Control. - Catalin E. Brita, Hieu Nguyen, Lubov Chalakova, Nikola Petrov:
Revisiting XRec: How Collaborative Signals Influence LLM-Based Recommendation Explanations. - Jorge Carrasco Pollo, Ioannis Kapetangeorgis, Joshua Rosenthal, John Hua Yao:
[Re] Benchmarking LLM Capabilities in Negotiation through Scoreable Games. - Aleksei Korneev, Jan Ramon:
A Survey on Verifiable Cross-Silo Federated Learning. - Ruben Figge, Sjoerd Gunneweg, Aaron Kuin, Mees Lindeman:
Reassessing Fairness: A Reproducibility Study of NIFA's Impact on GNN Models. - Quankai Gao, Qiangeng Xu, Zhe Cao, Ben Mildenhall, Wenchao Ma, Le Chen, Danhang Tang, Ulrich Neumann:
GaussianFlow: Splatting Gaussian Dynamics for 4D Content Creation. - Atsuyuki Miyai, Jingkang Yang, Jingyang Zhang, Yifei Ming, Yueqian Lin, Qing Yu, Go Irie, Shafiq Joty, Yixuan Li, Hai Helen Li, Ziwei Liu, Toshihiko Yamasaki, Kiyoharu Aizawa:
Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey. - Juheon Lee, Xiaohao Cai, Carola-Bibiane Schönlieb, Simon Masnou:
Neural varifolds: an aggregate representation for quantifying the geometry of point clouds. - Razan Baltaji, Saurabh Pujar, Martin Hirzel, Louis Mandel, Luca Buratti, Lav R. Varshney:
Cross-lingual Transfer in Programming Languages: An Extensive Empirical Study. - Zhuo Li, He Zhao, Jinke Ren, Anningzhe Gao, Dandan Guo, Xiang Wan, Hongyuan Zha:
Synthesizing Minority Samples for Long-tailed Classification via Distribution Matching. - Önder Akacik, Mark Hoogendoorn:
ModernTCN Revisited: A Critical Look at the Experimental Setup in General Time Series Analysis. - Siwei Yang, Bingchen Zhao, Cihang Xie:
AQA-Bench: An Interactive Benchmark for Evaluating LLMs' Sequential Reasoning Ability in Algorithmic Environments. - Youngseog Chung, Dhruv Malik, Jeff Schneider, Yuanzhi Li, Aarti Singh:
Beyond Parameter Count: Implicit Bias in Soft Mixture of Experts. - Xiaozhuang Song, Yuzhao Tu, Tianshu Yu:
Enhancing Molecular Conformer Generation via Fragment-Augmented Diffusion Pretraining. - Matteo Tucat, Anirbit Mukherjee, Procheta Sen, Mingfei Sun, Omar Rivasplata:
Regularized Gradient Clipping Provably Trains Wide and Deep Neural Networks. - Furkan Mumcu, Yasin Yilmaz:
Universal and Efficient Detection of Adversarial Data through Nonuniform Impact on Network Layers. - Atharv Mittal, Agam Pandey, Amritanshu Tiwari, Sukrit Jindal, Swadesh Swain:
Revisiting CroPA: A Reproducibility Study and Enhancements for Cross-Prompt Adversarial Transferability in Vision-Language Models. - Yiqing Liang, Mikhail Okunev, Mikaela Angelina Uy, Runfeng Li, Leonidas J. Guibas, James Tompkin, Adam W. Harley:
Monocular Dynamic Gaussian Splatting: Fast, Brittle, and Scene Complexity Rules. - Asen Dotsinski, Udit Thakur, Marko Ivanov, Mohammad Hafeez Khan, Maria Heuss:
On the Generalizability of "Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals". - Meher Changlani, Benjamin Hucko, Ioannis Kechagias, Aswin Krishna Mahadevan:
Reproducibility Study of "Improving Interpretation Faithfulness For Vision Transformers". - Jiawei Sun, Hongkang Li, Meng Wang:
Theoretical Learning Performance of Graph Networks: the Impact of Jumping Connections and Layer-wise Sparsification. - Divya Anand Sinha, Yezi Liu, Ruijie Du, Athina Markopoulou, Yanning Shen:
Gradient Inversion Attack on Graph Neural Networks. - Yangyang Shu, Yuhang Liu, Xiaofeng Cao, Qi Chen, Bowen Zhang, Ziqin Zhou, Anton van den Hengel, Lingqiao Liu:
Seeing Beyond Labels: Source-Free Domain Adaptation via Hypothesis Consolidation of Prediction Rationale. - Alice V. De Lorenci, Seung Eun Yi, Théo Moutakanni, Piotr Bojanowski, Camille Couprie, Juan C. Caicedo, Wolfgang Maximilian Anton Pernice:
Scaling Channel-Adaptive Self-Supervised Learning. - Sara Ghazanfari, Alexandre Araujo, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami:
EMMA: Efficient Visual Alignment in Multi-Modal LLMs. - Marco Bagatella, Andreas Krause, Georg Martius:
Directed Exploration in Reinforcement Learning from Linear Temporal Logic. - Ryoma Sato, Shinji Ito:
Influential Bandits: Pulling an Arm May Change the Environment. - Hanning Zhang, Pengcheng Wang, Shizhe Diao, Yong Lin, Rui Pan, Hanze Dong, Dylan Zhang, Pavlo Molchanov, Tong Zhang:
Entropy-Regularized Process Reward Model. - Andrii Skliar, Ties van Rozendaal, Romain Lepert, Todor Boinovski, Mart van Baalen, Markus Nagel, Paul N. Whatmough, Babak Ehteshami Bejnordi:
Mixture of Cache-Conditional Experts for Efficient Mobile Device Inference. - Noaman Mehmood, Kenneth E. Barner:
Disentangled Embedding through Style and Mutual Information for Domain Generalization. - Ayoub Echchahed, Pablo Samuel Castro:
A Survey of State Representation Learning for Deep Reinforcement Learning. - Giang Nguyen, Ivan Brugere, Shubham Sharma, Sanjay Kariyappa, Anh Totti Nguyen, Freddy Lécué:
Interpretable LLM-based Table Question Answering. - Blagoj Mitrevski, Arina Rak, Julian Schnitzler, Chengkun Li, Andrii Maksai, Jesse Berent, Claudiu Cristian Musat:
InkSight: Offline-to-Online Handwriting Conversion by Teaching Vision-Language Models to Read and Write. - Liu Yang, Fabian Paischer, Kaveh Hassani, Jiacheng Li, Shuai Shao, Zhang Gabriel Li, Yun He, Xue Feng, Nima Noorshams, Sem Park, Bo Long, Robert D. Nowak, Xiaoli Gao, Hamid Eghbalzadeh:
Unifying Generative and Dense Retrieval for Sequential Recommendation. - Keanu Sisouk, Julie Delon, Julien Tierny:
A User's Guide to Sampling Strategies for Sliced Optimal Transport. - Yukti Makhija, Samarth Bhatia, Manoj Kumar, Sandeep Kumar:
Modularity aided consistent attributed graph clustering via coarsening. - Yash Sinha, Murari Mandal, Mohan S. Kankanhalli:
UnSTAR: Unlearning with Self-Taught Anti-Sample Reasoning for LLMs. - Jipeng Lyu, Jiahua Dong, Yu-Xiong Wang:
SCas4D: Structural Cascaded Optimization for Boosting Persistent 4D Novel View Synthesis. - Sarah Dean, Evan Dong, Meena Jagadeesan, Liu Leqi:
Accounting for AI and Users Shaping One Another: The Role of Mathematical Models. - Mohammad Hassan Vali, Tom Bäckström:
Unsupervised Panoptic Interpretation of Latent Spaces in GANs Using Space-Filling Vector Quantization. - Takumi Fukami, Tomoya Murata, Kenta Niwa:
Adaptive Clipping for Differential Private Federated Learning in Interpolation Regimes. - Hugues Van Assel, Cédric Vincent-Cuaz, Nicolas Courty, Rémi Flamary, Pascal Frossard, Titouan Vayer:
Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein. - Tian Qin, Wei-Min Huang:
Riemann-Lebesgue Forest for Regression. - Timo Kaufmann, Paul Weng, Viktor Bengs, Eyke Hüllermeier:
A Survey of Reinforcement Learning from Human Feedback. - Mattia Opper, Roland Fernandez, Paul Smolensky, Jianfeng Gao:
TRA: Better Length Generalisation with Threshold Relative Attention. - Ciwan Ceylan, Kambiz Ghoorchian, Danica Kragic:
Disobeying Directions: Switching Random Walk Filters for Unsupervised Node Embedding Learning on Directed Graphs. - Yunpeng Jiang, Yutong Ban, Paul Weng:
Understanding and Reducing the Class-Dependent Effects of Data Augmentation with A Two-Player Game Approach. - Xiangqi Wang, Shaokun Zhang, Jose Efraim Aguilar Escamill, Qingyun Wu, Xiangliang Zhang, Jian Kang, Huazheng Wang:
Fair Online Influence Maximization. - Songhan Zhang, ShiNung Ching:
A Stochastic Polynomial Expansion for Uncertainty Propagation through Networks. - Fang Sun, Zijie Huang, Haixin Wang, Huacong Tang, Xiao Luo, Wei Wang, Yizhou Sun:
Graph Fourier Neural ODEs: Modeling Spatial-temporal Multi-scales in Molecular Dynamics. - Zarif Ikram, Ling Pan, Dianbo Liu:
Evolution guided generative flow networks. - Guy Barzilai, Ohad Shamir, Moslem Zamani:
Are Convex Optimization Curves Convex? - Riccardo Zaccone, Sai Praneeth Karimireddy, Carlo Masone, Marco Ciccone:
Communication-Efficient Heterogeneous Federated Learning with Generalized Heavy-Ball Momentum. - Yang Sui, Huy Phan, Jinqi Xiao, Tianfang Zhang, Zijie Tang, Cong Shi, Yan Wang, Yingying Chen, Bo Yuan:
DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models. - Yingheng Wang, Zichen Wang, Gil Sadeh, Luca Zancato, Alessandro Achille, George Karypis, Huzefa Rangwala:
LC-PLM: Long-context Protein Language Modeling Using Bidirectional Mamba with Shared Projection Layers. - Naveen Janaki Raman, Mateo Espinosa Zarlenga, Juyeon Heo, Mateja Jamnik:
Do Concept Bottleneck Models Respect Localities? - Jing Sun, Cong Zhang, Zhiguang Cao:
Collaboration with Dynamic Open Ad Hoc Team via Team State Modelling. - Leonardo Cotta, Chris J. Maddison:
Test-Time Fairness and Robustness in Large Language Models. - Weiyang Zhang, Xinyang Chen, Yu Sun, Weili Guan, Liqiang Nie:
Batch Training for Streaming Time Series: A Transferable Augmentation Framework to Combat Distribution Shifts. - Alberto Caron, Vasilios Mavroudis, Chris Hicks:
On Efficient Bayesian Exploration in Model-Based Reinforcement Learning. - Ziyi Zhang, Yorie Nakahira, Guannan Qu:
Predictive Control and Regret Analysis of Non-Stationary MDP with Look-ahead Information. - Jiazhi Li, Mahyar Khayatkhoei, Jiageng Zhu, Hanchen Xie, Mohamed E. Hussein, Wael AbdAlmageed:
Fairness and Disentanglement: A Critical Review of Predominant Bias in Neural Networks. - Federico Di Gennaro, Thibault Laugel, Vincent Grari, Marcin Detyniecki:
Controlled Model Debiasing through Minimal and Interpretable Updates. - Roei Schuster, Jin Peng Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot:
Learned-Database Systems Security. - Matthew Bowditch, Mike Paterson, Matthias Englert, Ranko Lazic:
Variance Dichotomy in Feature Spaces of Facial Recognition Systems is a Weak Defense against Simple Weight Manipulation Attacks. - Kim-Celine Kahl, Selen Erkan, Jeremias Traub, Carsten T. Lüth, Klaus H. Maier-Hein, Lena Maier-Hein, Paul F. Jaeger:
SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks. - Prateek Yadav, Tu Vu, Jonathan Lai, Alexandra Chronopoulou, Manaal Faruqui, Mohit Bansal, Tsendsuren Munkhdalai:
What Matters for Model Merging at Scale? - Tobias Ladner, Matthias Althoff:
Fully Automatic Neural Network Reduction for Formal Verification. - Timothée Darcet, Federico Baldassarre, Maxime Oquab, Julien Mairal, Piotr Bojanowski:
Cluster and Predict Latents Patches for Improved Masked Image Modeling. - Huajun Xi, Jianguo Huang, Kangdao Liu, Lei Feng, Hongxin Wei:
Does confidence calibration improve conformal prediction?
