Dylan J. Foster
2020 – today
- 2024
- [c49] Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie: Harnessing Density Ratios for Online Reinforcement Learning. ICLR 2024
- [c48] Adam Block, Dylan J. Foster, Akshay Krishnamurthy, Max Simchowitz, Cyril Zhang: Butterfly Effects of SGD Noise: Error Amplification in Behavior Cloning and Autoregression. ICLR 2024
- [c47] Yuda Song, Lili Wu, Dylan J. Foster, Akshay Krishnamurthy: Rich-Observation Reinforcement Learning with Continuous Latent Dynamics. ICML 2024
- [c46] Philip Amortila, Dylan J. Foster, Akshay Krishnamurthy: Scalable Online Exploration via Coverability. ICML 2024
- [i59] Philip Amortila, Dylan J. Foster, Nan Jiang, Ayush Sekhari, Tengyang Xie: Harnessing Density Ratios for Online Reinforcement Learning. CoRR abs/2401.09681 (2024)
- [i58] Philip Amortila, Dylan J. Foster, Akshay Krishnamurthy: Scalable Online Exploration via Coverability. CoRR abs/2403.06571 (2024)
- [i57] Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins: Can large language models explore in-context? CoRR abs/2403.15371 (2024)
- [i56] Dylan J. Foster, Yanjun Han, Jian Qian, Alexander Rakhlin: Online Estimation via Offline Estimation: An Information-Theoretic Framework. CoRR abs/2404.10122 (2024)
- [i55] Zakaria Mhammedi, Dylan J. Foster, Alexander Rakhlin: The Power of Resets in Online Reinforcement Learning. CoRR abs/2404.15417 (2024)
- [i54] Yuda Song, Lili Wu, Dylan J. Foster, Akshay Krishnamurthy: Rich-Observation Reinforcement Learning with Continuous Latent Dynamics. CoRR abs/2405.19269 (2024)
- [i53] Tengyang Xie, Dylan J. Foster, Akshay Krishnamurthy, Corby Rosset, Ahmed Awadallah, Alexander Rakhlin: Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF. CoRR abs/2405.21046 (2024)
- [i52] Audrey Huang, Wenhao Zhan, Tengyang Xie, Jason D. Lee, Wen Sun, Akshay Krishnamurthy, Dylan J. Foster: Correcting the Mythos of KL-Regularization: Direct Alignment without Overoptimization via Chi-Squared Preference Optimization. CoRR abs/2407.13399 (2024)
- [i51] Dylan J. Foster, Adam Block, Dipendra Misra: Is Behavior Cloning All You Need? Understanding Horizon in Imitation Learning. CoRR abs/2407.15007 (2024)
- 2023
- [j2] Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake E. Woodworth: Lower bounds for non-convex stochastic optimization. Math. Program. 199(1): 165-214 (2023)
- [j1] Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Rajiv Didolkar, Dipendra Misra, Dylan J. Foster, Lekan P. Molu, Rajan Chari, Akshay Krishnamurthy, John Langford: Guaranteed Discovery of Control-Endogenous Latent States with Multi-Step Inverse Models. Trans. Mach. Learn. Res. 2023 (2023)
- [c45] Andrew J. Wagenmaker, Dylan J. Foster: Instance-Optimality in Interactive Decision Making: Toward a Non-Asymptotic Theory. COLT 2023: 1322-1472
- [c44] Dean P. Foster, Dylan J. Foster, Noah Golowich, Alexander Rakhlin: On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring. COLT 2023: 2678-2792
- [c43] Dylan J. Foster, Noah Golowich, Yanjun Han: Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient. COLT 2023: 3969-4043
- [c42] Aleksandrs Slivkins, Karthik Abinav Sankararaman, Dylan J. Foster: Contextual Bandits with Packing and Covering Constraints: A Modular Lagrangian Approach via Regression. COLT 2023: 4633-4656
- [c41] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade: The Role of Coverage in Online Reinforcement Learning. ICLR 2023
- [c40] Dylan J. Foster, Noah Golowich, Sham M. Kakade: Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games. ICML 2023: 10188-10221
- [c39] Zakaria Mhammedi, Dylan J. Foster, Alexander Rakhlin: Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL. ICML 2023: 24659-24700
- [c38] Dylan J. Foster, Noah Golowich, Jian Qian, Alexander Rakhlin, Ayush Sekhari: Model-Free Reinforcement Learning with the Decision-Estimation Coefficient. NeurIPS 2023
- [c37] Zakaria Mhammedi, Adam Block, Dylan J. Foster, Alexander Rakhlin: Efficient Model-Free Exploration in Low-Rank MDPs. NeurIPS 2023
- [i50] Dylan J. Foster, Noah Golowich, Yanjun Han: Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient. CoRR abs/2301.08215 (2023)
- [i49] Dylan J. Foster, Noah Golowich, Sham M. Kakade: Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games. CoRR abs/2303.12287 (2023)
- [i48] Zakaria Mhammedi, Dylan J. Foster, Alexander Rakhlin: Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL. CoRR abs/2304.05889 (2023)
- [i47] Andrew Wagenmaker, Dylan J. Foster: Instance-Optimality in Interactive Decision Making: Toward a Non-Asymptotic Theory. CoRR abs/2304.12466 (2023)
- [i46] Dylan J. Foster, Dean P. Foster, Noah Golowich, Alexander Rakhlin: On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring. CoRR abs/2305.00684 (2023)
- [i45] Zakaria Mhammedi, Adam Block, Dylan J. Foster, Alexander Rakhlin: Efficient Model-Free Exploration in Low-Rank MDPs. CoRR abs/2307.03997 (2023)
- [i44] Adam Block, Dylan J. Foster, Akshay Krishnamurthy, Max Simchowitz, Cyril Zhang: Butterfly Effects of SGD Noise: Error Amplification in Behavior Cloning and Autoregression. CoRR abs/2310.11428 (2023)
- [i43] Dylan J. Foster, Alexander Rakhlin: Foundations of Reinforcement Learning and Interactive Decision Making. CoRR abs/2312.16730 (2023)
- 2022
- [c36] Dylan J. Foster, Akshay Krishnamurthy, David Simchi-Levi, Yunzong Xu: Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation. COLT 2022: 3489
- [c35] Yonathan Efroni, Dylan J. Foster, Dipendra Misra, Akshay Krishnamurthy, John Langford: Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information. COLT 2022: 5062-5127
- [c34] Yinglun Zhu, Dylan J. Foster, John Langford, Paul Mineiro: Contextual Bandits with Large Action Spaces: Made Practical. ICML 2022: 27428-27453
- [c33] Dylan J. Foster, Alexander Rakhlin, Ayush Sekhari, Karthik Sridharan: On the Complexity of Adversarial Decision Making. NeurIPS 2022
- [c32] Gene Li, Pritish Kamath, Dylan J. Foster, Nati Srebro: Understanding the Eluder Dimension. NeurIPS 2022
- [c31] Tengyang Xie, Akanksha Saran, Dylan J. Foster, Lekan P. Molu, Ida Momennejad, Nan Jiang, Paul Mineiro, John Langford: Interaction-Grounded Learning with Action-Inclusive Feedback. NeurIPS 2022
- [i42] Yonathan Efroni, Dylan J. Foster, Dipendra Misra, Akshay Krishnamurthy, John Langford: Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information. CoRR abs/2206.04282 (2022)
- [i41] Tengyang Xie, Akanksha Saran, Dylan J. Foster, Lekan P. Molu, Ida Momennejad, Nan Jiang, Paul Mineiro, John Langford: Interaction-Grounded Learning with Action-inclusive Feedback. CoRR abs/2206.08364 (2022)
- [i40] Dylan J. Foster, Alexander Rakhlin, Ayush Sekhari, Karthik Sridharan: On the Complexity of Adversarial Decision Making. CoRR abs/2206.13063 (2022)
- [i39] Yinglun Zhu, Dylan J. Foster, John Langford, Paul Mineiro: Contextual Bandits with Large Action Spaces: Made Practical. CoRR abs/2207.05836 (2022)
- [i38] Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Didolkar, Dipendra Misra, Dylan J. Foster, Lekan P. Molu, Rajan Chari, Akshay Krishnamurthy, John Langford: Guaranteed Discovery of Controllable Latent States with Multi-Step Inverse Models. CoRR abs/2207.08229 (2022)
- [i37] Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade: The Role of Coverage in Online Reinforcement Learning. CoRR abs/2210.04157 (2022)
- [i36] Aleksandrs Slivkins, Dylan J. Foster: Efficient Contextual Bandits with Knapsacks via Regression. CoRR abs/2211.07484 (2022)
- [i35] Dylan J. Foster, Noah Golowich, Jian Qian, Alexander Rakhlin, Ayush Sekhari: A Note on Model-Free Reinforcement Learning with the Decision-Estimation Coefficient. CoRR abs/2211.14250 (2022)
- 2021
- [c30] Dylan J. Foster, Alexander Rakhlin, David Simchi-Levi, Yunzong Xu: Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective. COLT 2021: 2059
- [c29] Dylan J. Foster, Akshay Krishnamurthy: Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination. NeurIPS 2021: 18907-18919
- [i34] Constantinos Daskalakis, Dylan J. Foster, Noah Golowich: Independent Policy Gradient Methods for Competitive Reinforcement Learning. CoRR abs/2101.04233 (2021)
- [i33] Gene Li, Pritish Kamath, Dylan J. Foster, Nathan Srebro: Eluder Dimension and Generalized Rank. CoRR abs/2104.06970 (2021)
- [i32] Dylan J. Foster, Akshay Krishnamurthy: Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination. CoRR abs/2107.02237 (2021)
- [i31] Dylan J. Foster, Claudio Gentile, Mehryar Mohri, Julian Zimmert: Adapting to Misspecification in Contextual Bandits. CoRR abs/2107.05745 (2021)
- [i30] Dylan J. Foster, Akshay Krishnamurthy, David Simchi-Levi, Yunzong Xu: Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation. CoRR abs/2111.10919 (2021)
- [i29] Dylan J. Foster, Sham M. Kakade, Jian Qian, Alexander Rakhlin: The Statistical Complexity of Interactive Decision Making. CoRR abs/2112.13487 (2021)
- 2020
- [c28] Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan: Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations. COLT 2020: 242-299
- [c27] Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo: Open Problem: Model Selection for Contextual Bandits. COLT 2020: 3842-3846
- [c26] Blair L. Bilodeau, Dylan J. Foster, Daniel M. Roy: Tight Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance. ICML 2020: 919-929
- [c25] Dylan J. Foster, Alexander Rakhlin: Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles. ICML 2020: 3199-3210
- [c24] Dylan J. Foster, Max Simchowitz: Logarithmic Regret for Adversarial Online Control. ICML 2020: 3211-3221
- [c23] Max Simchowitz, Dylan J. Foster: Naive Exploration is Optimal for Online LQR. ICML 2020: 8937-8948
- [c22] Dylan J. Foster, Vasilis Syrgkanis: Statistical Learning with a Nuisance Component (Extended Abstract). IJCAI 2020: 4726-4729
- [c21] Dylan J. Foster, Tuhin Sarkar, Alexander Rakhlin: Learning nonlinear dynamical systems from a single trajectory. L4DC 2020: 851-861
- [c20] Constantinos Daskalakis, Dylan J. Foster, Noah Golowich: Independent Policy Gradient Methods for Competitive Reinforcement Learning. NeurIPS 2020
- [c19] Dylan J. Foster, Claudio Gentile, Mehryar Mohri, Julian Zimmert: Adapting to Misspecification in Contextual Bandits. NeurIPS 2020
- [c18] Zakaria Mhammedi, Dylan J. Foster, Max Simchowitz, Dipendra Misra, Wen Sun, Akshay Krishnamurthy, Alexander Rakhlin, John Langford: Learning the Linear Quadratic Regulator from Nonlinear Observations. NeurIPS 2020
- [i28] Max Simchowitz, Dylan J. Foster: Naive Exploration is Optimal for Online LQR. CoRR abs/2001.09576 (2020)
- [i27] Dylan J. Foster, Alexander Rakhlin: Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles. CoRR abs/2002.04926 (2020)
- [i26] Dylan J. Foster, Max Simchowitz: Logarithmic Regret for Adversarial Online Control. CoRR abs/2003.00189 (2020)
- [i25] Dylan J. Foster, Alexander Rakhlin, Tuhin Sarkar: Learning nonlinear dynamical systems from a single trajectory. CoRR abs/2004.14681 (2020)
- [i24] Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo: Open Problem: Model Selection for Contextual Bandits. CoRR abs/2006.10940 (2020)
- [i23] Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan: Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations. CoRR abs/2006.13476 (2020)
- [i22] Blair L. Bilodeau, Dylan J. Foster, Daniel M. Roy: Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance. CoRR abs/2007.01160 (2020)
- [i21] Dylan J. Foster, Alexander Rakhlin, David Simchi-Levi, Yunzong Xu: Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective. CoRR abs/2010.03104 (2020)
- [i20] Zakaria Mhammedi, Dylan J. Foster, Max Simchowitz, Dipendra Misra, Wen Sun, Akshay Krishnamurthy, Alexander Rakhlin, John Langford: Learning the Linear Quadratic Regulator from Nonlinear Observations. CoRR abs/2010.03799 (2020)
2010 – 2019
- 2019
- [b1] Dylan J. Foster: Adaptive Learning: Algorithms and Complexity. Cornell University, USA, 2019
- [c17] Dylan J. Foster, Andrej Risteski: Sum-of-squares meets square loss: Fast rates for agnostic tensor completion. COLT 2019: 1280-1318
- [c16] Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth: The Complexity of Making the Gradient Small in Stochastic Convex Optimization. COLT 2019: 1319-1345
- [c15] Dylan J. Foster, Vasilis Syrgkanis: Statistical Learning with a Nuisance Component. COLT 2019: 1346-1348
- [c14] Jayadev Acharya, Chris De Sa, Dylan J. Foster, Karthik Sridharan: Distributed Learning with Sublinear Communication. ICML 2019: 40-50
- [c13] Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Hypothesis Set Stability and Generalization. NeurIPS 2019: 6726-6736
- [c12] Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo: Model Selection for Contextual Bandits. NeurIPS 2019: 14714-14725
- [i19] Dylan J. Foster, Vasilis Syrgkanis: Orthogonal Statistical Learning. CoRR abs/1901.09036 (2019)
- [i18] Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth: The Complexity of Making the Gradient Small in Stochastic Convex Optimization. CoRR abs/1902.04686 (2019)
- [i17] Jayadev Acharya, Christopher De Sa, Dylan J. Foster, Karthik Sridharan: Distributed Learning with Sublinear Communication. CoRR abs/1902.11259 (2019)
- [i16] Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Hypothesis Set Stability and Generalization. CoRR abs/1904.04755 (2019)
- [i15] Dylan J. Foster, Andrej Risteski: Sum-of-squares meets square loss: Fast rates for agnostic tensor completion. CoRR abs/1905.13283 (2019)
- [i14] Dylan J. Foster, Akshay Krishnamurthy, Haipeng Luo: Model selection for contextual bandits. CoRR abs/1906.00531 (2019)
- [i13] Dylan J. Foster, Alexander Rakhlin: ℓ∞ Vector Contraction for Rademacher Complexity. CoRR abs/1911.06468 (2019)
- [i12] Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake E. Woodworth: Lower Bounds for Non-Convex Stochastic Optimization. CoRR abs/1912.02365 (2019)
- 2018
- [c11] Dylan J. Foster, Karthik Sridharan, Daniel Reichman: Inference in Sparse Graphs with Pairwise Measurements and Side Information. AISTATS 2018: 1810-1818
- [c10] Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Logistic Regression: The Importance of Being Improper. COLT 2018: 167-208
- [c9] Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan: Online Learning: Sufficient Statistics and the Burkholder Method. COLT 2018: 3028-3064
- [c8] Dylan J. Foster, Alekh Agarwal, Miroslav Dudík, Haipeng Luo, Robert E. Schapire: Practical Contextual Bandits with Regression Oracles. ICML 2018: 1534-1543
- [c7] Dylan J. Foster, Akshay Krishnamurthy: Contextual bandits with surrogate losses: Margin bounds and efficient algorithms. NeurIPS 2018: 2626-2637
- [c6] Dylan J. Foster, Ayush Sekhari, Karthik Sridharan: Uniform Convergence of Gradients for Non-Convex Learning and Optimization. NeurIPS 2018: 8759-8770
- [i11] Dylan J. Foster, Satyen Kale, Mehryar Mohri, Karthik Sridharan: Parameter-free online learning via model selection. CoRR abs/1801.00101 (2018)
- [i10] Dylan J. Foster, Alekh Agarwal, Miroslav Dudík, Haipeng Luo, Robert E. Schapire: Practical Contextual Bandits with Regression Oracles. CoRR abs/1803.01088 (2018)
- [i9] Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan: Online Learning: Sufficient Statistics and the Burkholder Method. CoRR abs/1803.07617 (2018)
- [i8] Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan: Logistic Regression: The Importance of Being Improper. CoRR abs/1803.09349 (2018)
- [i7] Dylan J. Foster, Akshay Krishnamurthy: Contextual bandits with surrogate losses: Margin bounds and efficient algorithms. CoRR abs/1806.10745 (2018)
- [i6] Dylan J. Foster, Ayush Sekhari, Karthik Sridharan: Uniform Convergence of Gradients for Non-Convex Learning and Optimization. CoRR abs/1810.11059 (2018)
- 2017
- [c5] Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan: ZigZag: A New Approach to Adaptive Online Learning. COLT 2017: 876-924
- [c4] Dylan J. Foster, Satyen Kale, Mehryar Mohri, Karthik Sridharan: Parameter-Free Online Learning via Model Selection. NIPS 2017: 6020-6030
- [c3] Peter L. Bartlett, Dylan J. Foster, Matus Telgarsky: Spectrally-normalized margin bounds for neural networks. NIPS 2017: 6240-6249
- [i5] Dylan J. Foster, Daniel Reichman, Karthik Sridharan: Inference in Sparse Graphs with Pairwise Measurements and Side Information. CoRR abs/1703.02728 (2017)
- [i4] Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan: ZigZag: A new approach to adaptive online learning. CoRR abs/1704.04010 (2017)
- [i3] Peter L. Bartlett, Dylan J. Foster, Matus Telgarsky: Spectrally-normalized margin bounds for neural networks. CoRR abs/1706.08498 (2017)
- 2016
- [c2] Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, Éva Tardos: Learning in Games: Robustness of Fast Convergence. NIPS 2016: 4727-4735
- [i2] Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, Éva Tardos: Fast Convergence of Common Learning Algorithms in Games. CoRR abs/1606.06244 (2016)
- 2015
- [c1] Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan: Adaptive Online Learning. NIPS 2015: 3375-3383
- [i1] Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan: Adaptive Online Learning. CoRR abs/1508.05170 (2015)
last updated on 2024-11-08 21:32 CET by the dblp team
all metadata released as open data under CC0 1.0 license