"Transformer-Based LM Surprisal Predicts Human Reading Times Best with ..."
Byung-Doh Oh, William Schuler (2023)
- Byung-Doh Oh, William Schuler: Transformer-Based LM Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens. CoRR abs/2304.11389 (2023)