ETRA 2014: Safety Harbor, FL, USA
- Pernilla Qvarfordt, Dan Witzner Hansen:
Eye Tracking Research and Applications, ETRA '14, Safety Harbor, FL, USA, March 26-28, 2014. ACM 2014, ISBN 978-1-4503-2751-0
Gaze-mediated input
- Jari Kangas, Jussi Rantala, Päivi Majaranta, Poika Isokoski, Roope Raisamo:
Haptic feedback to gaze events. 11-18
- Jayson Turner, Andreas Bulling, Jason Alexander, Hans Gellersen:
Cross-device gaze-supported point-to-point content transfer. 19-26
- John Paulin Hansen, Alexandre Alapetite, I. Scott MacKenzie, Emilie Møllenbach:
The use of gaze to control drones. 27-34
- Oleg Spakov, Poika Isokoski, Päivi Majaranta:
Look and lean: accurate head-assisted eye pointing. 35-42
Analysis I: eye tracking data analysis methods
- Kuno Kurzhals, Florian Heimerl, Daniel Weiskopf:
ISeeCube: visual analysis of gaze data for video. 43-50
- Daniel J. Campbell, Joseph Chang, Katarzyna Chawarska, Frederick Shic:
Saliency-based Bayesian modeling of dynamic viewing of static scenes. 51-58
- Ryan V. Ringer, Aaron P. Johnson, John G. Gaspar, Mark B. Neider, James A. Crowell, Arthur F. Kramer, Lester C. Loschky:
Creating a new dynamic measure of the useful field of view using gaze-contingent displays. 59-66
- Quan Wang, Elizabeth S. Kim, Katarzyna Chawarska, Brian Scassellati, Steven W. Zucker, Frederick Shic:
On relationships between fixation identification algorithms and fractal box counting methods. 67-74
Calibration & fixation analysis
- Jia-Bin Huang, Qin Cai, Zicheng Liu, Narendra Ahuja, Zhengyou Zhang:
Towards accurate and robust cross-ratio based gaze trackers through learning from simulation. 75-82
- Morten Lidegaard, Dan Witzner Hansen, Norbert Krüger:
Head mounted device for point-of-gaze estimation in three dimensions. 83-86
- Andrea Mazzei, Shahram Eivazi, Youri Marko, Frédéric Kaplan, Pierre Dillenbourg:
3D model-based gaze estimation in natural reading: a systematic error correction procedure based on annotated texts. 87-90
- Dan Witzner Hansen, Lars Roholm, Iván García Ferreiros:
Robust glint detection through homography normalization. 91-94
- Yunfeng Zhang, Anthony J. Hornof:
Easy post-hoc spatial recalibration of eye tracking data. 95-98
- Neil D. B. Bruce:
Towards fine-grained fixation analysis: distilling out context dependence. 99-102
3D & gaming applications
- Andrew T. Duchowski, Donald H. House, Jordan Gestring, Robert Congdon, Lech Swirski, Neil A. Dodgson, Krzysztof Krejtz, Izabela Krejtz:
Comparing estimated gaze depth in virtual and physical environments. 103-110
- Matthias Bernhard, Camillo Dell'mour, Michael Hecher, Efstathios Stavrakis, Michael Wimmer:
The effects of fast disparity adjustment in gaze-controlled stereoscopic applications. 111-118
- Margarita Vinnikov, Robert S. Allison:
Gaze-contingent depth of field in realistic scenes: the user experience. 119-126
- Andrew K. Mackenzie, Julie M. Harris:
Characterizing visual attention during driving and non-driving hazard perception tasks in a simulated environment. 127-130
- Jutta Hild, Dennis Gill, Jürgen Beyerer:
Comparing mouse and MAGIC pointing for moving target acquisition. 131-134
Analysis II: finding patterns in eye tracking data
- Michael Raschke, Dominik Herr, Tanja Blascheck, Thomas Ertl, Michael Burch, Sven Willmann, Michael Schrauf:
A visual approach for scan path comparison. 135-142
- Tommy P. Keane, Nathan D. Cahill, Jeff B. Pelz:
Eye-movement sequence statistics and hypothesis-testing with classical recurrence analysis. 143-150
- Michael Burch, Fabian Beck, Michael Raschke, Tanja Blascheck, Daniel Weiskopf:
A dynamic graph visualization perspective on eye movement data. 151-158
- Krzysztof Krejtz, Tomasz Szmidt, Andrew T. Duchowski, Izabela Krejtz:
Entropy-based statistical analysis of eye movement transitions. 159-166
Visual attention and eye movements
- Lindsey K. McIntire, John P. McIntire, R. Andy McKinley, Chuck Goodyear:
Detection of vigilance performance with pupillometry. 167-174
- Xianta Jiang, M. Stella Atkins, Geoffrey Tien, Bin Zheng, Roman Bednarik:
Pupil dilations during target-pointing respect Fitts' law. 175-182
- Nicholas M. Ross, Elio M. Santos:
The relative contributions of internal motor cues and external semantic cues to anticipatory smooth pursuit. 183-186
- Brooke E. Wooley, David S. March:
Exploring the influence of audio in directing visual attention during dynamic content. 187-190
- Amy Rouinfar, Elise Agra, Jeffrey Murray, Adam M. Larson, Lester C. Loschky, N. Sanjay Rebello:
Influence of visual cueing on students' eye movements while solving physics problems. 191-194
Mobile eye tracking & applications
- Thies Pfeiffer, Patrick Renner:
EyeSee3D: a low-cost approach for analyzing mobile 3D eye tracking data using computer vision and augmented reality technology. 195-202
- Stephen Ackland, Howell O. Istance, Simon Coupland, Stephen Vickers:
An investigation into determining head pose for gaze estimation on unmodified mobile devices. 203-206
- Erroll Wood, Andreas Bulling:
EyeTab: model-based gaze estimation on unmodified tablet computers. 207-210
- Takatsugu Hirayama, Takafumi Marutani, Sidney S. Fels, Kenji Mase:
Analysis of gaze behavior while using a multi-viewpoint video viewer. 211-214
- Thanh-Chung Dao, Roman Bednarik, Hana Vrzakova:
Heatmap rendering from large-scale distributed datasets using cloud computing. 215-218
- Lech Swirski, Neil A. Dodgson:
Rendering synthetic ground truth images for eye tracker evaluation. 219-222
- Jorge Bernal, Francisco Javier Sánchez, Fernando Vilariño, Mirko Arnold, Anarta Ghosh, Gerard Lacey:
Experts vs. novices: applying eye-tracking methodologies in colonoscopy video screening for polyp search. 223-226
Poster abstracts
- Prathusha K. Sarma, Tarunraj Singh:
A mixture distribution for visual foraging. 227-230
- Rachel Turner, Michael Falcone, Bonita Sharif, Alina Lazar:
An eye-tracking study assessing the comprehension of C++ and Python source code. 231-234
- Andrea Mazzei, Tabea Koll, Frédéric Kaplan, Pierre Dillenbourg:
Attentional processes in natural reading: the effect of margin annotations on reading behaviour and comprehension. 235-238
- Brendan John, Srinivas Sridharan, Reynold J. Bailey:
Collaborative eye tracking for image analysis. 239-242
- Laura Sesma-Sanchez, Arantxa Villanueva, Rafael Cabeza:
Design issues of remote eye tracking systems with large range of movement. 243-246
- Elizabeth S. Kim, Adam Naples, Giuliana Vaccarino Gearty, Quan Wang, Seth Wallace, Carla A. Wall, Michael Perlmutter, Fred Volkmar, Frederick Shic, Linda Friedlaender, Jennifer Kowitt, Brian Reichow:
Development of an untethered, mobile, low-cost head-mounted eye tracker. 247-250
- Kentaro Takemura, Shunki Kimura, Sara Suda:
Estimating point-of-regard using corneal surface image. 251-254
- Kenneth Alberto Funes Mora, Florent Monay, Jean-Marc Odobez:
EYEDIAP: a database for the development and evaluation of gaze estimation algorithms from RGB and RGB-D cameras. 255-258
- Benedict C. O. F. Fehringer:
Eye tracking gaze visualiser: eye tracker and experimental software independent visualisation of gaze data. 259-262
- Selina Sharmin, Mari Wiklund:
Gaze behaviour and linguistic processing of dynamic text in print interpreting. 263-266
- Zhengyou Zhang, Qin Cai:
Improving cross-ratio-based eye tracking techniques by leveraging the binocular fixation constraint. 267-270
- Binbin Ye, Yusuke Sugano, Yoichi Sato:
Influence of stimulus and viewing task types on a learning-based visual saliency model. 271-274
- Xuan Guo, Rui Li, Cecilia Ovesdotter Alm, Qi Yu, Jeff B. Pelz, Pengcheng Shi, Anne R. Haake:
Infusing perceptual expertise and domain knowledge into a human-centered image retrieval system: a prototype application. 275-278
- Bogdan Hoanca, Timothy C. Smith, Kenrick J. Mock:
Machine-extracted eye gaze features: how well do they correlate to sight-reading abilities of piano players? 279-282
- Jacek Gwizdka:
News stories relevance effects on eye-movements. 283-286
- Christopher Kanan, Nicholas A. Ray, Dina N. F. Bseiso, Janet Hui-wen Hsiao, Garrison W. Cottrell:
Predicting an observer's task using multi-fixation pattern analysis. 287-290
- Oleg Spakov, Yulia Gizatdinova:
Real-time hidden gaze point correction. 291-294
- Michael Maurus, Jan Hendrik Hammer, Jürgen Beyerer:
Realistic heatmap visualization for interactive analysis of 3D gaze data. 295-298
- Pascual Martínez-Gómez, Akshay Minocha, Jin Huang, Michael Carl, Srinivas Bangalore, Akiko Aizawa:
Recognition of translator expertise using sequences of fixations and keystrokes. 299-302
- Preethi Vaidyanathan, Jeff B. Pelz, Cecilia Ovesdotter Alm, Pengcheng Shi, Anne R. Haake:
Recurrence quantification analysis reveals eye-movement behavior differences between experts and novices. 303-306
- Michael Burch, Hansjörg Schmauder, Michael Raschke, Daniel Weiskopf:
Saccade plots. 307-310
- Thomas B. Kinsman, Jeff B. Pelz:
Simulating refraction and reflection of ocular surfaces for algorithm validation in outdoor mobile eye tracking videos. 311-314
- Peter Kiefer, Ioannis Giannopoulos, Dominik Kremer, Christoph Schlieder, Martin Raubal:
Starting to get bored: an outdoor eye tracking study of tourists exploring a city panorama. 315-318
- Thomas C. Kübler, Enkelejda Kasneci, Wolfgang Rosenstiel:
SubsMatch: scanpath similarity in dynamic scenes based on subsequence frequencies. 319-322
- Enkelejda Kasneci, Gjergji Kasneci, Thomas C. Kübler, Wolfgang Rosenstiel:
The applicability of probabilistic methods to the online recognition of fixations and saccades in dynamic scenes. 323-326
- Deepak Akkil, Poika Isokoski, Jari Kangas, Jussi Rantala, Roope Raisamo:
TraQuMe: a tool for measuring the gaze tracking quality. 327-330
- Geoffrey Tien, M. Stella Atkins, Xianta Jiang, Bin Zheng, Roman Bednarik:
Verbal gaze instruction matches visual gaze guidance in laparoscopic skills training. 331-334
- Teresa Busjahn, Roman Bednarik, Carsten Schulte:
What influences dwell time during source code reading?: analysis of element type and frequency as factors. 335-338
Demo/video session
- Michael Raschke, Dominik Herr, Tanja Blascheck, Thomas Ertl, Michael Burch, Sven Willmann, Michael Schrauf:
A visual approach for scan path comparison. 339-346
- Jorge Bernal, Francisco Javier Sánchez, Fernando Vilariño, Mirko Arnold, Anarta Ghosh, Gerard Lacey:
Experts vs. novices: applying eye-tracking methodologies in colonoscopy video screening for polyp search. 347-350
- Kuno Kurzhals, Florian Heimerl, Daniel Weiskopf:
ISeeCube: visual analysis of gaze data for video. 351-358
- Addison Mayberry, Pan Hu, Benjamin M. Marlin, Christopher D. Salthouse, Deepak Ganesan:
iShadow: the computational eyeglass system. 359-360
- Patrick Renner, Thies Pfeiffer:
Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction. 361-362
- Takahiro Yoshioka, Satoshi Nakashima, Junichi Odagiri, Hideki Tomimori, Taku Fukui:
Pupil detection in the presence of specular reflection. 363-364
- Corey Holland, Oleg V. Komogortsev:
Software framework for an ocular biometric system. 365-366
- Thies Pfeiffer, Patrick Renner:
EyeSee3D: a low-cost approach for analyzing mobile 3D eye tracking data using computer vision and augmented reality technology. 367-374
Doctoral symposium extended abstracts
- Feridun M. Celebi, Elizabeth S. Kim, Quan Wang, Carla A. Wall, Frederick Shic:
A smooth pursuit calibration technique. 375-376
- Estefanía Domínguez Martínez:
Assessment of the improvement of signal recorded in infant EEG by using eye tracking algorithms. 377-378
- Marzena Rusanowska:
Attentional retraining in depressive disorders. 379-380
- Hiroyuki Manabe, Tohru Yagi:
EOG-based eye gesture input with audio staging. 381-382
- Thomas C. Kübler, Enkelejda Kasneci, Wolfgang Rosenstiel:
Gaze guidance for the visually impaired. 383-384
- Nina Chrobot:
The role of processing fluency in online consumer behavior: evaluating fluency by tracking eye movements. 385-386
- Lien Dupont, Veerle Van Eetvelde:
The use of eye-tracking in landscape perception research. 387-388
- Tanja Blascheck, Thomas Ertl:
Towards visualizing eye movement data from interactive stimuli. 389-390