Atsunori Ogawa
Publications
2024
- [c101] Naohiro Tawara, Marc Delcroix, Atsushi Ando, Atsunori Ogawa: NTT Speaker Diarization System for Chime-7: Multi-Domain, Multi-Microphone End-to-End and Vector Clustering Diarization. ICASSP 2024: 11281-11285
- [c100] William Chen, Takatomo Kano, Atsunori Ogawa, Marc Delcroix, Shinji Watanabe: Train Long and Test Long: Leveraging Full Document Contexts in Speech Processing. ICASSP 2024: 13066-13070
- [i12] Atsunori Ogawa, Naoyuki Kamo, Kohei Matsuura, Takanori Ashihara, Takafumi Moriya, Takatomo Kano, Naohiro Tawara, Marc Delcroix: Applying LLMs for Rescoring N-best ASR Hypotheses of Casual Conversations: Effects of Domain Adaptation and Context Carry-over. CoRR abs/2406.18972 (2024)
- [i11] Kohei Matsuura, Takanori Ashihara, Takafumi Moriya, Masato Mimura, Takatomo Kano, Atsunori Ogawa, Marc Delcroix: Sentence-wise Speech Summarization: Task, Datasets, and End-to-End Modeling with LM Knowledge Distillation. CoRR abs/2408.00205 (2024)
2023
- [c95] Takatomo Kano, Atsunori Ogawa, Marc Delcroix, Kohei Matsuura, Takanori Ashihara, William Chen, Shinji Watanabe: Summarize While Translating: Universal Model With Parallel Decoding for Summarization and Translation. ASRU 2023: 1-8
- [c94] Roshan S. Sharma, William Chen, Takatomo Kano, Ruchira Sharma, Siddhant Arora, Shinji Watanabe, Atsunori Ogawa, Marc Delcroix, Rita Singh, Bhiksha Raj: Espnet-Summ: Introducing a Novel Large Dataset, Toolkit, and a Cross-Corpora Evaluation of Speech Summarization Systems. ASRU 2023: 1-8
- [c93] Takatomo Kano, Atsunori Ogawa, Marc Delcroix, Roshan S. Sharma, Kohei Matsuura, Shinji Watanabe: Speech Summarization of Long Spoken Document: Improving Memory Efficiency of Speech/Text Encoders. ICASSP 2023: 1-5
- [c92] Kohei Matsuura, Takanori Ashihara, Takafumi Moriya, Tomohiro Tanaka, Atsunori Ogawa, Marc Delcroix, Ryo Masumura: Leveraging Large Text Corpora For End-To-End Speech Summarization. ICASSP 2023: 1-5
- [c91] Atsunori Ogawa, Takafumi Moriya, Naoyuki Kamo, Naohiro Tawara, Marc Delcroix: Iterative Shallow Fusion of Backward Language Model for End-To-End Speech Recognition. ICASSP 2023: 1-5
- [c90] Takafumi Moriya, Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Takanori Ashihara, Kohei Matsuura, Tomohiro Tanaka, Ryo Masumura, Atsunori Ogawa, Taichi Asami: Knowledge Distillation for Neural Transducer-based Target-Speaker ASR: Exploiting Parallel Mixture/Single-Talker Speech Data. INTERSPEECH 2023: 899-903
- [c87] Kohei Matsuura, Takanori Ashihara, Takafumi Moriya, Tomohiro Tanaka, Takatomo Kano, Atsunori Ogawa, Marc Delcroix: Transfer Learning from Pre-trained Language Models Improves End-to-End Speech Summarization. INTERSPEECH 2023: 2943-2947
- [c86] Marc Delcroix, Naohiro Tawara, Mireia Díez, Federico Landini, Anna Silnova, Atsunori Ogawa, Tomohiro Nakatani, Lukás Burget, Shoko Araki: Multi-Stream Extension of Variational Bayesian HMM Clustering (MS-VBx) for Combined End-to-End and Vector Clustering-based Diarization. INTERSPEECH 2023: 3477-3481
- [i10] Kohei Matsuura, Takanori Ashihara, Takafumi Moriya, Tomohiro Tanaka, Atsunori Ogawa, Marc Delcroix, Ryo Masumura: Leveraging Large Text Corpora for End-to-End Speech Summarization. CoRR abs/2303.00978 (2023)
- [i9] Marc Delcroix, Naohiro Tawara, Mireia Díez, Federico Landini, Anna Silnova, Atsunori Ogawa, Tomohiro Nakatani, Lukás Burget, Shoko Araki: Multi-Stream Extension of Variational Bayesian HMM Clustering (MS-VBx) for Combined End-to-End and Vector Clustering-based Diarization. CoRR abs/2305.13580 (2023)
- [i8] Kohei Matsuura, Takanori Ashihara, Takafumi Moriya, Tomohiro Tanaka, Takatomo Kano, Atsunori Ogawa, Marc Delcroix: Transfer Learning from Pre-trained Language Models Improves End-to-End Speech Summarization. CoRR abs/2306.04233 (2023)
- [i7] Naohiro Tawara, Marc Delcroix, Atsushi Ando, Atsunori Ogawa: NTT speaker diarization system for CHiME-7: multi-domain, multi-microphone End-to-end and vector clustering diarization. CoRR abs/2309.12656 (2023)
- [i6] Atsunori Ogawa, Takafumi Moriya, Naoyuki Kamo, Naohiro Tawara, Marc Delcroix: Iterative Shallow Fusion of Backward Language Model for End-to-End Speech Recognition. CoRR abs/2310.11010 (2023)
- [i5] Atsunori Ogawa, Naohiro Tawara, Marc Delcroix, Shoko Araki: Lattice Rescoring Based on Large Ensemble of Complementary Neural Language Models. CoRR abs/2312.12764 (2023)
- [i4] Atsunori Ogawa, Naohiro Tawara, Takatomo Kano, Marc Delcroix: BLSTM-Based Confidence Estimation for End-to-End Speech Recognition. CoRR abs/2312.14609 (2023)
2022
- [c85] Takatomo Kano, Atsunori Ogawa, Marc Delcroix, Shinji Watanabe: Integrating Multiple ASR Systems into NLP Backend with Attention Fusion. ICASSP 2022: 6237-6241
- [c84] Atsunori Ogawa, Naohiro Tawara, Marc Delcroix, Shoko Araki: Lattice Rescoring Based on Large Ensemble of Complementary Neural Language Models. ICASSP 2022: 6517-6521
2021
- [c79] Takatomo Kano, Atsunori Ogawa, Marc Delcroix, Shinji Watanabe: Attention-Based Multi-Hypothesis Fusion for Speech Summarization. ASRU 2021: 487-494
- [c78] Atsunori Ogawa, Naohiro Tawara, Takatomo Kano, Marc Delcroix: BLSTM-Based Confidence Estimation for End-to-End Speech Recognition. ICASSP 2021: 6383-6387
- [i1] Takatomo Kano, Atsunori Ogawa, Marc Delcroix, Shinji Watanabe: Attention-based Multi-hypothesis Fusion for Speech Summarization. CoRR abs/2111.08201 (2021)
2020
- [c73] Naohiro Tawara, Atsunori Ogawa, Tomoharu Iwata, Marc Delcroix, Tetsuji Ogawa: Frame-Level Phoneme-Invariant Speaker Embedding for Text-Independent Speaker Recognition on Extremely Short Utterances. ICASSP 2020: 6799-6803
- [c71] Atsunori Ogawa, Naohiro Tawara, Marc Delcroix: Language Model Data Augmentation Based on Text Domain Transfer. INTERSPEECH 2020: 4926-4930
2019
- [j14] Michael Hentschel, Marc Delcroix, Atsunori Ogawa, Tomoharu Iwata, Tomohiro Nakatani: Feature Based Domain Adaptation for Neural Network Language Models with Factorised Hidden Layers. IEICE Trans. Inf. Syst. 102-D(3): 598-608 (2019)
- [c70] Shigeki Karita, Shinji Watanabe, Tomoharu Iwata, Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani: Semi-supervised End-to-end Speech Recognition Using Text-to-speech and Autoencoders. ICASSP 2019: 6166-6170
- [c69] Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Tomohiro Nakatani: A Unified Framework for Neural Speech Separation and Extraction. ICASSP 2019: 6975-6979
- [c67] Michael Hentschel, Marc Delcroix, Atsunori Ogawa, Tomoharu Iwata, Tomohiro Nakatani: A Unified Framework for Feature-based Domain Adaptation of Neural Network Language Models. ICASSP 2019: 7250-7254
- [c66] Marc Delcroix, Shinji Watanabe, Tsubasa Ochiai, Keisuke Kinoshita, Shigeki Karita, Atsunori Ogawa, Tomohiro Nakatani: End-to-End SpeakerBeam for Single Channel Target Speech Recognition. INTERSPEECH 2019: 451-455
- [c65] Shigeki Karita, Nelson Enrique Yalta Soplin, Shinji Watanabe, Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani: Improving Transformer-Based End-to-End Speech Recognition with Connectionist Temporal Classification and Language Model Integration. INTERSPEECH 2019: 1408-1412
- [c64] Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Tomohiro Nakatani: Multimodal SpeakerBeam: Single Channel Target Speech Extraction with Audio-Visual Speaker Clues. INTERSPEECH 2019: 2718-2722
- [c63] Atsunori Ogawa, Marc Delcroix, Shigeki Karita, Tomohiro Nakatani: Improved Deep Duel Model for Rescoring N-Best Speech Recognition List Using Backward LSTMLM and Ensemble Encoders. INTERSPEECH 2019: 3900-3904
2018
- [j13] Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Christian Huemmer, Tomohiro Nakatani: Context Adaptive Neural Network Based Acoustic Models for Rapid Adaptation. IEEE ACM Trans. Audio Speech Lang. Process. 26(5): 895-908 (2018)
- [c61] Michael Hentschel, Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani: Feature-Based Learning Hidden Unit Contributions for Domain Adaptation of RNN-LMs. APSIPA 2018: 1692-1696
- [c60] Michael Hentschel, Marc Delcroix, Atsunori Ogawa, Tomoharu Iwata, Tomohiro Nakatani: Factorised Hidden Layer Based Domain Adaptation for Recurrent Neural Network Language Models. APSIPA 2018: 1940-1944
- [c59] Marc Delcroix, Katerina Zmolíková, Keisuke Kinoshita, Atsunori Ogawa, Tomohiro Nakatani: Single Channel Target Speaker Extraction and Recognition with Speaker Beam. ICASSP 2018: 5554-5558
- [c58] Shigeki Karita, Atsunori Ogawa, Marc Delcroix, Tomohiro Nakatani: Sequence Training of Encoder-Decoder Model Using Policy Gradient for End-to-End Speech Recognition. ICASSP 2018: 5839-5843
- [c56] Atsunori Ogawa, Marc Delcroix, Shigeki Karita, Tomohiro Nakatani: Rescoring N-Best Speech Recognition List Based on One-on-One Hypothesis Comparison Using Encoder-Classifier Model. ICASSP 2018: 6099-6103
- [c55] Shigeki Karita, Shinji Watanabe, Tomoharu Iwata, Atsunori Ogawa, Marc Delcroix: Semi-Supervised End-to-End Speech Recognition. INTERSPEECH 2018: 2-6
- [c54] Marc Delcroix, Shinji Watanabe, Atsunori Ogawa, Shigeki Karita, Tomohiro Nakatani: Auxiliary Feature Based Adaptation of End-to-end ASR Systems. INTERSPEECH 2018: 2444-2448
2017
- [c53] Michael Hentschel, Atsunori Ogawa, Marc Delcroix, Tomohiro Nakatani, Yuji Matsumoto: Exploiting imbalanced textual and acoustic data for training prosodically-enhanced RNNLMs. APSIPA 2017: 618-621
- [c51] Katerina Zmolíková, Marc Delcroix, Keisuke Kinoshita, Takuya Higuchi, Atsunori Ogawa, Tomohiro Nakatani: Learning speaker representation for neural network based multichannel speaker extraction. ASRU 2017: 8-15
- [c50] Shoko Araki, Nobutaka Ito, Marc Delcroix, Atsunori Ogawa, Keisuke Kinoshita, Takuya Higuchi, Takuya Yoshioka, Dung T. Tran, Shigeki Karita, Tomohiro Nakatani: Online meeting recognition in noisy environments with time-frequency mask based MVDR beamforming. HSCMA 2017: 16-20
- [c49] Keisuke Kinoshita, Marc Delcroix, Atsunori Ogawa, Takuya Higuchi, Tomohiro Nakatani: Deep mixture density network for statistical model-based feature enhancement. ICASSP 2017: 251-255
- [c48] Christian Huemmer, Marc Delcroix, Atsunori Ogawa, Keisuke Kinoshita, Tomohiro Nakatani, Walter Kellermann: Online environmental adaptation of CNN-based acoustic models using spatial diffuseness features. ICASSP 2017: 4875-4879
- [c47] Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Taichi Asami, Shigeru Katagiri, Tomohiro Nakatani: Cumulative moving averaged bottleneck speaker vectors for online speaker adaptation of CNN-based acoustic models. ICASSP 2017: 5175-5179
- [c46] Dung T. Tran, Marc Delcroix, Atsunori Ogawa, Christian Huemmer, Tomohiro Nakatani: Feedback connection for deep neural network-based acoustic modeling. ICASSP 2017: 5240-5244
- [c45] Dung T. Tran, Marc Delcroix, Shigeki Karita, Michael Hentschel, Atsunori Ogawa, Tomohiro Nakatani: Unfolded Deep Recurrent Convolutional Neural Network with Jump Ahead Connections for Acoustic Modeling. INTERSPEECH 2017: 1596-1600
- [c44] Shigeki Karita, Atsunori Ogawa, Marc Delcroix, Tomohiro Nakatani: Forward-Backward Convolutional LSTM for Acoustic Modeling. INTERSPEECH 2017: 1601-1605
- [c43] Atsunori Ogawa, Keisuke Kinoshita, Marc Delcroix, Tomohiro Nakatani: Improved Example-Based Speech Enhancement by Using Deep Neural Network Acoustic Model for Noise Robust Example Search. INTERSPEECH 2017: 1963-1967
- [c42] Katerina Zmolíková, Marc Delcroix, Keisuke Kinoshita, Takuya Higuchi, Atsunori Ogawa, Tomohiro Nakatani: Speaker-Aware Neural Network Based Beamformer for Speaker Extraction in Speech Mixtures. INTERSPEECH 2017: 2655-2659
- [c41] Dung T. Tran, Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani: Uncertainty Decoding with Adaptive Sampling for Noise Robust DNN-Based Acoustic Modeling. INTERSPEECH 2017: 3852-3856
- [p1] Marc Delcroix, Takuya Yoshioka, Nobutaka Ito, Atsunori Ogawa, Keisuke Kinoshita, Masakiyo Fujimoto, Takuya Higuchi, Shoko Araki, Tomohiro Nakatani: Multichannel Speech Enhancement Approaches to DNN-Based Far-Field Speech Recognition. New Era for Robust Speech Recognition, Exploiting Deep Learning 2017: 21-49
2016
- [j11] Marc Delcroix, Atsunori Ogawa, Seong-Jun Hahm, Tomohiro Nakatani, Atsushi Nakamura: Differenced maximum mutual information criterion for robust unsupervised acoustic model adaptation. Comput. Speech Lang. 36: 24-41 (2016)
- [c39] Marc Delcroix, Keisuke Kinoshita, Chengzhu Yu, Atsunori Ogawa, Takuya Yoshioka, Tomohiro Nakatani: Context adaptive deep neural networks for fast acoustic model adaptation in noisy conditions. ICASSP 2016: 5270-5274
- [c38] Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, Takuya Yoshioka, Dung T. Tran, Tomohiro Nakatani: Context Adaptive Neural Network for Rapid Adaptation of Deep CNN Based Acoustic Models. INTERSPEECH 2016: 1573-1577
- [c37] Atsunori Ogawa, Shogo Seki, Keisuke Kinoshita, Marc Delcroix, Takuya Yoshioka, Tomohiro Nakatani, Kazuya Takeda: Robust Example Search Using Bottleneck Features for Example-Based Speech Enhancement. INTERSPEECH 2016: 3733-3737
- [c36] Dung T. Tran, Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani: Factorized Linear Input Network for Acoustic Model Adaptation in Noisy Conditions. INTERSPEECH 2016: 3813-3817
2015
- [j9] Marc Delcroix, Takuya Yoshioka, Atsunori Ogawa, Yotaro Kubo, Masakiyo Fujimoto, Nobutaka Ito, Keisuke Kinoshita, Miquel Espi, Shoko Araki, Takaaki Hori, Tomohiro Nakatani: Strategies for distant speech recognition in reverberant environments. EURASIP J. Adv. Signal Process. 2015: 60 (2015)
- [c35] Takuya Yoshioka, Nobutaka Ito, Marc Delcroix, Atsunori Ogawa, Keisuke Kinoshita, Masakiyo Fujimoto, Chengzhu Yu, Wojciech J. Fabian, Miquel Espi, Takuya Higuchi, Shoko Araki, Tomohiro Nakatani: The NTT CHiME-3 system: Advances in speech enhancement and recognition for mobile multi-microphone devices. ASRU 2015: 436-443
- [c32] Keisuke Kinoshita, Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani: Text-informed speech enhancement with deep neural networks. INTERSPEECH 2015: 1760-1764
- [c31] Chengzhu Yu, Atsunori Ogawa, Marc Delcroix, Takuya Yoshioka, Tomohiro Nakatani, John H. L. Hansen: Robust i-vector extraction for neural network adaptation in noisy environment. INTERSPEECH 2015: 2854-2857
2014
- [c30] Marc Delcroix, Takuya Yoshioka, Atsunori Ogawa, Yotaro Kubo, Masakiyo Fujimoto, Nobutaka Ito, Keisuke Kinoshita, Miquel Espi, Shoko Araki, Takaaki Hori, Tomohiro Nakatani: Defeating reverberation: Advanced dereverberation and recognition techniques for hands-free speech recognition. GlobalSIP 2014: 522-526
2013
- [j6] Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Atsunori Ogawa, Takaaki Hori, Shinji Watanabe, Masakiyo Fujimoto, Takuya Yoshioka, Takanobu Oba, Yotaro Kubo, Mehrez Souden, Seong-Jun Hahm, Atsushi Nakamura: Speech recognition in living rooms: Integrated speech enhancement and recognition system based on spatial, spectral and temporal modeling of sounds. Comput. Speech Lang. 27(3): 851-873 (2013)
- [c25] Marc Delcroix, Atsunori Ogawa, Seong-Jun Hahm, Tomohiro Nakatani, Atsushi Nakamura: Unsupervised discriminative adaptation using differenced maximum mutual information based linear regression. ICASSP 2013: 7888-7892
- [c24] Seong-Jun Hahm, Atsunori Ogawa, Marc Delcroix, Masakiyo Fujimoto, Takaaki Hori, Atsushi Nakamura: Feature space variational Bayesian linear regression and its combination with model space VBLR. ICASSP 2013: 7898-7902
2012
- [c21] Marc Delcroix, Atsunori Ogawa, Shinji Watanabe, Tomohiro Nakatani, Atsushi Nakamura: Discriminative feature transforms using differenced maximum mutual information. ICASSP 2012: 4753-4756
- [c17] Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani, Atsushi Nakamura: Dynamic variance adaptation using differenced maximum mutual information. MLSLP 2012: 9-12
last updated on 2024-09-09 01:13 CEST by the dblp team
all metadata released as open data under CC0 1.0 license