Juhan Nam
Person information
- affiliation: KAIST, Music and Audio Computing Lab, Republic of Korea
2020 – today
- 2024
- [c59] Haven Kim, Jongmin Jung, Dasaem Jeong, Juhan Nam: K-pop Lyric Translation: Dataset, Analysis, and Neural-Modelling. LREC/COLING 2024: 9974-9987
- [c58] Seungheon Doh, Minhee Lee, Dasaem Jeong, Juhan Nam: Enriching Music Descriptions with a Finetuned-LLM and Metadata for Text-to-Music Retrieval. ICASSP 2024: 826-830
- [c57] Jiyun Park, Sangeon Yong, Taegyun Kwon, Juhan Nam: A Real-Time Lyrics Alignment System Using Chroma and Phonetic Features for Classical Vocal Performance. ICASSP 2024: 1371-1375
- [c56] Yoonjin Chung, Junwon Lee, Juhan Nam: T-Foley: A Controllable Waveform-Domain Diffusion Model for Temporal-Event-Guided Foley Sound Synthesis. ICASSP 2024: 6820-6824
- [c55] Jaekwon Im, Juhan Nam: DiffRENT: A Diffusion Model for Recording Environment Transfer of Speech. ICASSP 2024: 7425-7429
- [c54] Hounsu Kim, Soonbeom Choi, Juhan Nam: Expressive Acoustic Guitar Sound Synthesis with an Instrument-Specific Input Representation and Diffusion Outpainting. ICASSP 2024: 7620-7624
- [c53] Wootaek Lim, Juhan Nam: Enhancing Spatial Audio Generation with Source Separation and Channel Panning Loss. ICASSP 2024: 8321-8325
- [c52] Yeonghyeon Lee, Inmo Yeon, Juhan Nam, Joon Son Chung: VoiceLDM: Text-to-Speech with Environmental Context. ICASSP 2024: 12566-12571
- [i53] Jaekwon Im, Juhan Nam: DiffRENT: A Diffusion Model for Recording Environment Transfer of Speech. CoRR abs/2401.08102 (2024)
- [i52] Jiyun Park, Sangeon Yong, Taegyun Kwon, Juhan Nam: A Real-Time Lyrics Alignment System Using Chroma and Phonetic Features for Classical Vocal Performance. CoRR abs/2401.09200 (2024)
- [i51] Yoonjin Chung, Junwon Lee, Juhan Nam: T-Foley: A Controllable Waveform-Domain Diffusion Model for Temporal-Event-Guided Foley Sound Synthesis. CoRR abs/2401.09294 (2024)
- [i50] Hounsu Kim, Soonbeom Choi, Juhan Nam: Expressive Acoustic Guitar Sound Synthesis with an Instrument-Specific Input Representation and Diffusion Outpainting. CoRR abs/2401.13498 (2024)
- [i49] Taegyun Kwon, Dasaem Jeong, Juhan Nam: Towards Efficient and Real-Time Piano Transcription Using Neural Autoregressive Models. CoRR abs/2404.06818 (2024)
- [i48] Seungheon Doh, Jongpil Lee, Dasaem Jeong, Juhan Nam: Musical Word Embedding for Music Tagging and Retrieval. CoRR abs/2404.13569 (2024)
- [i47] Gyubin Lee, Hounsu Kim, Junwon Lee, Juhan Nam: CONMOD: Controllable Neural Frame-based Modulation Effects. CoRR abs/2406.13935 (2024)
- [i46] Junwon Lee, Jaekwon Im, Dabin Kim, Juhan Nam: Video-Foley: Two-Stage Video-To-Sound Generation via Temporal Event Condition For Foley Sound. CoRR abs/2408.11915 (2024)
- [i45] Seungheon Doh, Minhee Lee, Dasaem Jeong, Juhan Nam: Enriching Music Descriptions with a Finetuned-LLM and Metadata for Text-to-Music Retrieval. CoRR abs/2410.03264 (2024)
- 2023
- [j10] Zhiyao Duan, Peter van Kranenburg, Juhan Nam, Preeti Rao: Editorial for TISMIR Special Collection: Cultural Diversity in MIR Research. Trans. Int. Soc. Music. Inf. Retr. 6(1): 203-205 (2023)
- [c51] Yuya Yamamoto, Juhan Nam, Hiroko Terasawa: PrimaDNN': A Characteristics-Aware DNN Customization for Singing Technique Detection. EUSIPCO 2023: 406-410
- [c50] Seungheon Doh, Minz Won, Keunwoo Choi, Juhan Nam: Textless Speech-to-Music Retrieval Using Emotion Similarity. ICASSP 2023: 1-5
- [c49] Seungheon Doh, Minz Won, Keunwoo Choi, Juhan Nam: Toward Universal Text-to-Music Retrieval. ICASSP 2023: 1-5
- [c48] Hyemi Kim, Jiyun Park, Taegyun Kwon, Dasaem Jeong, Juhan Nam: A Study of Audio Mixing Methods for Piano Transcription in Violin-Piano Ensembles. ICASSP 2023: 1-5
- [c47] Sangeon Yong, Li Su, Juhan Nam: A Phoneme-Informed Neural Network Model for Note-Level Singing Transcription. ICASSP 2023: 1-5
- [c46] Eugene Hwang, Joonhyung Bae, Wonil Kim, Juhan Nam, Jeongmi Lee: Sense of Convergence: Exploring the Artistic Potential of Cross-modal Sensory Transfer in Virtual Reality. ISMAR-Adjunct 2023: 722-726
- [c45] Seungheon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam: LP-MusicCaps: LLM-Based Pseudo Music Captioning. ISMIR 2023: 409-416
- [c44] Haven Kim, Kento Watanabe, Masataka Goto, Juhan Nam: A Computational Evaluation Framework for Singable Lyric Translation. ISMIR 2023: 774-781
- [c43] Vanessa Tan, Junghyun Nam, Juhan Nam, Junyong Noh: Motion to Dance Music Generation using Latent Diffusion Model. SIGGRAPH Asia Technical Communications 2023: 5:1-5:4
- [c42] Taejun Kim, Juhan Nam: All-in-One Metrical and Functional Structure Analysis with Neighborhood Attentions on Demixed Audio. WASPAA 2023: 1-5
- [i44] Haven Kim, Seungheon Doh, Junwon Lee, Juhan Nam: Music Playlist Title Generation Using Artist Information. CoRR abs/2301.08145 (2023)
- [i43] Seungheon Doh, Minz Won, Keunwoo Choi, Juhan Nam: Textless Speech-to-Music Retrieval Using Emotion Similarity. CoRR abs/2303.10539 (2023)
- [i42] Sangeon Yong, Li Su, Juhan Nam: A Phoneme-Informed Neural Network Model for Note-Level Singing Transcription. CoRR abs/2304.05917 (2023)
- [i41] Hyemi Kim, Jiyun Park, Taegyun Kwon, Dasaem Jeong, Juhan Nam: A Study of Audio Mixing Methods for Piano Transcription in Violin-Piano Ensembles. CoRR abs/2305.13758 (2023)
- [i40] Yuya Yamamoto, Juhan Nam, Hiroko Terasawa: PrimaDNN': A Characteristics-Aware DNN Customization for Singing Technique Detection. CoRR abs/2306.14191 (2023)
- [i39] Seungheon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam: LP-MusicCaps: LLM-Based Pseudo Music Captioning. CoRR abs/2307.16372 (2023)
- [i38] Haven Kim, Kento Watanabe, Masataka Goto, Juhan Nam: A Computational Evaluation Framework for Singable Lyric Translation. CoRR abs/2308.13715 (2023)
- [i37] Haven Kim, Jongmin Jung, Dasaem Jeong, Juhan Nam: K-pop Lyric Translation: Dataset, Analysis, and Neural-Modelling. CoRR abs/2309.11093 (2023)
- [i36] Yeonghyeon Lee, Inmo Yeon, Juhan Nam, Joon Son Chung: VoiceLDM: Text-to-Speech with Environmental Context. CoRR abs/2309.13664 (2023)
- [i35] Ilaria Manco, Benno Weck, Seungheon Doh, Minz Won, Yixiao Zhang, Dmitry Bogdanov, Yusong Wu, Ke Chen, Philip Tovstogan, Emmanouil Benetos, Elio Quinton, György Fazekas, Juhan Nam: The Song Describer Dataset: A Corpus of Audio Captions for Music-and-Language Evaluation. CoRR abs/2311.10057 (2023)
- 2022
- [c41] Joonhyung Bae, Karam Eum, Haram Kwon, Seolhee Lee, Juhan Nam, Young Yim Doh: Classy Trash Monster: An Educational Game for Teaching Machine Learning to Non-major Students. CHI Extended Abstracts 2022: 479:1-479:7
- [c40] Haven Kim, Jaeran Choi, Young Yim Doh, Juhan Nam: The Melody of the Mysterious Stones: A VR Mindfulness Game Using Sound Spatialization. CHI Extended Abstracts 2022: 481:1-481:6
- [c39] Jinwook Kim, Pooseung Koh, Seokjun Kang, Hyunyoung Jang, Jeongmi Lee, Juhan Nam, Young Yim Doh: Seung-ee and Kkaebi: A VR-Mobile Cross Platform Game based on Co-Presence for a Balanced Immersive Experience. CHI PLAY 2022: 273-278
- [c38] Sangeun Kum, Jongpil Lee, Keunhyoung Luke Kim, Taehyoung Kim, Juhan Nam: Pseudo-Label Transfer from Frame-Level to Note-Level in a Teacher-Student Framework for Singing Transcription from Polyphonic Music. ICASSP 2022: 796-800
- [c37] Soonbeom Choi, Juhan Nam: A Melody-Unsupervision Model for Singing Voice Synthesis. ICASSP 2022: 7242-7246
- [c36] Yuya Yamamoto, Juhan Nam, Hiroko Terasawa: Deformable CNN and Imbalance-Aware Feature Learning for Singing Technique Classification. INTERSPEECH 2022: 2778-2782
- [c35] Eunjin Choi, Yoonjin Chung, Seolhee Lee, JongIk Jeon, Taegyun Kwon, Juhan Nam: YM2413-MDB: A Multi-Instrumental FM Video Game Music Dataset with Emotion Annotations. ISMIR 2022: 100-108
- [c34] Yuya Yamamoto, Juhan Nam, Hiroko Terasawa: Analysis and Detection of Singing Techniques in Repertoires of J-POP Solo Singers. ISMIR 2022: 384-391
- [i34] Sangeun Kum, Jongpil Lee, Keunhyoung Luke Kim, Taehyoung Kim, Juhan Nam: Pseudo-Label Transfer from Frame-Level to Note-Level in a Teacher-Student Framework for Singing Transcription from Polyphonic Music. CoRR abs/2203.13422 (2022)
- [i33] Yuya Yamamoto, Juhan Nam, Hiroko Terasawa: Deformable CNN and Imbalance-Aware Feature Learning for Singing Technique Classification. CoRR abs/2206.12230 (2022)
- [i32] Yuya Yamamoto, Juhan Nam, Hiroko Terasawa: Analysis and Detection of Singing Techniques in Repertoires of J-POP Solo Singers. CoRR abs/2210.17367 (2022)
- [i31] Taesu Kim, Seungheon Doh, Gyunpyo Lee, Hyungseok Jeon, Juhan Nam, Hyeon-Jeong Suk: Hi, KIA: A Speech Emotion Recognition Dataset for Wake-Up Words. CoRR abs/2211.03371 (2022)
- [i30] Eunjin Choi, Yoonjin Chung, Seolhee Lee, JongIk Jeon, Taegyun Kwon, Juhan Nam: YM2413-MDB: A Multi-Instrumental FM Video Game Music Dataset with Emotion Annotations. CoRR abs/2211.07131 (2022)
- [i29] Seungheon Doh, Minz Won, Keunwoo Choi, Juhan Nam: Toward Universal Text-to-Music Retrieval. CoRR abs/2211.14558 (2022)
- [i28] Jaekwon Im, Soonbeom Choi, Sangeon Yong, Juhan Nam: Neural Vocoder Feature Estimation for Dry Singing Voice Separation. CoRR abs/2211.15948 (2022)
- [i27] Meinard Müller, Rachel M. Bittner, Juhan Nam: Deep Learning and Knowledge Integration for Music Audio Analysis (Dagstuhl Seminar 22082). Dagstuhl Reports 12(2): 103-133 (2022)
- 2021
- [c33] Yuya Yamamoto, Juhan Nam, Hiroko Terasawa, Yuzuru Hiraga: Investigating Time-Frequency Representations for Audio Feature Extraction in Singing Technique Classification. APSIPA ASC 2021: 890-896
- [c32] Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang: EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. ISMIR 2021: 318-325
- [c31] Keunhyoung Luke Kim, Jongpil Lee, Sangeun Kum, Juhan Nam: Learning a Cross-Domain Embedding Space of Vocal and Mixed Audio with a Structure-Preserving Triplet Loss. ISMIR 2021: 334-341
- [c30] Taejun Kim, Yi-Hsuan Yang, Juhan Nam: Reverse-Engineering the Transition Regions of Real-World DJ Mixes Using Sub-band Analysis with Convex Optimization. NIME 2021
- [e1] Jin Ha Lee, Alexander Lerch, Zhiyao Duan, Juhan Nam, Preeti Rao, Peter van Kranenburg, Ajay Srinivasamurthy: Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021. 2021, ISBN 978-1-7327299-0-2
- [i26] Kyungyun Lee, Wonil Kim, Juhan Nam: PocketVAE: A Two-step Model for Groove Generation and Control. CoRR abs/2107.05009 (2021)
- [i25] Hsiao-Tzu Hung, Joann Ching, Seungheon Doh, Nabin Kim, Juhan Nam, Yi-Hsuan Yang: EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation. CoRR abs/2108.01374 (2021)
- [i24] Soonbeom Choi, Juhan Nam: A Melody-Unsupervision Model for Singing Voice Synthesis. CoRR abs/2110.06546 (2021)
- [i23] Seungheon Doh, Junwon Lee, Juhan Nam: Music Playlist Title Generation: A Machine-Translation Approach. CoRR abs/2110.07354 (2021)
- 2020
- [j9] Doheum Park, Juhan Nam, Juyong Park: Novelty and Influence of Creative Works, and Quantifying Patterns of Advances Based on Probabilistic References Networks. EPJ Data Sci. 9(1): 2 (2020)
- [j8] Keunhyoung Luke Kim, Jongpil Lee, Sangeun Kum, Chae Lin Park, Juhan Nam: Semantic Tagging of Singing Voices in Popular Music Recordings. IEEE ACM Trans. Audio Speech Lang. Process. 28: 1656-1668 (2020)
- [c29] Jongpil Lee, Nicholas J. Bryan, Justin Salamon, Zeyu Jin, Juhan Nam: Disentangled Multidimensional Metric Learning for Music Similarity. ICASSP 2020: 6-10
- [c28] Soonbeom Choi, Wonil Kim, Saebyul Park, Sangeon Yong, Juhan Nam: Korean Singing Voice Synthesis Based on Auto-Regressive Boundary Equilibrium GAN. ICASSP 2020: 7234-7238
- [c27] Sangeun Kum, Jing-Hua Lin, Li Su, Juhan Nam: Semi-supervised Learning Using Teacher-Student Models for Vocal Melody Extraction. ISMIR 2020: 93-100
- [c26] Jongpil Lee, Nicholas J. Bryan, Justin Salamon, Zeyu Jin, Juhan Nam: Metric Learning vs Classification for Disentangled Music Representation Learning. ISMIR 2020: 439-445
- [c25] Taegyun Kwon, Dasaem Jeong, Juhan Nam: Polyphonic Piano Transcription Using Autoregressive Multi-State Note Model. ISMIR 2020: 454-461
- [c24] Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, Juhan Nam: A Computational Analysis of Real-World DJ Mixes Using Mix-To-Track Subsequence Alignment. ISMIR 2020: 764-770
- [i22] Seungheon Doh, Jongpil Lee, Tae Hong Park, Juhan Nam: Musical Word Embedding: Bridging the Gap between Listening Contexts and Music. CoRR abs/2008.01190 (2020)
- [i21] Jongpil Lee, Nicholas J. Bryan, Justin Salamon, Zeyu Jin, Juhan Nam: Disentangled Multidimensional Metric Learning for Music Similarity. CoRR abs/2008.03720 (2020)
- [i20] Jongpil Lee, Nicholas J. Bryan, Justin Salamon, Zeyu Jin, Juhan Nam: Metric Learning vs Classification for Disentangled Music Representation Learning. CoRR abs/2008.03729 (2020)
- [i19] Sangeun Kum, Jing-Hua Lin, Li Su, Juhan Nam: Semi-supervised Learning Using Teacher-Student Models for Vocal Melody Extraction. CoRR abs/2008.06358 (2020)
- [i18] Taejun Kim, Minsuk Choi, Evan Sacks, Yi-Hsuan Yang, Juhan Nam: A Computational Analysis of Real-World DJ Mixes Using Mix-To-Track Subsequence Alignment. CoRR abs/2008.10267 (2020)
- [i17] Taegyun Kwon, Dasaem Jeong, Juhan Nam: Polyphonic Piano Transcription Using Autoregressive Multi-State Note Model. CoRR abs/2010.01104 (2020)
2010 – 2019
- 2019
- [j7] Hendrik Purwins, Bob L. Sturm, Bo Li, Juhan Nam, Abeer Alwan: Introduction to the Issue on Data Science: Machine Learning for Audio Signal Processing. IEEE J. Sel. Top. Signal Process. 13(2): 203-205 (2019)
- [j6] Taejun Kim, Jongpil Lee, Juhan Nam: Comparison and Analysis of SampleCNN Architectures for Audio Classification. IEEE J. Sel. Top. Signal Process. 13(2): 285-297 (2019)
- [j5] Juhan Nam, Keunwoo Choi, Jongpil Lee, Szu-Yu Chou, Yi-Hsuan Yang: Deep Learning for Audio-Based Music Classification and Tagging: Teaching Computers to Distinguish Rock from Bach. IEEE Signal Process. Mag. 36(1): 41-51 (2019)
- [c23] Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Juhan Nam: Graph Neural Network for Music Score Data and Modeling Expressive Piano Performance. ICML 2019: 3060-3070
- [c22] Jeong Choi, Jongpil Lee, Jiyoung Park, Juhan Nam: Zero-shot Learning for Audio-based Music Classification and Tagging. ISMIR 2019: 67-74
- [c21] Kyungyun Lee, Juhan Nam: Learning a Joint Embedding Space of Monophonic and Mixed Music Signals for Singing Voice. ISMIR 2019: 295-302
- [c20] Saebyul Park, Taegyun Kwon, Jongpil Lee, Jeounghoon Kim, Juhan Nam: A Cross-Scape Plot Representation for Visualizing Symbolic Melodic Similarity. ISMIR 2019: 423-430
- [c19] Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Kyogu Lee, Juhan Nam: VirtuosoNet: A Hierarchical RNN-based System for Modeling Expressive Piano Performance. ISMIR 2019: 908-915
- [i16] Doheum Park, Juhan Nam, Juyong Park: Quantifying Novelty and Influence, and the Patterns of Paradigm Shifts. CoRR abs/1905.08665 (2019)
- [i15] Jeong Choi, Jongpil Lee, Jiyoung Park, Juhan Nam: Zero-shot Learning and Knowledge Transfer in Music Classification and Tagging. CoRR abs/1906.08615 (2019)
- [i14] Kyungyun Lee, Juhan Nam: Learning a Joint Embedding Space of Monophonic and Mixed Music Signals for Singing Voice. CoRR abs/1906.11139 (2019)
- [i13] Jongpil Lee, Jiyoung Park, Juhan Nam: Representation Learning of Music Using Artist, Album, and Track Information. CoRR abs/1906.11783 (2019)
- [i12] Jeong Choi, Jongpil Lee, Jiyoung Park, Juhan Nam: Zero-shot Learning for Audio-based Music Classification and Tagging. CoRR abs/1907.02670 (2019)
- [i11] Taejun Kim, Juhan Nam: Temporal Feedback Convolutional Recurrent Neural Networks for Keyword Spotting. CoRR abs/1911.01803 (2019)
- 2018
- [c18] Sangeon Yong, Juhan Nam: Singing Expression Transfer from One Voice to Another for a Given Song. ICASSP 2018: 151-155
- [c17] Taejun Kim, Jongpil Lee, Juhan Nam: Sample-Level CNN Architectures for Music Auto-Tagging Using Raw Waveforms. ICASSP 2018: 366-370
- [c16] Dasaem Jeong, Taegyun Kwon, Juhan Nam: A Timbre-based Approach to Estimate Key Velocity from Polyphonic Piano Recordings. ISMIR 2018: 120-127
- [c15] Kyungyun Lee, Keunwoo Choi, Juhan Nam: Revisiting Singing Voice Detection: A Quantitative Review and the Future Outlook. ISMIR 2018: 506-513
- [c14] Jiyoung Park, Jongpil Lee, Jangyeon Park, Jung-Woo Ha, Juhan Nam: Representation Learning of Music Using Artist Labels. ISMIR 2018: 717-724
- [i10] Kyungyun Lee, Keunwoo Choi, Juhan Nam: Revisiting Singing Voice Detection: A Quantitative Review and the Future Outlook. CoRR abs/1806.01180 (2018)
- [i9] Jongpil Lee, Kyungyun Lee, Jiyoung Park, Jangyeon Park, Juhan Nam: Deep Content-User Embedding Model for Music Recommendation. CoRR abs/1807.06786 (2018)
- [i8] Jiyoung Park, Donghyun Kim, Jongpil Lee, Sangeun Kum, Juhan Nam: A Hybrid of Deep Audio Feature and i-vector for Artist Recognition. CoRR abs/1807.09208 (2018)
- 2017
- [j4] Jongpil Lee, Juhan Nam: Multi-Level and Multi-Scale Feature Aggregation Using Pretrained Convolutional Neural Networks for Music Auto-Tagging. IEEE Signal Process. Lett. 24(8): 1208-1212 (2017)
- [c13] Edward Jangwon Lee, Sangeon Yong, Soonbeom Choi, Liwei Chan, Roshan Lalintha Peiris, Juhan Nam: Use the Force: Incorporating Touch Force Sensors into Mobile Music Interaction. CMMR 2017: 574-585
- [c12] Jongpil Lee, Jiyoung Park, Sangeun Kum, Youngho Jeong, Juhan Nam: Combining Multi-Scale Features Using Sample-Level Deep Convolutional Neural Networks for Weakly Supervised Sound Event Detection. DCASE 2017: 69-73
- [c11] Dasaem Jeong, Juhan Nam: Note Intensity Estimation of Piano Recordings by Score-Informed NMF. Semantic Audio 2017
- [c10] Sangeon Yong, Edward Jangwon Lee, Roshan Lalintha Peiris, Liwei Chan, Juhan Nam: ForceClicks: Enabling Efficient Button Interaction with Single Finger Touch. TEI 2017: 489-493
- [i7] Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, Juhan Nam: Sample-level Deep Convolutional Neural Networks for Music Auto-tagging Using Raw Waveforms. CoRR abs/1703.01789 (2017)
- [i6] Jongpil Lee, Juhan Nam: Multi-Level and Multi-Scale Feature Aggregation Using Pre-trained Convolutional Neural Networks for Music Auto-tagging. CoRR abs/1703.01793 (2017)
- [i5] Jongpil Lee, Juhan Nam: Multi-Level and Multi-Scale Feature Aggregation Using Sample-level Deep Convolutional Neural Networks for Music Classification. CoRR abs/1706.06810 (2017)
- [i4] Jiyoung Park, Jongpil Lee, Jangyeon Park, Jung-Woo Ha, Juhan Nam: Representation Learning of Music Using Artist Labels. CoRR abs/1710.06648 (2017)
- [i3] Taejun Kim, Jongpil Lee, Juhan Nam: Sample-level CNN Architectures for Music Auto-tagging Using Raw Waveforms. CoRR abs/1710.10451 (2017)
- [i2] Taegyun Kwon, Dasaem Jeong, Juhan Nam: Audio-to-score Alignment of Piano Music Using RNN-based Automatic Music Transcription. CoRR abs/1711.04480 (2017)
- [i1] Jongpil Lee, Taejun Kim, Jiyoung Park, Juhan Nam: Raw Waveform-based Audio Classification Using Sample-level CNN Architectures. CoRR abs/1712.00866 (2017)
- 2016
- [c9] Sangeun Kum, Changheun Oh, Juhan Nam: Melody Extraction on Vocal Segments Using Multi-Column Deep Neural Networks. ISMIR 2016: 819-825
- 2015
- [c8] Seunghun Kim, Juhan Nam, Graham Wakefield: Toward Certain Sonic Properties of an Audio Feedback System by Evolutionary Control of Second-Order Structures. EvoMUSART 2015: 113-124
- [c7] Seunghun Kim, Graham Wakefield, Juhan Nam: Augmenting Room Acoustics and System Interaction for Intentional Control of Audio Feedback. ICMC 2015
- 2013
- [c6] Kyogu Lee, Ziwon Hyung, Juhan Nam: Acoustic Scene Classification Using Sparse Feature Learning and Event-based Pooling. WASPAA 2013: 1-4
- 2012
- [j3] Jussi Pekonen, Juhan Nam, Julius O. Smith III, Vesa Välimäki: Optimized Polynomial Spline Basis Function Design for Quasi-Bandlimited Classical Waveform Synthesis. IEEE Signal Process. Lett. 19(3): 159-162 (2012)
- [c5] Juhan Nam, Gautham J. Mysore, Paris Smaragdis: Sound Recognition in Mixtures. LVA/ICA 2012: 405-413
- [c4]