Rio Yokota
2020 – today

- 2024
- [j25] Hiroyuki Ootomo, Katsuhisa Ozaki, Rio Yokota: DGEMM on integer matrix multiplication unit. Int. J. High Perform. Comput. Appl. 38(4): 297-313 (2024)
- [j24] Qianxiang Ma, Rio Yokota: An inherently parallel ℋ2-ULV factorization for solving dense linear systems on GPUs. Int. J. High Perform. Comput. Appl. 38(4): 314-336 (2024)
- [j23] Kenta Niwa, Hiro Ishii, Hiroshi Sawada, Akinori Fujino, Noboru Harada, Rio Yokota: Natural Gradient Primal-Dual Method for Decentralized Learning. IEEE Trans. Signal Inf. Process. over Networks 10: 417-433 (2024)
- [c43] Ryosuke Yamada, Kensho Hara, Hirokatsu Kataoka, Koshi Makihara, Nakamasa Inoue, Rio Yokota, Yutaka Satoh: Formula-Supervised Visual-Geometric Pre-training. ECCV (22) 2024: 57-74
- [c42] Satoki Ishikawa, Rio Yokota: When Does Second-Order Optimization Speed Up Training? Tiny Papers @ ICLR 2024
- [c41] Yuesong Shen, Nico Daheim, Bai Cong, Peter Nickl, Gian Maria Marconi, Clement Bazan, Rio Yokota, Iryna Gurevych, Daniel Cremers, Mohammad Emtiyaz Khan, Thomas Möllenhoff: Variational Learning is Effective for Large Deep Networks. ICML 2024
- [i47] Yuesong Shen, Nico Daheim, Bai Cong, Peter Nickl, Gian Maria Marconi, Clement Bazan, Rio Yokota, Iryna Gurevych, Daniel Cremers, Mohammad Emtiyaz Khan, Thomas Möllenhoff: Variational Learning is Effective for Large Deep Networks. CoRR abs/2402.17641 (2024)
- [i46] Taishi Nakamura, Mayank Mishra, Simone Tedeschi, Yekun Chai, Jason T. Stillerman, Felix Friedrich, Prateek Yadav, Tanmay Laud, Minh Chien Vu, Terry Yue Zhuo, Diganta Misra, Ben Bogin, Xuan-Son Vu, Marzena Karpinska, Arnav Varma Dantuluri, Wojciech Kusa, Tommaso Furlanello, Rio Yokota, Niklas Muennighoff, Suhas Pai, Tosin P. Adewumi, Veronika Laippala, Xiaozhe Yao, Adalberto Junior, Alpay Ariyak, Aleksandr Drozd, Jordan Clive, Kshitij Gupta, Liangyu Chen, Qi Sun, Ken Tsui, Noah Persaud, Nour Moustafa-Fahmy, Tianlong Chen, Mohit Bansal, Nicolo Monti, Tai Dang, Ziyang Luo, Tien-Tung Bui, Roberto Navigli, Virendra Mehta, Matthew Blumberg, Victor May, Huu Nguyen, Sampo Pyysalo: Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order. CoRR abs/2404.00399 (2024)
- [i45] Naoaki Okazaki, Kakeru Hattori, Hirai Shota, Hiroki Iida, Masanari Ohi, Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Rio Yokota, Sakae Mizuki: Building a Large Japanese Web Corpus for Large Language Models. CoRR abs/2404.17733 (2024)
- [i44] Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Hiroki Iida, Masanari Ohi, Kakeru Hattori, Hirai Shota, Sakae Mizuki, Rio Yokota, Naoaki Okazaki: Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities. CoRR abs/2404.17790 (2024)
- [i43] Akiko Aizawa, Eiji Aramaki, Bowen Chen, Fei Cheng, Hiroyuki Deguchi, Rintaro Enomoto, Kazuki Fujii, Kensuke Fukumoto, Takuya Fukushima, Namgi Han, Yuto Harada, Chikara Hashimoto, Tatsuya Hiraoka, Shohei Hisada, Sosuke Hosokawa, Lu Jie, Keisuke Kamata, Teruhito Kanazawa, Hiroki Kanezashi, Hiroshi Kataoka, Satoru Katsumata, Daisuke Kawahara, Seiya Kawano, Atsushi Keyaki, Keisuke Kiryu, Hirokazu Kiyomaru, Takashi Kodama, Takahiro Kubo, Yohei Kuga, Ryoma Kumon, Shuhei Kurita, Sadao Kurohashi, Conglong Li, Taiki Maekawa, Hiroshi Matsuda, Yusuke Miyao, Kentaro Mizuki, Sakae Mizuki, Yugo Murawaki, Ryo Nakamura, Taishi Nakamura, Kouta Nakayama, Tomoka Nakazato, Takuro Niitsuma, Jiro Nishitoba, Yusuke Oda, Hayato Ogawa, Takumi Okamoto, Naoaki Okazaki, Yohei Oseki, Shintaro Ozaki, Koki Ryu, Rafal Rzepka, Keisuke Sakaguchi, Shota Sasaki, Satoshi Sekine, Kohei Suda, Saku Sugawara, Issa Sugiura, Hiroaki Sugiyama, Hisami Suzuki, Jun Suzuki, Toyotaro Suzumura, Kensuke Tachibana, Yu Takagi, Kyosuke Takami, Koichi Takeda, Masashi Takeshita, Masahiro Tanaka, Kenjiro Taura, Arseny Tolmachev, Nobuhiro Ueda, Zhen Wan, Shuntaro Yada, Sakiko Yahata, Yuya Yamamoto, Yusuke Yamauchi, Hitomi Yanaka, Rio Yokota, Koichiro Yoshino: LLM-jp: A Cross-organizational Project for the Research and Development of Fully Open Japanese LLMs. CoRR abs/2407.03963 (2024)
- [i42] Ryo Nakamura, Ryu Tadokoro, Ryosuke Yamada, Yuki M. Asano, Iro Laina, Christian Rupprecht, Nakamasa Inoue, Rio Yokota, Hirokatsu Kataoka: Scaling Backwards: Minimal Synthetic Pre-training? CoRR abs/2408.00677 (2024)
- [i41] Go Ohtani, Ryu Tadokoro, Ryosuke Yamada, Yuki M. Asano, Iro Laina, Christian Rupprecht, Nakamasa Inoue, Rio Yokota, Hirokatsu Kataoka, Yoshimitsu Aoki: Rethinking Image Super-Resolution from Training Data Perspectives. CoRR abs/2409.00768 (2024)
- 2023
- [j22] Damian W. I. Rouson, Konrad Hinsen, Jeffrey C. Carver, Irina Tezaur, John Shalf, Rio Yokota, Anshu Dubey: The 2023 Society for Industrial and Applied Mathematics Conference on Computational Science and Engineering. Comput. Sci. Eng. 25(2): 41-43 (2023)
- [j21] Erik A. Daxberger, Siddharth Swaroop, Kazuki Osawa, Rio Yokota, Richard E. Turner, José Miguel Hernández-Lobato, Mohammad Emtiyaz Khan: Improving Continual Learning by Accurate Gradient Reconstructions of the Past. Trans. Mach. Learn. Res. 2023 (2023)
- [j20] Hiroki Naganuma, Kartik Ahuja, Shiro Takagi, Tetsuya Motokawa, Rio Yokota, Kohta Ishikawa, Ikuro Sato, Ioannis Mitliagkas: Empirical Study on Optimizer Selection for Out-of-Distribution Generalization. Trans. Mach. Learn. Res. 2023 (2023)
- [j19] Sameer Deshmukh, Rio Yokota, George Bosilca: Cache Optimization and Performance Modeling of Batched, Small, and Rectangular Matrix Multiplication on Intel, AMD, and Fujitsu Processors. ACM Trans. Math. Softw. 49(3): 23:1-23:29 (2023)
- [c40] Tomoya Takahashi, Shingo Yashima, Kohta Ishikawa, Ikuro Sato, Rio Yokota: Pixel-level Contrastive Learning of Driving Videos with Optical Flow. CVPR Workshops 2023: 3180-3187
- [c39] Sora Takashima, Ryo Hayamizu, Nakamasa Inoue, Hirokatsu Kataoka, Rio Yokota: Visual Atoms: Pre-Training Vision Transformers with Sinusoidal Waves. CVPR 2023: 18579-18588
- [c38] Edgar Josafat Martinez-Noriega, Rio Yokota: Towards real-time formula driven dataset feed for large scale deep learning training. High Performance Computing for Imaging 2023: 1-6
- [c37] Hiroyuki Ootomo, Rio Yokota: Reducing shared memory footprint to leverage high throughput on Tensor Cores and its flexible API extension library. HPC Asia 2023: 1-8
- [c36] Risa Shinoda, Ryo Hayamizu, Kodai Nakashima, Nakamasa Inoue, Rio Yokota, Hirokatsu Kataoka: SegRCDB: Semantic Segmentation via Formula-Driven Supervised Learning. ICCV 2023: 19997-20006
- [c35] Ryo Nakamura, Hirokatsu Kataoka, Sora Takashima, Edgar Josafat Martinez-Noriega, Rio Yokota, Nakamasa Inoue: Pre-training Vision Transformers with Very Limited Synthesized Images. ICCV 2023: 20303-20312
- [c34] Sameer Deshmukh, Rio Yokota, George Bosilca, Qianxiang Ma: O(N) distributed direct factorization of structured dense matrices using runtime systems. ICPP 2023: 1-10
- [c33] M. Ridwan Apriansyah, Rio Yokota: Computing the k-th Eigenvalue of Symmetric H2-Matrices. ICPP 2023: 11-20
- [c32] Hiroyuki Ootomo, Rio Yokota: Mixed-Precision Random Projection for RandNLA on Tensor Cores. PASC 2023: 14:1-14:11
- [c31] Shaoshuai Zhang, Ruchi Shah, Hiroyuki Ootomo, Rio Yokota, Panruo Wu: Fast Symmetric Eigenvalue Decomposition via WY Representation on Tensor Core. PPoPP 2023: 301-312
- [c30] Hiroyuki Ootomo, Hidetaka Manabe, Kenji Harada, Rio Yokota: Quantum Circuit Simulation by SGEMM Emulation on Tensor Cores and Automatic Precision Selection. ISC 2023: 259-276
- [i40] Sora Takashima, Ryo Hayamizu, Nakamasa Inoue, Hirokatsu Kataoka, Rio Yokota: Visual Atoms: Pre-training Vision Transformers with Sinusoidal Waves. CoRR abs/2303.01112 (2023)
- [i39] Hiroyuki Ootomo, Hidetaka Manabe, Kenji Harada, Rio Yokota: Quantum Circuit Simulation by SGEMM Emulation on Tensor Cores and Automatic Precision Selection. CoRR abs/2303.08989 (2023)
- [i38] Hiroyuki Ootomo, Rio Yokota: Mixed-Precision Random Projection for RandNLA on Tensor Cores. CoRR abs/2304.04612 (2023)
- [i37] Kazuki Osawa, Satoki Ishikawa, Rio Yokota, Shigang Li, Torsten Hoefler: ASDL: A Unified Interface for Gradient Preconditioning in PyTorch. CoRR abs/2305.04684 (2023)
- [i36] Hiroyuki Ootomo, Katsuhisa Ozaki, Rio Yokota: DGEMM on Integer Matrix Multiplication Unit. CoRR abs/2306.11975 (2023)
- [i35] Ryo Nakamura, Hirokatsu Kataoka, Sora Takashima, Edgar Josafat Martinez-Noriega, Rio Yokota, Nakamasa Inoue: Pre-training Vision Transformers with Very Limited Synthesized Images. CoRR abs/2307.14710 (2023)
- [i34] Hiroyuki Ootomo, Rio Yokota: Reducing shared memory footprint to leverage high throughput on Tensor Cores and its flexible API extension library. CoRR abs/2308.15152 (2023)
- [i33] Risa Shinoda, Ryo Hayamizu, Kodai Nakashima, Nakamasa Inoue, Rio Yokota, Hirokatsu Kataoka: SegRCDB: Semantic Segmentation via Formula-Driven Supervised Learning. CoRR abs/2309.17083 (2023)
- [i32] Sameer Deshmukh, Qinxiang Ma, Rio Yokota, George Bosilca: O(N) distributed direct factorization of structured dense matrices using runtime systems. CoRR abs/2311.00921 (2023)
- [i31] Sameer Deshmukh, Rio Yokota, George Bosilca: Cache Optimization and Performance Modeling of Batched, Small, and Rectangular Matrix Multiplication on Intel, AMD, and Fujitsu Processors. CoRR abs/2311.07602 (2023)
- [i30] M. Ridwan Apriansyah, Rio Yokota: Computing the k-th Eigenvalue of Symmetric H2-Matrices. CoRR abs/2311.08618 (2023)
- 2022
- [j18] Hiroyuki Ootomo, Rio Yokota: Recovering single precision accuracy from Tensor Cores while surpassing the FP32 theoretical peak performance. Int. J. High Perform. Comput. Appl. 36(4): 475-491 (2022)
- [j17] Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Chuan-Sheng Foo, Rio Yokota: Scalable and Practical Natural Gradient for Large-Scale Deep Learning. IEEE Trans. Pattern Anal. Mach. Intell. 44(1): 404-415 (2022)
- [j16] M. Ridwan Apriansyah, Rio Yokota: Parallel QR Factorization of Block Low-rank Matrices. ACM Trans. Math. Softw. 48(3): 27:1-27:28 (2022)
- [c29] Hirokatsu Kataoka, Ryo Hayamizu, Ryosuke Yamada, Kodai Nakashima, Sora Takashima, Xinyu Zhang, Edgar Josafat Martinez-Noriega, Nakamasa Inoue, Rio Yokota: Replacing Labeled Real-image Datasets with Auto-generated Contours. CVPR 2022: 21200-21209
- [c28] Hana Hoshino, Kei Ota, Asako Kanezaki, Rio Yokota: OPIRL: Sample Efficient Off-Policy Inverse Reinforcement Learning via Distribution Matching. ICRA 2022: 448-454
- [c27] Aoyu Li, Ikuro Sato, Kohta Ishikawa, Rei Kawakami, Rio Yokota: Informative Sample-Aware Proxy for Deep Metric Learning. MMAsia 2022: 4:1-4:11
- [c26] Satoshi Ohshima, Akihiro Ida, Rio Yokota, Ichitaro Yamazaki: QR Factorization of Block Low-Rank Matrices on Multi-instance GPU. PDCAT 2022: 359-369
- [c25] Qianxiang Ma, Sameer Deshmukh, Rio Yokota: Scalable Linear Time Dense Direct Solver for 3-D Problems without Trailing Sub-Matrix Dependencies. SC 2022: 83:1-83:12
- [i29] Hiroyuki Ootomo, Rio Yokota: Recovering single precision accuracy from Tensor Cores while surpassing the FP32 theoretical peak performance. CoRR abs/2203.03341 (2022)
- [i28] Hirokatsu Kataoka, Ryo Hayamizu, Ryosuke Yamada, Kodai Nakashima, Sora Takashima, Xinyu Zhang, Edgar Josafat Martinez-Noriega, Nakamasa Inoue, Rio Yokota: Replacing Labeled Real-image Datasets with Auto-generated Contours. CoRR abs/2206.09132 (2022)
- [i27] M. Ridwan Apriansyah, Rio Yokota: Parallel QR Factorization of Block Low-Rank Matrices. CoRR abs/2208.06194 (2022)
- [i26] Qianxiang Ma, Sameer Deshmukh, Rio Yokota: Scalable Linear Time Dense Direct Solver for 3-D Problems Without Trailing Sub-Matrix Dependencies. CoRR abs/2208.10907 (2022)
- [i25] Hiroki Naganuma, Kartik Ahuja, Shiro Takagi, Tetsuya Motokawa, Rio Yokota, Kohta Ishikawa, Ikuro Sato, Ioannis Mitliagkas: Empirical Study on Optimizer Selection for Out-of-Distribution Generalization. CoRR abs/2211.08583 (2022)
- [i24] Aoyu Li, Ikuro Sato, Kohta Ishikawa, Rei Kawakami, Rio Yokota: Informative Sample-Aware Proxy for Deep Metric Learning. CoRR abs/2211.10382 (2022)
- 2021
- [j15] Tingyu Wang, Rio Yokota, Lorena A. Barba: ExaFMM: a high-performance fast multipole method library with C++ and Python interfaces. J. Open Source Softw. 6(61): 3145 (2021)
- [c24] Shun Iwase, Xingyu Liu, Rawal Khirodkar, Rio Yokota, Kris M. Kitani: RePOSE: Fast 6D Object Pose Refinement via Deep Texture Rendering. ICCV 2021: 3283-3292
- [i23] Shun Iwase, Xingyu Liu, Rawal Khirodkar, Rio Yokota, Kris M. Kitani: RePOSE: Real-Time Iterative Rendering and Refinement for 6D Object Pose Estimation. CoRR abs/2104.00633 (2021)
- [i22] Hana Hoshino, Kei Ota, Asako Kanezaki, Rio Yokota: OPIRL: Sample Efficient Off-Policy Inverse Reinforcement Learning via Distribution Matching. CoRR abs/2109.04307 (2021)
- 2020
- [c23] Rise Ooi, Takeshi Iwashita, Takeshi Fukaya, Akihiro Ida, Rio Yokota: Effect of Mixed Precision Computing on H-Matrix Vector Multiplication in BEM Analysis. HPC Asia 2020: 92-101
- [c22] Yuichiro Ueno, Kazuki Osawa, Yohei Tsuji, Akira Naruse, Rio Yokota: Rich Information is Affordable: A Systematic Performance Analysis of Second-order Optimization Using K-FAC. KDD 2020: 2145-2153
- [i21] Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Chuan-Sheng Foo, Rio Yokota: Scalable and Practical Natural Gradient for Large-Scale Deep Learning. CoRR abs/2002.06015 (2020)
- [i20] Kento Doi, Ryuhei Hamaguchi, Shun Iwase, Rio Yokota, Yutaka Matsuo, Ken Sakurada: Epipolar-Guided Deep Object Matching for Scene Change Detection. CoRR abs/2007.15540 (2020)
2010 – 2019
- 2019
- [j14] Ichitaro Yamazaki, Akihiro Ida, Rio Yokota, Jack J. Dongarra: Distributed-memory lattice H-matrix factorization. Int. J. High Perform. Comput. Appl. 33(5) (2019)
- [j13] Akihiro Ida, Hiroshi Nakashima, Tasuku Hiraishi, Ichitaro Yamazaki, Rio Yokota, Takeshi Iwashita: QR Factorization of Block Low-rank Matrices with Weak Admissibility Condition. J. Inf. Process. 27: 831-839 (2019)
- [j12] Mustafa Abdul Jabbar, Mohammed A. Al Farhan, Noha Al-Harthi, Rui Chen, Rio Yokota, Hakan Bagci, David E. Keyes: Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering. SIAM J. Sci. Comput. 41(3): C245-C268 (2019)
- [c21] Yuichiro Ueno, Rio Yokota: Exhaustive Study of Hierarchical AllReduce Patterns for Large Messages Between GPUs. CCGRID 2019: 430-439
- [c20] Hiroki Naganuma, Rio Yokota: A Performance Improvement Approach for Second-Order Optimization in Large Mini-batch Training. CCGRID 2019: 696-703
- [c19] Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Rio Yokota, Satoshi Matsuoka: Large-Scale Distributed Second-Order Optimization Using Kronecker-Factored Approximate Curvature for Deep Convolutional Neural Networks. CVPR 2019: 12359-12367
- [c18] Yohei Tsuji, Kazuki Osawa, Yuichiro Ueno, Akira Naruse, Rio Yokota, Satoshi Matsuoka: Performance Optimizations and Analysis of Distributed Deep Learning with Approximated Second-Order Optimization Method. ICPP Workshops 2019: 21:1-21:8
- [c17] Satoshi Ohshima, Ichitaro Yamazaki, Akihiro Ida, Rio Yokota: Optimization of Numerous Small Dense-Matrix-Vector Multiplications in H-Matrix Arithmetic on GPU. MCSoC 2019: 9-16
- [c16] Kazuki Osawa, Siddharth Swaroop, Mohammad Emtiyaz Khan, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota: Practical Deep Learning with Bayesian Principles. NeurIPS 2019: 4289-4301
- [i19] Kazuki Osawa, Siddharth Swaroop, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota, Mohammad Emtiyaz Khan: Practical Deep Learning with Bayesian Principles. CoRR abs/1906.02506 (2019)
- [i18] Rise Ooi, Takeshi Iwashita, Takeshi Fukaya, Akihiro Ida, Rio Yokota: Effect of Mixed Precision Computing on H-Matrix Vector Multiplication in BEM Analysis. CoRR abs/1911.00093 (2019)
- 2018
- [j11] Huda Ibeid, Rio Yokota, Jennifer Pestana, David E. Keyes: Fast multipole preconditioners for sparse matrices arising from elliptic equations. Comput. Vis. Sci. 18(6): 213-229 (2018)
- [c15] Ichitaro Yamazaki, Ahmad Abdelfattah, Akihiro Ida, Satoshi Ohshima, Stanimire Tomov, Rio Yokota, Jack J. Dongarra: Performance of Hierarchical-matrix BiCGStab Solver on GPU Clusters. IPDPS 2018: 930-939
- [c14] Satoshi Ohshima, Ichitaro Yamazaki, Akihiro Ida, Rio Yokota: Optimization of Hierarchical Matrix Computation on GPU. SCFA 2018: 274-292
- [e5] Rio Yokota, Weigang Wu: Supercomputing Frontiers - 4th Asian Conference, SCFA 2018, Singapore, March 26-29, 2018, Proceedings. Lecture Notes in Computer Science 10776, Springer 2018, ISBN 978-3-319-69952-3 [contents]
- [e4] Rio Yokota, Michèle Weiland, David E. Keyes, Carsten Trinitis: High Performance Computing - 33rd International Conference, ISC High Performance 2018, Frankfurt, Germany, June 24-28, 2018, Proceedings. Lecture Notes in Computer Science 10876, Springer 2018, ISBN 978-3-319-92039-9 [contents]
- [e3] Rio Yokota, Michèle Weiland, John Shalf, Sadaf R. Alam: High Performance Computing - ISC High Performance 2018 International Workshops, Frankfurt/Main, Germany, June 28, 2018, Revised Selected Papers. Lecture Notes in Computer Science 11203, Springer 2018, ISBN 978-3-030-02464-2 [contents]
- [i17] Mustafa Abdul Jabbar, Mohammed A. Al Farhan, Noha Al-Harthi, Rui Chen, Rio Yokota, Hakan Bagci, David E. Keyes: Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering. CoRR abs/1803.09948 (2018)
- [i16] Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Rio Yokota, Satoshi Matsuoka: Second-order Optimization Method for Large Mini-batch: Training ResNet-50 on ImageNet in 35 Epochs. CoRR abs/1811.12019 (2018)
- 2017
- [c13] Mustafa Abdul Jabbar, Mohammed A. Al Farhan, Rio Yokota, David E. Keyes: Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture. Euro-Par 2017: 553-564
- [c12] Kazuki Osawa, Rio Yokota: Evaluating the Compression Efficiency of the Filters in Convolutional Neural Networks. ICANN (2) 2017: 459-466
- [c11] Kazuki Osawa, Akira Sekiya, Hiroki Naganuma, Rio Yokota: Accelerating Matrix Multiplication in Deep Learning by Using Low-Rank Approximation. HPCS 2017: 186-192
- [c10] Mustafa Abdul Jabbar, George S. Markomanolis, Huda Ibeid, Rio Yokota, David E. Keyes: Communication Reducing Algorithms for Distributed Hierarchical N-Body Problems with Boundary Distributions. ISC 2017: 79-96
- [e2] Julian M. Kunkel, Rio Yokota, Pavan Balaji, David E. Keyes: High Performance Computing - 32nd International Conference, ISC High Performance 2017, Frankfurt, Germany, June 18-22, 2017, Proceedings. Lecture Notes in Computer Science 10266, Springer 2017, ISBN 978-3-319-58666-3 [contents]
- [e1] Julian M. Kunkel, Rio Yokota, Michela Taufer, John Shalf: High Performance Computing - ISC High Performance 2017 International Workshops, DRBSD, ExaComm, HCPM, HPC-IODC, IWOPH, IXPUG, P^3MA, VHPC, Visualization at Scale, WOPSSS, Frankfurt, Germany, June 18-22, 2017, Revised Selected Papers. Lecture Notes in Computer Science 10524, Springer 2017, ISBN 978-3-319-67629-6 [contents]
- [i15] Mustafa Abdul Jabbar, George S. Markomanolis, Huda Ibeid, Rio Yokota, David E. Keyes: Communication Reducing Algorithms for Distributed Hierarchical N-Body Problems with Boundary Distributions. CoRR abs/1702.05459 (2017)
- 2016
- [j10] Huda Ibeid, Rio Yokota, David E. Keyes: A performance model for the communication in fast multipole methods on high-performance computing platforms. Int. J. High Perform. Comput. Appl. 30(4): 423-437 (2016)
- [c9] Keisuke Fukuda, Motohiko Matsuda, Naoya Maruyama, Rio Yokota, Kenjiro Taura, Satoshi Matsuoka: Tapas: An Implicitly Parallel Programming Framework for Hierarchical N-Body Algorithms. ICPADS 2016: 1100-1109
- [c8] Abdelhalim Amer, Satoshi Matsuoka, Miquel Pericàs, Naoya Maruyama, Kenjiro Taura, Rio Yokota, Pavan Balaji: Scaling FMM with Data-Driven OpenMP Tasks on Multicore Architectures. IWOMP 2016: 156-170
- [i14] Rio Yokota, Huda Ibeid, David E. Keyes: Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation. CoRR abs/1602.02244 (2016)
- [i13] Huda Ibeid, Rio Yokota, David E. Keyes: A Matrix-free Preconditioner for the Helmholtz Equation based on the Fast Multipole Method. CoRR abs/1608.02461 (2016)
- 2014
- [j9] Hatem Ltaief, Rio Yokota: Data-driven execution of fast multipole methods. Concurr. Comput. Pract. Exp. 26(11): 1935-1946 (2014)
- [j8] Yousuke Ohno, Rio Yokota, Hiroshi Koyama, Gentaro Morimoto, Aki Hasegawa, Gen Masumoto, Noriaki Okimoto, Yoshinori Hirano, Huda Ibeid, Tetsu Narumi, Makoto Taiji: Petascale molecular dynamics simulation using the fast multipole method on K computer. Comput. Phys. Commun. 185(10): 2575-2585 (2014)
- [j7] Rio Yokota, George Turkiyyah, David E. Keyes: Communication Complexity of the Fast Multipole Method and its Algebraic Variants. Supercomput. Front. Innov. 1(1): 63-84 (2014)
- [c7] Qi Hu, Nail A. Gumerov, Rio Yokota, Lorena A. Barba, Ramani Duraiswami: Scalable Fast Multipole Accelerated Vortex Methods. IPDPS Workshops 2014: 966-975
- [i12] Huda Ibeid, Rio Yokota, David E. Keyes: A Performance Model for the Communication in Fast Multipole Methods on HPC Platforms. CoRR abs/1405.6362 (2014)
- [i11] Mustafa Abdul Jabbar, Rio Yokota, David E. Keyes: Asynchronous Execution of the Fast Multipole Method Using Charm++. CoRR abs/1405.7487 (2014)
- [i10] Rio Yokota, George Turkiyyah, David E. Keyes: Communication Complexity of the Fast Multipole Method and its Algebraic Variants. CoRR abs/1406.1974 (2014)
- 2013
- [j6] Rio Yokota, Lorena A. Barba, Tetsu Narumi, Kenji Yasuoka: Petascale turbulence simulation using a highly parallel fast multipole method on GPUs. Comput. Phys. Commun. 184(3): 445-455 (2013)
- [c6] Abdelhalim Amer, Naoya Maruyama, Miquel Pericàs, Kenjiro Taura, Rio Yokota, Satoshi Matsuoka: Fork-Join and Data-Driven Execution Models on Multi-core Architectures: Case Study of the FMM. ISC 2013: 255-266
- [i9] Rio Yokota, Jennifer Pestana, Huda Ibeid, David E. Keyes: Fast Multipole Preconditioners for Sparse Matrices Arising from Elliptic Equations. CoRR abs/1308.3339 (2013)
- 2012
- [j5] Rio Yokota, Lorena A. Barba: Hierarchical N-body Simulations with Autotuning for Heterogeneous Systems. Comput. Sci. Eng. 14(3): 30-39 (2012)
- [j4] Rio Yokota, Lorena A. Barba: A tuned and scalable fast multipole method as a preeminent algorithm for exascale systems. Int. J. High Perform. Comput. Appl. 26(4): 337-346 (2012)
- [c5] Enas Yunis, Rio Yokota, Aron J. Ahmadia: Scalable Force Directed Graph Layout Algorithms Using Fast Multipole Methods. ISPDC 2012: 180-187
- [c4] Kenjiro Taura, Jun Nakashima, Rio Yokota, Naoya Maruyama: A Task Parallel Implementation of Fast Multipole Methods. SC Companion 2012: 617-625
- [c3] Qi Hu, Nail A. Gumerov, Rio Yokota, Lorena A. Barba, Ramani Duraiswami: Abstract: Scalable Fast Multipole Methods for Vortex Element Methods. SC Companion 2012: 1408
- [c2] Qi Hu, Nail A. Gumerov, Rio Yokota, Lorena A. Barba, Ramani Duraiswami: Poster: Scalable Fast Multipole Methods for Vortex Element Methods. SC Companion 2012: 1409
- [i8]