
Full-text search
Query syntax:
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search (append a dollar sign ($) to the word): e.g., graph$ matches "graph", but not "graphics"
- boolean AND (separate words by a space): e.g., codd model
- boolean OR (connect words by a pipe symbol (|)): e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean NOT operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
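The operators above compose into a single query string that can be sent to dblp's public publication-search endpoint (https://dblp.org/search/publ/api, which accepts `q` and `format` parameters). As a minimal illustration, here is a Python sketch; the helper name `build_query_url` and its `exact` flag are our own invention, not part of any dblp client library:

```python
from urllib.parse import urlencode

# Base URL of dblp's publication search API (format=json requests JSON output).
DBLP_PUBL_API = "https://dblp.org/search/publ/api"

def build_query_url(*terms: str, exact: bool = False) -> str:
    """Assemble a dblp search URL from the operators described above.

    Terms joined by a space are AND-ed; a pipe ('|') inside a term means OR;
    exact=True appends '$' to every term, turning the default prefix match
    into an exact-word match.
    """
    if exact:
        terms = tuple(t if t.endswith("$") else t + "$" for t in terms)
    query = " ".join(terms)  # space acts as boolean AND
    return f"{DBLP_PUBL_API}?{urlencode({'q': query, 'format': 'json'})}"

# Example: exact-word "graph" OR prefix "network", AND-ed with "codd".
print(build_query_url("graph$|network", "codd"))
```

Fetching the resulting URL (e.g. with `urllib.request.urlopen`) returns the matches as JSON; the URL construction itself involves no network access.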
Author search results: no matches
Venue search results: no matches
Publication search results: found 83 matches
2021
- Zhuohan Li, Siyuan Zhuang, Shiyuan Guo, Danyang Zhuo, Hao Zhang, Dawn Song, Ion Stoica: TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models. CoRR abs/2102.07988 (2021)

2020
- Quan M. Nguyen, Daniel Sanchez: Pipette: Improving Core Utilization on Irregular Applications through Intra-Core Pipeline Parallelism. MICRO 2020: 596-608
- Isabelly Rocha, Nathaniel Morris, Lydia Y. Chen, Pascal Felber, Robert Birke, Valerio Schiavoni: PipeTune: Pipeline Parallelism of Hyper and System Parameters Tuning for Deep Learning Clusters. Middleware 2020: 89-104
- Jay H. Park, Gyeongchan Yun, Chang M. Yi, Nguyen T. Nguyen, Seungmin Lee, Jaesik Choi, Sam H. Noh, Young-ri Choi: HetPipe: Enabling Large DNN Training on (Whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism. USENIX Annual Technical Conference 2020: 307-321
- Chiheon Kim, Heungsub Lee, Myungryong Jeong, Woonhyuk Baek, Boogeon Yoon, Ildoo Kim, Sungbin Lim, Sungwoong Kim: torchgpipe: On-the-fly Pipeline Parallelism for Training Giant Models. CoRR abs/2004.09910 (2020)
- Jay H. Park, Gyeongchan Yun, Chang M. Yi, Nguyen T. Nguyen, Seungmin Lee, Jaesik Choi, Sam H. Noh, Young-ri Choi: HetPipe: Enabling Large DNN Training on (Whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism. CoRR abs/2005.14038 (2020)
- Isabelly Rocha, Nathaniel Morris, Lydia Y. Chen, Pascal Felber, Robert Birke, Valerio Schiavoni: PipeTune: Pipeline Parallelism of Hyper and System Parameters Tuning for Deep Learning Clusters. CoRR abs/2010.00501 (2020)
- Letian Zhao, Rui Xu, Tianqi Wang, Teng Tian, Xiaotian Wang, Wei Wu, Chio-in Ieong, Xi Jin: BaPipe: Exploration of Balanced Pipeline Parallelism for DNN Training. CoRR abs/2012.12544 (2020)

2019
- Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, Zhifeng Chen: GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism. NeurIPS 2019: 103-112
- Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, Matei Zaharia: PipeDream: generalized pipeline parallelism for DNN training. SOSP 2019: 1-15
- Lei Guan, Wotao Yin, Dongsheng Li, Xicheng Lu: XPipe: Efficient Pipeline Model Parallelism for Multi-GPU DNN Training. CoRR abs/1911.04610 (2019)

2018
- Yanping Huang, Yonglong Cheng, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Zhifeng Chen: GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism. CoRR abs/1811.06965 (2018)

2017
- Junchang Wang, Yangfeng Tian, Tao Li, Xiong Fu: A Flexible Communication Mechanism for Pipeline Parallelism. ISPA/IUCC 2017: 778-785
- Yang Wang, Kenneth B. Kent: A Region-Based Approach to Pipeline Parallelism in Java Programs on Multicores. PDP 2017: 124-131

2016
- Hadi Mardani Kamali, Shaahin Hessabi: A Fault Tolerant Parallelism Approach for Implementing High-Throughput Pipelined Advanced Encryption Standard. J. Circuits Syst. Comput. 25(9): 1650113:1-1650113:14 (2016)
- Gwangsun Kim, Jiyun Jeong, John Kim, Mark Stephenson: Automatically Exploiting Implicit Pipeline Parallelism from Multiple Dependent Kernels for GPUs. PACT 2016: 341-352
- Jongsok Choi, Ruolong Lian, Stephen Dean Brown, Jason Helge Anderson: A unified software approach to specify pipeline and spatial parallelism in FPGA hardware. ASAP 2016: 75-82
- Jinsu Park, Woongki Baek: HAP: A Heterogeneity-Conscious Runtime System for Adaptive Pipeline Parallelism. Euro-Par 2016: 518-530
- Peter Koek, Stefan J. Geuns, Joost P. H. M. Hausmans, Henk Corporaal, Marco Jan Gerrit Bekooij: CSDFa: A Model for Exploiting the Trade-Off between Data and Pipeline Parallelism. SCOPES 2016: 30-39

2015
- Chen Chen, Kai Lu, Xiaoping Wang, Xu Zhou, Zhendong Wu: A Load-Balanced Deterministic Runtime for Pipeline Parallelism. IEICE Trans. Inf. Syst. 98-D(2): 433-436 (2015)
- Shinichi Yamagiwa, Guyue Wang, Koichi Wada: Development of an Algorithm for Extracting Parallelism and Pipeline Structure from Stream-based Processing flow with Spanning Tree. Int. J. Netw. Comput. 5(1): 159-179 (2015)
- Yu Zhang, Zhaopeng Li, Hui-Fang Cao: System-Enforced Deterministic Streaming for Efficient Pipeline Parallelism. J. Comput. Sci. Technol. 30(1): 57-73 (2015)
- I-Ting Angelina Lee, Charles E. Leiserson, Tao B. Schardl, Zhunping Zhang, Jim Sukha: On-the-Fly Pipeline Parallelism. ACM Trans. Parallel Comput. 2(3): 17:1-17:42 (2015)
- Nam-Luc Tran, Thomas Peel, Sabri Skhiri: Distributed frank-wolfe under pipelined stale synchronous parallelism. BigData 2015: 184-192

2014
- Joost P. H. M. Hausmans, Stefan J. Geuns, Maarten Wiggers, Marco Jan Gerrit Bekooij: Unified dataflow model for the analysis of data and pipeline parallelism, and buffer sizing. MEMOCODE 2014: 12-21
- Nader Khammassi, Jean-Christophe Le Lann: A high-level programming model to ease pipeline parallelism expression on shared memory multicore architectures. SpringSim (HPS) 2014: 9

2013
- Thomas Preud'homme: Communication inter-cœurs optimisée pour le parallélisme de flux (Optimized inter-core communication for pipeline parallelism). Pierre and Marie Curie University, Paris, France, 2013
- Shuai Mu, Dongdong Li, Yubei Chen, Yangdong Deng, Zhihua Wang: Exploiting the Task-Pipelined Parallelism of Stream Programs on Many-Core GPUs. IEICE Trans. Inf. Syst. 96-D(10): 2194-2207 (2013)
- Chih-Sheng Lin, Chao-Sheng Lin, Yu-Shin Lin, Pao-Ann Hsiung, Chihhsiong Shih: Multi-objective exploitation of pipeline parallelism using clustering, replication and duplication in embedded multi-core systems. J. Syst. Archit. 59(10-C): 1083-1094 (2013)
- Daniel Cordes, Michael Engel, Olaf Neugebauer, Peter Marwedel: Automatic Extraction of pipeline parallelism for embedded heterogeneous multi-core platforms. CASES 2013: 4:1-4:10
(the remaining 53 matches are not shown)

retrieved on 2021-03-02 03:35 CET from data curated by the dblp team
all metadata released as open data under CC0 1.0 license