Xiaochen Peng
2020 – today

2024
- [c28] Hidehiro Fujiwara, Haruki Mori, Wei-Chang Zhao, Kinshuk Khare, Cheng-En Lee, Xiaochen Peng, Vineet Joshi, Chao-Kai Chuang, Shu-Huan Hsu, Takeshi Hashizume, Toshiaki Naganuma, Chen-Hung Tien, Yao-Yi Liu, Yen-Chien Lai, Chia-Fu Lee, Tan-Li Chou, Kerem Akarvardar, Saman Adham, Yih Wang, Yu-Der Chih, Yen-Huei Chen, Hung-Jen Liao, Tsung-Yung Jonathan Chang: 34.4 A 3nm, 32.5TOPS/W, 55.0TOPS/mm2 and 3.78Mb/mm2 Fully-Digital Compute-in-Memory Macro Supporting INT12 × INT12 with a Parallel-MAC Architecture and Foundry 6T-SRAM Bit Cell. ISSCC 2024: 572-574
- [c27] Ankit Kaul, Madison Manley, James Read, Yandong Luo, Xiaochen Peng, Shimeng Yu, Muhannad S. Bakir: Co-Optimization for Robust Power Delivery Design in 3D-Heterogeneous Integration of Compute In-Memory Accelerators. VLSI Technology and Circuits 2024: 1-2

2022
- [b1] Xiaochen Peng: Benchmark Framework for 2-D/3-D Integrated Compute-in-Memory Based Machine Learning Accelerator. Georgia Institute of Technology, Atlanta, GA, USA, 2022
- [j13] Shanshi Huang, Xiaoyu Sun, Xiaochen Peng, Hongwu Jiang, Shimeng Yu: Achieving High In Situ Training Accuracy and Energy Efficiency with Analog Non-Volatile Synaptic Devices. ACM Trans. Design Autom. Electr. Syst. 27(4): 37:1-37:19 (2022)

2021
- [j12] Anni Lu, Xiaochen Peng, Wantong Li, Hongwu Jiang, Shimeng Yu: NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark. Frontiers Artif. Intell. 4: 659060 (2021)
- [j11] Xiaochen Peng, Shanshi Huang, Hongwu Jiang, Anni Lu, Shimeng Yu: DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-Chip Training. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 40(11): 2306-2319 (2021)
- [j10] Shimeng Yu, Wonbo Shim, Xiaochen Peng, Yandong Luo: RRAM for Compute-in-Memory: From Inference to Training. IEEE Trans. Circuits Syst. I Regul. Pap. 68(7): 2753-2765 (2021)
- [j9] Jian Meng, Li Yang, Xiaochen Peng, Shimeng Yu, Deliang Fan, Jae-Sun Seo: Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks. IEEE Trans. Circuits Syst. II Express Briefs 68(5): 1576-1580 (2021)
- [j8] Anni Lu, Xiaochen Peng, Yandong Luo, Shanshi Huang, Shimeng Yu: A Runtime Reconfigurable Design of Compute-in-Memory-Based Hardware Accelerator for Deep Learning Inference. ACM Trans. Design Autom. Electr. Syst. 26(6): 45:1-45:18 (2021)
- [j7] Shanshi Huang, Hongwu Jiang, Xiaochen Peng, Wantong Li, Shimeng Yu: Secure XOR-CIM Engine: Compute-In-Memory SRAM Architecture With Embedded XOR Encryption. IEEE Trans. Very Large Scale Integr. Syst. 29(12): 2027-2039 (2021)
- [c26] Ankit Kaul, Yandong Luo, Xiaochen Peng, Shimeng Yu, Muhannad S. Bakir: Thermal Reliability Considerations of Resistive Synaptic Devices for 3D CIM System Performance. 3DIC 2021: 1-5
- [c25] Anni Lu, Xiaochen Peng, Wantong Li, Hongwu Jiang, Shimeng Yu: NeuroSim Validation with 40nm RRAM Compute-in-Memory Macro. AICAS 2021: 1-4
- [c24] Anni Lu, Xiaochen Peng, Shimeng Yu: Compute-in-RRAM with Limited On-chip Resources. AICAS 2021: 1-4
- [c23] Anni Lu, Xiaochen Peng, Yandong Luo, Shanshi Huang, Shimeng Yu: A Runtime Reconfigurable Design of Compute-in-Memory based Hardware Accelerator. DATE 2021: 932-937
- [c22] Shimeng Yu, Wonbo Shim, Jae Hur, Yuan-chun Luo, Gihun Choe, Wantong Li, Anni Lu, Xiaochen Peng: Compute-in-Memory: From Device Innovation to 3D System Integration. ESSDERC 2021: 21-28
- [c21] Wonbo Shim, Jian Meng, Xiaochen Peng, Jae-sun Seo, Shimeng Yu: Impact of Multilevel Retention Characteristics on RRAM based DNN Inference Engine. IRPS 2021: 1-4
- [c20] Shanshi Huang, Xiaochen Peng, Hongwu Jiang, Yandong Luo, Shimeng Yu: Exploiting Process Variations to Protect Machine Learning Inference Engine from Chip Cloning. ISCAS 2021: 1-5
- [c19] Panni Wang, Xiaochen Peng, Wriddhi Chakraborty, Asif Khan, Suman Datta, Shimeng Yu: Cryogenic Performance for Compute-in-Memory Based Deep Neural Network Accelerator. ISCAS 2021: 1-4

2020
- [j6] Hongwu Jiang, Xiaochen Peng, Shanshi Huang, Shimeng Yu: CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays. IEEE Trans. Computers 69(7): 944-954 (2020)
- [j5] Xiaochen Peng, Rui Liu, Shimeng Yu: Optimizing Weight Mapping and Data Flow for Convolutional Neural Networks on Processing-in-Memory Architectures. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 67-I(4): 1333-1343 (2020)
- [j4] Anni Lu, Xiaochen Peng, Yandong Luo, Shimeng Yu: Benchmark of the Compute-in-Memory-Based DNN Accelerator With Area Constraint. IEEE Trans. Very Large Scale Integr. Syst. 28(9): 1945-1952 (2020)
- [c18] Shimeng Yu, Xiaoyu Sun, Xiaochen Peng, Shanshi Huang: Compute-in-Memory with Emerging Nonvolatile-Memories: Challenges and Prospects. CICC 2020: 1-4
- [c17] Hongwu Jiang, Shanshi Huang, Xiaochen Peng, Jian-Wei Su, Yen-Chi Chou, Wei-Hsing Huang, Ta-Wei Liu, Ruhui Liu, Meng-Fan Chang, Shimeng Yu: A Two-way SRAM Array based Accelerator for Deep Neural Network On-chip Training. DAC 2020: 1-6
- [c16] Shanshi Huang, Xiaoyu Sun, Xiaochen Peng, Hongwu Jiang, Shimeng Yu: Overcoming Challenges for Achieving High in-situ Training Accuracy with Emerging Memories. DATE 2020: 1025-1030
- [c15] Shanshi Huang, Hongwu Jiang, Xiaochen Peng, Wantong Li, Shimeng Yu: XOR-CIM: Compute-In-Memory SRAM Architecture with Embedded XOR Encryption. ICCAD 2020: 77:1-77:6
- [c14] Hongwu Jiang, Shanshi Huang, Xiaochen Peng, Shimeng Yu: MINT: Mixed-Precision RRAM-Based IN-Memory Training Architecture. ISCAS 2020: 1-5
- [c13] Yandong Luo, Xiaochen Peng, Ryan Hatcher, Titash Rakshit, Jorge Kittl, Mark S. Rodder, Jae-Sun Seo, Shimeng Yu: A Variation Robust Inference Engine Based on STT-MRAM with Parallel Read-Out. ISCAS 2020: 1-5
- [c12] Wonbo Shim, Hongwu Jiang, Xiaochen Peng, Shimeng Yu: Architectural Design of 3D NAND Flash based Compute-in-Memory for Inference Engine. MEMSYS 2020: 77-85
- [i1] Xiaochen Peng, Shanshi Huang, Hongwu Jiang, Anni Lu, Shimeng Yu: DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training. CoRR abs/2003.06471 (2020)

2010 – 2019

2019
- [j3] Manqing Mao, Xiaochen Peng, Rui Liu, Jingtao Li, Shimeng Yu, Chaitali Chakrabarti: MAX2: An ReRAM-Based Neural Network Accelerator That Maximizes Data Reuse and Area Utilization. IEEE J. Emerg. Sel. Topics Circuits Syst. 9(2): 398-410 (2019)
- [c11] Wenqiang Zhang, Xiaochen Peng, Huaqiang Wu, Bin Gao, Hu He, Youhui Zhang, Shimeng Yu, He Qian: Design Guidelines of RRAM based Neural-Processing-Unit: A Joint Device-Circuit-Algorithm Analysis. DAC 2019: 140
- [c10] Yandong Luo, Xiaochen Peng, Shimeng Yu: MLP+NeuroSimV3.0: Improving On-chip Learning Performance with Device to Algorithm Optimizations. ICONS 2019: 1:1-1:7
- [c9] Xiaochen Peng, Rui Liu, Shimeng Yu: Optimizing Weight Mapping and Data Flow for Convolutional Neural Networks on RRAM Based Processing-In-Memory Architecture. ISCAS 2019: 1-5
- [c8] Xiaochen Peng, Minkyu Kim, Xiaoyu Sun, Shihui Yin, Titash Rakshit, Ryan M. Hatcher, Jorge A. Kittl, Jae-sun Seo, Shimeng Yu: Inference engine benchmarking across technological platforms from CMOS to RRAM. MEMSYS 2019: 471-479
- [c7] Hongwu Jiang, Xiaochen Peng, Shanshi Huang, Shimeng Yu: CIMAT: a transpose SRAM-based compute-in-memory architecture for deep neural network on-chip training. MEMSYS 2019: 490-496

2018
- [j2] Pai-Yu Chen, Xiaochen Peng, Shimeng Yu: NeuroSim: A Circuit-Level Macro Model for Benchmarking Neuro-Inspired Architectures in Online Learning. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 37(12): 3067-3080 (2018)
- [j1] Rui Liu, Pai-Yu Chen, Xiaochen Peng, Shimeng Yu: X-Point PUF: Exploiting Sneak Paths for a Strong Physical Unclonable Function Design. IEEE Trans. Circuits Syst. I Regul. Pap. 65-I(10): 3459-3468 (2018)
- [c6] Xiaochen Peng, Shimeng Yu: Benchmark of RRAM based Architectures for Dot-Product Computation. APCCAS 2018: 378-381
- [c5] Xiaoyu Sun, Xiaochen Peng, Pai-Yu Chen, Rui Liu, Jae-sun Seo, Shimeng Yu: Fully parallel RRAM synaptic array for implementing binary neural network with (+1, -1) weights and (+1, 0) neurons. ASP-DAC 2018: 574-579
- [c4] Rui Liu, Xiaochen Peng, Xiaoyu Sun, Win-San Khwa, Xin Si, Jia-Jing Chen, Jia-Fang Li, Meng-Fan Chang, Shimeng Yu: Parallelizing SRAM arrays with customized bit-cell for binary neural networks. DAC 2018: 21:1-21:6
- [c3] Xiaoyu Sun, Shihui Yin, Xiaochen Peng, Rui Liu, Jae-sun Seo, Shimeng Yu: XNOR-RRAM: A scalable and parallel resistive synaptic architecture for binary neural networks. DATE 2018: 1423-1428
- [c2] Jiyong Woo, Xiaochen Peng, Shimeng Yu: Design Considerations of Selector Device in Cross-Point RRAM Array for Neuromorphic Computing. ISCAS 2018: 1-4
- [c1] Manqing Mao, Xiaoyu Sun, Xiaochen Peng, Shimeng Yu, Chaitali Chakrabarti: A Versatile ReRAM-based Accelerator for Convolutional Neural Networks. SiPS 2018: 211-216
last updated on 2024-10-18 20:30 CEST by the dblp team
all metadata released as open data under CC0 1.0 license