Zhezhi He
Books and Theses
- 2020
- [b1]Zhezhi He:
Efficient and Secure Deep Learning Inference System: A Software and Hardware Co-design Perspective. Arizona State University, Tempe, USA, 2020
Journal Articles
- 2024
- [j14]Chen Nie, Chenyu Tang, Jie Lin, Huan Hu, Chenyang Lv, Ting Cao, Weifeng Zhang, Li Jiang, Xiaoyao Liang, Weikang Qian, Yanan Sun, Zhezhi He:
VSPIM: SRAM Processing-in-Memory DNN Acceleration via Vector-Scalar Operations. IEEE Trans. Computers 73(10): 2378-2390 (2024)
- [j13]Li Yang, Zhezhi He, Yu Cao, Deliang Fan:
A Progressive Subnetwork Searching Framework for Dynamic Inference. IEEE Trans. Neural Networks Learn. Syst. 35(3): 3809-3820 (2024)
- 2023
- [j12]Fangxin Liu, Zongwu Wang, Yongbiao Chen, Zhezhi He, Tao Yang, Xiaoyao Liang, Li Jiang:
SoBS-X: Squeeze-Out Bit Sparsity for ReRAM-Crossbar-Based Neural Network Accelerator. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 42(1): 204-217 (2023)
- [j11]Tao Yang, Fei Ma, Xiaoling Li, Fangxin Liu, Yilong Zhao, Zhezhi He, Li Jiang:
DTATrans: Leveraging Dynamic Token-Based Quantization With Accuracy Compensation Mechanism for Efficient Transformer Architecture. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 42(2): 509-520 (2023)
- 2022
- [j10]Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Deliang Fan:
T-BFA: Targeted Bit-Flip Adversarial Weight Attack. IEEE Trans. Pattern Anal. Mach. Intell. 44(11): 7928-7939 (2022)
- [j9]Xiaolong Ma, Sheng Lin, Shaokai Ye, Zhezhi He, Linfeng Zhang, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma, Yanzhi Wang:
Non-Structured DNN Weight Pruning - Is It Beneficial in Any Platform? IEEE Trans. Neural Networks Learn. Syst. 33(9): 4930-4944 (2022)
- 2021
- [j8]Yanan Sun, Chang Ma, Zhi Li, Yilong Zhao, Jiachen Jiang, Weikang Qian, Rui Yang, Zhezhi He, Li Jiang:
Unary Coding and Variation-Aware Optimal Mapping Scheme for Reliable ReRAM-Based Neuromorphic Computing. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 40(12): 2495-2507 (2021)
- [j7]Tao Yang, Zhezhi He, Tengchuan Kou, Qingzheng Li, Qi Han, Haibao Yu, Fangxin Liu, Yun Liang, Li Jiang:
BISWSRBS: A Winograd-based CNN Accelerator with a Fine-grained Regular Sparsity Pattern and Mixed Precision Quantization. ACM Trans. Reconfigurable Technol. Syst. 14(4): 18:1-18:28 (2021)
- 2020
- [j6]Zhibo Wang, Zhezhi He, Milan Shah, Teng Zhang, Deliang Fan, Wei Zhang:
Network-based multi-task learning models for biomarker selection and cancer outcome prediction. Bioinform. 36(6): 1814-1822 (2020)
- [j5]Zhezhi He, Li Yang, Shaahin Angizi, Adnan Siraj Rakin, Deliang Fan:
Sparse BD-Net: A Multiplication-less DNN with Sparse Binarized Depth-wise Separable Convolution. ACM J. Emerg. Technol. Comput. Syst. 16(2): 15:1-15:24 (2020)
- [j4]Shaahin Angizi, Zhezhi He, Amro Awad, Deliang Fan:
MRIMA: An MRAM-Based In-Memory Accelerator. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 39(5): 1123-1136 (2020)
- 2018
- [j3]Shaahin Angizi, Zhezhi He, Nader Bagherzadeh, Deliang Fan:
Design and Evaluation of a Spintronic In-Memory Processing Platform for Nonvolatile Data Encryption. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 37(9): 1788-1801 (2018)
- [j2]Zhezhi He, Yang Zhang, Shaahin Angizi, Boqing Gong, Deliang Fan:
Exploring a SOT-MRAM Based In-Memory Computing for Data Processing. IEEE Trans. Multi Scale Comput. Syst. 4(4): 676-685 (2018)
- 2017
- [j1]Zhezhi He, Deliang Fan:
Energy Efficient Reconfigurable Threshold Logic Circuit with Spintronic Devices. IEEE Trans. Emerg. Top. Comput. 5(2): 223-237 (2017)
Conference and Workshop Papers
- 2024
- [c73]Xingyue Qian, Zhezhi He, Weikang Qian:
An Efficient Logic Operation Scheduler for Minimizing Memory Footprint of In-Memory SIMD Computation. DATE 2024: 1-2
- [c72]Chenyu Tang, Chen Nie, Weikang Qian, Zhezhi He:
PIMLC: Logic Compiler for Bit-Serial Based PIM. DATE 2024: 1-6
- [c71]Xuan Zhang, Zhuoran Song, Xing Li, Zhezhi He, Naifeng Jing, Li Jiang, Xiaoyao Liang:
Watt: A Write-Optimized RRAM-Based Accelerator for Attention. Euro-Par (2) 2024: 107-120
- [c70]Jiahao Su, Kang You, Zekai Xu, Weizhi Xu, Zhezhi He:
Obtaining Optimal Spiking Neural Network in Sequence Learning via CRNN-SNN Conversion. ICANN (10) 2024: 392-406
- [c69]Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, Hao Zhang:
CLLMs: Consistency Large Language Models. ICML 2024
- [c68]Kang You, Zekai Xu, Chen Nie, Zhijie Deng, Qinghai Guo, Xiang Wang, Zhezhi He:
SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN. ICML 2024
- 2023
- [c67]Tao Yang, Hui Ma, Yilong Zhao, Fangxin Liu, Zhezhi He, Xiaoli Sun, Li Jiang:
PIMPR: PIM-based Personalized Recommendation with Heterogeneous Memory Hierarchy. DATE 2023: 1-6
- [c66]Chen Nie, Xianjue Cai, Chenyang Lv, Chen Huang, Weikang Qian, Zhezhi He:
XMG-GPPIC: Efficient and Robust General-Purpose Processing-in-Cache with XOR-Majority-Graph. ACM Great Lakes Symposium on VLSI 2023: 183-187
- [c65]Chenyang Lv, Ziling Wei, Weikang Qian, Junjie Ye, Chang Feng, Zhezhi He:
GPT-LS: Generative Pre-Trained Transformer with Offline Reinforcement Learning for Logic Synthesis. ICCD 2023: 320-326
- [c64]Xuan Zhang, Zhuoran Song, Xing Li, Zhezhi He, Li Jiang, Naifeng Jing, Xiaoyao Liang:
HyAcc: A Hybrid CAM-MAC RRAM-based Accelerator for Recommendation Model. ICCD 2023: 375-382
- [c63]Chen Nie, Guoyang Chen, Weifeng Zhang, Zhezhi He:
GIM: Versatile GNN Acceleration with Reconfigurable Processing-in-Memory. ICCD 2023: 499-506
- 2022
- [c62]Qidong Tang, Zhezhi He, Fangxin Liu, Zongwu Wang, Yiyuan Zhou, Yinghuan Zhang, Li Jiang:
HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine. ASP-DAC 2022: 226-231
- [c61]Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti:
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. CVPR 2022: 10184-10192
- [c60]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang:
EBSP: evolving bit sparsity patterns for hardware-friendly inference of quantized deep neural networks. DAC 2022: 259-264
- [c59]Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Zhezhi He, Rui Yang, Qidong Tang, Tao Yang, Cheng Zhuo, Li Jiang:
PIM-DH: ReRAM-based processing-in-memory architecture for deep hashing acceleration. DAC 2022: 1087-1092
- [c58]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Tao Yang, Zhezhi He, Xiaokang Yang, Li Jiang:
SATO: spiking neural network acceleration via temporal-oriented dataflow and architecture. DAC 2022: 1105-1110
- [c57]Tao Yang, Dongyue Li, Zhuoran Song, Yilong Zhao, Fangxin Liu, Zongwu Wang, Zhezhi He, Li Jiang:
DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture. DATE 2022: 700-705
- [c56]Zongwu Wang, Zhezhi He, Rui Yang, Shiquan Fan, Jie Lin, Fangxin Liu, Yueyang Jia, Chenxi Yuan, Qidong Tang, Li Jiang:
Self-Terminating Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing. DATE 2022: 1251-1256
- [c55]Yu Gong, Zhihan Xu, Zhezhi He, Weifeng Zhang, Xiaobing Tu, Xiaoyao Liang, Li Jiang:
N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores. FPGA 2022: 112-122
- [c54]Chen Nie, Zongwu Wang, Qidong Tang, Chenyang Lv, Li Jiang, Zhezhi He:
Cross-layer Designs against Non-ideal Effects in ReRAM-based Processing-in-Memory System. ISQED 2022: 1-6
- 2021
- [c53]Dongyue Li, Tao Yang, Lun Du, Zhezhi He, Li Jiang:
AdaptiveGCN: Efficient GCN Through Adaptively Sparsifying Graphs. CIKM 2021: 3206-3210
- [c52]Li Yang, Zhezhi He, Junshan Zhang, Deliang Fan:
KSM: Fast Multiple Task Adaption via Kernel-Wise Soft Mask Learning. CVPR 2021: 13845-13853
- [c51]Tao Yang, Dongyue Li, Yibo Han, Yilong Zhao, Fangxin Liu, Xiaoyao Liang, Zhezhi He, Li Jiang:
PIMGCN: A ReRAM-Based PIM Design for Graph Convolutional Network Acceleration. DAC 2021: 583-588
- [c50]Jingtao Li, Adnan Siraj Rakin, Zhezhi He, Deliang Fan, Chaitali Chakrabarti:
RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery. DATE 2021: 790-795
- [c49]Yilong Zhao, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang:
Re2PIM: A Reconfigurable ReRAM-Based PIM Design for Variable-Sized Vector-Matrix Multiplication. ACM Great Lakes Symposium on VLSI 2021: 15-20
- [c48]Chen Nie, Jie Lin, Huan Hu, Li Jiang, Xiaoyao Liang, Zhezhi He:
Energy-Efficient Hybrid-RAM with Hybrid Bit-Serial based VMM Support. ACM Great Lakes Symposium on VLSI 2021: 347-352
- [c47]Jingtao Li, Zhezhi He, Adnan Siraj Rakin, Deliang Fan, Chaitali Chakrabarti:
NeurObfuscator: A Full-stack Obfuscation Tool to Mitigate Neural Architecture Stealing. HOST 2021: 248-258
- [c46]Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Yongbiao Chen, Li Jiang:
Bit-Transformer: Transforming Bit-level Sparsity into Higher Preformance in ReRAM-based Accelerator. ICCAD 2021: 1-9
- [c45]Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Tao Yang, Jingnai Feng, Xiaoyao Liang, Li Jiang:
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network. ICCD 2021: 417-424
- [c44]Fangxin Liu, Wenbo Zhao, Zhezhi He, Yanzhi Wang, Zongwu Wang, Changzhi Dai, Xiaoyao Liang, Li Jiang:
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point. ICCV 2021: 5261-5270
- [c43]Tianhong Shen, Yanan Sun, Weifeng He, Zhi Li, Weiyi Liu, Zhezhi He, Li Jiang:
A Ternary Memristive Logic-in-Memory Design for Fast Data Scan. ICTA 2021: 183-184
- [c42]Zhuoran Song, Dongyue Li, Zhezhi He, Xiaoyao Liang, Li Jiang:
ReRAM-Sharing: Fine-Grained Weight Sharing for ReRAM-Based Deep Neural Network Accelerator. ISCAS 2021: 1-5
- [c41]Sen Lin, Li Yang, Zhezhi He, Deliang Fan, Junshan Zhang:
MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning. MASS 2021: 164-172
- [c40]Wuyang Zhang, Zhezhi He, Luyang Liu, Zhenhua Jia, Yunxin Liu, Marco Gruteser, Dipankar Raychaudhuri, Yanyong Zhang:
Elf: accelerate high-resolution mobile deep vision with content-aware parallel offloading. MobiCom 2021: 201-214
- 2020
- [c39]Li Yang, Zhezhi He, Deliang Fan:
Harmonious Coexistence of Structured Weight Pruning and Ternarization for Deep Neural Networks. AAAI 2020: 6623-6630
- [c38]Adnan Siraj Rakin, Zhezhi He, Deliang Fan:
TBT: Targeted Neural Network Attack With Bit Trojan. CVPR 2020: 13195-13204
- [c37]Zhezhi He, Adnan Siraj Rakin, Jingtao Li, Chaitali Chakrabarti, Deliang Fan:
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack. CVPR 2020: 14083-14091
- [c36]Jingtao Li, Adnan Siraj Rakin, Yan Xiong, Liangliang Chang, Zhezhi He, Deliang Fan, Chaitali Chakrabarti:
Defending Bit-Flip Attack through DNN Weight Reconstruction. DAC 2020: 1-6
- [c35]Li Yang, Zhezhi He, Yu Cao, Deliang Fan:
Non-uniform DNN Structured Subnets Sampling for Dynamic Inference. DAC 2020: 1-6
- [c34]Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan:
Robust Sparse Regularization: Defending Adversarial Attacks Via Regularized Sparse Network. ACM Great Lakes Symposium on VLSI 2020: 125-130
- [c33]Li Yang, Zhezhi He, Shaahin Angizi, Deliang Fan:
Processing-in-Memory Accelerator for Dynamic Neural Network with Run-Time Tuning of Accuracy, Power and Latency. SoCC 2020: 117-122
- 2019
- [c32]Shaahin Angizi, Zhezhi He, Deliang Fan:
ParaPIM: a parallel processing-in-memory accelerator for binary-weight deep neural networks. ASP-DAC 2019: 127-132
- [c31]Zhezhi He, Adnan Siraj Rakin, Deliang Fan:
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack. CVPR 2019: 588-597
- [c30]Zhezhi He, Deliang Fan:
Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network Using Truncated Gaussian Approximation. CVPR 2019: 11438-11446
- [c29]Zhezhi He, Jie Lin, Rickard Ewetz, Jiann-Shiun Yuan, Deliang Fan:
Noise Injection Adaption: End-to-End ReRAM Crossbar Non-ideal Effect Adaption for Neural Network Mapping. DAC 2019: 57
- [c28]Durjoy Dev, Adithi Krishnaprasad, Zhezhi He, Sonali Das, Mashiyat Sumaiya Shawkat, Madison Manley, Olaleye Aina, Deliang Fan, Yeonwoong Jung, Tania Roy:
Artificial Neuron using Ag/2D-MoS2/Au Threshold Switching Memristor. DRC 2019: 193-194
- [c27]Li Yang, Zhezhi He, Deliang Fan:
Binarized Depthwise Separable Neural Network for Object Tracking in FPGA. ACM Great Lakes Symposium on VLSI 2019: 347-350
- [c26]Adnan Siraj Rakin, Zhezhi He, Deliang Fan:
Bit-Flip Attack: Crushing Neural Network With Progressive Bit Search. ICCV 2019: 1211-1220
- [c25]Shaahin Angizi, Zhezhi He, Dayane Alfenas Reis, Xiaobo Sharon Hu, Wilman Tsai, Shy Jay Lin, Deliang Fan:
Accelerating Deep Neural Networks in Processing-in-Memory Platforms: Analog or Digital Approach? ISVLSI 2019: 197-202
- [c24]Zhezhi He, Boqing Gong, Deliang Fan:
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy. WACV 2019: 913-921
- 2018
- [c23]Shaahin Angizi, Zhezhi He, Farhana Parveen, Deliang Fan:
IMCE: Energy-efficient bit-wise in-memory convolution engine for deep neural network. ASP-DAC 2018: 111-116
- [c22]Farhana Parveen, Zhezhi He, Shaahin Angizi, Deliang Fan:
HielM: Highly flexible in-memory computing using STT MRAM. ASP-DAC 2018: 361-366
- [c21]Shaahin Angizi, Zhezhi He, Adnan Siraj Rakin, Deliang Fan:
CMP-PIM: an energy-efficient comparator-based processing-in-memory neural network accelerator. DAC 2018: 105:1-105:6
- [c20]Shaahin Angizi, Zhezhi He, Deliang Fan:
PIMA-logic: a novel processing-in-memory architecture for highly flexible and energy-efficient logic computation. DAC 2018: 162:1-162:6
- [c19]Shaahin Angizi, Zhezhi He, Yu Bai, Jie Han, Mingjie Lin, Ronald F. DeMara, Deliang Fan:
Leveraging Spintronic Devices for Efficient Approximate Logic and Stochastic Neural Networks. ACM Great Lakes Symposium on VLSI 2018: 397-402
- [c18]Shaahin Angizi, Zhezhi He, Deliang Fan:
DIMA: a depthwise CNN in-memory accelerator. ICCAD 2018: 122
- [c17]Adnan Siraj Rakin, Shaahin Angizi, Zhezhi He, Deliang Fan:
PIM-TGAN: A Processing-in-Memory Accelerator for Ternary Generative Adversarial Networks. ICCD 2018: 266-273
- [c16]Li Yang, Zhezhi He, Deliang Fan:
A Fully Onchip Binarized Convolutional Neural Network FPGA Impelmentation with Accurate Inference. ISLPED 2018: 50:1-50:6
- [c15]Zhezhi He, Shaahin Angizi, Adnan Siraj Rakin, Deliang Fan:
BD-NET: A Multiplication-Less DNN with Binarized Depthwise Separable Convolution. ISVLSI 2018: 130-135
- [c14]Zhezhi He, Shaahin Angizi, Deliang Fan:
Accelerating Low Bit-Width Deep Convolution Neural Network in MRAM. ISVLSI 2018: 533-538
- 2017
- [c13]Zhezhi He, Deliang Fan:
A tunable magnetic skyrmion neuron cluster for energy efficient artificial neural network. DATE 2017: 350-355
- [c12]Shaahin Angizi, Zhezhi He, Deliang Fan:
Energy Efficient In-Memory Computing Platform Based on 4-Terminal Spin Hall Effect-Driven Domain Wall Motion Devices. ACM Great Lakes Symposium on VLSI 2017: 77-82
- [c11]Zhezhi He, Shaahin Angizi, Farhana Parveen, Deliang Fan:
Leveraging Dual-Mode Magnetic Crossbar for Ultra-low Energy In-memory Data Encryption. ACM Great Lakes Symposium on VLSI 2017: 83-88
- [c10]Zhezhi He, Shaahin Angizi, Deliang Fan:
Exploring STT-MRAM Based In-Memory Computing Paradigm with Application of Image Edge Extraction. ICCD 2017: 439-446
- [c9]Farhana Parveen, Shaahin Angizi, Zhezhi He, Deliang Fan:
Hybrid polymorphic logic gate using 6 terminal magnetic domain wall motion device. ISCAS 2017: 1-4
- [c8]Farhana Parveen, Shaahin Angizi, Zhezhi He, Deliang Fan:
Low power in-memory computing based on dual-mode SOT-MRAM. ISLPED 2017: 1-6
- [c7]Shaahin Angizi, Zhezhi He, Ronald F. DeMara, Deliang Fan:
Composite spintronic accuracy-configurable adder for low power Digital Signal Processing. ISQED 2017: 391-396
- [c6]Shaahin Angizi, Zhezhi He, Farhana Parveen, Deliang Fan:
RIMPA: A New Reconfigurable Dual-Mode In-Memory Processing Architecture with Spin Hall Effect-Driven Domain Wall Motion Device. ISVLSI 2017: 45-50
- [c5]Farhana Parveen, Zhezhi He, Shaahin Angizi, Deliang Fan:
Hybrid Polymorphic Logic Gate with 5-Terminal Magnetic Domain Wall Motion Device. ISVLSI 2017: 152-157
- [c4]Deliang Fan, Shaahin Angizi, Zhezhi He:
In-Memory Computing with Spintronic Devices. ISVLSI 2017: 683-688
- [c3]Deliang Fan, Zhezhi He, Shaahin Angizi:
Leveraging spintronic devices for ultra-low power in-memory computing: Logic and neural network. MWSCAS 2017: 1109-1112
- [c2]Zhezhi He, Shaahin Angizi, Farhana Parveen, Deliang Fan:
High performance and energy-efficient in-memory computing architecture based on SOT-MRAM. NANOARCH 2017: 97-102
- 2016
- [c1]Zhezhi He, Deliang Fan:
A Low Power Current-Mode Flash ADC with Spin Hall Effect based Multi-Threshold Comparator. ISLPED 2016: 314-319
Informal and Other Publications
- 2024
- [i26]Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, Hao Zhang:
CLLMs: Consistency Large Language Models. CoRR abs/2403.00835 (2024)
- [i25]Kang You, Zekai Xu, Chen Nie, Zhijie Deng, Qinghai Guo, Xiang Wang, Zhezhi He:
SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN. CoRR abs/2406.03470 (2024)
- [i24]Zekai Xu, Kang You, Qinghai Guo, Xiang Wang, Zhezhi He:
BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation. CoRR abs/2407.09083 (2024)
- [i23]Hongqiu Wu, Zekai Xu, Tianyang Xu, Shize Wei, Yan Wang, Jiale Hong, Weiqi Wu, Hai Zhao, Min Zhang, Zhezhi He:
Evolving Virtual World with Delta-Engine. CoRR abs/2408.05842 (2024)
- [i22]Jiahao Su, Kang You, Zekai Xu, Weizhi Xu, Zhezhi He:
Obtaining Optimal Spiking Neural Network in Sequence Learning via CRNN-SNN Conversion. CoRR abs/2408.09403 (2024)
- 2023
- [i21]Jingtao Li, Adnan Siraj Rakin, Xing Chen, Li Yang, Zhezhi He, Deliang Fan, Chaitali Chakrabarti:
Model Extraction Attacks on Split Federated Learning. CoRR abs/2303.08581 (2023)
- 2022
- [i20]Zhuoran Song, Yihong Xu, Zhezhi He, Li Jiang, Naifeng Jing, Xiaoyao Liang:
CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction. CoRR abs/2203.04570 (2022)
- [i19]Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti:
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. CoRR abs/2205.04007 (2022)
- 2021
- [i18]Jingtao Li, Adnan Siraj Rakin, Zhezhi He, Deliang Fan, Chaitali Chakrabarti:
RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery. CoRR abs/2101.08254 (2021)
- [i17]Fangxin Liu, Wenbo Zhao, Yilong Zhao, Zongwu Wang, Tao Yang, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang:
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network. CoRR abs/2103.01705 (2021)
- [i16]Jingtao Li, Zhezhi He, Adnan Siraj Rakin, Deliang Fan, Chaitali Chakrabarti:
NeurObfuscator: A Full-stack Obfuscation Tool to Mitigate Neural Architecture Stealing. CoRR abs/2107.09789 (2021)
- [i15]Yu Gong, Zhihan Xu, Zhezhi He, Weifeng Zhang, Xiaobing Tu, Xiaoyao Liang, Li Jiang:
N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores. CoRR abs/2112.08193 (2021)
- 2020
- [i14]Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Deliang Fan:
T-BFA: Targeted Bit-Flip Adversarial Weight Attack. CoRR abs/2007.12336 (2020)
- [i13]Li Yang, Zhezhi He, Junshan Zhang, Deliang Fan:
KSM: Fast Multiple Task Adaption via Kernel-wise Soft Mask Learning. CoRR abs/2009.05668 (2020)
- [i12]Li Yang, Zhezhi He, Yu Cao, Deliang Fan:
A Progressive Sub-Network Searching Framework for Dynamic Inference. CoRR abs/2009.05681 (2020)
- [i11]Sen Lin, Li Yang, Zhezhi He, Deliang Fan, Junshan Zhang:
MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning. CoRR abs/2011.12511 (2020)
- 2019
- [i10]Adnan Siraj Rakin, Zhezhi He, Deliang Fan:
Bit-Flip Attack: Crushing Neural Network withProgressive Bit Search. CoRR abs/1903.12269 (2019)
- [i9]Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan:
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness. CoRR abs/1905.13074 (2019)
- [i8]Yanzhi Wang, Shaokai Ye, Zhezhi He, Xiaolong Ma, Linfeng Zhang, Sheng Lin, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma:
Non-structured DNN Weight Pruning Considered Harmful. CoRR abs/1907.02124 (2019)
- [i7]Adnan Siraj Rakin, Zhezhi He, Deliang Fan:
TBT: Targeted Neural Network Attack with Bit Trojan. CoRR abs/1909.05193 (2019)
- 2018
- [i6]Adnan Siraj Rakin, Zhezhi He, Boqing Gong, Deliang Fan:
Blind Pre-Processing: A Robust Defense Method Against Adversarial Examples. CoRR abs/1802.01549 (2018)
- [i5]Zhezhi He, Boqing Gong, Deliang Fan:
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy. CoRR abs/1807.07948 (2018)
- [i4]Zhezhi He, Deliang Fan:
Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation. CoRR abs/1810.01018 (2018)
- [i3]Adnan Siraj Rakin, Zhezhi He, Deliang Fan:
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack. CoRR abs/1811.09310 (2018)
- 2017
- [i2]Zhezhi He, Shaahin Angizi, Deliang Fan:
Current Induced Dynamics of Multiple Skyrmions with Domain Wall Pair and Skyrmion-based Majority Gate Design. CoRR abs/1702.04814 (2017)
- [i1]Zhezhi He, Deliang Fan:
Developing All-Skyrmion Spiking Neural Network. CoRR abs/1705.02995 (2017)