


Fangxin Liu
2020 – today
- 2025
[j19]Ruixin Chen, Chenqiang Gao, Zhuolin Tan, Fangxin Liu, Jiayi Yu, Xinlin Li:
Dual-branch vision transformer for low-resolution action recognition. Multim. Tools Appl. 84(34): 42425-42444 (2025)
[j18]Dongjie Tang, Zijun Wu, Yun Wang, Yicheng Gu, Fangxin Liu, Zhengwei Qi:
gCom: Fine-grained Compressors in Graphics Memory of Mobile GPU. ACM Trans. Archit. Code Optim. 22(1): 34:1-34:25 (2025)
[j17]Haomin Li, Fangxin Liu, Zongwu Wang, Ning Yang, Shiyuan Huang, Xiaoyao Liang, Haibing Guan, Li Jiang:
Attack and Defense: Enhancing Robustness of Binary Hyper-Dimensional Computing. ACM Trans. Archit. Code Optim. 22(3): 85:1-85:25 (2025)
[j16]Xuhang Wang, Zhuoran Song, Chunyu Qi, Fangxin Liu, Naifeng Jing, Li Jiang, Xiaoyao Liang:
RTSA: A Run-Through Sparse Attention Framework for Video Transformer. IEEE Trans. Computers 74(6): 1949-1962 (2025)
[j15]Shiyuan Huang, Fangxin Liu, Tao Yang, Zongwu Wang, Ning Yang, Li Jiang:
SpMMPlu-Pro: An Enhanced Compiler Plug-In for Efficient SpMM and Sparsity Propagation Algorithm. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 44(2): 669-683 (2025)
[j14]Shuai Yuan, Weifeng He, Zhenhua Zhu, Fangxin Liu, Zhuoran Song, Guohao Dai, Guanghui He, Yanan Sun:
HyCTor: A Hybrid CNN-Transformer Network Accelerator With Flexible Weight/Output Stationary Dataflow and Multicore Extension. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 44(5): 1819-1832 (2025)
[j13]Jiahao Sun, Yijian Zhang, Yuzhuo Liu, Fangxin Liu, Li Jiang, Rui Yang:
A Sub-10 μs In-Memory-Search Collision Detection Accelerator Based on RRAM-TCAMs. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 44(12): 4510-4523 (2025)
[j12]Shiyuan Huang, Fangxin Liu, Tian Li, Zongwu Wang, Ning Yang, Haomin Li, Li Jiang:
STCO: Enhancing Training Efficiency via Structured Sparse Tensor Compilation Optimization. ACM Trans. Design Autom. Electr. Syst. 30(1): 1-22 (2025)
[c63]Fangxin Liu, Zongwu Wang, Ning Yang, Haomin Li, Tao Yang, Haibing Guan, Li Jiang:
Irregular Sparsity-Enabled Search-in-Memory Engine for Accelerating Spiking Neural Networks. APPT 2025: 99-109
[c62]Yilong Zhao, Fangxin Liu, Mingyu Gao, Xiaoyao Liang, Qidong Tang, Chengyang Gu, Tao Yang, Naifeng Jing, Li Jiang:
STAMP: Accelerating Second-Order DNN Training Via ReRAM-Based Processing-in-Memory Architecture. APPT 2025: 160-170
[c61]Fangxin Liu, Zongwu Wang, Peng Xu, Shiyuan Huang, Li Jiang:
Exploiting Differential-Based Data Encoding for Enhanced Query Efficiency. ASP-DAC 2025: 594-600
[c60]Haomin Li, Fangxin Liu, Zewen Sun, Zongwu Wang, Shiyuan Huang, Ning Yang, Li Jiang:
NeuronQuant: Accurate and Efficient Post-Training Quantization for Spiking Neural Networks. ASP-DAC 2025: 734-740
[c59]Tianyao Chu, Siwei Tan, Liqiang Lu, Jingwen Leng, Fangxin Liu, Congliang Lang, Yifan Guo, Jianwei Yin:
ArbiterQ: Improving QNN Convergency and Accuracy by Applying Personalized Model on Heterogeneous Quantum Devices. DAC 2025: 1-7
[c58]Fangxin Liu, Haomin Li, Zongwu Wang, Bo Zhang, Mingzhe Zhang, Shoumeng Yan, Li Jiang, Haibing Guan:
ALLMod: Exploring Area-Efficiency of LUT-based Large Number Modular Reduction via Hybrid Workloads. DAC 2025: 1-7
[c57]Fangxin Liu, Ning Yang, Zongwu Wang, Xuanpeng Zhu, Haidong Yao, Xiankui Xiong, Li Jiang, Haibing Guan:
BLOOM: Bit-Slice Framework for DNN Acceleration with Mixed-Precision. DAC 2025: 1-7
[c56]Zongwu Wang, Peng Xu, Fangxin Liu, Yiwei Hu, Qingxiao Sun, Gezi Li, Cheng Li, Xuan Wang, Li Jiang, Haibing Guan:
MILLION: MasterIng Long-Context LLM Inference Via Outlier-Immunized KV Product QuaNtization. DAC 2025: 1-7
[c55]Ning Yang, Zongwu Wang, Qingxiao Sun, Liqiang Lu, Fangxin Liu:
PISA: Efficient Precision-Slice Framework for LLMs with Adaptive Numerical Type. DAC 2025: 1-7
[c54]Haomin Li, Fangxin Liu, Zongwu Wang, Dongxu Lyu, Shiyuan Huang, Ning Yang, Qi Sun, Zhuoran Song, Li Jiang:
TAIL: Exploiting Temporal Asynchronous Execution for Efficient Spiking Neural Networks with Inter-Layer Parallelism. DATE 2025: 1-7
[c53]Fangxin Liu, Haomin Li, Zongwu Wang, Dongxu Lyu, Li Jiang:
HyperDyn: Dynamic Dimensional Masking for Efficient Hyper-Dimensional Computing. DATE 2025: 1-7
[c52]Fangxin Liu, Ning Yang, Zongwu Wang, Xuanpeng Zhu, Haidong Yao, Xiankui Xiong, Qi Sun, Li Jiang:
OPS: Outlier-Aware Precision-Slice Framework for LLM Acceleration. DATE 2025: 1-2
[c51]Zongwu Wang, Fangxin Liu, Peng Xu, Qingxiao Sun, Junping Zhao, Li Jiang:
EVASION: Efficient KV CAche CompreSsion vIa PrOduct QuaNtization. DATE 2025: 1-2
[c50]Houshu He, Gang Li, Fangxin Liu, Li Jiang, Xiaoyao Liang, Zhuoran Song:
GSArch: Breaking Memory Barriers in 3D Gaussian Splatting Training via Architectural Support. HPCA 2025: 366-379
[c49]Fangxin Liu, Shiyuan Huang, Ning Yang, Zongwu Wang, Haomin Li, Li Jiang:
CROSS: Compiler-Driven Optimization of Sparse DNNs Using Sparse/Dense Computation Kernels. HPCA 2025: 963-976
[c48]Chenning Tao, Liqiang Lu, Size Zheng, Li-Wen Chang, Minghua Shen, Hanyu Zhang, Fangxin Liu, Kaiwen Zhou, Jianwei Yin:
Qtenon: Towards Low-Latency Architecture Integration for Accelerating Hybrid Quantum-Classical Computing. ISCA 2025: 299-312
[c47]Haomin Li, Fangxin Liu, Yichi Chen, Zongwu Wang, Shiyuan Huang, Ning Yang, Dongxu Lyu, Li Jiang:
FATE: Boosting the Performance of Hyper-Dimensional Computing Intelligence with Flexible Numerical DAta TypE. ISCA 2025: 1269-1282
[i17]Fangxin Liu, Haomin Li, Zongwu Wang, Bo Zhang, Mingzhe Zhang, Shoumeng Yan, Li Jiang, Haibing Guan:
ALLMod: Exploring Area-Efficiency of LUT-based Large Number Modular Reduction via Hybrid Workloads. CoRR abs/2503.15916 (2025)
[i16]Zongwu Wang, Peng Xu, Fangxin Liu, Yiwei Hu, Qingxiao Sun, Gezi Li, Cheng Li, Xuan Wang, Li Jiang, Haibing Guan:
MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Quantization. CoRR abs/2504.03661 (2025)
[i15]Ning Yang, Fangxin Liu, Junjie Wang, Tao Yang, Kang Liu, Haibing Guan, Li Jiang:
DASH: Input-Aware Dynamic Layer Skipping for Efficient LLM Inference with Markov Decision Policies. CoRR abs/2505.17420 (2025)
[i14]Wenhao Dai, Haodong Deng, Mengfei Rong, Xinyu Yang, Hongyu Liu, Fangxin Liu, Hailong Yang, Weifeng Liu, Qingxiao Sun:
Flexible Operator Fusion for Fast Sparse Transformer with Diverse Masking on GPU. CoRR abs/2506.06095 (2025)
[i13]Fangxin Liu, Zongwu Wang, JinHong Xia, Junping Zhao, Jian Liu, Haibing Guan, Li Jiang:
FlexQuant: A Flexible and Efficient Dynamic Precision Switching Framework for LLM Quantization. CoRR abs/2506.12024 (2025)
[i12]Fangxin Liu, Ning Yang, Junping Zhao, Tao Yang, Haibing Guan, Li Jiang:
LCD: Advancing Extreme Low-Bit Clustering for Large Language Models via Knowledge Distillation. CoRR abs/2506.12038 (2025)
[i11]Fangxin Liu, Haomin Li, Bowen Zhu, Zongwu Wang, Zhuoran Song, Haibing Guan, Li Jiang:
ASDR: Exploiting Adaptive Sampling and Data Reuse for CIM-based Instant Neural Rendering. CoRR abs/2508.02304 (2025)
[i10]Yilong Zhao, Mingyu Gao, Huanchen Zhang, Fangxin Liu, Gongye Chen, He Xian, Haibing Guan, Li Jiang:
PUSHtap: PIM-based In-Memory HTAP with Unified Data Storage Format. CoRR abs/2508.02309 (2025)
[i9]Haomin Li, Fangxin Liu, Chenyang Guan, Zongwu Wang, Li Jiang, Haibing Guan:
LaMoS: Enabling Efficient Large Number Modular Multiplication through SRAM-based CiM Acceleration. CoRR abs/2511.03341 (2025)
[i8]Zhixiong Zhao, Haomin Li, Fangxin Liu, Yuncheng Lu, Zongwu Wang, Tao Yang, Li Jiang, Haibing Guan:
QUARK: Quantization-Enabled Circuit Sharing for Transformer Acceleration by Exploiting Common Patterns in Nonlinear Operations. CoRR abs/2511.06767 (2025)
- 2024
[j11]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Xiaoyao Liang, Li Jiang:
ERA-BS: Boosting the Efficiency of ReRAM-Based PIM Accelerator With Fine-Grained Bit-Level Sparsity. IEEE Trans. Computers 73(9): 2320-2334 (2024)
[j10]Fangxin Liu, Zongwu Wang, Wenbo Zhao, Ning Yang, Yongbiao Chen, Shiyuan Huang, Haomin Li, Tao Yang, Songwen Pei, Xiaoyao Liang, Li Jiang:
Exploiting Temporal-Unrolled Parallelism for Energy-Efficient SNN Acceleration. IEEE Trans. Parallel Distributed Syst. 35(10): 1749-1764 (2024)
[c46]Fangxin Liu, Yingjie Pei, Xuefei Zhang, Xiaofeng Tao:
Performance Analysis of ASTARS-Assisted Uplink Communication Networks. APCC 2024: 371-376
[c45]Fangxin Liu, Haomin Li, Ning Yang, Yichi Chen, Zongwu Wang, Tao Yang, Li Jiang:
PAAP-HD: PIM-Assisted Approximation for Efficient Hyper-Dimensional Computing. ASPDAC 2024: 46-51
[c44]Haomin Li, Fangxin Liu, Yichi Chen, Li Jiang:
HyperFeel: An Efficient Federated Learning Framework Using Hyperdimensional Computing. ASPDAC 2024: 716-721
[c43]Fangxin Liu, Haomin Li, Ning Yang, Zongwu Wang, Tao Yang, Li Jiang:
TEAS: Exploiting Spiking Activity for Temporal-wise Adaptive Spiking Neural Networks. ASPDAC 2024: 842-847
[c42]Shiyuan Huang, Fangxin Liu, Tian Li, Zongwu Wang, Haomin Li, Li Jiang:
TSTC: Enabling Efficient Training via Structured Sparse Tensor Compilation. ASPDAC 2024: 884-889
[c41]Zhuoran Song, Chunyu Qi, Fangxin Liu, Naifeng Jing, Xiaoyao Liang:
CMC: Video Transformer Acceleration via CODEC Assisted Matrix Condensing. ASPLOS (2) 2024: 201-215
[c40]Fangxin Liu, Ning Yang, Zhiyan Song, Zongwu Wang, Haomin Li, Shiyuan Huang, Zhuoran Song, Songwen Pei, Li Jiang:
INSPIRE: Accelerating Deep Neural Networks via Hardware-friendly Index-Pair Encoding. DAC 2024: 10:1-10:6
[c39]Ning Yang, Fangxin Liu, Zongwu Wang, Haomin Li, Zhuoran Song, Songwen Pei, Li Jiang:
EOS: An Energy-Oriented Attack Framework for Spiking Neural Networks. DAC 2024: 58:1-58:6
[c38]Xueyuan Liu, Zhuoran Song, Xiang Liao, Xing Li, Tao Yang, Fangxin Liu, Xiaoyao Liang:
Sava: A Spatial- and Value-Aware Accelerator for Point Cloud Transformer. DATE 2024: 1-6
[c37]Jiahao Sun, Fangxin Liu, Yijian Zhang, Li Jiang, Rui Yang:
RTSA: An RRAM-TCAM based In-Memory-Search Accelerator for Sub-100 µs Collision Detection. DATE 2024: 1-2
[c36]Fangxin Liu, Ning Yang, Haomin Li, Zongwu Wang, Zhuoran Song, Songwen Pei, Li Jiang:
SPARK: Scalable and Precision-Aware Acceleration of Neural Networks via Efficient Encoding. HPCA 2024: 1029-1042
[c35]Ning Yang, Fangxin Liu, Zongwu Wang, Zhiyan Song, Tao Yang, Li Jiang:
T-BUS: Taming Bipartite Unstructured Sparsity for Energy-Efficient DNN Acceleration. ICCD 2024: 68-75
[c34]Zongwu Wang, Fangxin Liu, Xin Tang, Li Jiang:
PS4: A Low Power SNN Accelerator with Spike Speculative Scheme. ICCD 2024: 76-83
[c33]Fangxin Liu, Ning Yang, Zhiyan Song, Zongwu Wang, Li Jiang:
HOLES: Boosting Large Language Models Efficiency with Hardware-Friendly Lossless Encoding. ICCD 2024: 207-214
[c32]Longyu Zhao, Zongwu Wang, Fangxin Liu, Li Jiang:
Ninja: A Hardware Assisted System for Accelerating Nested Address Translation. ICCD 2024: 426-433
[c31]Yilong Zhao, Mingyu Gao, Fangxin Liu, Yiwei Hu, Zongwu Wang, Han Lin, Jin Li, He Xian, Hanlin Dong, Tao Yang, Naifeng Jing, Xiaoyao Liang, Li Jiang:
UM-PIM: DRAM-based PIM with Uniform & Shared Memory Space. ISCA 2024: 644-659
[c30]Fangxin Liu, Shiyuan Huang, Longyu Zhao, Li Jiang, Zongwu Wang:
LowPASS: A Low power PIM-based accelerator with Speculative Scheme for SNNs. ISLPED 2024: 1-6
[c29]Zhuoran Song, Houshu He, Fangxin Liu, Yifan Hao, Xinkai Song, Li Jiang, Xiaoyao Liang:
SRender: Boosting Neural Radiance Field Efficiency via Sensitivity-Aware Dynamic Precision Rendering. MICRO 2024: 525-537
[c28]Zongwu Wang, Fangxin Liu, Ning Yang, Shiyuan Huang, Haomin Li, Li Jiang:
COMPASS: SRAM-Based Computing-in-Memory SNN Accelerator with Adaptive Spike Speculation. MICRO 2024: 1090-1106
[i7]Zhibai Huang, Yihan Shen, Yongchen Xie, Zhixiang Wei, Yun Wang, Fangxin Liu, Tao Song, Zhengwei Qi:
Phantom: Constraining Generative Artificial Intelligence Models for Practical Domain Specific Peripherals Trace Synthesizing. CoRR abs/2411.06376 (2024)
[i6]Zongwu Wang, Fangxin Liu, Mingshuai Li, Li Jiang:
TokenRing: An Efficient Parallelism Framework for Infinite-Context LLMs via Bidirectional Communication. CoRR abs/2412.20501 (2024)
- 2023
[j9]Tao Yang, Dongyue Li, Fei Ma, Zhuoran Song, Yilong Zhao, Jiaxi Zhang, Fangxin Liu, Li Jiang:
PASGCN: An ReRAM-Based PIM Design for GCN With Adaptively Sparsified Graphs. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 42(1): 150-163 (2023)
[j8]Fangxin Liu, Zongwu Wang, Yongbiao Chen, Zhezhi He, Tao Yang, Xiaoyao Liang, Li Jiang:
SoBS-X: Squeeze-Out Bit Sparsity for ReRAM-Crossbar-Based Neural Network Accelerator. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 42(1): 204-217 (2023)
[j7]Tao Yang, Fei Ma, Xiaoling Li, Fangxin Liu, Yilong Zhao, Zhezhi He, Li Jiang:
DTATrans: Leveraging Dynamic Token-Based Quantization With Accuracy Compensation Mechanism for Efficient Transformer Architecture. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 42(2): 509-520 (2023)
[j6]Yongbiao Chen, Sheng Zhang, Fangxin Liu, Chenggang Wu, Kaicheng Guo, Zhengwei Qi:
DVHN: A Deep Hashing Framework for Large-Scale Vehicle Re-Identification. IEEE Trans. Intell. Transp. Syst. 24(9): 9268-9280 (2023)
[j5]Zhuoran Song, Wanzhen Liu, Tao Yang, Fangxin Liu, Naifeng Jing, Xiaoyao Liang:
A Point Cloud Video Recognition Acceleration Framework Based on Tempo-Spatial Information. IEEE Trans. Parallel Distributed Syst. 34(12): 3224-3237 (2023)
[c27]Fangxin Liu, Haomin Li, Yongbiao Chen, Tao Yang, Li Jiang:
HyperAttack: An Efficient Attack Framework for HyperDimensional Computing. DAC 2023: 1-6
[c26]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Xiaokang Yang, Li Jiang:
SIMSnn: A Weight-Agnostic ReRAM-based Search-In-Memory Engine for SNN Acceleration. DATE 2023: 1-2
[c25]Tao Yang, Hui Ma, Yilong Zhao, Fangxin Liu, Zhezhi He, Xiaoli Sun, Li Jiang:
PIMPR: PIM-based Personalized Recommendation with Heterogeneous Memory Hierarchy. DATE 2023: 1-6
[c24]Haomin Li, Fangxin Liu, Yichi Chen, Li Jiang:
HyperNode: An Efficient Node Classification Framework Using HyperDimensional Computing. ICCAD 2023: 1-9
[c23]Fangxin Liu, Ning Yang, Li Jiang:
PSQ: An Automatic Search Framework for Data-Free Quantization on PIM-based Architecture. ICCD 2023: 507-514
- 2022
[j4]Zihan Jiang, Jiansong Li, Fangxin Liu, Wanling Gao, Lei Wang, Chuanxin Lan, Fei Tang, Lei Liu, Tao Li:
A systematic study on benchmarking AI inference accelerators. CCF Trans. High Perform. Comput. 4(2): 87-103 (2022)
[j3]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yilong Zhao, Tao Yang, Yiran Chen, Li Jiang:
IVQ: In-Memory Acceleration of DNN Inference Exploiting Varied Quantization. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(12): 5313-5326 (2022)
[c22]Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Li Jiang:
SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks. AAAI 2022: 1692-1701
[c21]Qidong Tang, Zhezhi He, Fangxin Liu, Zongwu Wang, Yiyuan Zhou, Yinghuan Zhang, Li Jiang:
HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine. ASP-DAC 2022: 226-231
[c20]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang:
EBSP: evolving bit sparsity patterns for hardware-friendly inference of quantized deep neural networks. DAC 2022: 259-264
[c19]Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Zhezhi He, Rui Yang, Qidong Tang, Tao Yang, Cheng Zhuo, Li Jiang:
PIM-DH: ReRAM-based processing-in-memory architecture for deep hashing acceleration. DAC 2022: 1087-1092
[c18]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Tao Yang, Zhezhi He, Xiaokang Yang, Li Jiang:
SATO: spiking neural network acceleration via temporal-oriented dataflow and architecture. DAC 2022: 1105-1110
[c17]Tao Yang, Dongyue Li, Zhuoran Song, Yilong Zhao, Fangxin Liu, Zongwu Wang, Zhezhi He, Li Jiang:
DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture. DATE 2022: 700-705
[c16]Zongwu Wang, Zhezhi He, Rui Yang, Shiquan Fan, Jie Lin, Fangxin Liu, Yueyang Jia, Chenxi Yuan, Qidong Tang, Li Jiang:
Self-Terminating Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing. DATE 2022: 1251-1256
[c15]Fangxin Liu, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Fei Dai:
DynSNN: A Dynamic Approach to Reduce Redundancy in Spiking Neural Networks. ICASSP 2022: 2130-2134
[c14]Fangxin Liu, Zongwu Wang, Wenbo Zhao, Yongbiao Chen, Tao Yang, Xiaokang Yang, Li Jiang:
Randomize and Match: Exploiting Irregular Sparsity for Energy Efficient Processing in SNNs. ICCD 2022: 451-454
[c13]Yongbiao Chen, Kaicheng Guo, Fangxin Liu, Yusheng Huang, Zhengwei Qi:
Supervised Contrastive Vehicle Quantization for Efficient Vehicle Retrieval. ICMR 2022: 44-48
[c12]Yongbiao Chen, Sheng Zhang, Fangxin Liu, Zhigang Chang, Mang Ye, Zhengwei Qi:
TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval. ICMR 2022: 127-136
[c11]Fangxin Liu, Haomin Li, Xiaokang Yang, Li Jiang:
L3E-HD: A Framework Enabling Efficient Ensemble in High-Dimensional Space for Language Tasks. SIGIR 2022: 1844-1848
[i5]Yilong Zhao, Li Jiang, Mingyu Gao, Naifeng Jing, Chengyang Gu, Qidong Tang, Fangxin Liu, Tao Yang, Xiaoyao Liang:
RePAST: A ReRAM-based PIM Accelerator for Second-order Training of DNN. CoRR abs/2210.15255 (2022)
- 2021
[j2]Tao Yang, Zhezhi He, Tengchuan Kou, Qingzheng Li, Qi Han, Haibao Yu, Fangxin Liu, Yun Liang, Li Jiang:
BISWSRBS: A Winograd-based CNN Accelerator with a Fine-grained Regular Sparsity Pattern and Mixed Precision Quantization. ACM Trans. Reconfigurable Technol. Syst. 14(4): 18:1-18:28 (2021)
[c10]Tao Yang, Dongyue Li, Yibo Han, Yilong Zhao, Fangxin Liu, Xiaoyao Liang, Zhezhi He, Li Jiang:
PIMGCN: A ReRAM-Based PIM Design for Graph Convolutional Network Acceleration. DAC 2021: 583-588
[c9]Fangxin Liu, Wenbo Zhao, Zongwu Wang, Tao Yang, Li Jiang:
IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration. ACM Great Lakes Symposium on VLSI 2021: 253-258
[c8]Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Yongbiao Chen, Li Jiang:
Bit-Transformer: Transforming Bit-level Sparsity into Higher Preformance in ReRAM-based Accelerator. ICCAD 2021: 1-9
[c7]Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Tao Yang, Jingnai Feng, Xiaoyao Liang, Li Jiang:
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network. ICCD 2021: 417-424
[c6]Fangxin Liu, Wenbo Zhao, Zhezhi He, Yanzhi Wang, Zongwu Wang, Changzhi Dai, Xiaoyao Liang, Li Jiang:
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point. ICCV 2021: 5261-5270
[i4]Fangxin Liu, Wenbo Zhao, Yilong Zhao, Zongwu Wang, Tao Yang, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang:
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network. CoRR abs/2103.01705 (2021)
[i3]Yongbiao Chen, Sheng Zhang, Fangxin Liu, Zhigang Chang, Mang Ye, Zhengwei Qi:
TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval. CoRR abs/2105.01823 (2021)
[i2]Yongbiao Chen, Sheng Zhang, Fangxin Liu, Chenggang Wu, Kaicheng Guo, Zhengwei Qi:
DVHN: A Deep Hashing Framework for Large-scale Vehicle Re-identification. CoRR abs/2112.04937 (2021)
- 2020
[c5]Jiansong Li, Zihan Jiang, Fangxin Liu, Xiao Dong, Guangli Li, Xueying Wang, Wei Cao, Lei Liu, Yanzhi Wang, Tao Li, Xiaobing Feng:
Characterizing the I/O Pipeline in the Deployment of CNNs on Commercial Accelerators. ISPA/BDCloud/SocialCom/SustainCom 2020: 137-144
[i1]Fangxin Liu, Wenbo Zhao, Yanzhi Wang, Changzhi Dai, Li Jiang:
AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks. CoRR abs/2007.03903 (2020)
2010 – 2019
- 2019
[c4]Fangxin Liu, Kunpeng Xie, Cheng Gong, Shusheng Liu, Ye Lu, Tao Li:
LHC: A Low-Power Heterogeneous Computing Method on Neural Network Accelerator. ICPADS 2019: 326-334
[c3]Jin Zhang, Xin Wei, Zhen Liu, Fangxin Liu, Tao Li, Tingjuan Lu, Xiaoli Gong:
ExploreBP: A Simulation Tool for Mobile Browser Energy Optimization. SimuTools 2019: 248-257
- 2018
[c2]Na Wang, Fei Dai, Fangxin Liu, Guomin Zhang:
Dynamic Obstacle Avoidance Planning Algorithm for UAV Based on Dubins Path. ICA3PP (2) 2018: 367-377
[c1]Na Wang, Nan Di, Fei Dai, Fangxin Liu:
UAV 3D Mobility Model Oriented to Dynamic and Uncertain Environment. ICA3PP (3) 2018: 640-650
- 2017
[j1]Ming He, Fangxin Liu, Zhuang Miao, Huan Zhou, Qiuli Chen:
A mechanism of topology optimization for underwater acoustic sensor networks based on autonomous underwater vehicles. Int. J. Distributed Sens. Networks 13(1) (2017)