Search dblp: full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by a space; e.g., codd model
- boolean or: connect words with the pipe symbol (|); e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
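The operators above compose into a single query string. As an illustrative sketch, a small helper could assemble queries from the three active operators; note that the endpoint URL and the `q`/`format`/`h` parameter names are assumptions about dblp's public publication-search API, not something stated on this page:

```python
from urllib.parse import urlencode

# Assumed dblp publication-search endpoint (not documented on this page).
DBLP_PUBL_API = "https://dblp.org/search/publ/api"

def dblp_query_url(*terms, exact=(), any_of=(), hits=30):
    """Build a dblp full-text search URL from the documented operators.

    terms  -- prefix-search terms; space-separated terms are AND-combined
    exact  -- whole-word terms; a dollar sign is appended (graph -> graph$)
    any_of -- alternatives OR-combined with the pipe symbol (|)
    """
    parts = list(terms) + [t + "$" for t in exact]
    if any_of:
        parts.append("|".join(any_of))
    query = " ".join(parts)
    # urlencode percent-encodes the operators ($ -> %24, | -> %7C).
    return DBLP_PUBL_API + "?" + urlencode(
        {"q": query, "format": "json", "h": hits}
    )

# e.g. prefix terms "sparse training", exact word "graph",
# and "neural" OR "spiking":
url = dblp_query_url("sparse", "training",
                     exact=["graph"], any_of=["neural", "spiking"])
```

The helper only builds the URL; fetching and parsing the JSON response is left to the caller.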
Author search results: no matches
Venue search results: no matches
Publication search results: found 64 matches
- 2024
  - Yang Li, Feifei Zhao, Dongcheng Zhao, Yi Zeng: Directly training temporal Spiking Neural Network with sparse surrogate gradient. Neural Networks 179: 106499 (2024)
  - Chuang Liu, Xueqi Ma, Yibing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo P. Mandic: Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks. IEEE Trans. Neural Networks Learn. Syst. 35(10): 14903-14917 (2024)
  - Ankita Paul, Anup Das: Learning in Recurrent Spiking Neural Networks with Sparse Full-FORCE Training. ICANN (10) 2024: 365-376
  - Junbo Li, Zichen Miao, Qiang Qiu, Ruqi Zhang: Training Bayesian Neural Networks with Sparse Subspace Variational Inference. ICLR 2024
  - Patrick Stricker, Florian Röhrbein, Andreas Knoblauch: Weight Perturbation and Competitive Hebbian Plasticity for Training Sparse Excitatory Neural Networks. IJCNN 2024: 1-8
  - Xueqi Ma, Xingjun Ma, Sarah M. Erfani, James Bailey: Training Sparse Graph Neural Networks via Pruning and Sprouting. SDM 2024: 136-144
  - Fali Wang, Tianxiang Zhao, Suhang Wang: Distribution Consistency based Self-Training for Graph Neural Networks with Sparse Labels. WSDM 2024: 712-720
  - Fali Wang, Tianxiang Zhao, Suhang Wang: Distribution Consistency based Self-Training for Graph Neural Networks with Sparse Labels. CoRR abs/2401.10394 (2024)
  - Junbo Li, Zichen Miao, Qiang Qiu, Ruqi Zhang: Training Bayesian Neural Networks with Sparse Subspace Variational Inference. CoRR abs/2402.11025 (2024)
  - Jialin Zhao, Yingtao Zhang, Xinghang Li, Huaping Liu, Carlo Vittorio Cannistraci: Sparse Spectral Training and Inference on Euclidean and Hyperbolic Neural Networks. CoRR abs/2405.15481 (2024)
  - Akul Malhotra, Sumeet Kumar Gupta: Memory Faults in Activation-sparse Quantized Deep Neural Networks: Analysis and Mitigation using Sharpness-aware Training. CoRR abs/2406.10528 (2024)
  - Lujia Zhong, Shuo Huang, Yonggang Shi: ssProp: Energy-Efficient Training for Convolutional Neural Networks with Scheduled Sparse Back Propagation. CoRR abs/2408.12561 (2024)
- 2023
  - Horst Petschenig, Robert Legenstein: Quantized rewiring: hardware-aware training of sparse deep neural networks. Neuromorph. Comput. Eng. 3(2): 24006 (2023)
  - Chao Fang, Wei Sun, Aojun Zhou, Zhongfeng Wang: CEST: Computation-Efficient N:M Sparse Training for Deep Neural Networks. DATE 2023: 1-2
  - Murtiza Ali, Aditya Arie Nugraha, Karan Nathwani: Exploiting Sparse Recovery Algorithms for Semi-Supervised Training of Deep Neural Networks for Direction-of-Arrival Estimation. ICASSP 2023: 1-5
  - Zirui Liu, Shengyuan Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu: RSC: Accelerate Graph Neural Networks Training via Randomized Sparse Computations. ICML 2023: 21951-21968
  - Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh: SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks at the Edge. ICML 2023: 26215-26227
  - Ruibo Fan, Wei Wang, Xiaowen Chu: Fast Sparse GPU Kernels for Accelerated Training of Graph Neural Networks. IPDPS 2023: 501-511
  - Tiechui Yao, Jue Wang, Junyu Gu, Yumeng Shi, Fang Liu, Xiaoguang Wang, Yangang Wang, Xuebin Chi: A Sparse Matrix Optimization Method for Graph Neural Networks Training. KSEM (1) 2023: 114-123
  - Rainer Engelken: SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks. NeurIPS 2023
  - Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh: SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks. CoRR abs/2302.04852 (2023)
  - Abhisek Kundu, Naveen K. Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey: AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks. CoRR abs/2304.06941 (2023)
  - Rainer Engelken: SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks. CoRR abs/2312.17216 (2023)
- 2022
  - Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy: A brain-inspired algorithm for training highly sparse neural networks. Mach. Learn. 111(12): 4411-4452 (2022)
  - Shengwei Li, Zhiquan Lai, Dongsheng Li, Yiming Zhang, Xiangyu Ye, Yabo Duan: EmbRace: Accelerating Sparse Communication for Distributed Training of Deep Neural Networks. ICPP 2022: 7:1-7:11
  - Chuang Liu, Xueqi Ma, Yibing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo P. Mandic: Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks. CoRR abs/2207.08629 (2022)
  - Zirui Liu, Shengyuan Chen, Kaixiong Zhou, Daochen Zha, Xiao Huang, Xia Hu: RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations. CoRR abs/2210.10737 (2022)
- 2021
  - Noureldin Laban, Bassam Abdellatif, Hala M. Ebeid, Howida A. Shedeed, Mohamed F. Tolba: Sparse Pixel Training of Convolutional Neural Networks for Land Cover Classification. IEEE Access 9: 52067-52078 (2021)
  - Shiwei Liu, Iftitahu Ni'mah, Vlado Menkovski, Decebal Constantin Mocanu, Mykola Pechenizkiy: Efficient and effective training of sparse recurrent neural networks. Neural Comput. Appl. 33(15): 9625-9636 (2021)
  - Cesar F. Caiafa, Ziyao Wang, Jordi Solé-Casals, Qibin Zhao: Learning From Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding. CVPR Workshops 2021: 2621-2630
(34 more matches not shown)
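The page shows 30 of 64 matches, so the remainder is retrieved in further pages. Assuming the dblp API's offset-style paging (commonly exposed as `f` for the first hit and `h` for hits per page; this is an assumption, not stated on the page), the page boundaries can be computed like so:

```python
def page_offsets(total, page_size):
    """Yield (first_hit, hits) pairs covering `total` results
    in chunks of at most `page_size` (offset-style paging)."""
    for first in range(0, total, page_size):
        yield first, min(page_size, total - first)

# 64 matches in pages of 30 -> three requests:
# f=0 h=30, f=30 h=30, f=60 h=4
pages = list(page_offsets(64, 30))
```

Each (first, hits) pair would map onto one request's `f` and `h` query parameters under the stated assumption.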
retrieved on 2024-10-21 20:49 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license
see also: Terms of Use | Privacy Policy | Imprint