- case-insensitive prefix search: default
  e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append dollar sign ($) to word
  e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by space
  e.g., codd model
- boolean or: connect words by pipe symbol (|)
  e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
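The matching semantics described above can be sketched as a small, self-contained Python function. This is an illustrative approximation of the documented rules only, not dblp's actual search implementation; the function names `term_matches` and `query_matches` are hypothetical:

```python
def term_matches(term: str, word: str) -> bool:
    """Match one search term against one word, case-insensitively.
    A trailing '$' forces an exact word match; otherwise the term
    matches as a prefix."""
    term = term.lower()
    word = word.lower()
    if term.endswith("$"):
        return word == term[:-1]
    return word.startswith(term)


def query_matches(query: str, text: str) -> bool:
    """Evaluate a query against a text: whitespace between terms
    means boolean AND; '|' within a term means boolean OR over
    the listed alternatives."""
    words = text.split()
    for conjunct in query.split():
        alternatives = conjunct.split("|")
        if not any(term_matches(alt, w)
                   for alt in alternatives
                   for w in words):
            return False
    return True
```

For example, under these rules `query_matches("sig", "SIGIR proceedings")` is true (prefix match), `query_matches("graph$", "graphics card")` is false (exact match required), and `query_matches("graph|network", "neural network")` is true (OR alternative matches).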
Publication search results
found 100 matches
- 2024
  - Yuan Chiang, Chia-Hong Chou, Janosh Riebesell: LLaMP: Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation. CoRR abs/2401.17244 (2024)
  - Zeyu Liu, Gourav Datta, Anni Li, Peter Anthony Beerel: LMUFormer: Low Complexity Yet Powerful Spiking Model With Legendre Memory Units. CoRR abs/2402.04882 (2024)
- 2023
  - Jary Pomponi, Daniele Dántoni, Alessandro Nicolosi, Simone Scardapane: Rearranging Pixels is a Powerful Black-Box Attack for RGB and Infrared Deep Learning Models. IEEE Access 11: 11298-11306 (2023)
  - Weijia Kong, Bertrand Jern Han Wong, Harvard Wai Hann Hui, Kai Peng Lim, Yulan Wang, Limsoon Wong, Wilson Wen Bin Goh: ProJect: a powerful mixed-model missing value imputation method. Briefings Bioinform. 24(4) (2023)
  - Wujuan Zhong, Aparna Chhibber, Lan Luo, Devan V. Mehrotra, Judong Shen: A fast and powerful linear mixed model approach for genotype-environment interaction tests in large-scale GWAS. Briefings Bioinform. 24(1) (2023)
  - Xinyi Yu, Jiashun Xiao, Mingxuan Cai, Yuling Jiao, Xiang Wan, Jin Liu, Can Yang: PALM: a powerful and adaptive latent model for prioritizing risk variants with functional annotations. Bioinform. 39(2) (2023)
  - Mustafa Gürman, Bülent Bilgehan, Özlem Sabuncu, Omid Mirzaei: A powerful probabilistic model for noise analysis in medical images. Int. J. Imaging Syst. Technol. 33(3): 999-1013 (2023)
  - Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji: Distilling a Powerful Student Model via Online Knowledge Distillation. IEEE Trans. Neural Networks Learn. Syst. 34(11): 8743-8752 (2023)
  - Cong Dao Tran, Nhut Huy Pham, Anh Nguyen, Truong Son Hy, Tu Vu: ViDeBERTa: A powerful pre-trained language model for Vietnamese. EACL (Findings) 2023: 1041-1048
  - Jun Bai, Xiaofeng Zhang, Chen Li, Hanhua Hong, Xi Xu, Chenghua Lin, Wenge Rong: How to Determine the Most Powerful Pre-trained Language Model without Brute Force Fine-tuning? An Empirical Survey. EMNLP (Findings) 2023: 5369-5382
  - Zilin Xiao, Ming Gong, Jie Wu, Xingyao Zhang, Linjun Shou, Daxin Jiang: Instructed Language Models with Retrievers Are Powerful Entity Linkers. EMNLP 2023: 2267-2282
  - Diptarka Chakraborty, Sourav Chakraborty, Gunjan Kumar, Kuldeep S. Meel: Approximate Model Counting: Is SAT Oracle More Powerful Than NP Oracle? ICALP 2023: 123:1-123:17
  - Shreya Bhardwaj, Yasha Hasija: ChatGPT, a powerful language model and its potential uses in bioinformatics. ICCCNT 2023: 1-6
  - Cong Dao Tran, Nhut Huy Pham, Anh Nguyen, Truong Son Hy, Tu Vu: ViDeBERTa: A powerful pre-trained language model for Vietnamese. CoRR abs/2301.10439 (2023)
  - Mohammed Barhoush, Louis Salvail: Powerful Primitives in the Bounded Quantum Storage Model. CoRR abs/2302.05724 (2023)
  - Xiaokai Wei, Sujan K. Gonugondla, Wasi Uddin Ahmad, Shiqi Wang, Baishakhi Ray, Haifeng Qian, Xiaopeng Li, Varun Kumar, Zijian Wang, Yuchen Tian, Qing Sun, Ben Athiwaratkun, Mingyue Shang, Murali Krishna Ramanathan, Parminder Bhatia, Bing Xiang: Greener yet Powerful: Taming Large Code Generation Models with Quantization. CoRR abs/2303.05378 (2023)
  - Felix Michels, Nikolas Adaloglou, Tim Kaiser, Markus Kollmann: Contrastive Language-Image Pretrained (CLIP) Models are Powerful Out-of-Distribution Detectors. CoRR abs/2303.05828 (2023)
  - Kehui Tan, Tianqi Pang, Chenyou Fan: Towards Applying Powerful Large AI Models in Classroom Teaching: Opportunities, Challenges and Prospects. CoRR abs/2305.03433 (2023)
  - Diptarka Chakraborty, Sourav Chakraborty, Gunjan Kumar, Kuldeep S. Meel: Approximate Model Counting: Is SAT Oracle More Powerful than NP Oracle? CoRR abs/2306.10281 (2023)
  - Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, Yujiu Yang: Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers. CoRR abs/2309.08532 (2023)
  - Zilin Xiao, Ming Gong, Jie Wu, Xingyao Zhang, Linjun Shou, Jian Pei, Daxin Jiang: Instructed Language Models with Retrievers Are Powerful Entity Linkers. CoRR abs/2311.03250 (2023)
  - Thanmay Jayakumar, Fauzan Farooqui, Luqman Farooqui: Large Language Models are legal but they are not: Making the case for a powerful LegalLLM. CoRR abs/2311.08890 (2023)
  - Zhongjie Duan, Chengyu Wang, Cen Chen, Weining Qian, Jun Huang, Mingyi Jin: FastBlend: a Powerful Model-Free Toolkit Making Video Stylization Easier. CoRR abs/2311.09265 (2023)
  - Haoran Zhao, Jake Ryland Williams: Bit Cipher - A Simple yet Powerful Word Representation System that Integrates Efficiently with Language Models. CoRR abs/2311.11012 (2023)
  - Jun Bai, Xiaofeng Zhang, Chen Li, Hanhua Hong, Xi Xu, Chenghua Lin, Wenge Rong: How to Determine the Most Powerful Pre-trained Language Model without Brute Force Fine-tuning? An Empirical Survey. CoRR abs/2312.04775 (2023)
  - Henry Hengyuan Zhao, Pan Zhou, Mike Zheng Shou: Genixer: Empowering Multimodal Large Language Models as a Powerful Data Generator. CoRR abs/2312.06731 (2023)
- 2022
  - Jin Jin, Yue Wang: T2-DAG: a powerful test for differentially expressed gene pathways via graph-informed structural equation modeling. Bioinform. 38(4): 1005-1014 (2022)
  - Yuge Wang, Tianyu Liu, Hongyu Zhao: ResPAN: a powerful batch correction model for scRNA-seq data through residual adversarial networks. Bioinform. 38(16): 3942-3949 (2022)
  - François-Rémi Mazy, Pierre-Yves Longaretti: Towards a generic theoretical framework for pattern-based LUCC modeling: An accurate and powerful calibration-estimation method based on kernel density estimation. Environ. Model. Softw. 158: 105551 (2022)
  - Zhanshan (Sam) Ma: Coupling Power Laws Offers a Powerful Modeling Approach to Certain Prediction/Estimation Problems With Quantified Uncertainty. Frontiers Appl. Math. Stat. 8: 801830 (2022)
skipping 70 more matches
retrieved on 2024-04-25 08:21 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license