Most Downloaded Articles

Open access

ISSN: 2666-6510

Graph neural networks: A review of methods and applications

Many learning tasks require dealing with graph data, which contains rich relational information among elements. Modeling physical systems, learning molecular fingerprints, predicting protein interfaces,...

A survey of transformers

Transformers have achieved great success in many artificial intelligence fields, such as natural language processing, computer vision, and audio processing. It is therefore natural that they have attracted...

Pre-trained models: Past, present and future

Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI). Owing to sophisticated pre-training...

GPT understands, too

Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU). However, our preliminary study reveals that manual discrete prompts...

Deep learning for fake news detection: A comprehensive survey

The information age enables people to obtain news online through various channels, yet it also allows false news to spread at unprecedented speed. Fake news is detrimental because it impairs...

Neural machine translation: A review of methods, resources, and tools

Machine translation (MT) is an important sub-field of natural language processing that aims to translate natural languages using computers. In recent years, end-to-end neural machine translation (NMT)...

Data augmentation approaches in natural language processing: A survey

As an effective strategy, data augmentation (DA) alleviates data scarcity in scenarios where deep learning techniques may fail. It was widely applied in computer vision and later introduced to natural language...

MindLLM: Lightweight large language model pre-training, evaluation and domain application

Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence. While general artificial...

Large language models in law: A survey

The advent of artificial intelligence (AI) has significantly impacted the traditional judicial industry. Moreover, with the recent development of AI-generated content (AIGC), AI and law have found...

How to generate popular post headlines on social media?

Posts, as important containers of user-generated content on social media, carry tremendous social influence and commercial value. As an integral component of a post, the headline has a decisive influence on...

Sarcasm detection using news headlines dataset

Sarcasm has been an elusive concept for humans. Due to its interesting linguistic properties, sarcasm detection has gained traction in the Natural Language Processing (NLP) research community in the past...

Advances and challenges in conversational recommender systems: A survey

Recommender systems exploit interaction history to estimate user preferences and have been heavily used in a wide range of industry applications. However, static recommendation models are difficult to...

CPT: Colorful Prompt Tuning for pre-trained vision-language models

Vision-Language Pre-training (VLP) models have shown promising capabilities in grounding natural language in image data, facilitating a broad range of cross-modal tasks. However, we note that there...

Neural, symbolic and neural-symbolic reasoning on knowledge graphs

Knowledge graph reasoning is a fundamental component for supporting machine learning applications such as information extraction, information retrieval, and recommendation. Since knowledge graphs can...

A comprehensive survey of entity alignment for knowledge graphs

Knowledge Graphs (KGs), as a structured form of human knowledge, manage data in a way that is easy for machines to store, recognize, and understand, and provide a rich knowledge base for different artificial...

PTR: Prompt Tuning with Rules for Text Classification

Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved promising results on some...

A comprehensive review on resolving ambiguities in natural language processing

Natural language processing is the technology behind the development of widely known AI assistants such as Siri, Natasha, and Watson. However, NLP is a diverse technology used for numerous...

Enhancing neural network classification using fractional-order activation functions

In this paper, a series of novel activation functions is presented, derived using the improved Riemann–Liouville conformable fractional derivative (RLCFD). This study investigates the use of...

A survey on complex factual question answering

Answering complex factual questions has drawn considerable attention. Researchers leverage various data sources to support complex QA, such as unstructured texts, structured knowledge graphs, and relational...

Learning towards conversational AI: A survey

Recent years have witnessed a surge of interest in the field of open-domain dialogue. Thanks to the rapid development of social media, large dialogue corpora from the Internet provide a fundamental...

An ecosystem for personal knowledge graphs: A survey and research roadmap

This paper presents an ecosystem for personal knowledge graphs (PKGs), commonly defined as resources of structured information about entities related to an individual, their attributes, and the relations...

Lawformer: A pre-trained language model for Chinese legal long documents

Legal artificial intelligence (LegalAI) aims to benefit legal systems with the technology of artificial intelligence, especially natural language processing (NLP). Recently, inspired by the success...

CPM-2: Large-scale cost-effective pre-trained language models

In recent years, the size of pre-trained language models (PLMs) has grown by leaps and bounds. However, efficiency issues of these large-scale PLMs limit their utilization in real-world scenarios. We...
