Journal Information
Pattern Recognition Letters (PRL)
Impact Factor:
Call For Papers
Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition. Examples include:

• Statistical, structural, syntactic pattern recognition;
• Neural networks, machine learning, data mining;
• Discrete geometry, algebraic, graph-based techniques for pattern recognition;
• Signal analysis, image coding and processing, shape and texture analysis;
• Computer vision, robotics, remote sensing;
• Document processing, text and graphics recognition, digital libraries;
• Speech recognition, music analysis, multimedia systems;
• Natural language analysis, information retrieval;
• Biometrics, biomedical pattern analysis and information systems;
• Scientific, engineering, social and economic applications of pattern recognition;
• Special hardware architectures, software packages for pattern recognition.

We invite contributions as research reports or commentaries.

Research reports should be concise summaries of methodological inventions and findings, with strong potential for wide application.
Alternatively, they can describe significant and novel applications of an established technique that are of high reference value to the same application area and other similar areas.

Commentaries can be lecture notes, subject reviews, reports on a conference, or debates on critical issues that are of wide interest.

To serve the interests of a diverse readership, the introduction should provide a concise summary of the background of the work in accepted pattern recognition terminology, state the unique contributions, and discuss broader impacts of the work outside the immediate subject area. All contributions are reviewed on the basis of scientific merit and breadth of potential interest.
Last updated by Dou Sun in 2021-03-20
Special Issues
Submission Date: 2021-04-20

DESCRIPTION COVID-19, the disease caused by the SARS-CoV-2 virus, was detected in December 2019 and declared a global pandemic by the WHO on 11 March 2020. Artificial Intelligence (AI) is a highly effective tool for fighting the COVID-19 pandemic. For present purposes, AI can be described as Machine Learning (ML), Natural Language Processing (NLP), and Computer Vision applications that teach computers to use large data-based models for pattern recognition, description, and prediction. Such capabilities can help in identifying (diagnosing), forecasting, and describing (treating) COVID-19 infections, and in controlling socioeconomic impacts. Since the onset of the pandemic, there has consequently been a rush to use and test AI and other data analytics techniques for these purposes. The potential cost of the epidemic in lives and economic loss was terrible, and much confusion engulfed predictions of how bad it would be and how effective non-pharmaceutical and pharmaceutical solutions would prove. Reducing these uncertainties by strengthening AI, one of the most popular data analytics tools developed in the past decade, is therefore a worthy goal, and data scientists have been willing to take up the opportunity. In AI, machine learning and its subset, deep learning, are employed in various applications to solve problems that arise from uncertainty, drawing on data collected from past occurrences of the event in question. Most machine learning and deep learning algorithms are trained for the supervised learning problem, where the algorithm knows the prediction target in advance. The potential of the unsupervised learning approach, on the other hand, is quite high.
Unsupervised methods have a high ability to explore new possibilities in the outcome. Supervised learning methods are, in general, bounded by biases: the set of rules is fixed by predetermined dos and don'ts, which prohibits the consideration of other possibilities. Moreover, labelling data for supervised learning demands high effort, manual work, and time whenever labels are not already available. The primary objective of this special issue is to bring the strengths of unsupervised learning into deep learning methodologies to find solutions to COVID-19, and to improve the behaviour and nature of deep learning methods with the qualities of clustering algorithms, so that unsupervised learning can be implemented in deep learning algorithms for efficient data classification. The focus of this special issue is to provide a platform and opportunity for researchers to find solutions to the current pandemic, and to future hazards of this kind that humanity may face, using AI built on self-learning methodologies.

SUBJECT COVERAGE
Topics may include, but are not limited to, the following:
· Intelligent signal computing based on Deep Embedded Clustering (DEC)
· Evolutionary approaches to signal processing and their applications
· Architectures for real-time sensing and intelligent processing
· Auto-encoders and Restricted Boltzmann Machines for signal classification
· Real-time signal processing based on DEC
· Parallel and distributed algorithm design and implementation in signal sensing
· Analytics for multi-dimensional data
· Intelligent computing on signals for data analysis
· Real-time remote sensing signals, such as hyperspectral signal classification, content-based signal indexing and retrieval, and monitoring of natural.
· Selection of suitable unsupervised learning methodologies
· Selection of suitable and efficient deep learning methodologies
· Selection of diverse datasets and problems to test and validate the research outcomes
· Exploration of optimal deep learning methodologies for data classification

IMPORTANT DATES
Submission of manuscripts: 01 APR 2021
Submission Deadline: 20 APR 2021
Acceptance Deadline: 20 OCT 2021

GUEST EDITOR(S)
Prof. Dr. B. Nagaraj, M.E., Ph.D., MIEEE, Dean - Innovation Centre, Rathinam Group of Institutions, Coimbatore, Tamilnadu, India
Prof. Dr. Danilo Pelusi, Dept. of Communication Engineering, University of Teramo, Italy
Prof. Valentina E. Balas, Professor - Automation and Applied Informatics, Aurel Vlaicu University of Arad, Romania
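To make the unsupervised setting emphasized above concrete, the sketch below groups unlabelled points with a minimal k-means loop. This is an illustrative plain-Python toy, not a method from the call; in the deep embedded clustering approaches named in the topic list, the same assign-then-update idea operates on learned embeddings rather than raw coordinates.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    centroids = list(points[:k])  # simple deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # nearest centroid by squared Euclidean distance
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return clusters

# Two well-separated groups -- note that no labels are ever provided
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
clusters = kmeans(data, k=2)
```

The point of the sketch is the absence of a supervision signal: the grouping emerges purely from the geometry of the data, which is what the issue proposes to combine with deep feature learning.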
Last updated by Dou Sun in 2021-04-13
Submission Date: 2021-05-20

The rapid increase in population has greatly increased the demand for and use of motorized vehicles in all areas, and this growth in motor vehicle use has substantially raised the rate of road accidents over the recent decade. Furthermore, injuries, disabilities, and deaths due to fatal road accidents have been increasing every year despite the safety measures introduced for public and private transportation systems. Vehicle congestion, driving under the influence of alcohol or drugs, distracted driving, street racing, faulty design of cars or traffic lights, tailgating, running red lights and stop signs, improper turns, and driving in the wrong direction are some of the real causes of accidents across the globe. Many advanced surveillance systems have been implemented for road safety, yet the prevention of accidents remains an open problem. The sophisticated vehicle-monitoring and traffic-surveillance systems already in place should be used to prevent accidents from occurring; however, real-time observation is difficult given the enormous volume of surveillance data streaming continuously. With the emerging trends in information and computer science, the use of innovative technologies in real time can help with accident prevention and detection. Computer vision is the technology designed to imitate how the human visual system works: digital image data from multiple surveillance systems is acquired in real time and analyzed, and any incidents such as speeding, reckless driving, or accidents are identified and reported by the system concurrently. Image classification, object detection, object tracking, semantic segmentation, and instance segmentation are computer vision techniques that, with advanced deep learning approaches, can be used in real-time accident detection and prevention processes.
Similarly, using neural networks and historical data, many anomalies can be detected in the movement of vehicles, which can also be used for the prevention of accidents. The recent developments in the use of deep learning approaches in visual recognition can be seen as a significant contribution to advanced computer vision research. Moreover, the assistance of computer vision in the surveillance of traffic for accident prevention and detection in real time would be all the more significant. The special issue on “Real-time computer vision for accident prevention and detection” invites contributions on these themes. The list of relevant topics includes, but is not limited to, the following:
· Theoretical analysis of computer vision-based visual recognition for fatal accidents
· Unsupervised, semi-supervised, and self-supervised feature learning of transportation accidents
· A study on real-time applications of computer vision and image analysis in traffic congestion
· Deep vision-based learning for accident and traffic collision reconstruction
· Future of computer vision in road safety and intelligent traffic
· Sensors and early vision for post-accident and injury phases
· Computer vision for fatigue detection and management technologies
· Applications of neural networks in transportation strategy planning and instinctive decision making
· Advanced visual learning methods for risk-based accident prevention
· Computer vision algorithms and methodologies for pre-crash analysis
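As a minimal illustration of anomaly detection in vehicle movement from historical data, the sketch below flags frames where a tracked vehicle's frame-to-frame speed deviates strongly (by z-score) from the track's own history. This is a hypothetical statistical stand-in for the neural approaches the call describes, meant only to show the shape of the problem.

```python
from statistics import mean, pstdev

def speed_anomalies(positions, fps=1.0, z_thresh=2.5):
    """Flag frame indices where the frame-to-frame speed of a 1-D track
    deviates from the track's own mean speed by more than z_thresh sigmas."""
    speeds = [abs(b - a) * fps for a, b in zip(positions, positions[1:])]
    mu, sigma = mean(speeds), pstdev(speeds)
    if sigma == 0:
        return []  # perfectly uniform motion: nothing anomalous
    return [i + 1 for i, s in enumerate(speeds) if abs(s - mu) / sigma > z_thresh]

# A vehicle moving steadily, then a sudden jump between frames 9 and 10
track = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 30]
flags = speed_anomalies(track)
```

A real system would replace the z-score with a learned model of normal motion, but the interface (a track in, a set of suspicious frames out) stays the same.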
Last updated by Dou Sun in 2020-07-30
Special Issue on Application of Pattern Recognition in Digital world: Security, Privacy and Reliability (APRDW)
Submission Date: 2021-06-20

Digital technology plays a vital role in humans’ day-to-day activity. It has made systems simpler and more powerful, and it plays a major role in social networks, communication, digital transactions, and more. The rapid development of digital technology also has downsides for the integrity of data, data privacy, and confidentiality, so there is a clear need for security, privacy, and reliability in digital technology. Pattern recognition is the computerized recognition of regularities in data and plays a vital role in the digital world. A pattern can either be observed physically or be detected mathematically by applying algorithms. Pattern recognition techniques are commonly categorized as statistical techniques, structural techniques, template matching, neural network approaches, fuzzy models, and hybrid models. A common platform is always needed where different researchers can share their views on the complicated facets of pattern recognition in the areas of security, privacy, and reliability in digital technology. This special issue explores novel concepts and practices with the long-term goal of a fully automated lifestyle fostered by the technological advances of pattern recognition across a wide spectrum of applications. We invite authors from both industry and academia to submit original research and review articles covering security, privacy, and reliability in digital technology using pattern recognition techniques. Topics include:
· Models, algorithms, and designs for reliability in digital media
· Network-assisted rate adaptation for reliability in digital media
· Reliability-based privacy in digital media
· Reliability and security in digital transactions
· Malware and virus detection for reliable digital media analytics
· Development of software tools and techniques for the integrity of data, data privacy, and confidentiality
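Of the technique families the paragraph lists, template matching is the simplest to sketch. The toy below slides a 1-D template over a signal and reports the offset with the smallest sum of squared differences; it is an illustrative example only, not a method proposed by the call (real uses would match 2-D image patches or signature patterns in network traffic).

```python
def match_template(signal, template):
    """Slide the template across the signal and return the offset that
    minimizes the sum of squared differences (SSD)."""
    best_offset, best_score = 0, float("inf")
    for off in range(len(signal) - len(template) + 1):
        score = sum((signal[off + i] - t) ** 2 for i, t in enumerate(template))
        if score < best_score:
            best_offset, best_score = off, score
    return best_offset

# The spike pattern [1, 3, 1] occurs at offset 2 in this signal
offset = match_template([0, 0, 1, 3, 1, 0, 0], [1, 3, 1])
```

The same exhaustive-scoring idea underlies, for instance, malware signature scanning, where the "template" is a known byte pattern.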
Last updated by Dou Sun in 2020-08-11
Special Issue on Few-shot Learning for Human-machine Interactions (FSL-HMI)
Submission Date: 2021-07-20

The widespread use of Web technologies, mobile technologies, and cloud computing has paved the way for a new surge of ubiquitous data available for business, human, and societal research. Nowadays, people interact with the world via various Information and Communications Technology (ICT) channels, generating a variety of data that contain valuable insights for business opportunities, personal decisions, and public policies. Machine learning has become a common component of applications in various scenarios, e.g., e-commerce, health, transport, security and forensics, sustainable resource management, and emergency and crisis management, supporting intelligent analytics, predictions, and decision-making. It has proven highly successful in data-intensive applications and has revolutionized human-machine interactions in many ways in modern society. An essential challenge in machine learning is dealing with small datasets, i.e., few-shot learning, which aims to develop learning models that can generalize rapidly from a few examples. Though challenging, few-shot learning has gained increasing popularity since its inception, with studies mostly focused on general machine learning contexts. Meanwhile, traditional human-machine interaction research has primarily focused on interaction design and local adaptation for user-friendliness, ergonomics, or efficiency. Emerging topics such as brain-computer interfaces, multimodal user interfaces, and mobile personal assistants, as new means of human-machine interaction, are still in their infancy. Few-shot learning is especially important for these new types of human-machine interaction because of the difficulty of acquiring examples with supervised information, owing to privacy, safety, expense, or ethical concerns. Although the related research is relatively new, it promises fertile ground for research and innovation.
This special issue aims to gather recent advances and novel contributions from academic researchers and industry practitioners on the vibrant topic of few-shot learning, so as to achieve the full potential of human-machine interaction applications. It calls for innovative methodological, algorithmic, and computational methods that incorporate the most recent advances in data analytics, artificial intelligence, and interaction research to solve theoretical and practical problems. It also calls for reexamining existing architectures, models, and techniques in machine learning and deep neural networks to address the challenges and advance state-of-the-art knowledge in this area. Topics of interest include, but are not limited to:
· Novel few-shot, one-shot, or zero-shot learning models and algorithms for sense-making of humans, systems, and their interactions
· Conceptual frameworks and computational designs for few-shot learning or human-centric computing
· Methods that improve the learnability, efficiency, or usability of systems that interact with humans
· Techniques to address small datasets, e.g., data imputation/augmentation, generative models, reinforcement learning, active learning
· Novel recommender systems in HCI-related aspects
· Trust, security/privacy, and performance evaluations for few-shot learning
· Interface or interaction designs based on few-shot examples that enable humans to interact with computers in novel ways
· Other technologies and applications that advocate a better understanding of, or exploit value from, human-machine interactions
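A common baseline for the few-shot setting described above is nearest-class-mean classification in an embedding space (the idea behind prototypical networks): each class is summarized by the mean of its handful of labelled "support" examples, and a query is assigned to the nearest class mean. The sketch below uses hypothetical 2-D gesture embeddings purely for illustration.

```python
def prototypes(support):
    """One 'prototype' per class: the mean of its few labelled support vectors."""
    return {label: tuple(sum(xs) / len(vecs) for xs in zip(*vecs))
            for label, vecs in support.items()}

def classify(query, protos):
    """Assign the query to the class with the nearest prototype (squared Euclidean)."""
    return min(protos,
               key=lambda c: sum((q - p) ** 2 for q, p in zip(query, protos[c])))

# Two gesture classes with only two labelled examples each (made-up embeddings)
support = {"tap": [(0.9, 0.1), (1.1, -0.1)], "swipe": [(0.0, 1.0), (0.2, 0.8)]}
protos = prototypes(support)
label = classify((1.0, 0.0), protos)
```

In a full few-shot system the embeddings would come from a network trained on related tasks; the classification rule itself needs no further training data, which is exactly what makes it attractive for new interaction modalities.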
Last updated by Dou Sun in 2020-11-03
Special Issue on Mobile and Wearable Biometrics (VSI:MWB)
Submission Date: 2021-09-20

Mobile devices such as smartphones and tablets are nowadays used daily by more than 3 billion people, with worldwide penetration expected to reach 5 billion users by 2025. Among the reasons for such astonishing growth, from the early years of mobile communications to the present day, is the fact that modern mobile devices make it possible to perform many tasks and access several services, such as taking pictures or making online payments, with extreme ease of use. As a matter of fact, the share of internet users making mobile online payments is above 30% in most regions of the world. As the next step in this technological revolution, wearable devices such as smart glasses, chestbands, and wristbands are also rapidly becoming widespread. Thanks to their ability to capture physiological signals, such as those related to heart rate, a vast number of applications is being developed for wearable platforms, ranging from activity tracking and healthcare to social sharing in the context of the Internet of Things. It should be observed that most of the services accessed and used through mobile and wearable devices typically involve providing sensitive and valuable data, such as passwords, credit card numbers, and so forth. Furthermore, the information commonly captured by the sensors with which these devices are equipped, and stored within them, is highly personal, with consequent possible security and privacy issues should unauthorized subjects try to access such content. It is therefore of paramount importance to design effective and secure mechanisms for accessing these devices. In this regard, resorting to biometric recognition systems seems a natural choice: mobile and wearable devices are in fact commonly equipped with several sensors which could be exploited to acquire discriminative traits, thus allowing authorized users to be recognized.
Furthermore, the possibility of performing biometric recognition within mobile and wearable devices makes it feasible to use them as authentication tokens, providing the means to perform decentralized access control by combining their capabilities with biometric solutions. Such an approach would, for instance, allow the design of reliable systems performing continuous recognition, monitoring the identity of a subject over a period of indefinite duration and thereby providing robustness against session hijacking, in which an intruder may seize control of an ongoing session after a legitimate user's successful login. It is worth remarking that systems implemented for such devices should be designed with the specific peculiarities of the considered scenarios in mind. For instance, compared with solutions dedicated to desktop systems, where physical characteristics are commonly preferred, approaches based on behavioural or cognitive traits might be more appropriate for mobile and wearable devices. The computational complexity of the required processing may also be a concern for systems with limited resources. This special issue therefore seeks recent and innovative developments in pattern recognition with applications to the design of biometric recognition systems for mobile and wearable devices. Topics of interest include, for example, the analysis and processing of the discriminative information (biosignals, images) which can be captured through mobile and wearable devices, the design of hardware architectures or software packages which could be effectively implemented in such environments, and the proposal of machine learning approaches requiring limited computational resources, among others.
The topics of the Special Issue include, but are not limited to:
· Mobile biometrics in the wild
· Continuous biometric recognition using wearable devices
· Sensors for wearable technology (smartwatches, smart eyewear, smart t-shirts, etc.)
· Physical and behavioral biometrics in the mobile environment
· Cognitive biometrics for wearable devices
· Age and aging effects in mobile biometrics
· Machine learning with limited computational resources
· Biometric template protection: challenges and solutions in the mobile environment
· Usability, interfaces, and human factors
· Hardware architectures and software for biometric recognition on mobile and wearable devices
· Affective computing in biometric recognition
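The verification step at the core of such systems is often just a similarity comparison between a fresh "probe" embedding and an enrolled template, which keeps the on-device cost low. The sketch below shows this with cosine similarity and a fixed acceptance threshold; the vectors and threshold are made up for illustration, and a deployed system would derive them from a trained feature extractor and an evaluated operating point.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.9):
    """Accept when the probe embedding is close enough to the enrolled template."""
    return cosine(probe, enrolled) >= threshold

# A probe close to the enrolled template is accepted; a distant one is rejected
accepted = verify((1.0, 0.05, 0.0), (1.0, 0.0, 0.0))
rejected = verify((0.0, 1.0, 0.0), (1.0, 0.0, 0.0))
```

Continuous recognition, as discussed above, amounts to repeating this check on embeddings captured throughout the session rather than only at login.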
Last updated by Dou Sun in 2020-11-03
Special Issue on Self-Learning Systems and Pattern Recognition and Exploitation (SeLSPRE)
Submission Date: 2021-10-20

Self-learning systems aim to achieve a goal, without being pre-programmed, in an environment that may be completely unknown initially. Self-learning algorithms are inspired by neuroscience and mimic the way the brain achieves cognition: they explore the environment following a trial-and-error approach, or acquire knowledge from demonstrations provided by experts. The development of such systems is pushed forward by AI technologies such as Reinforcement Learning, Inverse Reinforcement Learning, and Learning by Demonstration, and their application spans from robotics and autonomous driving to healthcare and precision medicine. This special issue focuses on pattern recognition and the subsequent exploitation of patterns by self-learning systems. The way Inverse Reinforcement Learning or Learning by Demonstration extracts patterns from demonstrated trajectories, and how such patterns are subsequently exploited by a self-learning algorithm to optimize its policy or speed up its learning process, is of interest to this special issue.
Topics of interest:
· Inverse Reinforcement Learning
· Learning-by-Demonstration and Imitation Learning
· Pattern recognition via Inverse Reinforcement Learning
· Pattern recognition from demonstrations
· Pattern exploitation in self-learning systems
· Pattern recognition in partially observable environments
· Action-state trajectory analysis for pattern recognition and reward engineering
· Pattern recognition and exploitation in multi-agent self-learning systems
· Pattern recognition and exploitation in hierarchical self-learning systems
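To ground the trial-and-error learning described above, here is a minimal tabular Q-learning sketch on a toy corridor environment (an illustrative example, not a benchmark from the call): the agent starts at state 0, can move left or right, and is rewarded only for reaching the rightmost state. No policy is pre-programmed; it emerges from interaction.

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a corridor: action 0 moves left (clamped at 0),
    action 1 moves right; reward 1 only on reaching the last state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection: explore sometimes, else exploit
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # standard temporal-difference update toward r + gamma * max Q(s')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# The learned greedy policy should move right in every non-terminal state
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(4)]
```

Inverse Reinforcement Learning, by contrast, runs this loop "backwards": it observes demonstrated trajectories and infers the reward that would make them optimal, after which the same kind of update can exploit the recovered reward.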
Last updated by Dou Sun in 2021-02-28
Special Issue on Computational Linguistics Processing in Indigenous Language (CLPIL)
Submission Date: 2021-11-20

Natural language processing (NLP) involves building models of the language environment and inferring the consequences of inter-language processing. In Machine Learning (ML) research, this has traditionally been facilitated by state-of-the-art machine translation, in which a translation model is developed and used to extract the meaning of each word of the original language. This type of model can be extended to several different languages, and for this reason it can be useful where words identical in meaning or form are found to share a common meaning in each language. Textual entailment helps facilitate natural language interpretation, enables computer applications, and characterizes how text is interpreted by natural language devices. Automated algorithms for lexical entailment tasks may be extended to different applications in the processing of natural languages. In particular, automated parsing tools have a crucial role to play in developing a general-purpose computational approach to natural language processing. The aim of this virtual special issue is to investigate the computational complexity of indigenous languages and to provide a solution from an obvious starting point: how do we solve a classification problem? Natural language processing is the application of artificial intelligence to human language. Additionally, this issue will give an introduction to the mathematical machinery behind the classification problem for indigenous languages. The purpose of this issue is to summarize research techniques related to future trends in Artificial Intelligence (AI), computational engineering, information science, and Natural Language Processing, and to present several interesting open problems with future research directions for data engineering, computational engineering, data science, multilingual models, social media mining, and big data.
Topics of Interest
Potential topics include, but are not limited to, the following:
· Automated language translation and grammar correction
· Computational language processing
· Distributional models and semantics
· Evolutionary language modeling for pattern recognition
· Indigenous language problems
· Lexical knowledge representation and pattern recognition
· Multilingual and cross-lingual distributional representations and universal language models
· Multimodal NLP, text-image and image-text processing
· Multimodal NLP: audio, image, video
· Natural Language Toolkit for virtual libraries
· NLP for remote-access e-resources
· Opinion mining and pattern recognition on social media
· Ontology-based language pattern recognition
· Pattern recognition using AI/ML
· Pattern recognition for virtual, augmented, and mixed reality languages
· Privacy and security on language libraries
· Sentiment pattern analysis through NLP for document management
· Syntactic, semantic, and context parsing and analysis
· Speech synthesis and pattern recognition
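One concrete instance of the classification question posed above is language identification, which matters for indigenous languages precisely because labelled resources are scarce. The sketch below uses character-bigram profiles, a classic lightweight feature that needs no language-specific tooling; the sample sentences and the "maori" profile here are made-up toy data, purely for illustration.

```python
from collections import Counter

def bigram_profile(text):
    """Character-bigram counts: a cheap representation that requires no
    tokenizer, lexicon, or other language-specific resources."""
    text = text.lower()
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def identify(sample, profiles):
    """Pick the language whose profile shares the most bigram mass with the sample."""
    sp = bigram_profile(sample)
    return max(profiles,
               key=lambda lang: sum(min(sp[g], profiles[lang][g]) for g in sp))

# Tiny reference profiles built from made-up example sentences
profiles = {
    "english": bigram_profile("the quick brown fox jumps over the lazy dog again"),
    "maori": bigram_profile("ko te reo maori te reo o tenei whenua e te whanau"),
}
guess = identify("over the lazy fox", profiles)
```

Richer models (multilingual embeddings, universal language models, as listed above) replace the bigram counts with learned features, but the classify-by-nearest-profile structure is the same.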
Last updated by Dou Sun in 2021-02-28
Related Conferences
Ratings (CCF/CORE/QUALIS) | Short | Full Name | Submission | Notification | Conference
- | DMA | International Conference on Data Mining and Applications | 2021-03-13 | 2021-03-20 | 2021-03-27
b, b2 | IJCNLP | International Joint Conference on Natural Language Processing | 2021-02-01 | 2021-05-05 | 2021-08-01
- | IRCDL | Italian Research Conference on Digital Libraries | 2018-10-05 | 2018-10-31 | 2019-01-31
- | ICIN | International ICIN Conference Innovations in Clouds, Internet and Networks | 2020-11-01 | 2020-12-15 | 2021-03-01
- | ICBIP | International Conference on Biomedical Signal and Image Processing | 2021-04-30 | 2021-05-20 | 2021-08-20
c, b1 | BIBE | International Conference on Bioinformatics & Bioengineering | 2015-08-30 | 2015-09-15 | 2015-11-02
- | ICICT'' | International Conference on Information and Computer Technologies | 2019-11-25 | 2019-12-15 | 2020-03-09
- | BRAINS | Conference on Blockchain Research & Applications for Innovative Networks and Services | 2021-04-12 | 2021-05-31 | 2021-09-27
- | VizSec | IEEE Symposium on Visualization for Cyber Security | 2018-07-22 | 2018-08-15 | 2018-10-22
- | FDG | International Conference on the Foundations of Digital Games | 2020-01-13 | 2020-03-09 | 2020-09-15