Journal Information
Journal of Parallel and Distributed Computing

Call For Papers
The Journal of Parallel and Distributed Computing (JPDC) is directed to researchers, scientists, engineers, educators, managers, programmers, and users of computers who have particular interests in parallel processing and/or distributed computing. The goal of the journal is to publish in a timely manner original research, critical review articles, and relevant survey papers on the theory, design, implementation, evaluation, programming, and applications of parallel and/or distributed computing systems. The journal provides an effective forum for communication among researchers and practitioners from various scientific areas working on a wide variety of problems. They share a fundamental common interest in improving the ability of parallel and distributed computer systems to solve ever more difficult and complex problems as quickly and as efficiently as possible.

The scope of the journal includes (but is not restricted to) the following topics as they relate to parallel and/or distributed computing:

• Theory of parallel and distributed computing
• Parallel algorithms and their implementation
• Innovative computer architectures
• Shared-memory multiprocessors
• Peer-to-peer systems
• Distributed sensor networks
• Pervasive computing
• Optical computing
• Software tools and environments
• Languages, compilers, and operating systems
• Fault-tolerant computing
• Applications and performance analysis
• Bioinformatics
• Cyber trust and security
• Parallel programming
• Grid computing
Last updated by Dou Sun on 2016-09-25
Special Issues
Special Issue on Tools and Techniques for End-to-End Monitoring of Quality of Service in Internet of Things Application Ecosystems
Submission Date: 2017-06-01

The Internet of Things (IoT) paradigm promises to help solve a wide range of issues related to our wellbeing. It is touted to benefit application domains including (but not limited to) smart cities, smart home systems, smart agriculture, health care monitoring, and environmental monitoring (e.g. landslides, heatwaves, flooding). Invariably, these use cases produce big data generated by different types of human media (e.g. social media sources such as Twitter, Instagram, and Facebook) and digital sensors (e.g. rain gauges, weather stations, pore pressure sensors, tilt meters).

Traditionally, the big data sets generated by IoT application ecosystems have been hosted and processed in centralized cloud datacenters (e.g. Amazon Web Services, Microsoft Azure). In recent times, however, this centralized model of cloud computing has been undergoing a paradigm shift towards a decentralized model, so that scheduling models can exploit the recent evolution of smart hardware at the network edge, such as smart gateways (e.g. Raspberry Pi 3, UDOO board, ESP8266) and network function virtualisation solutions (e.g. Cisco IOx, HP OpenFlow and Middlebox Technologies). These edge devices offer computing and storage capabilities on a smaller scale, often referred to as edge datacenters, which support the traditional cloud datacenter in tackling the future data processing and application management challenges that arise in IoT application ecosystems.

Ultimately, the success of IoT applications will depend critically on intelligent tools and techniques that can monitor and verify the correct operation of such ecosystems end to end, covering the sensors, the big data programming models, and the hardware resources available in the edge and cloud datacenters that together form an end-to-end IoT ecosystem.
Over the past 20 years, a large body of research has produced frameworks and techniques for monitoring the performance of hardware resources and applications in distributed environments (grids, clusters, clouds). Monitoring tools popular in the grid and cluster computing era included R-GMA, Hawkeye, Network Weather Service (NWS), and Monitoring and Directory Service (MDS). These tools were concerned only with performance metrics at the hardware resource level (CPU percentage, TCP/IP performance, available non-paged memory), not at the application level (e.g. event detection delay in the context of a particular IoT application). Similarly, cluster-wide monitoring frameworks (Nagios, and Ganglia as adopted by big data orchestration platforms such as YARN, Apache Hadoop, and Apache Spark) provide information about hardware resource-level metrics (cluster utilisation, CPU utilisation, memory utilisation). In the public cloud computing space, monitoring frameworks such as Amazon CloudWatch (used by Amazon Elastic MapReduce) and the Azure Fabric Controller typically monitor an entire CPU resource as a black box, and so cannot capture application-level performance metrics specific to IoT ecosystems, whereas services such as Monitis and Nimsoft can monitor application-specific performance metrics (such as web server response time).
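To make the distinction above concrete, the following is a minimal Python sketch, assuming the standard library's Unix-only `resource` module; the function names and metric choices are illustrative assumptions, not the API of any tool named above. A resource-level metric describes the machine as a black box, while an application-level metric such as event detection delay exists only in the context of the IoT application itself.

```python
import resource  # stdlib, Unix-only: per-process resource accounting


def resource_level_metrics():
    """Hardware resource-level view: the kind of metrics a grid/cluster
    monitor reports, with no knowledge of the application's semantics."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        # user + system CPU time consumed by this process, in seconds
        "cpu_seconds": usage.ru_utime + usage.ru_stime,
        # peak resident set size (kilobytes on Linux, bytes on macOS)
        "peak_rss": usage.ru_maxrss,
    }


def event_detection_delay(event_time, detection_time):
    """Application-level view: seconds between a sensor event occurring
    and the IoT application detecting it. A black-box CPU/memory monitor
    cannot observe this quantity at all."""
    return detection_time - event_time
```

For example, `event_detection_delay(10.0, 10.4)` reports a delay of roughly 0.4 seconds, regardless of how busy or idle the CPU was during that interval, which is precisely why resource-level monitors alone cannot verify end-to-end IoT behaviour.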
Last updated by Dou Sun on 2016-12-16
Special Issue on Towards the Internet of Data: Applications, Opportunities and Future Challenges
Submission Date: 2017-08-01

In the new digital era, the Internet of Things (IoT) is now a familiar concept for many, producing a vast volume of data generated by an ever-increasing network of connected devices that collect and exchange information. A key research challenge is how to manage and process these data, and how to adapt data mining and analysis techniques to the IoT. There is no simple answer to the question of where and how data should be processed, analysed and stored. In this scenario, the Internet of Data (IoD) denotes a network composed of the data entities produced by the Internet of Things (IoT). The IoD can be considered an extension of the IoT into the digital world, since the amount of data being collected is staggering, and the opportunities it creates are potentially unbounded. The IoD pursues an ambitious purpose: organizing data as an interconnected network in order to infer useful information for analysis and to create useful, customized and location-based services. Parallel and distributed computing methodologies make it possible to solve such large-scale problems and process the data efficiently. This special issue on the Internet of Data (IoD) seeks high-quality papers addressing recent advances in data storage, processing and analysis in the IoD realm, including work that exploits parallel and distributed computing techniques to manage massive volumes of data intelligently.
Last updated by Dou Sun on 2017-05-23
Related Publications
Related Conferences
CCF | CORE | QUALIS | Short Name | Full Name | Submission | Notification | Conference
b | a | a1 | HPDC | International ACM Symposium on High-Performance Parallel and Distributed Computing | 2017-01-10 | 2017-03-29 | 2017-06-26
- | - | b2 | FoIKS | International Symposium on Foundations of Information and Knowledge Systems | 2015-10-25 | 2015-12-04 | 2016-03-07
- | - | - | MIWAI | Multi-Disciplinary International Workshop on Artificial Intelligence | 2016-08-22 | 2016-09-09 | 2016-12-07
b | - | - | FUN | International Conference on Fun with Algorithms | 2012-01-23 | 2012-02-20 | 2012-06-04
b | a | - | HOT CHIPS | Symposium on High Performance Chips | 2017-04-07 | 2017-05-01 | 2017-08-20
c | b | b1 | SEC | International Conference on ICT Systems Security and Privacy Protection | 2017-01-09 | 2017-02-24 | 2017-05-29
b | - | a2 | ICCD | International Conference on Computer Design | 2017-06-09 | 2017-09-01 | 2017-11-05
c | - | a2 | ASP-DAC | Asia and South Pacific Design Automation Conference | 2016-07-08 | 2016-09-12 | 2017-01-16
c | - | b3 | ParCo | International Conference on Parallel Computing | 2015-02-28 | 2015-05-15 | 2015-09-01
b | a | a1 | ICDCS | International Conference on Distributed Computing Systems | 2016-12-05 | 2017-03-06 | 2017-06-05