Journal Information
Future Generation Computer Systems (FGCS)
https://www.sciencedirect.com/journal/future-generation-computer-systems
Impact Factor:
6.200
Publisher:
Elsevier
ISSN:
0167-739X
Viewed:
82045
Tracked:
179
Call For Papers
The International Journal of eScience

Computing infrastructures and systems are developing rapidly, and so are novel ways to map, control and execute scientific applications, which are becoming ever more complex and collaborative.
Computational and storage capabilities, databases, sensors, and people need true collaborative tools. In recent years there has been an explosion of new theory and technological progress supporting a better understanding of these wide-area, fully distributed sensing and computing systems. Big Data in all its guises requires novel methods and infrastructures to register, analyze and distill meaning.

FGCS aims to lead the way in advances in distributed systems, collaborative environments, high-performance and high-throughput computing, and Big Data on such infrastructures as grids, clouds and the Internet of Things (IoT).

The Aims and Scope of FGCS cover new developments in:

[1] Applications and application support:

    Novel applications for novel e-infrastructures
    Complex workflow applications
    Big Data registration, processing and analyses
    Problem solving environments and virtual laboratories
    Semantic and knowledge based systems
    Collaborative infrastructures and virtual organizations
    Methods for high performance and high throughput computing
    Urgent computing
    Scientific, industrial, social and educational implications
    Education

[2] Methods and tools:

    Tools for infrastructure development and monitoring
    Distributed dynamic resource management and scheduling
    Information management
    Protocols and emerging standards
    Methods and tools for internet computing
    Security aspects

[3] Theory:

    Process specification
    Program and algorithm design
    Theoretical aspects of large scale communication and computation
    Scaling and performance theory
    Protocols and their verification
Last updated by Dou Sun in 2024-07-11
Special Issues
Special Issue on AIFI – Artificial Intelligence for Interoperability
Submission Date: 2024-11-15

Motivation and Scope

With the increasing number of data processing technologies, interoperability represents one of the biggest challenges in gathering information from heterogeneous sources. Data collection involves different hardware and software solutions, the adoption of different formats, and the sharing of different meanings. Data interoperability encompasses how diverse datasets are exchanged, merged or aggregated in seamless and meaningful ways, enabling the extraction of knowledge that can be inferred from the whole dataset but not from single sources. While research in this domain has explored diverse strategies, the advent of Artificial Intelligence (AI) represents a new frontier in enhancing data interoperability. AI can intervene at different steps of the data processing lifecycle: at acquisition time, to adapt data collection for future use or compatibility with other technologies; during storage, to perform semantic enrichment and alignment with a broader dataset; and during data analysis, to federate knowledge from multiple repositories. This special issue explores diverse approaches, challenges, and benefits of AI-driven data interoperability, focusing on the domains of the Internet of Things, Big Data, and Knowledge Graphs. It can provide valuable insights into how AI can enhance data interoperability, enabling efficient use of diverse datasets to drive innovation and ease knowledge discovery. (A small illustrative sketch of schema alignment follows the dates below.)

The topics of this special issue include (but are not limited to):

    Middleware Strategies empowered by AI
    AI-Driven Standardization and Adaptive Data Formats
    Federated Knowledge Extraction
    Interoperability solutions based on Generative AI
    Interoperable architectures, protocols, and standards for large-scale IoT deployments enabled by AI
    Automatic design and integration of IoT standards, such as the W3C Web of Things (WoT) specifications, powered by AI techniques
    Methods and algorithms to improve interoperability at the knowledge level
    AI solutions for blockchain interoperability
    Explainable AI for enhanced interoperability
    AI-based data privacy and security in interoperable systems
    AI-driven ontology alignment and semantic interoperability

Guest Editors

    Luca Sciullo, University of Bologna, Italy, luca.sciullo@unibo.it
    Ivan Zyrianoff, University of Bologna, Italy, ivandimitry.ribeiro@unibo.it
    Ronaldo C. Prati, Federal University of ABC, Brazil, ronaldo.prati@ufabc.edu.br
    Lionel Medini, University of Claude Bernard Lyon 1, France, lionel.medini@liris.cnrs.fr

Important Dates

    Submission portal opens: May 15, 2024
    Deadline for paper submission: November 15, 2024
    Latest acceptance deadline for all papers: March 15, 2025
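As a toy illustration of the alignment step mentioned in the scope above, the following minimal Python sketch matches field names across two heterogeneous schemas. It is only a sketch under stated assumptions: a real AI-driven system would use learned embeddings (e.g., from a language model), whereas here character-trigram vectors stand in for embeddings, and all field names are hypothetical.

    # Sketch: schema alignment via similarity of field-name vectors.
    # Trigram vectors are a stand-in for learned embeddings (assumption).
    from collections import Counter
    from math import sqrt

    def trigram_vector(name):
        """Represent a field name as a bag of character trigrams."""
        s = "  " + name.lower() + " "
        return Counter(s[i:i + 3] for i in range(len(s) - 2))

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u)
        norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
        return dot / norm if norm else 0.0

    def align(schema_a, schema_b, threshold=0.3):
        """Propose field correspondences whose similarity clears the threshold."""
        pairs = []
        for a in schema_a:
            best = max(schema_b, key=lambda b: cosine(trigram_vector(a), trigram_vector(b)))
            score = cosine(trigram_vector(a), trigram_vector(best))
            if score >= threshold:
                pairs.append((a, best, round(score, 2)))
        return pairs

    # Hypothetical field names from two IoT data sources.
    print(align(["temp_celsius", "device_id", "timestamp"],
                ["temperature_c", "deviceIdentifier", "time_stamp"]))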
Last updated by Dou Sun in 2024-06-02
Special Issue on Explainable Artificial Intelligence in Drug Discovery and Development
Submission Date: 2024-12-15

Motivation and Scope

Artificial Intelligence (AI) has recently revolutionized the field of drug discovery and development, achieving breakthroughs in areas such as molecular design, chemical synthesis planning, protein structure prediction, and macromolecular target identification. Despite the various computational methods proposed to address practical challenges, the complexity of these algorithms often results in limited explainability, hindering our ability to understand and explain the models' underlying mechanisms. Given the rapid advancement of AI in drug discovery and related fields, there is an increasing demand for methods that help us understand and interpret the underlying models. Consequently, proposing Explainable Artificial Intelligence (XAI) methods that address the lack of explainability in deep learning models and enhance human reasoning and decision-making has become imperative. This special issue aims to gather papers that integrate and apply advanced XAI algorithms to the most fundamental questions in drug discovery and development, including drug repositioning, potential drug target identification, and prediction of small-molecule target interactions and binding affinities. We expect the articles in this special issue to advance drug discovery methodology while providing interesting insights or new biological observations. (A small illustrative sketch of one XAI technique follows the dates below.)

The topics of this special issue include (but are not limited to):

    Prediction of drug properties with XAI
    Explaining drug-drug/target interactions through XAI
    Development of explainable large language models for drug discovery
    XAI for drug and target feature representation
    XAI for ab initio drug design
    XAI for virtual screening of drugs

Guest Editors

    Leyi Wei, Shandong University, China, weileyi@sdu.edu.cn
    Balachandran Manavalan, Sungkyunkwan University, South Korea, bala2022@skku.edu
    Xiucai Ye, University of Tsukuba, Japan, yexiucai@cs.tsukuba.ac.jp
    Dariusz Mrozek, Silesian University of Technology, Poland, Dariusz.Mrozek@polsl.pl

Important Dates

    Submission portal opens: March 20, 2024
    Deadline for paper submission: December 15, 2024
    Latest acceptance deadline for all papers: March 1, 2025
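Occlusion is among the simplest XAI methods in the spirit of this call: score each input feature by how much masking it changes a model's prediction. The sketch below is a hedged toy assuming a hypothetical fingerprint-based affinity predictor with made-up weights; any black-box model could be substituted.

    # Sketch: occlusion-based feature attribution (a basic XAI method).
    def predict_affinity(fingerprint):
        """Toy stand-in for a trained binding-affinity model (weights are made up)."""
        weights = [0.9, -0.2, 0.05, 0.7, -0.6]
        return sum(w * b for w, b in zip(weights, fingerprint))

    def occlusion_attribution(model, x, baseline=0):
        base = model(x)
        scores = []
        for i in range(len(x)):
            masked = list(x)
            masked[i] = baseline                 # occlude one fingerprint bit
            scores.append(base - model(masked))  # contribution of bit i
        return scores

    fp = [1, 0, 1, 1, 1]  # hypothetical molecular fingerprint
    for i, s in enumerate(occlusion_attribution(predict_affinity, fp)):
        print("bit", i, "contribution", round(s, 2))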
Last updated by Dou Sun in 2024-06-02
Special Issue on Approximate Computing: the need for efficient and sustainable computing
Submission Date: 2025-01-31

Motivation and Scope

Today, computing systems face unprecedented computational demands. They serve as bridges between the digital and physical world, processing vast data from diverse sources. Our digital world constantly produces an immense volume of data; according to recent estimates, millions of terabytes are created each day. To handle this volume, increasingly sophisticated yet resource-constrained devices are deployed at the edge, where energy and power efficiency take center stage. Additionally, the growth of modern AI models, particularly neural networks, has led to enormous computational and power demands: for example, GPT-3, with 175 billion parameters, and BERT-large, with 340 million parameters, incur high energy costs for training and inference. Approximate Computing (AxC) provides a promising solution: by intentionally allowing slight inaccuracies in computations, AxC significantly reduces overhead (including energy, area, and latency) while preserving practical accuracy levels. (A small illustrative sketch of one such technique follows the dates below.) These paradigms find applications across several domains. With the intent of navigating the intricate balance between accuracy, reliability, and energy efficiency, exploring AxC techniques becomes crucial. The proposed Special Issue (SI) investigates the intersection of energy-efficient computing and the accuracy of state-of-the-art workloads, shedding light on innovative approaches and practical implementations.

The potential areas of interest for the proposed SI include, but are not limited to, the following topics:

    Approximation for Deep Learning applications, including Large Language Models (LLMs)
    Approximation techniques for emerging processor and memory technologies
    Approximation-induced error modeling and propagation
    Approximation in edge computing applications
    Approximation in HPC and embedded systems
    Approximation in Foundation Models
    Approximation in reconfigurable computing
    Architectural support for approximation
    Cross-layer approximate computing
    Hardware/software co-design of approximate systems
    Dependability of approximate circuits and systems
    Design automation of approximate architectures
    Design of approximate reconfigurable architectures
    Error-resilient Near-Threshold Computing
    Methods for monitoring and controlling approximation quality
    Modeling, specification, and verification of approximate circuits and systems
    Safety and reliability applications of approximate computing
    Security in the context of approximation
    Software-based fault-tolerant techniques for approximate computing
    Test and fault tolerance of approximate systems

Guest Editors

    Annachiara Ruospo, Politecnico di Torino, Italy, annachiara.ruospo@polito.it
    Salvatore Barone, University of Naples Federico II, Italy, salvatore.barone@unina.it
    Jorge Castro-Godinez, School of Electronics Engineering, Instituto Tecnologico de Costa Rica, Costa Rica, jocastro@itcr.ac.cr

Important Dates

    Submission portal opens: November 1, 2024
    Deadline for paper submission: January 31, 2025
    Latest acceptance deadline for all papers: May 31, 2025
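To make the accuracy-for-efficiency trade concrete, here is a minimal sketch of loop perforation, one classic AxC technique: skip a fraction of loop iterations to cut work at a small, measurable accuracy cost. The workload (the mean of a large random sample) is purely illustrative.

    # Sketch: loop perforation - do 1/skip of the work, accept a small error.
    import random

    def mean_exact(xs):
        return sum(xs) / len(xs)

    def mean_perforated(xs, skip=4):
        """Visit only every `skip`-th element, then average the subset."""
        subset = xs[::skip]  # 1/skip of the additions
        return sum(subset) / len(subset)

    random.seed(0)
    data = [random.gauss(10.0, 2.0) for _ in range(100_000)]
    exact = mean_exact(data)
    approx = mean_perforated(data, skip=4)
    print("exact:", round(exact, 4), "approx:", round(approx, 4),
          "relative error:", round(abs(approx - exact) / exact, 6))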
Last updated by Dou Sun in 2024-09-28
Special Issue on On-device Artificial Intelligence solutions with applications on Smart Environments
Submission Date: 2025-02-25

Motivation and Scope

Recent advancements in Artificial Intelligence (AI) and increasing computational power have acted as a catalyst for the widespread diffusion of Intelligent Cyber-Physical Systems (ICPSs) as a novel way to run smart applications with a "reasoning" component. Unfortunately, the limited hardware capabilities of these devices pose significant limitations on the complexity of the tasks and the Deep Learning models that can be run effectively. Over the years, solutions such as weight compression or quantization have been proposed to address this issue, but they usually require careful tuning and most of the time consist of a post-training operation. (A small illustrative sketch of quantization follows the dates below.) From the very beginning, the training of complex Deep Learning models has been reserved for powerful machines with large computing capabilities (typically in the Cloud), limiting the Edge to inference only. However, these solutions fall short when latency, security, and high customization become key prerequisites. In this context emerges the need for novel methods to deliver intelligence into an embedded system without the data leaving the device. Originally born as a complementary technology, On-device AI is expected to become a hot topic in the coming years as a new paradigm where both training and inference are performed on the same device. If, on the one hand, running intelligent algorithms on these systems is a challenging task, on the other, the benefits in response time and energy efficiency derived from this technology will be the foundation for a novel type of "reasoning" system. To this aim, novel architectures and frameworks should be explored to enable access to tailored AI-based services. Considering a scenario where the Edge potentially stores sensitive data (which should never traverse the Internet), it is evident that these devices could become targets of attacks by malicious users. In this sense, privacy and security represent other key elements to be carefully considered and implemented. This special issue aims to promote original, unpublished, high-quality research on On-device AI solutions applied to Smart Environments and Industry 4.0 contexts.

The topics of interest include, but are not limited to:

    On-device training solutions
    On-device AI applications
    Federated Learning training and inference strategies on Edge devices
    AI Intelligent Systems
    AI for Microcontrollers
    AI applications at the Edge
    AI methods for Industrial applications
    AI-based services at the Edge
    Hardware-efficient Deep Learning applications
    Energy-efficient Deep Learning algorithms
    Privacy and Security for Deep Learning
    Comparative analysis of on-device AI frameworks
    Implementation case studies
    Low-power AI applications and methods
    Lightweight AI algorithms for Edge devices
    Edge architectures and frameworks for AI

Important Dates

    Submission portal opens: July 25, 2024
    Deadline for paper submission: February 25, 2025
    Latest acceptance deadline for all papers: June 30, 2025
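As a concrete example of the quantization mentioned in the scope above, here is a minimal post-training weight-quantization sketch: float weights are mapped to int8-range levels with a single per-tensor scale, a common first step before on-device deployment. The weight values are hypothetical.

    # Sketch: symmetric per-tensor post-training quantization to int8 levels.
    def quantize_int8(weights):
        """Map float weights to integers in [-127, 127] plus one scale factor."""
        scale = max(abs(w) for w in weights) / 127.0 or 1.0
        q = [max(-127, min(127, round(w / scale))) for w in weights]
        return q, scale

    def dequantize(q, scale):
        return [v * scale for v in q]

    w = [0.12, -0.05, 0.31, -0.27, 0.002]  # hypothetical layer weights
    q, scale = quantize_int8(w)
    err = max(abs(a - b) for a, b in zip(w, dequantize(q, scale)))
    print("scale:", scale, "max abs error:", err)  # error bounded by ~scale/2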
Last updated by Dou Sun in 2024-07-11
Special Issue on Generative AI in Cybersecurity
Submission Date: 2025-05-15

Motivation and Scope

The world of cybersecurity is changing rapidly, and the integration of Generative Artificial Intelligence (GenAI) represents a major transition in defense systems as well as attack tactics. Over the last decade, AI developments, especially through tools such as ChatGPT, Gemini, and DALL-E, have permeated several sectors, improving operational efficiency and enabling innovative approaches. This transformative technology is now at the center stage of cybersecurity, offering unprecedented possibilities and new challenges. Generative AI's power transcends traditional use cases, fortifying defenses but also creating fresh angles for cyber threats. This special issue intends to examine the multifaceted influence of GenAI on cybersecurity by providing a comprehensive understanding of its potential to transform threat detection, mitigation, and response strategies. In particular, we are looking for ground-breaking studies that address topics including vulnerability assessment, automated hacking, ransomware and malware generation, and automation in cyber-defense mechanisms. There is also a need for papers examining the ethical concerns surrounding the use of GenAI within the cybersecurity landscape, promoting a balanced approach toward this potent tool.

Topics of interest include, but are not limited to:

    Vulnerability Assessment: Enhancing detection and assessment methodologies with GenAI
    Social Engineering and Phishing Attacks: Crafting sophisticated social engineering attacks and developing prevention strategies
    Automated Hacking and Attack Payload Generation: Automating hacking processes and generating complex attack payloads
    Ransomware and Malware Code Generation: Creating and detecting advanced malicious software
    Polymorphic Malware Generation: Generating and neutralizing dynamic, AI-generated threats
    Cyberdefense Automation: Automating and enhancing defense mechanisms through AI integration
    Cybersecurity Reporting and Threat Intelligence: Leveraging AI for advanced threat intelligence and proactive defense
    Secure Code Generation and Detection: Employing AI for secure code generation and vulnerability detection
    Identification of Cyber Attacks: Real-time attack identification and response using AI
    Developing Ethical Guidelines: Establishing ethical norms for AI deployment in cybersecurity
    Enhancing Cybersecurity Technologies: Augmenting existing tools and methodologies with AI
    Incident Response Guidance: Utilizing AI in incident response and management
    Malware Detection: Advancing detection techniques with AI
    Social, Legal, and Ethical Implications of Generative AI: Comprehensive analysis of societal impacts and ethical considerations

Guest Editors

    S. Leili Mirtaheri, University of Calabria, Italy, leili.mirtaheri@dimes.unical.it
    Andrea Pugliese, University of Calabria, Italy, andrea.pugliese@unical.it
    Valerio Pascucci, University of Utah, United States of America, pascucci@acm.org

Important Dates

    Submission portal opens: November 1, 2024
    Deadline for paper submission: May 15, 2025
    Latest acceptance deadline for all papers: September 15, 2025
Last updated by Dou Sun in 2024-09-28
Special Issue on High-performance Computing Heterogeneous Systems and Subsystems
Submission Date: 2025-05-30

Motivation and Scope

High-performance computing (HPC) is at a pivotal juncture, characterized by significant advancements in computing technologies and architectural features. This special issue explores the field's latest advancements, challenges, and innovations. Heterogeneous HPC systems integrate diverse computational resources, including CPUs, GPUs, FPGAs, and other specialized accelerators, to deliver superior performance for various applications. To harness their potential fully, these systems require novel approaches to resource management, scheduling, programming models, and performance optimization. (A small illustrative sketch of cost-based task placement follows the dates below.) Combining cutting-edge research and practical insights, this special issue provides a comprehensive overview of the current state and future directions of heterogeneous HPC systems. It is a valuable resource for researchers, practitioners, and policymakers interested in leveraging heterogeneous computing to solve complex scientific, engineering, and data-intensive problems more efficiently and effectively.

The topics include but are not limited to:

1. Heterogeneous Programming Models and Runtime Systems:

    Models, parallel resource management, and automated parallelization
    Algorithms, libraries, and frameworks for heterogeneous systems

2. Heterogeneous Architectures:

    Power/energy management, reliability, and non-von Neumann architectures
    Memory and interconnection designs
    Data allocation, caching, and disaggregated memory
    Consistency models, persistency, and failure-atomicity

3. Heterogeneous Resource Management:

    System and software designs for dynamic resources
    High-level programming, run-time techniques, and resource frameworks
    Scheduling algorithms, resource management, and I/O provisioning

4. Heterogeneity in Artificial Intelligence:

    AI/ML/DL predictive models and optimized systems for heterogeneous workflows and applications
    Tools and workflows for AI/ML/DL in scientific applications

Guest Editors

    Sergio Iserte, Barcelona Supercomputing Center, Spain, sergio.iserte@bsc.es
    Pedro Valero-Lara, Oak Ridge National Laboratory, USA, valerolarap@ornl.gov
    Kevin A. Brown, Argonne National Laboratory, USA, kabrown@anl.gov

Important Dates

    Submission portal opens: October 1, 2024
    Deadline for paper submission: May 30, 2025
    Latest acceptance deadline for all papers: July 31, 2025
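Runtime placement on such systems is ultimately a cost-model decision. The short sketch below, using purely hypothetical throughput and transfer numbers, picks the device (CPU, GPU, or FPGA) that minimizes estimated compute-plus-transfer time for a task; a real runtime would calibrate such a model from measurements.

    # Sketch: cost-model-based task placement across heterogeneous devices.
    DEVICES = {
        # name: (effective GFLOP/s, transfer cost in s per GB) - hypothetical
        "cpu":  (100.0, 0.0),   # data assumed already resident
        "gpu":  (900.0, 0.05),
        "fpga": (600.0, 0.01),
    }

    def estimated_time(device, work_gflop, data_gb):
        gflops, transfer = DEVICES[device]
        return work_gflop / gflops + data_gb * transfer

    def place(work_gflop, data_gb):
        """Pick the device minimizing estimated compute + transfer time."""
        return min(DEVICES, key=lambda d: estimated_time(d, work_gflop, data_gb))

    for work, data in [(0.5, 0.5), (5000.0, 2.0), (200.0, 8.0)]:
        device = place(work, data)
        print(work, "GFLOP,", data, "GB ->", device,
              "(", round(estimated_time(device, work, data), 3), "s est. )")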
Last updated by Dou Sun in 2024-09-28
Special Issue on Cloud Continuum
Submission Date: 2025-08-30

Motivation and Scope

Cloud computing has become a common commodity with many different providers and solutions. Several new architectural models are being developed and applied to ensure scalability, quality of service, and resilience. These models focus both on the providers, optimizing the use of their infrastructure, and on the users' side, optimizing response times and/or costs. The scenario is becoming more complex with the possibility of having computing power close to the users in edge/fog models; this whole landscape can be seen as the Cloud Continuum. Some conferences already include the Cloud Continuum in their calls for papers, but only a few explicitly focus on applications. We aim to attract a broader range of papers, from software engineering to High-Performance Computing applications, all discussed in the Cloud Continuum scenario. (A small illustrative sketch of a continuum placement decision follows the dates below.)

The following list includes some of the major topics for this special issue:

    Energy Efficiency
    AI-powered Services
    Security
    IoT Applications
    Architectural Models
    Serverless Computing
    Elasticity
    Storage
    Virtualization
    Sustainable Models
    Programming Models
    QoS for Applications
    Optimization and Performance Issues
    Communication Protocols
    Big Data
    High-Performance Computing Applications
    Innovative Cloud Applications and Experiences
    Availability and Reliability
    Microservices
    New Models (e.g., spot instances)
    Frameworks and APIs
    HPC as a Service

Guest Editors

    Alfredo Goldman, University of São Paulo, Brazil, gold@ime.usp.br
    Eduardo Guerra, University of Bolzano, Italy, eduardo.guerra@unibz.it
    Jean Luca Bez, Lawrence Berkeley National Laboratory, USA, jlbez@lbl.gov

Important Dates

    Submission portal opens: April 15, 2025
    Deadline for paper submission: August 30, 2025
    Latest acceptance deadline for all papers: February 15, 2026
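To ground the continuum idea, here is a minimal sketch of a per-request placement decision across edge, fog, and cloud tiers, trading latency against monetary cost under capacity constraints. All figures are hypothetical, and a real scheduler would use measured latencies and prices.

    # Sketch: edge/fog/cloud placement by a weighted latency-cost score.
    TIERS = {
        # name: (round-trip ms, USD per 1k requests, capacity in req/s) - hypothetical
        "edge":  (5.0,  0.40, 50),
        "fog":   (20.0, 0.25, 500),
        "cloud": (80.0, 0.10, 100_000),
    }

    def score(tier, latency_weight=0.7):
        """Lower is better: blend of normalized latency and cost."""
        rtt, cost, _ = TIERS[tier]
        return latency_weight * rtt / 80.0 + (1 - latency_weight) * cost / 0.40

    def place(load_req_per_s, latency_weight=0.7):
        feasible = [t for t, (_, _, cap) in TIERS.items() if cap >= load_req_per_s]
        return min(feasible, key=lambda t: score(t, latency_weight))

    print(place(30))                      # light, latency-sensitive load -> edge
    print(place(30, latency_weight=0.1))  # cost-dominated -> cloud
    print(place(300))                     # exceeds edge capacity -> fog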
Last updated by Dou Sun in 2024-09-28
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
- | Journal of Forecasting | 3.400 | Wiley-Blackwell | 0277-6693
c | Behaviour & Information Technology | 2.900 | Taylor & Francis | 0144-929X
b | ACM Transactions on Internet Technology | 3.900 | ACM | 1533-5399
- | IEEE Internet Computing Magazine | 3.700 | IEEE | 1089-7801
- | The Scientific World Journal | - | Hindawi | 1537-744X
- | IEEE Transactions on Control Systems Technology | 4.900 | IEEE | 1063-6536
- | International Journal of Information Technology and Web Engineering | - | IGI Global | 1554-1045
b | IEEE Transactions on VLSI Systems | 2.800 | IEEE | 1063-8210
- | Journal of Function Spaces | 1.900 | Hindawi | 2314-8896
b | Frontiers of Computer Science | 3.400 | Springer | 2095-2228
Related Conferences
CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference
b | a | a2 | PACT | International Conference on Parallel Architectures and Compilation Techniques | 2024-03-25 | 2024-07-01 | 2024-10-13
b | - | b1 | ECBS | European Conference on the Engineering of Computer Based Systems | 2019-05-15 | 2019-06-15 | 2019-09-02
- | - | - | ICTC | International Conference on ICT Convergence | 2024-08-23 | 2024-09-10 | 2024-10-16
- | - | - | NVICT | International Conference on New Visions for Information and Communication Technology | 2014-12-31 | 2015-03-15 | 2015-05-27
- | - | - | NATAP | International Conference on Natural Language Processing and Trends | 2022-06-04 | 2022-06-14 | 2022-06-18
b | - | a1 | Mobisys | International Conference on Mobile Systems, Applications and Services | 2023-11-23 | 2024-03-06 | 2024-06-03
- | - | - | ICeND | International Conference on e-Technologies and Networks for Development | 2017-06-11 | 2017-06-20 | 2017-07-11
- | - | - | ECPDC | International Academic Conference on Edge Computing, Parallel and Distributed Computing | 2024-03-01 | 2024-04-10 | 2024-04-19
- | - | - | APSAC | International Conference on Applied Physics, System Science and Computers | 2017-06-30 | - | 2018-09-26
- | - | - | ECEL | European Conference on e-Learning | 2020-04-22 | 2020-04-22 | 2020-10-29