Conference Information
IPDPS 2025: International Parallel & Distributed Processing Symposium
http://www.ipdps.org/
Submission Deadline:
2024-10-03
Notification Date:
2024-12-19
Conference Date:
2025-06-03
Location:
Milan, Italy
Edition:
39
CCF: b   CORE: a   QUALIS: a1   Views: 70795   Followers: 216   Attendees: 33

Call for Papers
Authors are invited to submit manuscripts that present novel and impactful research in high performance computing (HPC) in parallel and distributed processing. Works focusing on emerging technologies, interdisciplinary work spanning multiple IPDPS focus areas, and novel open-source artifacts are welcome. Topics of interest include but are not limited to the following areas:

    Algorithms:
    This track focuses on algorithms for computational and data science in parallel and distributed computing environments (including cloud, edge, fog, distributed memory, and accelerator-based computing). Examples include structured and unstructured mesh and meshless methods, dense and sparse linear algebra computations, spectral methods, n-body computations, clustering, data mining, compression, and combinatorial algorithms such as graph and string algorithms. Also included in this track are algorithms that apply to tightly or loosely coupled systems, such as those supporting communication, synchronization, power management, distributed resource management, distributed data and transactions, and mobility. Novel algorithm designs and implementations tailored to emerging architectures (such as ML/AI accelerators or quantum computing systems) are also included.

    Applications:
    This track focuses on real-world applications (combinatorial, scientific, engineering, data analysis, and visualization) that use parallel and distributed computing concepts. Papers submitted to this track are expected to incorporate innovations that originate in specific target application areas, and contribute novel methods and approaches that address core challenges in their scalable implementation. Contributions include the design, implementation, and evaluation of parallel and distributed applications, including implementations targeting emerging architectures (such as ML/AI accelerators) and application domain advances enabled by ML/AI.

    Architecture:
    This track focuses on existing and emerging architectures for high performance computing, including architectures for instruction-level and thread-level parallelism; manycore, multicore, accelerator, domain-specific and special-purpose architectures (including ML/AI accelerators); reconfigurable architectures; memory technologies and hierarchies; volatile and non-volatile emerging memory technologies; co-design paradigms for processing-in-memory architectures; solid-state devices; exascale system designs; data center and warehouse-scale architectures; novel big data architectures; network and interconnect architectures; emerging technologies for interconnects; parallel I/O and storage systems; power-efficient and green computing systems; resilience, security, and dependable architectures; and emerging architectural principles for machine learning, approximate computing, quantum computing, neuromorphic, analog, and bio-inspired computing.

    Machine Learning and Artificial Intelligence (ML/AI):
    This track focuses on all areas of ML/AI that are relevant to parallel and distributed computing, including ML/AI training on resource-limited platforms; computational optimization methods for AI such as pruning, quantization and knowledge distillation; parallel and distributed learning algorithms; energy-efficient methods for ML/AI; federated learning; design and implementation of ML/AI algorithms on parallel architectures (including distributed memory, GPUs, tensor cores and emerging ML/AI accelerators); new ML/AI methods benefitting HPC applications or HPC system management; and design and development of ML/AI software pipelines (e.g., frameworks for distributed training, integration of compression into ML/AI pipelines, compiler techniques and DSLs). Papers submitted to the ML/AI track should emphasize new ML/AI technology that is best reviewed by ML/AI experts. Papers that emphasize core parallel computing topics applied to ML/AI workloads or applications benefitting from use of existing ML/AI tools should be submitted to the topic domain tracks rather than this ML/AI track.

    Measurements, Modeling, and Experiments:
    This track focuses on experiments and performance-oriented studies in the practice of parallel and distributed computing. “Performance” may be construed broadly to include metrics related to time, energy, power, accuracy, and resilience, for instance. Topics include methods, experiments, and tools for measuring, evaluating, and/or analyzing performance for large-scale applications and systems; design and experimental evaluation of applications of parallel and distributed computing in simulation and analysis; experiments on the use of novel commercial or research accelerators and architectures, including quantum, neuromorphic, and other non-von Neumann systems; innovations made in support of large-scale infrastructures and facilities; and experiences and methods for allocating and managing system and facility resources.

    Programming Models, Compilers, and Runtime Systems:
    This track covers topics ranging from the design of parallel programming models and paradigms to languages and compilers supporting these models and paradigms to runtime and middleware solutions. Software that is close to the application (as opposed to the bare hardware) but not specific to an application is included. Examples include frameworks targeting cloud and distributed systems; application frameworks for fault tolerance and resilience; software supporting data management, scalable data analytics and similar workloads; and runtime systems for future novel computing platforms including quantum, neuromorphic, and bio-inspired computing. Novel compiler techniques and frameworks leveraging machine learning methods are included in this track.

    System Software:
    This track focuses on software that is close to the bare high-performance computing (HPC) hardware. Topics include storage and I/O systems; system software for resource management, job scheduling, and energy-efficiency; system software support for accelerators and heterogeneous HPC computing systems; interactions between the operating system, hardware, and other software layers; system software solutions for ML/AI workloads (e.g., energy-efficient software methods for ML/AI); system software support for fault tolerance and resilience; containers and virtual machines; specialized operating systems and related support for high-performance computing; system software for future novel computing platforms including quantum, neuromorphic, and bio-inspired computing; and system software advances enabled by ML/AI.               
Last updated by Dou Sun on 2024-08-05
Acceptance Rate
Year  Submitted  Accepted  Rate
2020  446        110       24.7%
2019  372        103       27.7%
2018  461        113       24.5%
2017  508        116       22.8%
2016  496        114       23.0%
2015  496        109       22.0%
2014  541        114       21.1%
2013  490        108       22.0%
2012  568        118       20.8%
2011  571        112       19.6%
2010  527        127       24.1%
2009  440        101       23.0%
2008  410        105       25.6%
2007  419        109       26.0%
2006  531        125       23.5%
2005  343        115       33.5%
2004  447        142       31.8%
2003  407        119       29.2%
2002  258        98        38.0%
2001  276        100       36.2%
2000  303        107       35.3%
1999  260        113       43.5%
1998  324        118       36.4%
1997  298        112       37.6%
1996  353        126       35.7%
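The rate column is simply accepted papers divided by submissions. As a quick illustration (not part of the original listing), the minimal Python sketch below reproduces the percentages for a few sample rows copied from the table above.

```python
# Illustrative sketch: recompute acceptance rates from (year, submitted, accepted).
# The tuples are copied from a few rows of the table above.
records = [
    (2020, 446, 110),
    (2019, 372, 103),
    (1996, 353, 126),
]

for year, submitted, accepted in records:
    rate = 100 * accepted / submitted
    print(f"{year}: {accepted}/{submitted} = {rate:.1f}%")
```

Running it prints, for example, "2020: 110/446 = 24.7%", matching the table entry for that year.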
Related Conferences
CCF | CORE | QUALIS | Abbreviation | Full Name | Submission | Notification | Conference
b | a | - | HOT CHIPS | Symposium on High Performance Chips | 2024-04-19 | 2024-05-06 | 2024-08-25
c | b | b3 | ISPA | International Symposium on Parallel and Distributed Processing with Applications | 2024-07-01 | 2024-08-01 | 2024-10-30
- | - | - | ISBAST | International Symposium on Biometrics and Security Technologies | 2014-06-15 | - | 2014-08-26
- | - | - | KEMCS | International Conference on Key Engineering Materials and Computer Science | 2013-01-25 | - | 2013-03-03
- | - | - | ICNGN | International Conference on Intelligent Computing and Next Generation Networks | 2024-09-05 | 2024-10-10 | 2024-11-23
c | - | a2 | ASP-DAC | Asia and South Pacific Design Automation Conference | 2024-07-05 | 2024-09-04 | 2025-01-20
b | a | a2 | ICPP | International Conference on Parallel Processing | 2024-04-08 | 2024-05-27 | 2024-08-12
b | a | a1 | HPDC | International ACM Symposium on High-Performance Parallel and Distributed Computing | 2025-01-23 | 2025-03-24 | 2025-07-20
c | - | b2 | ICMLA | International Conference on Machine Learning and Applications | 2024-07-31 | 2024-09-07 | 2024-12-18
- | - | - | WATCV | Workshop on Applications and Technologies of Computer Vision | 2022-06-05 | 2022-06-20 | 2022-07-08