Journal Information
IEEE Micro
https://www.computer.org/csdl/magazine/mi
Impact Factor:
2.800
Publisher:
IEEE
ISSN:
0272-1732
Views:
20478
Tracked:
12
Call for Papers
IEEE Micro, a bimonthly publication of the IEEE Computer Society, reaches an international audience of microcomputer and microprocessor designers, system integrators, and users. Readers want to increase their technical knowledge and learn the latest industry trends.

Scope

IEEE Micro addresses users and designers of microprocessors and microprocessor systems, including managers, engineers, consultants, educators, and students involved with computers and peripherals, components and subassemblies, communications, instrumentation and control equipment, and guidance systems. Contributions should relate to the design, performance, or application of microprocessors and microcomputers. Tutorials, review papers, and discussions are also welcome. Sample topic areas include architecture, communications, data acquisition, control, hardware and software design/implementation, algorithms (including program listings), digital signal processing, microprocessor support hardware, operating systems, computer aided design, languages, application software, and development systems.
Last updated by Dou Sun 2024-07-28
Special Issues
Special Issue on Cache Coherent Interconnects and Resource Disaggregation Techniques
Submission Date: 2024-12-01

In the era of exponential data growth, modern data centers and large-scale computing environments are challenged by the limitations of traditional, monolithic system designs that tightly integrate compute, memory, and storage resources. These conventional, all-integrated systems confront significant difficulties, including resource over-provisioning, under-utilization, and hitting the memory capacity wall, highlighting the urgent need for innovative architectures. Resource disaggregation emerges as a compelling paradigm, promising to break down such monolithic system architectures into pools of shared, distributed resources. However, the transition to disaggregated resources introduces its own set of challenges, including the need for significant code refactoring, potential performance penalties, substantial new hardware investments, increased complexity in system maintenance, and security concerns. Amidst this landscape, cache coherent interconnects, like Intel’s Ultra Path Interconnect (UPI)/QuickPath Interconnect (QPI), AMD’s Infinity Fabric, and Compute Express Link (CXL), offer a promising solution for disaggregated resources. By facilitating efficient access to remote memory through cache coherence for minimal latency and overhead, these interconnects are poised to significantly enhance the feasibility of resource disaggregation (a minimal programming-model sketch follows the topic list below). This special issue of IEEE Micro seeks articles on the cutting-edge developments in cache coherent interconnects and their role in enabling resource disaggregation across computing, memory, and storage. Topics include, but are not limited to:
- Coherent Interconnect Protocols and Models for Resource Disaggregation Systems
- Software/Hardware Co-Designs for High-Performance Disaggregated Coherency Management
- Processor/Accelerator Designs Oriented towards Management in Coherent Disaggregated Systems
- Application-Architecture Co-Designs Exploiting Coherent Disaggregation Techniques
- Reliability, Testability, and Debuggability of Coherent Disaggregation Systems
- Applications Based on Coherent Interconnects and Disaggregated Systems
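As a rough illustration of the programming model several of these topics assume, the C sketch below allocates a buffer from a specific NUMA node, which is how CXL-attached (disaggregated) memory is typically exposed on current Linux systems, and then accesses it with ordinary loads and stores. It is a minimal sketch, not an implementation from the call: the node number and buffer size are placeholders, and real disaggregated-memory software would add tiering, migration, and failure handling.

```c
/* Minimal sketch: treating CXL-attached memory as a far NUMA node.
 * Assumes Linux with libnuma (link with -lnuma); node 1 is a placeholder
 * for the CPU-less node that a CXL memory expander typically appears as. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA policy API not available\n");
        return 1;
    }

    const int cxl_node = 1;          /* placeholder: far-memory NUMA node */
    const size_t size = 64UL << 20;  /* 64 MiB buffer */

    /* Allocate physical memory on the chosen node. */
    char *buf = numa_alloc_onnode(size, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    /* Plain stores and loads: the buffer lives in far memory, but the CPU
     * caches its lines like ordinary DRAM; coherence is handled in hardware. */
    memset(buf, 0xAB, size);
    printf("first byte: 0x%02x\n", (unsigned char)buf[0]);

    numa_free(buf, size);
    return 0;
}
```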
Last updated by Dou Sun 2024-11-23
Special Issue on Data-Centric Computing
Submission Date: 2025-02-27

With the proliferation of mobile and edge computing devices, data generation continues to grow at an exponential rate, reaching an estimated 181 zettabytes processed per year by 2025. In response, computing systems large and small need to process ever-increasing amounts of data quickly and efficiently, leading to the rise of data-centric computing. Data-centric computing covers a broad range of hardware and software co-design topics, spanning techniques that (1) reduce the amount of data transmitted, (2) optimize data movement using knowledge of the latency and bandwidth of the connections between compute and sources of data, (3) integrate specialized heterogeneous or non-von-Neumann components in data-processing systems, or (4) develop new methods to synthesize or summarize data in place or minimize the overhead of data accesses (a toy example of point (1) follows the topic list below). A common thread emerging across data-centric computing techniques is the need for hardware/software co-design in compute, memory, storage, and interconnect to deliver sizable improvements in performance and energy efficiency that rely on both traditional and unconventional scaling techniques. This special issue of IEEE Micro solicits academic and industrial research on co-designed solutions that revisit traditional boundaries between compute, memory, storage, interconnect, and software to support new architectures and programming abstractions. The solutions that will meet the test of time will balance specificity with generality, classify general principles, denote metrics to measure a solution’s benefits, and highlight remaining challenges. These solutions will serve as a template for how to apply future innovations in hardware and software to emerging use cases requiring even more generated data.
TOPICS OF INTEREST
- Novel systems that address application domains currently limited by bandwidth or media latency (e.g., large-scale AI training and inference, databases, computational genomics, HPC) and demonstrate dramatic improvements to end-to-end application performance and/or reduction in overall energy use
- Computation near or in media (e.g., processing-in-memory, processing-near-memory, processing-using-memory, in-storage computing) using digital or analog computational devices, and the end-to-end hardware/software infrastructure required to prepare the data for computation
- Techniques to monitor the lifetime of data and ensure long-term resilience of retained data in data-centric computing solutions
- Operational datacenter challenges of migrating existing data and applications to new data-centric computing solutions to meet future application requirements
- Techniques to mitigate the overhead of multi-tenant data-intensive applications and data processing infrastructure
- Primitives or systems/hardware architectural enhancements using data processing units/infrastructure processing units (DPUs/IPUs) or peer-to-peer data movement that enable application software to schedule selective parts of large data sets for optimal data movement when compute becomes available
- Tools to characterize and synthesize data-intensive workloads to model and explore possible system architectures and find new opportunities for efficient data processing in compute, interconnects, storage media, and software
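As a toy illustration of point (1) above (reducing the amount of data transmitted), the C sketch below contrasts shipping every record to the host with evaluating a simple predicate "near" the data and returning only matching records. It is only a sketch of the idea: the function and type names are invented for illustration and do not correspond to any particular processing-in-memory or in-storage API.

```c
/* Toy near-data filtering sketch (illustrative only): instead of moving
 * every record to the compute node, a predicate runs where the data lives
 * and only matching records cross the interconnect. Names are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned id;
    double   value;
} record_t;

/* In a real data-centric system this would run in a near-memory or
 * in-storage engine; here it is an ordinary host function. */
static size_t filter_near_data(const record_t *in, size_t n,
                               double threshold, record_t *out) {
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        if (in[i].value > threshold)
            out[kept++] = in[i];   /* only matches are "transmitted" */
    }
    return kept;
}

int main(void) {
    const size_t n = 1000000;
    record_t *data = malloc(n * sizeof *data);
    record_t *matches = malloc(n * sizeof *matches);
    if (!data || !matches) return 1;

    for (size_t i = 0; i < n; i++)
        data[i] = (record_t){ .id = (unsigned)i, .value = (double)(i % 100) };

    size_t kept = filter_near_data(data, n, 95.0, matches);
    printf("moved %zu of %zu records (%.1f%%)\n", kept, n, 100.0 * kept / n);

    free(data);
    free(matches);
    return 0;
}
```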
Last updated by Dou Sun 2024-11-23
Special Issue on Contemporary Industry Products
Submission Date: 2025-03-13

Topics of Interest
Paper topics are not limited to hardware tapeouts. Software-centric papers relevant to the computer architecture audience are also welcome in this track (e.g., datacenter software work, compiler work, accelerator software stack work), but they should adhere to the tenet that they must be industry papers about production-level work, whether retrospective, planned and on the roadmap, or planned but canceled.
- Processors, SoCs, GPUs, and domain-specific accelerators
- Systems and interconnect technologies for HPC, cloud, or data centers
- Embedded, mobile, and IoT processors
- FPGA or reconfigurable architectures
- Storage and emerging memory systems
- Architectures using emerging technology
- Architectures for emerging applications, including generative AI and bioinformatics
- Architectures for commercialization of quantum computing
Last updated by Dou Sun 2024-11-23
Special Issue on AI for Hardware and Hardware for AI
Submission Date: 2025-04-25

For years, the computational landscape, stretching from data centers and supercomputers to simple home devices, has predominantly depended on general-purpose processors, a reliance that was sustainable while Moore’s law guaranteed that chip transistor counts would double approximately every two years. Today, however, as the pace of Moore’s law decelerates, we have witnessed an increasing shift toward hardware accelerators, designed to use hardware resources efficiently by concentrating solely on the specific demands of target applications. Hardware accelerators, primarily engineered for an array of AI applications, from computer vision to recommendation systems and natural language processing, have been gaining traction, with substantial industrial investments and increasing scholarly interest. While the shift toward hardware accelerators has proven their capabilities, they face new challenges as AI continues to grow. AI algorithms are not only scaling in size rapidly but also evolving at an accelerated rate. The scale and diversity of modern AI pose a substantial challenge to the design of hardware accelerators for them. As a result, this IEEE Micro special issue seeks articles related not only to hardware accelerators for the next generation of AI but also to the exploration of how AI itself can facilitate the creation of cost-efficient, fast, and scalable hardware. This issue’s topics of interest include, but are not limited to:
- Scalable hardware accelerators for the next generation of large AI models
- Deploying new technologies (e.g., in-memory computing, photonics, analog computing) for AI efficiency
- Sparsity-aware optimization techniques for efficient AI
- Integration of AI techniques to expedite hardware/software co-design
- Rethinking the software/hardware stack for heterogeneous AI accelerator systems
- Interconnection networks and data movement optimizations for the future of AI
- Using AI methods to enhance the reliability of hardware accelerators, design validation, and architecture front-end and back-end flows
- Investigating security and privacy challenges in AI-assisted hardware accelerator design
Last updated by Dou Sun 2024-11-23
Best Papers
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
 | Transportation Research Part C: Emerging Technologies | 7.600 | Elsevier | 0968-090X
 | Nonlinear Engineering | | Walter de Gruyter | 2192-8010
 | Interaction Studies | 0.900 | John Benjamins Publishing Company | 1572-0373
 | Journal of Cloud Computing | | Springer | 2192-113X
 | International Biomechanics | | Taylor & Francis | 2333-5432
 | Advanced Computational Intelligence: An International Journal | | AIRCC | 2454-3934
 | International Journal on Communications Antenna and Propagation | | Praise Worthy Prize | 2039-5086
 | Image Processing On Line | | IPOL | 2105-1232
 | Brain Sciences | 2.700 | MDPI | 2076-3425
 | Journal of Decision Systems | | Taylor & Francis | 1246-0125
Related Conferences
CCF | CORE | QUALIS | Abbreviation | Full Name | Submission Date | Notification Date | Conference Date
 | | | TICST | International Conference on Science and Technology | 2017-09-15 | 2017-09-30 | 2017-12-07
c | b | a1 | Globecom | IEEE Global Communications Conference | 2024-04-01 | 2024-08-01 | 2024-12-08
 | | | MDSIS | Asia Conference on Mathematics, Data Science and Information System | 2023-09-20 | 2023-10-05 | 2023-10-27
c | | b4 | CIMSA | International Conference on Computational Intelligence for Measurement Systems and Applications | 2012-04-15 | 2012-04-30 | 2012-07-02
 | | | GIoTS | Global IoT Summit | 2020-03-08 | 2020-04-15 | 2020-06-03
b | a | a2 | Middleware | International Middleware Conference | | | 2025-12-15
 | | | CACS' | International Automatic Control Conference | 2018-08-31 | 2018-09-25 | 2018-11-04
c | | | INC | International Network Conference | 2020-07-30 | 2020-08-15 | 2020-09-19
 | | | ESIHISE | International Conference on Evolution of the Sciences, Informatics, Human Integration and Scientific Education | 2019-09-27 | | 2019-10-03
 | | | ESTEL | IEEE-AESS European Conference on Satellite Telecommunications | 2012-04-01 | 2012-04-15 | 2012-10-02
Recommendations