Journal Information
International Journal of Human-Computer Interaction (IJHCI)
Impact Factor: 4.9
Publisher: Taylor & Francis
ISSN: 1044-7318
Views: 32842
Followers: 10
Call for Papers
Aims and scope
The International Journal of Human-Computer Interaction addresses the cognitive, creative, social, health, and ergonomic aspects of interactive computing.
It emphasizes the human element in relation to the systems and contexts in which humans perform, operate, network, and communicate, including mobile apps, social media, online communities, and digital accessibility. The journal publishes original articles including reviews and reappraisals of the literature, empirical studies, and quantitative and qualitative contributions to the theories and applications of HCI.
All submitted manuscripts are subject to initial appraisal by the Editors, and, if found suitable for further consideration, to peer review by independent, anonymous expert referees. All submissions and peer review are conducted online.
Publication office: Taylor & Francis, Inc., 530 Walnut Street, Suite 850, Philadelphia, PA 19106.
Readership:
Professionals with an interest in the scientific implications and practical relevance of how computer systems should be designed and/or how they are actually used.
Last updated by Dou Sun on 2025-12-30
Special Issues
Special Issue: 40th Anniversary Special Issue, edited by the IJHCI Editors
Deadline: 2026-06-01
Special Issue Editor(s)
Constantine Stephanidis, University of Crete and ICS-FORTH, Greece
Gavriel Salvendy, University of Central Florida, USA
Edited by the IJHCI Editors: Constantine Stephanidis and Gavriel Salvendy
The Editors of the International Journal of Human–Computer Interaction (IJHCI) announce a 40th Anniversary Special Issue, celebrating four decades of advancing the science and practice of human–computer interaction. Since its inception, IJHCI has been a leading venue for seminal contributions that have defined the field and shaped the trajectory of interactive technologies worldwide.
This landmark issue will showcase field-defining, highly cited papers that not only reflect the state of the art but also set the agenda for the decades ahead. Submissions must demonstrate clear potential for enduring impact through strong theoretical, methodological, or empirical advances.
It is planned that this Special Issue will be widely distributed internationally.
Scope
We welcome papers that:
Introduce groundbreaking theories or models and methodologies with potential to reshape HCI research
Offer comprehensive, insightful and authoritative review and appraisal papers or meta-analyses that synthesize key knowledge for scholars and practitioners
Present transformative empirical findings or design innovations addressing urgent and emerging challenges
Provide visionary perspectives that chart compelling future directions for the discipline
Selection Criteria
Given the commemorative nature of this issue, only manuscripts of the highest scholarly quality will be considered. We seek contributions that:
Demonstrate originality and conceptual and/or methodological depth
Have clear potential for wide citation and influence
Appeal to a broad, interdisciplinary readership in HCI and beyond
Important Dates
Manuscript submission deadline: 1 June 2026
Notification of review decisions: 1 September 2026
Final manuscripts due: 15 October 2026
Publication date: January 2027
This special issue offers a unique opportunity to publish in a commemorative volume that will serve as both a capstone for the past 40 years of HCI research and a foundation for the decades to come. We look forward to receiving your most innovative and impactful work.
Please address any queries to: Constantine Stephanidis <cs@ics.forth.gr>
Special Issue on Trust and Mistrust in Artificial Intelligence: Human, Technological and Societal Impact Considerations
Deadline: 2026-06-15
Special Issue Editor(s)
Stavroula Ntoa, ICS-FORTH, Greece
stant@ics.forth.gr
Introduction
Trust is a central concept across scholarly and public discussions in politics, economics, and society. The literature offers numerous definitions, extensive investigations into its antecedents, and even lines of research seeking to understand its neurological foundations. Achieving a comprehensive understanding of trust requires attention to its many dimensions: the dispositions, perceptions, beliefs, attitudes, expectations, and intentions of the trustor; the qualities and behaviors of the trustee; and the contextual conditions shaping their interaction. Its significance in human relationships is profound: trust underpins every form of social interaction, supports the cohesion of societies, and evolves over time, shaped through observation and learning.
As technological systems become more deeply integrated into everyday life, trust in technology has gained equal prominence. Existing literature distinguishes between two main approaches to conceptualizing trust in technology. One direction adapts human-oriented dimensions such as benevolence, integrity, and ability, while the other emphasizes system-oriented attributes such as helpfulness, reliability, and functionality. When it comes to traditional technological artifacts, users rarely view them as moral agents.
The rise of Artificial Intelligence (AI), however, has introduced new complexity into how trust in technology is understood. AI systems now undertake tasks that were once exclusively human: offering recommendations that influence personal and institutional decisions, automating processes, engaging in creative work such as writing texts and creating images, and interacting with users in ways that can appear distinctly human-like. Yet despite these capacities, AI is fundamentally distinct from humans. It remains a non-conscious digital system, with cognitive qualities that differ correspondingly from those of biological creatures. Research demonstrates that trust in AI diverges markedly from interpersonal trust across multiple dimensions, including its underlying bases, the way it must be calibrated across contexts, the qualities attributed to the AI system, and the persistent paradox in which individuals extend trust to algorithms despite knowing they can make errors, mislead, or “hallucinate.”
Scope
This Special Issue is dedicated to exploring trust and mistrust in artificial intelligence. It seeks original, rigorous, and impactful contributions that address relevant foundations and challenges, as well as technological and human considerations. Relevant topics may include, but are not limited to:
Foundational theories and conceptual frameworks of trust and mistrust
Psychological, cognitive, and behavioral determinants of trusting or mistrusting AI
Behavioral investigations of trust calibration, over-trust, and under-trust in AI
Comparative examinations of trust in humans, traditional technologies, and AI systems
User perceptions, expectations, and mental models of AI trustworthiness
Cross-cultural, demographic, or contextual variations in AI trust and mistrust
Design and evaluation of trust-enabling, trust-repair, or trust-calibration strategies
Metrics, instruments, and modelling approaches for assessing trust in AI or AI trustworthiness
Technical methods for engineering AI reliability, transparency, robustness, and accountability
Misinformation, disinformation, deepfakes, and synthetic media
Bias mitigation and fairness-enhancing algorithms
Privacy-preserving machine learning
Data misuse, surveillance, and the impact on trust in AI systems
Human–AI interaction and collaboration approaches that influence trust dynamics
Human-Centered Design of trustworthy AI
Ethical, legal, and policy considerations related to trustworthy and responsible AI
Governance models, regulatory mechanisms, and oversight structures shaping trust and mistrust in AI
Domain-specific investigations of trust in areas such as healthcare, education, transportation, public administration, creative industries, robotics, or defence
Explorations of mistrust, scepticism, resistance, and contestation of AI systems
Interdisciplinary perspectives on the AI trust paradox and its implications
We encourage work that identifies emerging challenges, proposes innovative solutions, or develops frameworks that can guide future research and practice. Contributions should offer strong scientific grounding, methodological rigor, and clear relevance to the technological and societal dimensions of trust in AI. Interdisciplinary approaches are especially welcome.
Submission Instructions
Important Dates
Full paper submission due date: June 15, 2026
Notification of the first-round review decision: August 15, 2026
Revisions due date: October 15, 2026
Editorial decision: December 30, 2026
Targeted special issue publication date: early 2027
Related Journals
| CCF | Full Name | Impact Factor | Publisher | ISSN |
|---|---|---|---|---|
| | Computers & Education | 10.5 | Elsevier | 0360-1315 |
| a | ACM Transactions on Computer-Human Interaction | 6.6 | ACM | 1073-0516 |
| | ACM Transactions on Human-Robot Interaction | 5.5 | ACM | 2573-9522 |
| a | International Journal of Human-Computer Studies | 5.1 | Elsevier | 1071-5819 |
| b | International Journal of Human-Computer Interaction | 4.9 | Taylor & Francis | 1044-7318 |
| c | Computer Communications | 4.5 | Elsevier | 0140-3664 |
| c | Journal of Computer Information Systems | 4.2 | Taylor & Francis | 0887-4417 |
| | International Journal of Computer Integrated Manufacturing | 4.0 | Taylor & Francis | 0951-192X |
| | Research in Biomedical Engineering and Technology | 2.1 | Taylor & Francis | 2326-263X |
| | Advances in Human-Computer Interaction | 1.9 | Hindawi | 1687-5893 |
Related Conferences
| CCF | CORE | QUALIS | Abbreviation | Full Name | Deadline | Notification | Conference Date |
|---|---|---|---|---|---|---|---|
| c | b | a2 | ICMI | International Conference on Multimodal Interaction | 2026-04-13 | 2026-07-01 | 2026-10-05 |
| b | b | a2 | MobileHCI | International Conference on Human-Computer Interaction with Mobile Devices and Services | 2026-01-29 | 2026-05-28 | 2026-08-31 |
| | a | a2 | CC | International Conference on Compiler Construction | 2025-11-10 | 2025-12-10 | 2026-01-31 |
| c | b | b1 | CGI | Computer Graphics International | 2025-05-02 | 2025-06-05 | 2025-07-14 |
| c | a | b1 | INTERACT | International Conference on Human-Computer Interaction | 2025-02-17 | 2025-04-28 | 2025-09-08 |
| | | a2 | HRI | International Conference on Human-Robot Interaction | 2024-09-23 | 2024-12-02 | 2025-03-04 |
| c | | | ACHI | International Conference on Advances in Computer-Human Interactions | 2023-02-01 | 2023-02-28 | 2023-04-24 |
| c | | | ICIS | International Conference on Computer and Information Science | 2020-08-10 | 2020-08-27 | 2020-11-18 |
| | | b2 | HCII | International Conference on Human-Computer Interaction | 2015-11-06 | 2015-12-04 | 2016-07-17 |
| c | | | APCHI | Asia Pacific Conference on Computer Human Interaction | 2014-04-30 | | 2014-10-22 |