Journal Information
International Journal of Human-Computer Interaction (IJHCI)
https://www.tandfonline.com/journals/hihc20
Impact Factor: 4.9
Publisher: Taylor & Francis
ISSN: 1044-7318
Views: 30183
Followers: 10
Call for Papers
Aims and Scope
The International Journal of Human-Computer Interaction addresses the cognitive, creative, social, health, and ergonomic aspects of interactive computing. It emphasizes the human element in relation to the systems and contexts in which humans perform, operate, network, and communicate, including mobile apps, social media, online communities, and digital accessibility. The journal publishes original articles, including reviews and reappraisals of the literature, empirical studies, and quantitative and qualitative contributions to the theories and applications of HCI.
All submitted manuscripts are subject to initial appraisal by the Editors and, if found suitable for further consideration, to peer review by independent, anonymous expert referees. All submissions and peer review are conducted online.
Publication office: Taylor & Francis, Inc., 530 Walnut Street, Suite 850, Philadelphia, PA 19106.
Readership: Professionals with an interest in the scientific implications and practical relevance of how computer systems should be designed and/or how they are actually used.
Last updated by Dou Sun on 2025-12-30
Special Issues
Special Issue on The Age of AI Agents: Data Power, Social, Political and Ethical Challenges
Submission deadline: 2026-03-15
Special Issue Editor(s)
Mirko Farina, Huaqiao University, China, and Institute for Digital Economy and Artificial Systems [Xiamen University and Moscow State University], farinamirko@gmail.com
Andrea Lavazza, Pegaso University, Italy, lavazza67@gmail.com
Background, Rationale and Objectives
2024 witnessed the rapid rise of AI agents and widespread investment by the tech industry. Big tech companies and numerous well-regarded LLM startups have collectively invested hundreds of billions of dollars in AI agents. Deloitte predicts that by the end of 2025, 25% of companies developing AI will launch pilot projects or proofs of concept for AI agents, a figure projected to reach 50% by 2027. In this increasingly fierce competition, big tech companies are racing towards a common goal: to upgrade chatbots (like ChatGPT) so that they not only provide answers for humans but also control computers and take actions on behalf of human users. These companies unanimously promise that AI agents will be the next leap in the forthcoming AI revolution, fundamentally transforming human-computer interaction. Their promotional efforts at conferences and keynote speeches of various kinds place significant emphasis on "personalization", "customization", and "user-centricity", closely embedding these ideas into the term "agent" while gradually phasing out the old term "assistant". Unlike existing generative AI tools that rely on a certain amount of user hand-holding to prompt their output, AI agents are engineered to autonomously analyze data, reason, make decisions, and execute operations following user-defined parameters, while engaging in continuous self-learning and self-adaptation over time.
This necessitates several critical capabilities: a deep understanding of users to ensure goal alignment, the ability to adapt to changing circumstances to provide proactive support, and the capacity to independently plan and execute cross-platform tasks to achieve ultimate goals. Realizing these advanced capabilities demands the integration of sophisticated solutions such as adaptive training, planning, tool invocation, external knowledge retrieval, and memory retention. At the time of writing, a number of big AI companies have been actively unveiling their respective initiatives for AI agent development. In December 2024, Google officially announced its entry into the Agentic AI era by launching Gemini 2.0, its core multimodal model, and developing new experimental prototypes based on it, such as Project Mariner. Mariner was developed as an extension for Google's widely utilized web browser, Chrome, allowing users to input requests directly into their Chrome browser and have Mariner execute tasks on their behalf. Microsoft's strategic agenda is more focused on developing AI agents for enterprises and organizations to enhance the efficiency of business activities for employees. Its "M365 Copilot" platform introduces a comprehensive suite of AI agents that cover key business areas such as sales, services, finance, supply chain management, enterprise resource planning, and customer relationship management. Chinese technology companies have also demonstrated significant interest and long-term strategic planning in the development of AI agents. An AI system named Manus represents the latest advancement in China's research and development within this domain. Manus is designed as a fully autonomous AI agent with the ability to independently think, plan, and execute tasks without human intervention.
In demonstration videos, Manus has showcased its proficiency in handling three distinct tasks: screening resumes, analyzing stock correlations, and searching for real estate information in New York. Manus operates within a cloud-based virtual computing environment, enabling users to set objectives and subsequently disconnect, while providing a "console" window for users to monitor the agent's operations in real time and intervene when necessary. In the face of this technological revolution, it is becoming increasingly clear that to render AI agents fully potent and personalized, big tech companies will require extensive access to data. However, the structural integration of AI agent agendas with those of Big Tech corporations raises a central concern: whether AI agents will further consolidate the monopolistic data power of these corporations, thereby exacerbating social issues such as capitalist exploitation and corporate dominance. Scholars have already critiqued the increasingly data-intensive practices of tech giants. Crawford (2021), for example, highlighted that despite the rapid expansion of global AI, only a few corporations wield control over the predominant infrastructure platforms, which significantly influence the accessibility and viability of AI development and deployment (Farina et al., 2025). More recently, other scholars have introduced the concepts of "Big AI" (Vlist et al., 2024), "AI Empire" (Tacheva and Ramasubramanian, 2023), and "Technofeudalism" (Varoufakis, 2023) to illustrate how AI systems evolve into a networked entity comprising several monopolistic corporate axes, agendas, and powers, characterized by the deep interdependence between AI and the infrastructure, resources, and investments of existing tech giants. These monopolistic tendencies of AI also present substantial risks associated with increased datafication and the corporate centralization of data within society.
Numerous researchers (e.g., Dencik, 2025; Zuboff, 2019) have already highlighted the connections between this phenomenon and concerns such as bias, discrimination, mass surveillance, and privacy infringements (Stahl, 2018; Sartori and Theodorou, 2022; Sadowski, 2020). As AI continues to incorporate advanced machine learning capabilities for data processing, it may exacerbate the potential misuse of data (García, 2024; Gabriel, 2020). Failing to prioritize transparency and fairness in data-intensive practices could lead to significant adverse effects on the economy, individual privacy, and democratic institutions. Yet the data-intensive nature of future AI agents and their association with the concentration of data power among tech giants have not garnered adequate academic attention (Winkel, 2024). This Special Issue aims to close this gap by bringing together interdisciplinary researchers from East and West to analyse the problematic aspects of datafication inherent to the development of future AI agents along ten crucial dimensions: algorithms and big data, decentralized technologies, privacy and security, bias and fairness, explainability and accountability, social power asymmetry, AI governance, business ethics, AI public literacy, and environmental impact. The analysis conducted across these ten dimensions will be instrumental in formulating mitigation strategies and solutions for the development of future AI agents and more equitable and sustainable AI ecosystems (Taddeo and Floridi, 2018).
Topics relevant to this special issue include, but are not limited to:
Philosophy and Ethics of AI Agency
Philosophical and ethical foundations of AI agents and autonomous systems
The concept of "agency" in artificial intelligence: philosophical, psychological, and computational perspectives
From assistants to agents: conceptual, historical, and sociotechnical transitions
Epistemology of delegation: trust, responsibility, and epistemic authority in AI agents
Explainability, interpretability, and accountability in AI-driven actions
The role of memory, context-awareness, and personalization in AI agents: cognitive and ethical concerns
The future of human autonomy in an agent-saturated digital world
Responsible innovation and design ethics for next-generation AI agents
Power, Capital, and Data Governance
The political economy of AI agents: platform capitalism, corporate dominance, and digital monopolies
AI agents and the concentration of data power: risks for democracy and informational justice
Datafication and surveillance: ethical and legal implications of agentic AI
Business ethics of agentic ecosystems: customer manipulation, data ownership, and market asymmetries
Philosophical critiques of technofeudalism, "Big AI," and infrastructural dependency
AI agents and digital sovereignty: national and global policy perspectives
Bias, Fairness, and Human Rights
Bias, discrimination, and fairness in autonomous AI decision-making
Privacy, consent, and user autonomy in agent-mediated digital environments
Public understanding and literacy of AI agents: risks of anthropomorphism and techno-solutionism
Transparency, accountability, and democratic oversight in agentic systems
Applications and Sectorial Implications
AI agents in the public sector: education, healthcare, legal systems
AI agents and the future of work: labour displacement, productivity, and worker surveillance
Agentic AI in the enterprise: decision-making, automation, and organizational control
Environmental and sustainability challenges of data-intensive agentic infrastructures
Comparative and Interdisciplinary Approaches
Comparative perspectives on AI agents: Western and Eastern approaches to ethics, governance, and design
Interdisciplinary methodologies for studying AI agents (philosophy, law, computer science, STS, sociology)
Cross-cultural imaginaries and narratives of artificial agency
AI governance models: from technical standards to socio-political frameworks
Big Data in Computer Science
Big data analytics, machine learning, data mining, and cloud computing
Data management, data structures, and architectures for big data analytics, as well as the application of big data in various fields, such as social media
40th Anniversary Special Issue, edited by the IJHCI Editors
Submission deadline: 2026-06-01
Special Issue Editor(s)
Constantine Stephanidis, University of Crete and ICS-FORTH, Greece
Gavriel Salvendy, University of Central Florida, USA
The Editors of the International Journal of Human–Computer Interaction (IJHCI) announce a 40th Anniversary Special Issue, celebrating four decades of advancing the science and practice of human–computer interaction. Since its inception, IJHCI has been a leading venue for seminal contributions that have defined the field and shaped the trajectory of interactive technologies worldwide.
This landmark issue will showcase field-defining, highly cited papers that not only reflect the state of the art but also set the agenda for the decades ahead. Submissions must demonstrate clear potential for enduring impact through strong theoretical, methodological, or empirical advances.
This Special Issue is planned for wide international distribution.
Scope
We welcome papers that:
Introduce groundbreaking theories or models and methodologies with potential to reshape HCI research
Offer comprehensive, insightful, and authoritative review and appraisal papers or meta-analyses that synthesize key knowledge for scholars and practitioners
Present transformative empirical findings or design innovations addressing urgent and emerging challenges
Provide visionary perspectives that chart compelling future directions for the discipline
Selection Criteria
Given the commemorative nature of this issue, only manuscripts of the highest scholarly quality will be considered. We seek contributions that:
Demonstrate originality and conceptual and/or methodological depth
Have clear potential for wide citation and influence
Appeal to a broad, interdisciplinary readership in HCI and beyond
Important Dates
Manuscript submission deadline: June 1, 2026
Notification of review decisions: September 1, 2026
Final manuscripts due: October 15, 2026
Publication date: January 2027
This special issue offers a unique opportunity to publish in a commemorative volume that will serve as both a capstone for the past 40 years of HCI research and a foundation for the decades to come. We look forward to receiving your most innovative and impactful work.
Please address any queries to: Constantine Stephanidis
Special Issue on Trust and Mistrust in Artificial Intelligence: Human, Technological and Societal Impact Considerations
Submission deadline: 2026-06-15
Special Issue Editor(s)
Stavroula Ntoa, ICS-FORTH, Greece
stant@ics.forth.gr
Introduction
Trust is a central concept across scholarly and public discussions in politics, economics, and society. The literature offers numerous definitions, extensive investigations into its antecedents, and even lines of research seeking to understand its neurological foundations. Achieving a comprehensive understanding of trust requires attention to its many dimensions: the dispositions, perceptions, beliefs, attitudes, expectations, and intentions of the trustor; the qualities and behaviors of the trustee; and the contextual conditions shaping their interaction. Its significance in human relationships is profound: it serves as a foundational element that supports the cohesion of societies, underpins every form of social interaction, and evolves over time, shaped through observation and learning.
As technological systems become more deeply integrated into everyday life, trust in technology has gained equal prominence. Existing literature distinguishes between two main approaches to conceptualizing trust in technology. One direction adapts human-oriented dimensions such as benevolence, integrity, and ability, while the other emphasizes system-oriented attributes such as helpfulness, reliability, and functionality. When it comes to traditional technological artifacts, users rarely view them as moral agents.
The rise of Artificial Intelligence (AI), however, has introduced new complexity into how trust in technology is understood. AI systems now undertake tasks that were once exclusively human, such as offering recommendations that influence personal and institutional decisions, automating processes, engaging in creative tasks such as writing texts and creating images, and interacting with users in ways that can appear distinctly human-like. Yet despite these capacities, AI is fundamentally distinct from humans. It remains a non-conscious digital system, with correspondingly different cognitive qualities from those of biological creatures. Research demonstrates that trust in AI diverges markedly from interpersonal trust across multiple dimensions, including its underlying bases, the way it must be calibrated across contexts, the qualities attributed to the AI system, and the persistent paradox in which individuals extend trust to algorithms despite knowing they can make errors, mislead, or "hallucinate."
Scope
This Special Issue is dedicated to exploring trust and mistrust in artificial intelligence. It seeks original, rigorous, and impactful contributions that address relevant foundations, challenges, as well as technological and human considerations. Relevant topics may include, but are not limited to:
Foundational theories and conceptual frameworks of trust and mistrust
Psychological, cognitive, and behavioral determinants of trusting or mistrusting AI
Behavioral investigations of trust calibration, over-trust, and under-trust in AI
Comparative examinations of trust in humans, traditional technologies, and AI systems
User perceptions, expectations, and mental models of AI trustworthiness
Cross-cultural, demographic, or contextual variations in AI trust and mistrust
Design and evaluation of trust-enabling, trust-repair, or trust-calibration strategies
Metrics, instruments, and modelling approaches for assessing trust in AI or AI trustworthiness
Technical methods for engineering AI reliability, transparency, robustness, and accountability
Misinformation, disinformation, deepfakes, and synthetic media
Bias mitigation and fairness-enhancing algorithms
Privacy-preserving machine learning
Data misuse, surveillance, and the impact on trust in AI systems
Human–AI interaction and collaboration approaches that influence trust dynamics
Human-Centered Design of trustworthy AI
Ethical, legal, and policy considerations related to trustworthy and responsible AI
Governance models, regulatory mechanisms, and oversight structures shaping trust and mistrust in AI
Domain-specific investigations of trust in areas such as healthcare, education, transportation, public administration, creative industries, robotics, or defence
Explorations of mistrust, scepticism, resistance, and contestation of AI systems
Interdisciplinary perspectives on the AI trust paradox and its implications
We encourage work that identifies emerging challenges, proposes innovative solutions, or develops frameworks that can guide future research and practice. Contributions should offer strong scientific grounding, methodological rigor, and clear relevance to the technological and societal dimensions of trust in AI. Interdisciplinary approaches are especially welcome.
Submission Instructions
Important Dates
Full paper submission due date: June 15, 2026
Notification of the first-round review decision: August 15, 2026
Revisions due date: October 15, 2026
Editorial decision: December 30, 2026
Targeted special issue publication date: early 2027
Related Journals
| CCF | Full Name | Impact Factor | Publisher | ISSN |
|---|---|---|---|---|
| b | International Journal of Human-Computer Interaction | 4.9 | Taylor & Francis | 1044-7318 |
| b | Human–Computer Interaction | | Taylor & Francis | 0737-0024 |
| | Advances in Human-Computer Interaction | 2.300 | Hindawi | 1687-5893 |
| | ACM Transactions on Human-Robot Interaction | 5.5 | ACM | 2573-9522 |
| c | Proceedings of the ACM on Human-Computer Interaction | | ACM | 2573-0142 |
| | International Journal of Child-Computer Interaction | | Elsevier | 2212-8689 |
| a | ACM Transactions on Computer-Human Interaction | 6.6 | ACM | 1073-0516 |
| | Brain-Computer Interfaces | 1.800 | Taylor & Francis | 2326-263X |
| a | International Journal of Human-Computer Studies | 5.1 | Elsevier | 1071-5819 |
| | International Journal of Computer Integrated Manufacturing | 4.0 | Taylor & Francis | 0951-192X |
Related Conferences
| CCF | CORE | QUALIS | Abbreviation | Full Name | Deadline | Notification | Conference Date |
|---|---|---|---|---|---|---|---|
| c | b | a2 | ICMI | International Conference on Multimodal Interaction | 2025-04-18 | 2025-07-01 | 2025-10-13 |
| c | | | ICIS | International Conference on Computer and Information Science | 2020-08-10 | 2020-08-27 | 2020-11-18 |
| | a | a2 | CC | International Conference on Compiler Construction | 2025-11-10 | 2025-12-10 | 2026-01-31 |
| c | | | APCHI | Asia Pacific Conference on Computer Human Interaction | 2014-04-30 | | 2014-10-22 |
| c | b | b1 | CGI | Computer Graphics International | 2025-05-02 | 2025-06-05 | 2025-07-14 |
| b | b | a2 | MobileHCI | International Conference on Human-Computer Interaction with Mobile Devices and Services | 2025-01-30 | 2025-05-29 | 2025-09-22 |
| c | | | ACHI | International Conference on Advances in Computer-Human Interactions | 2023-02-01 | 2023-02-28 | 2023-04-24 |
| | | a2 | HRI | International Conference on Human-Robot Interaction | 2024-09-23 | 2024-12-02 | 2025-03-04 |
| c | a | b1 | INTERACT | International Conference on Human-Computer Interaction | 2025-02-17 | 2025-04-28 | 2025-09-08 |
| | | b2 | HCII | International Conference on Human-Computer Interaction | 2015-11-06 | 2015-12-04 | 2016-07-17 |