Journal Information
International Journal of Human-Computer Interaction (IJHCI)
https://www.tandfonline.com/journals/hihc20
Impact Factor:
4.9
Publisher:
Taylor & Francis
ISSN:
1044-7318
Call for Papers
Aims and scope

The International Journal of Human-Computer Interaction addresses the cognitive, creative, social, health, and ergonomic aspects of interactive computing.

It emphasizes the human element in relation to the systems and contexts in which humans perform, operate, network, and communicate, including mobile apps, social media, online communities, and digital accessibility. The journal publishes original articles including reviews and reappraisals of the literature, empirical studies, and quantitative and qualitative contributions to the theories and applications of HCI.

All submitted manuscripts are subject to initial appraisal by the Editors, and, if found suitable for further consideration, to peer review by independent, anonymous expert referees. All submissions and peer review are conducted online.

Publication office: Taylor & Francis, Inc., 530 Walnut Street, Suite 850, Philadelphia, PA 19106.

Readership:

Professionals with an interest in the scientific implications and practical relevance of how computer systems should be designed and/or how they are actually used. 
Special Issues
Special Issue on The Age of AI Agents: Data Power, Social, Political and Ethical Challenges
Submission Deadline: 2026-03-15

Special Issue Editor(s)
Mirko Farina, Huaqiao University, China, and Institute for Digital Economy and Artificial Systems [Xiamen University and Moscow State University], farinamirko@gmail.com
Andrea Lavazza, Pegaso University, Italy, lavazza67@gmail.com

Background, Rationale and Objectives

2024 witnessed the rapid rise of AI agents and widespread investment by the tech industry. Big tech companies and numerous well-regarded LLM startups have collectively invested hundreds of billions of dollars in AI agents. Deloitte predicts that by the end of 2025, 25% of companies developing AI will launch pilot projects or proofs of concept for AI agents, a figure expected to rise to 50% by 2027. In this increasingly fierce competition, big tech companies are racing towards a common goal: upgrading chatbots (like ChatGPT) so that they not only provide answers for humans but also control computers and take actions on behalf of human users. These companies unanimously promise that AI agents will be the next leap in the forthcoming AI revolution, fundamentally transforming human-computer interaction. Their promotion at conferences and in keynote speeches of various kinds places significant emphasis on "personalization", "customization", and "user-centricity", closely embedding these ideas into the term "agent" while gradually phasing out the older term "assistant".

Unlike existing generative AI tools, which rely on a certain amount of user hand-holding to prompt their output, AI agents are engineered to autonomously analyze data, reason, make decisions, and execute operations within user-defined parameters, while engaging in continuous self-learning and self-adaptation over time. This necessitates several critical capabilities: a deep understanding of users to ensure goal alignment, the ability to adapt to changing circumstances to provide proactive support, and the capacity to independently plan and execute cross-platform tasks to achieve ultimate goals. Realizing these advanced capabilities demands the integration of sophisticated solutions such as adaptive training, planning, tool invocation, external knowledge retrieval, and memory retention.

At the time of writing, a number of big AI companies have been actively unveiling their respective initiatives for AI agent development. In December 2024, Google officially announced its entry into the agentic AI era by launching Gemini 2.0, its core multimodal model, and developing new experimental prototypes based on it, such as Project Mariner. Mariner was developed as an extension for Google's widely used web browser, Chrome, allowing users to input requests directly into the browser and have Mariner execute tasks on their behalf. Microsoft's strategic agenda is more focused on developing AI agents for enterprises and organizations to enhance the efficiency of employees' business activities. Its M365 Copilot platform introduces a comprehensive suite of AI agents covering key business areas such as sales, services, finance, supply chain management, enterprise resource planning, and customer relationship management. Chinese technology companies have also demonstrated significant interest and long-term strategic planning in the development of AI agents. At the time of writing, an AI system named Manus represents the latest advancement in China's research and development within this domain. Manus is designed as a fully autonomous AI agent with the ability to independently think, plan, and execute tasks without human intervention.
In demonstration videos, Manus has showcased its proficiency in handling three distinct tasks: screening resumes, analyzing stock correlations, and searching for real estate information in New York. Manus operates within a cloud-based virtual computing environment, enabling users to set objectives and subsequently disconnect, while providing a "console" window for users to monitor the agent's operations in real time and intervene when necessary.

In the face of this technological revolution, it is becoming increasingly clear that to render AI agents fully potent and personalized, big tech companies will require extensive access to data. However, the structural integration of AI agent agendas with those of Big Tech corporations raises a central concern: whether AI agents will further consolidate the monopolistic data power of these corporations, thereby exacerbating social issues such as capitalist exploitation and corporate dominance. Scholars have already critiqued the increasingly data-intensive practices of tech giants. Crawford (2021), for example, highlighted that despite the rapid expansion of global AI, only a few corporations wield control over the predominant infrastructure platforms, which significantly influence the accessibility and viability of AI development and deployment (Farina et al., 2025). More recently, other scholars have introduced the concepts of "Big AI" (Vlist et al., 2024), "AI Empire" (Tacheva and Ramasubramanian, 2023), and "Technofeudalism" (Varoufakis, 2023) to illustrate how AI systems evolve into a networked entity comprising several monopolistic corporate axes, agendas, and powers, characterized by the deep interdependence between AI and the infrastructure, resources, and investments of existing tech giants.

The monopolistic tendencies of AI also present substantial risks associated with increased datafication and the corporate centralization of data within society. Numerous researchers (e.g., Dencik, 2025; Zuboff, 2019) have already highlighted the connections between this phenomenon and concerns such as bias, discrimination, mass surveillance, and privacy infringements (Stahl, 2018; Sartori and Theodorou, 2022; Sadowski, 2020). As AI continues to incorporate advanced machine learning capabilities for data processing, it may exacerbate the potential misuse of data (García, 2024; Gabriel, 2020). Failing to prioritize transparency and fairness in data-intensive practices could lead to significant adverse effects on the economy, individual privacy, and democratic institutions. Yet the data-intensive nature of future AI agents and their association with the concentration of data power among tech giants have not garnered adequate academic attention (Winkel, 2024).

This Topical Collection aims to close this gap by bringing together interdisciplinary researchers from East and West to analyse the problematic aspects of datafication inherent to the development of future AI agents across 10 crucial dimensions: algorithms and big data (4.1); decentralized technologies (4.2); privacy and security (4.3); bias and fairness (4.4); explainability and accountability (4.5); social power asymmetry (4.6); AI governance (4.7); business ethics (4.8); AI public literacy (4.9); and environmental impact (4.10). The analysis conducted across these 10 dimensions will be instrumental in formulating mitigation strategies and solutions for the development of future AI agents and more equitable and sustainable AI ecosystems (Taddeo and Floridi, 2018).
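To make the capabilities discussed above more concrete for readers outside the engineering literature, the following minimal Python sketch illustrates the plan, tool-invocation, and memory-retention loop that such agents are typically built around. The tool names, the fixed two-step planner, and the Agent class are illustrative assumptions for this sketch, not a description of any system mentioned in this call.

```python
# Minimal sketch of the plan -> act -> remember loop behind agentic AI.
# Tool names, the fixed planner, and the memory store are illustrative
# assumptions, not any vendor's actual architecture.

from typing import Callable

def search_web(query: str) -> str:          # stand-in for external knowledge retrieval
    return f"results for '{query}'"

def run_analysis(data: str) -> str:         # stand-in for a computation tool
    return f"analysis of {data}"

TOOLS: dict[str, Callable[[str], str]] = {
    "search": search_web,
    "analyze": run_analysis,
}

class Agent:
    def __init__(self) -> None:
        self.memory: list[str] = []          # memory retained across steps

    def plan(self, goal: str) -> list[tuple[str, str]]:
        # A real agent would delegate planning to an LLM; here we use a
        # fixed two-step plan for illustration.
        return [("search", goal), ("analyze", goal)]

    def run(self, goal: str) -> str:
        for tool_name, arg in self.plan(goal):
            result = TOOLS[tool_name](arg)   # tool invocation
            self.memory.append(result)       # memory retention
        return self.memory[-1]

if __name__ == "__main__":
    agent = Agent()
    print(agent.run("stock correlations"))   # -> "analysis of stock correlations"
```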
Topics relevant to this special issue include, but are not limited to:

Philosophy and Ethics of AI Agency
- Philosophical and ethical foundations of AI agents and autonomous systems
- The concept of "agency" in artificial intelligence: philosophical, psychological, and computational perspectives
- From assistants to agents: conceptual, historical, and sociotechnical transitions
- Epistemology of delegation: trust, responsibility, and epistemic authority in AI agents
- Explainability, interpretability, and accountability in AI-driven actions
- The role of memory, context-awareness, and personalization in AI agents: cognitive and ethical concerns
- The future of human autonomy in an agent-saturated digital world
- Responsible innovation and design ethics for next-generation AI agents

Power, Capital, and Data Governance
- The political economy of AI agents: platform capitalism, corporate dominance, and digital monopolies
- AI agents and the concentration of data power: risks for democracy and informational justice
- Datafication and surveillance: ethical and legal implications of agentic AI
- Business ethics of agentic ecosystems: customer manipulation, data ownership, and market asymmetries
- Philosophical critiques of technofeudalism, "Big AI," and infrastructural dependency
- AI agents and digital sovereignty: national and global policy perspectives

Bias, Fairness, and Human Rights
- Bias, discrimination, and fairness in autonomous AI decision-making
- Privacy, consent, and user autonomy in agent-mediated digital environments
- Public understanding and literacy of AI agents: risks of anthropomorphism and techno-solutionism
- Transparency, accountability, and democratic oversight in agentic systems

Applications and Sectorial Implications
- AI agents in the public sector: education, healthcare, legal systems
- AI agents and the future of work: labour displacement, productivity, and worker surveillance
- Agentic AI in the enterprise: decision-making, automation, and organizational control
- Environmental and sustainability challenges of data-intensive agentic infrastructures

Comparative and Interdisciplinary Approaches
- Comparative perspectives on AI agents: Western and Eastern approaches to ethics, governance, and design
- Interdisciplinary methodologies for studying AI agents (philosophy, law, computer science, STS, sociology)
- Cross-cultural imaginaries and narratives of artificial agency
- AI governance models: from technical standards to socio-political frameworks

Big Data in Computer Science
- Big data analytics, machine learning, data mining, and cloud computing
- Data management, data structures, and architectures for big data analytics
- Applications of big data in various fields, such as social media
Special Issue: 40th Anniversary Special Issue edited by the IJHCI Editors
Submission Deadline: 2026-06-01

Special Issue Editor(s)
Constantine Stephanidis, University of Crete and ICS-FORTH, Greece
Gavriel Salvendy, University of Central Florida, USA

The Editors of the International Journal of Human–Computer Interaction (IJHCI) announce a 40th Anniversary Special Issue, celebrating four decades of advancing the science and practice of human–computer interaction. Since its inception, IJHCI has been a leading venue for seminal contributions that have defined the field and shaped the trajectory of interactive technologies worldwide. This landmark issue will showcase field-defining, highly cited papers that not only reflect the state of the art but also set the agenda for the decades ahead. Submissions must demonstrate clear potential for enduring impact through strong theoretical, methodological, or empirical advances. It is planned that this Special Issue will be widely distributed internationally.

Scope
We welcome papers that:
- Introduce groundbreaking theories, models, or methodologies with the potential to reshape HCI research
- Offer comprehensive, insightful, and authoritative reviews, appraisals, or meta-analyses that synthesize key knowledge for scholars and practitioners
- Present transformative empirical findings or design innovations addressing urgent and emerging challenges
- Provide visionary perspectives that chart compelling future directions for the discipline

Selection Criteria
Given the commemorative nature of this issue, only manuscripts of the highest scholarly quality will be considered. We seek contributions that:
- Demonstrate originality and conceptual and/or methodological depth
- Have clear potential for wide citation and influence
- Appeal to a broad, interdisciplinary readership in HCI and beyond

Important Dates
Manuscript submission deadline: 1 June 2026
Notification of review decisions: 1 September 2026
Final manuscripts due: 15 October 2026
Publication date: January 2027

This special issue offers a unique opportunity to publish in a commemorative volume that will serve as both a capstone for the past 40 years of HCI research and a foundation for the decades to come. We look forward to receiving your most innovative and impactful work. Please address any queries to: Constantine Stephanidis
Special Issue on Trust and Mistrust in Artificial Intelligence: Human, Technological and Societal Impact Considerations
Submission Deadline: 2026-06-15

Special Issue Editor(s)
Stavroula Ntoa, ICS-FORTH, Greece, stant@ics.forth.gr

Introduction

Trust is a central concept across scholarly and public discussions in politics, economics, and society. The literature offers numerous definitions, extensive investigations into its antecedents, and even lines of research seeking to understand its neurological foundations. Achieving a comprehensive understanding of trust requires attention to its many dimensions: the dispositions, perceptions, beliefs, attitudes, expectations, and intentions of the trustor; the qualities and behaviors of the trustee; and the contextual conditions shaping their interaction. Its significance in human relationships is profound: it serves as a foundational element that supports the cohesion of societies, underpins every form of social interaction, and evolves over time, shaped through observation and learning.

As technological systems become more deeply integrated into everyday life, trust in technology has gained equal prominence. The existing literature distinguishes between two main approaches to conceptualizing trust in technology. One direction adapts human-oriented dimensions such as benevolence, integrity, and ability, while the other emphasizes system-oriented attributes such as helpfulness, reliability, and functionality. When it comes to traditional technological artifacts, users rarely view them as moral agents. The rise of Artificial Intelligence (AI), however, has introduced new complexity into how trust in technology is understood. AI systems now undertake tasks that were once exclusively human: offering recommendations that influence personal and institutional decisions, automating processes, engaging in creative tasks such as writing texts and creating images, and interacting with users in ways that can appear distinctly human-like. Yet despite these capacities, AI is fundamentally distinct from humans. It remains a non-conscious digital operating system, with correspondingly different cognitive qualities from those of biological creatures. Research demonstrates that trust in AI diverges markedly from interpersonal trust across multiple dimensions, including its underlying bases, the way it must be calibrated across contexts, the qualities attributed to the AI system, and the persistent paradox in which individuals extend trust to algorithms despite knowing they can make errors, mislead, or "hallucinate."

Scope

This Special Issue is dedicated to exploring trust and mistrust in artificial intelligence. It seeks original, rigorous, and impactful contributions that address relevant foundations, challenges, and technological and human considerations.
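As one illustration of how behavioral studies of trust calibration are often operationalized, the following minimal Python sketch compares users' reliance on an AI system with the system's actual reliability: a positive gap suggests over-trust, a negative gap under-trust. The interaction fields and the 0.10 tolerance are illustrative assumptions, not constructs prescribed by this call.

```python
# Minimal sketch of one common operationalization of trust calibration:
# compare how often users rely on an AI's output (reliance rate) with how
# often that output is actually correct (system reliability). Field names
# and the 0.10 tolerance are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Interaction:
    ai_correct: bool      # was the AI's recommendation objectively correct?
    user_accepted: bool   # did the user follow the recommendation?

def calibration_gap(log: list[Interaction]) -> float:
    """Positive gap suggests over-trust; negative suggests under-trust."""
    reliance = sum(i.user_accepted for i in log) / len(log)
    reliability = sum(i.ai_correct for i in log) / len(log)
    return reliance - reliability

def classify(gap: float, tolerance: float = 0.10) -> str:
    if gap > tolerance:
        return "over-trust"   # users rely more than performance warrants
    if gap < -tolerance:
        return "under-trust"  # users rely less than performance warrants
    return "calibrated"

if __name__ == "__main__":
    log = [Interaction(ai_correct=c, user_accepted=a)
           for c, a in [(True, True), (False, True), (True, True),
                        (False, True), (True, False), (True, True)]]
    gap = calibration_gap(log)
    print(f"gap = {gap:+.2f} -> {classify(gap)}")  # gap = +0.17 -> over-trust
```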
Relevant topics may include, but are not limited to:
- Foundational theories and conceptual frameworks of trust and mistrust
- Psychological, cognitive, and behavioral determinants of trusting or mistrusting AI
- Behavioral investigations of trust calibration, over-trust, and under-trust in AI
- Comparative examinations of trust in humans, traditional technologies, and AI systems
- User perceptions, expectations, and mental models of AI trustworthiness
- Cross-cultural, demographic, or contextual variations in AI trust and mistrust
- Design and evaluation of trust-enabling, trust-repair, or trust-calibration strategies
- Metrics, instruments, and modelling approaches for assessing trust in AI or AI trustworthiness
- Technical methods for engineering AI reliability, transparency, robustness, and accountability
- Misinformation, disinformation, deepfakes, and synthetic media
- Bias mitigation and fairness-enhancing algorithms
- Privacy-preserving machine learning
- Data misuse, surveillance, and the impact on trust in AI systems
- Human–AI interaction and collaboration approaches that influence trust dynamics
- Human-centered design of trustworthy AI
- Ethical, legal, and policy considerations related to trustworthy and responsible AI
- Governance models, regulatory mechanisms, and oversight structures shaping trust and mistrust in AI
- Domain-specific investigations of trust in areas such as healthcare, education, transportation, public administration, creative industries, robotics, or defence
- Explorations of mistrust, scepticism, resistance, and contestation of AI systems
- Interdisciplinary perspectives on the AI trust paradox and its implications

We encourage work that identifies emerging challenges, proposes innovative solutions, or develops frameworks that can guide future research and practice. Contributions should offer strong scientific grounding, methodological rigor, and clear relevance to the technological and societal dimensions of trust in AI. Interdisciplinary approaches are especially welcome.

Submission Instructions

Important Dates
Full paper submission due date: June 15, 2026
Notification of the first-round review decision: August 15, 2026
Revisions due date: October 15, 2026
Editorial decision: December 30, 2026
Targeted special issue publication date: early 2027
Related Conferences
CCF | CORE | QUALIS | Abbr. | Full Name | Deadline | Notification | Conference Date
c | b | a2 | ICMI | International Conference on Multimodal Interaction | 2025-04-18 | 2025-07-01 | 2025-10-13
  | c |    | ICIS | International Conference on Computer and Information Science | 2020-08-10 | 2020-08-27 | 2020-11-18
  | a | a2 | CC | International Conference on Compiler Construction | 2025-11-10 | 2025-12-10 | 2026-01-31
  | c |    | APCHI | Asia Pacific Conference on Computer Human Interaction | 2014-04-30 |  | 2014-10-22
c | b | b1 | CGI | Computer Graphics International | 2025-05-02 | 2025-06-05 | 2025-07-14
b | b | a2 | MobileHCI | International Conference on Human-Computer Interaction with Mobile Devices and Services | 2025-01-30 | 2025-05-29 | 2025-09-22
  | c |    | ACHI | International Conference on Advances in Computer-Human Interactions | 2023-02-01 | 2023-02-28 | 2023-04-24
  |   | a2 | HRI | International Conference on Human-Robot Interaction | 2024-09-23 | 2024-12-02 | 2025-03-04
c | a | b1 | INTERACT | International Conference on Human-Computer Interaction | 2025-02-17 | 2025-04-28 | 2025-09-08
  |   | b2 | HCII | International Conference on Human-Computer Interaction | 2015-11-06 | 2015-12-04 | 2016-07-17