Amin, M. A. S., Kim, S., & Kim, D. J. (Forthcoming). Trust in humans versus trust in machines/systems: A systematic literature review (SLR) and roadmap for future research. The DATA BASE for Advances in Information Systems.
- ABDC Ranking: A
- CRediT Taxonomy:
- Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Resources, Software, Visualization, Writing – original draft, Writing – review & editing
Introduction
Trust is a fundamental aspect of human interactions, extending into our relationships with technology. With the rise of artificial intelligence (AI) and automated systems, understanding trust in human-machine interactions has become increasingly important. This document distills the key insights from a systematic literature review on trust in AI, highlighting trends, concerns, and future directions.
Understanding Trust in AI
What is trust? Trust generally refers to the belief that someone or something will act as expected. When applied to machines, trust depends on:
- Reliability: Does the AI or system perform consistently?
- Transparency: Can users understand how decisions are made?
- Security: Is the system protected from errors or misuse?
Why Trust in AI Matters
AI is being integrated into critical fields like healthcare, finance, and business operations. As automation expands, ensuring people trust AI is essential to its adoption. A lack of trust can make users hesitant to adopt AI-powered tools, hampering productivity and innovation.
Key Trends in AI Trust Research
A systematic review of research published from 2010 to 2023 identified the following emerging themes:
- Growing reliance on AI across industries, increasing the need for ethical AI design.
- Transparency concerns as AI decisions often lack clear explanations.
- Bias and fairness issues that affect AI decision-making, especially in hiring, lending, and legal contexts.
- Security risks that impact user confidence in AI.
- Personalization vs. privacy, as AI systems collect and use vast amounts of user data.
Challenges in Building AI Trust
Several challenges must be addressed to enhance trust in AI:
- Understanding AI decision-making: Many AI models operate as “black boxes,” meaning their internal processes are difficult to interpret.
- Ensuring fairness: AI can inherit biases from training data, leading to unfair outcomes.
- Enhancing security: Users need reassurance that AI systems are protected from cyber threats.
- Balancing automation and human oversight: People need to feel in control when interacting with AI systems.
Roadmap for Strengthening AI Trust
Researchers propose a four-phase approach to improving trust in AI:
- Phase 1: Building Reliable AI – Ensure AI systems function accurately and consistently.
- Phase 2: Enhancing Transparency – Develop AI that explains its decisions in an understandable way.
- Phase 3: Increasing Ethical Accountability – Establish guidelines for fairness, security, and responsible AI usage.
- Phase 4: Preparing for Future AI – Address potential challenges as AI evolves toward advanced intelligence.
Conclusion
Trust in AI is crucial for its successful integration into society. By focusing on reliability, transparency, fairness, and security, we can create AI systems that people trust and use effectively. Future research should continue refining AI trust models to ensure these systems remain beneficial and responsible.
Keywords: AI trust, human-machine interaction, transparency, AI ethics, AI security, future research roadmap.
Alignment with U.N. Sustainable Development Goals
The research paper “Trust in Humans versus Trust in Machines/Systems: A Systematic Literature Review (SLR) and Roadmap for Future Research” by Amin et al. aligns with the following U.N. Sustainable Development Goals (SDGs) prioritized in the Dr. Sam Pack College of Business (DSPCOB) strategic plan.
Goal 8: Decent Work and Economic Growth
- Examines the integration of AI in industries such as healthcare, finance, and e-commerce.
- Emphasizes the need for trustworthy AI systems that enhance productivity while mitigating risks such as job displacement and addressing related ethical concerns.
- Identifies research gaps and provides a roadmap for AI trust, supporting responsible technological advancement and workforce sustainability.
Goal 9: Industry, Innovation, and Infrastructure
- Investigates human-machine trust in AI-driven decision-making.
- Highlights the importance of secure and transparent AI governance for sustainable infrastructure.
- Emphasizes fairness and accountability in AI adoption, aligning with DSPCOB’s mission to foster ethical and innovative business solutions.
This research contributes to Basic or Discovery Scholarship, expanding theoretical knowledge on trust in AI and guiding future research on responsible AI implementation.