ISSN: XXXX-XXXX

Trust Configurations in AI Adoption: Analyzing Organizational Dynamics

Abstract

This paper examines the dynamics of trust configurations in AI adoption within organizations and their implications for AI performance and organizational integration. The study investigates how varying degrees of trust influence AI acceptance and use, analyzing four trust states (full trust, full distrust, uncomfortable trust, and blind trust) and their organizational consequences. Through qualitative research incorporating real-life observations and interviews, the study provides a nuanced understanding of trust’s role in AI adoption. Findings indicate that full trust enhances engagement and decision-making, while distrust and blind trust present adoption challenges. The study highlights the importance of balanced trust configurations for optimizing AI integration.


How to Cite

Ashvini Kumar Mishra (2025). Trust Configurations in AI Adoption: Analyzing Organizational Dynamics. Abhi International Journal of Management Studies, Issue 1.