
Historical Imprints and Firm Strategy: The Case of German Firms and Nazi Legacy

Abstract

This paper explores the role of trust in the adoption and integration of artificial intelligence (AI) within organizations. It analyzes how trust shapes AI acceptance, focusing on five distinct trust configurations: full trust, full distrust, uncomfortable trust, blind trust, and balanced trust. Using qualitative methods, including real-life observations and interviews, the study examines how these trust dynamics influence AI performance and organizational outcomes. The findings suggest that balanced trust fosters effective AI adoption, while extremes of trust or distrust hinder engagement and decision-making. The study underscores the importance of cultivating an optimal trust balance for successful AI implementation.

References

  1. Rai, A., Constantinides, P., & Sarker, S. (2019). "Next-generation digital platforms: Toward human–AI hybrids." MIS Quarterly, 43(1), 3-9.
  2. Gefen, D., Benbasat, I., & Pavlou, P. A. (2008). "A research agenda for trust in online environments." Journal of Management Information Systems, 24(4), 275-286.
  3. Langer, M., & Landers, R. N. (2021). "Trust in artificial intelligence: A review and agenda for future research." Computers in Human Behavior Reports, 3, 100057.
  4. Lee, J. D., & See, K. A. (2004). "Trust in automation: Designing for appropriate reliance." Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), 50-80.
  5. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). "An integrative model of organizational trust." Academy of Management Review, 20(3), 709-734.
  6. Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). "A model for types and levels of human interaction with automation." IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 30(3), 286-297.
  7. Wang, D., & Siau, K. (2019). "Artificial intelligence, machine learning, automation, robotics, future of work, and future of humanity." Journal of Database Management (JDM), 30(1), 61-79.
  8. Siau, K., & Wang, W. (2018). "Building trust in artificial intelligence, machine learning, and robotics." Journal of Information Technology Management, 29(4), 18-21.
  9. Benbasat, I., & Wang, W. (2005). "Trust in and adoption of online recommendation agents." Journal of the Association for Information Systems, 6(3), 72-101.
  10. Taddeo, M., & Floridi, L. (2018). "How AI can be a force for good." Science, 361(6404), 751-752.
  11. Zhang, B., & Dafoe, A. (2020). "Artificial intelligence: American attitudes and trends." Journal of Behavioral Public Administration, 3(1).
  12. McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). "Developing and validating trust measures for e-commerce: An integrative typology." Information Systems Research, 13(3), 334-359.
  13. Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). "A meta-analysis of factors affecting trust in human-robot interaction." Human Factors, 53(5), 517-527.
  14. Waytz, A., Heafner, J., & Epley, N. (2014). "The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle." Journal of Experimental Social Psychology, 52, 113-117.
  15. Dignum, V. (2019). "Responsible Artificial Intelligence: Designing AI for human values." ITU Journal: ICT Discoveries, 2(1), 1-8.

How to Cite

Ivanenko, L. (2025). Historical Imprints and Firm Strategy: The case of German Firms and Nazi legacy. Abhi International Journal of Management Studies, Issue 1.