Relational Leadership and AI: Balancing Humanity with Automation

As generative AI and predictive analytics become increasingly sophisticated, leaders across industries are feeling pressure to adopt these tools quickly. Phrases like “AI literacy” are everywhere, though definitions of exactly what the term means vary. While some leaders are quick to adopt AI capabilities, others worry that doing so will come at a cost. 

Foundationally, defining what it means to lead in a hybrid human-machine environment (Wilson & Daugherty, 2023) seems to be the order of the day. But many companies are still grappling with what this truly looks like and how they can possibly keep up. 

AI offers clear advantages. It can enhance decision-making, support forecasting, and streamline development metrics. According to a 2024 MIT Sloan Management Review and BCG report, leaders using AI-driven analytics saw improvements in both productivity and strategic responsiveness. Yet not all AI applications are equally accountable.  

And the leadership challenge isn’t just technological. It’s ethical and emotional.  

With regard to accountability and responsibility, intellectual property is under threat. “A healthy knowledge system requires honesty and accountability, and not long ago this was taken for granted. But plagiarism is now everywhere, and taken for granted” (Gioia, 2025). 

For leaders, this can erode trust and empathy, which are central to effective leadership. Errors and plagiarism aside, AI lacks emotional intelligence, cultural awareness, and moral reasoning.  

Leaders must therefore act as “ethical translators,” ensuring AI tools are used transparently and in ways that align with organizational values (Floridi & Cowls, 2021). But can this sometimes mean more work? Performance-monitoring AI, for example, can easily veer into surveillance, and its errors could cost someone their livelihood, especially when it is not paired with informed leadership. 

The human side of leadership is necessary for sustainable culture-building and high morale. Personal coaching and development, for instance, may be more necessary than ever in an AI-reliant environment.  

Leaders who adopt AI quickly are advised to do so with caution. Ultimately, it is still up to humans to model curiosity, humility, and adaptability, fostering a culture where experimentation and feedback loops are encouraged (Daugherty & Wilson, 2018). 

AI-informed leadership is a threat to leadership values only if we become blindly reliant on it. If used with care and a certain amount of caution, it may be (as often advertised) an opportunity to improve efficiency and find more robust information. But if leveraged without emotional intelligence, self-awareness, and a continued dedication to excellence and humanity, it may lead not only to costly mistakes but to irreparable ones. 


References

"Learning to Manage Uncertainty, With AI" (MIT Sloan Management Review & BCG, Nov 2024) 

"Human + Machine: Reimagining Work in the Age of AI" by Paul R. Daugherty & H. James Wilson (2018) 

“Collaborative Intelligence” by Wilson & Daugherty, Harvard Business Review (2023) 

“The Ten Warning Signs” by Ted Gioia on The Honest Broker (2025) 

Disclaimer

Here at Lead Read Today, we endeavor to take an objective (rational, scientific) approach to analyzing leaders and leadership. All opinion pieces will be reviewed for appropriateness, and the opinions shared are solely of the author and not representative of The Ohio State University or any of its affiliates.