Blog

Human Side Up: Leadership in the age of AI with Charlene Li


Inclusive leadership: a conversation with Charlene Li

Charlene Li is a New York Times bestselling author as well as the Founder and CEO of Quantum Networks, a consulting group. She previously worked as an analyst at Forrester Research and a research officer at PA Consulting, and she has shared her insights with us and the world through keynotes at the World Economic Forum, TED, and SXSW. She is also the author of six books on leadership, her most recent being Winning with Generative AI: The 90-Day Blueprint for Success.

CLARA Founder & CEO Natasha Nuytten sat down with Li to discuss leadership and how leaders can better prepare for the advances that generative AI promises to bring to the business world. According to Li, the principles of good leadership—clarity of purpose, inspiring others, and showing up with empathy—have remained the same throughout history. How leaders implement these principles, however, has evolved, especially in fostering relationships and communicating effectively in a fast-changing world.


Leadership is about change

Li highlighted the need for leaders to prepare themselves and their teams for transformative shifts, particularly in the context of disruptive technologies like generative AI. "Generative AI is highly transformative," she explained. "The first time you use it, your mind is blown. You won’t look at the world the same way. That requires a tremendous amount of leadership for people to show up."
Yet, many leaders fall short in recognizing their role as agents of change. "Most people don’t think about leadership as creating change. They think about showing up, leading people, or giving speeches. But fundamentally, it’s about creating change—and the more transformative the change, the more carefully you have to show up."
"Leadership is fundamentally about creating change," Li emphasizes. "You could be a manager of the status quo, but you wouldn’t be a leader. Leaders create change. You don’t need a title to do that."
Li stresses the importance of creating psychological safety to enable change. "If people don’t feel safe, they won’t change." For Li, safety is about structure and governance, but not in a stifling way. "Good governance, what I call ‘Goldilocks governance,’ provides just enough structure—not too much, not too little—to help people feel safe and supported."
"Think of a schoolyard. Without a fence, kids stay close to the flagpole. But with a fence, they explore all the way to the edges." Leaders, she explained, must create these "fences" to encourage exploration and risk-taking, while reassuring their teams that they have the necessary guardrails in place.
For Li, effective leadership in times of disruption starts with preparation. "How you show up as a leader matters, especially when the change is disruptive. You need to communicate the vision, make people feel safe, and guide them to step outside their comfort zones."
Her advice to leaders facing transformative challenges is clear: "Understand how big the change is and prepare yourself and your organization to navigate it. Create a plan, build contingency measures, and show your people that you’ll be there for them." Li believes leaders can inspire their teams to thrive, even in the face of profound disruption. "Transformation is difficult, but it’s where the most meaningful progress happens."

Embracing ethical AI

Li also touched on the ethical challenges generative AI raises and offered a useful framework for leaders weighing how to integrate it into their companies ethically. She introduced the concept of a “Pyramid of Trust,” inspired by Maslow’s hierarchy of needs, to explain how organizations can build trust step by step when adopting generative AI.
"When it comes to generative AI, you have a need for trust, and you build it step by step," Li explains. At the foundation of this pyramid are safety, security, and privacy, which must be firmly in place before addressing higher levels of trust. If individuals or organizations do not feel safe—if information isn’t protected and privacy isn’t respected—then trust cannot develop. Organizations must ensure their existing data security and privacy policies are updated to reflect the complexities introduced by generative AI.
From there, trust involves addressing bias and fairness, which Li described as complex and subjective. "Fairness isn’t universally defined," she notes. "Are you promoting equity or striving to be equitable? Those are two very different things." Evaluating these dimensions of AI is essential for fostering confidence in its outputs and alignment with organizational values.
The next component involves quality and accuracy, which vary depending on the context. For example, running a power plant demands a vastly different standard of quality than creating social media posts. "Organizations must balance quality against speed, depending on their priorities," Li explained.
Responsibility and transparency are also critical. Establishing accountability—knowing who to turn to when something goes wrong—lays the groundwork for ethical AI implementation. Transparency, meanwhile, forces organizations to decide how and when to disclose AI usage. Li shared an example of a professional services firm grappling with this issue. "Should we disclose to clients that we’re using AI? They might appreciate our use of cutting-edge tools, but they might also question paying the same rates if AI reduces our time and effort."
These decisions, Li argues, are deeply rooted in values. "Ethics comes up when you have two values in contradiction," she explained. "You have to center on your values from the very beginning to know how you’ll react when conflicts arise." For many organizations, values are often aspirational, printed on walls but rarely applied to real-world dilemmas. Generative AI, however, forces leaders to revisit these values and use them to guide critical decisions.
Li urges organizations to take a proactive approach: "Generative AI will force us to ask: What do our values mean? Are we prepared to act on them?" Building trust in AI, she emphasized, isn’t just about technology—it’s about relationships, integrity, and staying true to an organization’s principles.

Watch the latest episode of Human Side Up to hear these insights and more by clicking here.