AI + The Courage to Stay Human
We stand on the edge of one of the most pivotal decisions leaders will ever face. Artificial Intelligence is no longer knocking on the door of our future - it’s already inside, reshaping how we live, work, and lead. This is not just a technological transition; it’s a leadership transformation. And at the heart of this transformation is a defining question: Will AI deepen our humanity - or diminish it?
At Hack Future Lab, we describe this choice as the divide between Warm AI and Cold AI. Not a technical divide, but a human one. One rooted in intention, values, and vision. AI is not neutral - it reflects the character of its creators. The extent to which AI becomes a force that drives positive impact across people, planet, and profit comes down to the choices leaders make today. The future of leadership hinges on whether we wield AI as a tool for empowerment or a mechanism for control.
Warm AI champions transparency, trust, and long-term value. It’s a humanity-first future enabled by AI - one that unlocks our full potential, increases equity and well-being, and elevates what makes us more human. Warm AI, whether operational or customer-facing, is defined by empowerment and empathy - qualities that remain a blind spot in many organizations’ AI products and services today.
Cold AI is a machine-first future powered by AI. While fast and efficient, it can downgrade humanity, erode trust, and amplify hidden harms to our health and communities. Unchecked, Cold AI risks optimizing away empathy in the relentless pursuit of speed, cost, and efficiency. Cold AI is not inherently bad, but leaders need to balance the best of Warm and Cold AI in operational and customer-facing AI products and services. This balance will become a cultural capability of future-ready enterprises.
Underhyped vs. Overhyped
Will AI be the most disruptive force in history? It could solve the energy crisis, add trillions to the global economy, or wipe out humanity. Either way, embracing AI is not just an option - it’s a leadership imperative. To grasp how AI could reshape business, consider the other significant innovation of our time: GLP-1 medications such as Ozempic and Wegovy, which aid weight loss by suppressing cravings. AI could become a corporate Ozempic, encouraging CEOs to shed excess weight from their firms through record layoffs.
As Scott Galloway, Professor of Marketing at NYU Stern School of Business, writes in his blog No Mercy/No Malice: “Similarly, my thesis is that firms (notably tech companies) have also discovered a weight loss drug and are also being coy about it. Recent financial news features two stories: layoffs and record profits. These are related. There’s no mystery to the surface narrative. A company lays off 5%, 10%, or even 25% of its workforce, and, 6 to 12 months later, after severance pay and expenses are flushed through the P/L, its operating margin hits new heights.” (Galloway, 2024). Perhaps AI is already playing a larger role in layoffs than CEOs are willing to admit.
Hack Future Lab’s research shows that Warm AI leaders demonstrate a high empathy orientation, starting with framing AI narratives that emphasize how humans can be elevated by machines, rather than diminished, and asking practical, sense-making questions such as:
Will AI be the great equalizer? If not, why not?
How do we internally communicate the promises and perils of AI to our employees, co-creating the future together?
How do we align AI to serve the best interests of all our stakeholders?
Will AI turbocharge profits at the expense of ethics?
How do we ensure AI aligns with our vision and purpose for meaningful impact?
How do we set ourselves up to scale ethical AI that’s secure, trusted, and governed correctly?
How do we select the tech and talent stack to support our AI-enabled journey?
How do we manage AI concerns around data privacy, misinformation, and security?
How do I work alongside AI to become a Super Leader?
How can we harness AI confidently and responsibly while elevating what makes us more human?
Leaders are on the brink of a tech revolution that could spur hyper-innovation and growth, but also deepen inequality, lower labor demand, and reduce hiring as AI applications execute many tasks currently performed by humans. But let’s be clear: Cold AI is not inherently harmful, and leaders can attend to both Cold and Warm AI simultaneously. Applied with integrity and oversight, Cold AI can power operational excellence. The challenge - and the opportunity - is to strike a balance that honors both innovation and humanity. The future won’t be written by AI. It will be authored by leaders who prioritize human values alongside speed, efficiency, and growth.
More Human, Less Artificial
Let’s explore a few real-world examples that bring the contrast between Warm AI and Cold AI into sharper focus. Consider Microsoft’s AI for Accessibility initiative. This program uses AI to support people with disabilities - from helping visually impaired users interpret their environment through image captioning, to empowering neurodiverse employees with inclusive design tools. It’s Warm AI in action: ethical, inclusive, and empowering - a powerful demonstration of AI driving social equity and a fairer, more inclusive society.
Contrast that with automated hiring platforms that use black-box algorithms to scan CVs. Some of these systems - lacking transparency or accountability - have been found to reinforce bias by penalizing gaps in employment or undervaluing credentials from non-traditional backgrounds. The system works, technically. But whom does it serve - and whom does it silence? (People Management, 2025).
At Zoom, engineers redesigned an AI-powered transcription tool after feedback revealed cultural and linguistic inaccuracies. Rather than defend the algorithm, leadership invited community feedback and prioritized inclusive linguistic training sets. The result wasn’t just a better tool, but a better relationship with users. Again, a case of Warm AI. (Zoom, 2025)
On the other hand, predictive policing tools have come under fire for reinforcing systemic biases. These Cold AI systems analyze crime data to forecast future hotspots. But when the underlying data reflects historical inequity, the AI perpetuates it. Without human oversight or ethical review, Cold AI can institutionalize injustice under the banner of logic.
And then there’s Klarna. In 2024, the company’s CEO boldly announced that AI would replace large swaths of customer service jobs, promising faster and more efficient support through automation. A year later, however, the tune had changed. The CEO admitted that AI “couldn’t cut it on calls.” The technology lacked the empathy, nuance, and human understanding required in real customer interactions. It turned out that performance metrics couldn’t capture the emotional complexity of support conversations. The leadership miscalculation stemmed not just from overestimating AI, but from underestimating the human factor. It’s a telling example of Cold AI overreach, followed by a necessary course correction. (Fortune, 2025)
Metrics for Warm AI and Cold AI
Metrics shape mindsets and mindsets shape futures. In the age of AI and sustainability, how we measure success will define not only business outcomes but societal and environmental impacts. Warm AI metrics are evaluated through Key Behavior Indicators (KBIs) that measure more than just performance - they measure human brilliance. They include psychological safety scores, trust and transparency indexes, employee engagement levels, and Return on Intelligence (ROI). These metrics tell a story not of machines, but of people - how empowered they feel, how safe they feel choosing voice over silence, and how aligned the system is with their needs.
Cold AI metrics, on the other hand, often prioritize outputs: system uptime, operational efficiency, compliance rates, prediction accuracy, and the speed of decision-making. These numbers matter. They track consistency, control, and cost savings. But without balance, they can lead us down a risky path - where we celebrate speed but ignore burnout, where we optimize workflows but erode trust.
The path forward isn’t to choose between Warm and Cold metrics but to recognize the need to balance the best of both worlds. Leaders can craft an AI Culture dashboard that reflects both hard Cold AI outputs and human Warm AI outcomes, ensuring that operational excellence never comes at the cost of organizational empathy.
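To make this balance tangible, here is a minimal sketch of what such a dashboard might look like in code. The metric names, 0-1 normalization, and equal weighting are illustrative assumptions for this sketch, not a prescribed standard; real indicators would come from your own surveys and system telemetry.

```python
from dataclasses import dataclass

@dataclass
class AICultureDashboard:
    """Illustrative AI Culture dashboard blending Cold and Warm AI metrics.

    All fields are assumed to be normalized to a 0-1 scale; the metric
    names and equal weighting are hypothetical examples.
    """
    # Cold AI metrics: operational outputs
    system_uptime: float
    prediction_accuracy: float
    cost_savings_vs_target: float
    # Warm AI metrics (KBIs): human outcomes
    psychological_safety: float
    trust_transparency_index: float
    employee_engagement: float

    def cold_score(self) -> float:
        # Simple average of the operational (Cold AI) indicators
        return (self.system_uptime + self.prediction_accuracy
                + self.cost_savings_vs_target) / 3

    def warm_score(self) -> float:
        # Simple average of the human (Warm AI) indicators
        return (self.psychological_safety + self.trust_transparency_index
                + self.employee_engagement) / 3

    def balance_gap(self) -> float:
        # A large gap signals one side of the culture outrunning the other
        return abs(self.cold_score() - self.warm_score())
```

A leadership team might review the balance gap each quarter: a rising Cold score paired with a flat or falling Warm score is an early warning that operational excellence is outrunning organizational empathy.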
Biggest AI Blind Spots in Leadership
In working with leadership teams around the world, one thing is clear: AI blind spots are often leadership blind spots. A common misstep is assuming AI is neutral. It’s not. AI inherits the values, biases, and blind spots of its creators. Leaders who treat algorithms as impartial risk entrenching inequality and amplifying exclusion. Another frequent oversight lies in over-optimizing for efficiency. When leaders prioritize speed, output, and productivity above all else, they may create systems that excel operationally but fail emotionally. In these environments, trust erodes quietly, engagement withers, and creativity becomes a casualty of compliance.
Leaders also make the mistake of implementing AI without human input. AI systems, especially those impacting people’s roles, responsibilities, and experiences, must be co-created with those they affect. Yet many organizations deploy AI from the top down, with little consideration for how it will be received by employees. The failure to embed governance is another key blind spot. Without AI ethics boards, escalation protocols, or internal audits, systems operate unchecked. And when things go wrong, accountability is often absent.
Another blind spot is the lack of AI fluency. Grammarly’s latest report, The Productivity Shift, showed that only 13% of workers are AI Fluent - using AI every day, saving over 80% more time than their colleagues, reporting higher productivity, and communicating more effectively. AI adoption is growing, but not everyone is comfortable using these tools yet, including the C-suite. If leaders don’t use AI, they won’t trust it. And without trust, adoption stalls and fear takes root. Education and transparency are no longer optional - they’re essential. (Grammarly, 2025)
Warm AI Imperatives
Warm AI doesn’t just work better - it works deeper. Take the example of Unilever, which pioneered AI-driven recruitment tools that helped reduce bias and improve candidate experiences. But it didn’t stop at automation. Human recruiters still engage with applicants, offering context and coaching. By combining machine intelligence with human empathy, Unilever created a process that was not only faster but fairer. (Unilever, 2022)
At IKEA, the introduction of an AI-powered inventory system sparked concerns among frontline staff. Instead of pushing through, leadership formed an AI ethics board with representatives from every layer of the business. Store employees weighed in on how the tool would affect restocking schedules, workflows, and customer interaction. The outcome wasn’t just a smoother implementation - it was a deepened sense of ownership and trust. (IKEA, 2023)
Salesforce embeds ethical audits into every AI release cycle. These audits don’t simply check for errors; they evaluate alignment with company values, potential bias, and inclusion. Employees are encouraged to question outputs, suggest improvements, and voice ethical concerns. This culture of co-creation ensures that AI is not just imposed but infused with human-led purpose.
Avoid Artificial Interactions (AI)
The dangers of Cold AI deployed in isolation are real and growing. One of the most widely known examples is Amazon’s warehouse monitoring system, where AI tracks workers' movements, calculates productivity, and even makes termination decisions - often without human intervention. While operational output rose, the system created a culture of surveillance and fear. Burnout soared, trust collapsed, and attrition became a persistent challenge. (Amazon, 2021)
In the financial services industry, a large firm introduced an AI tool that tracked digital activity - emails sent, meetings attended, keyboard usage - to assess productivity. Employees who failed to meet arbitrary thresholds were flagged for underperformance. Yet the system failed to account for mentorship, creative thinking, or behind-the-scenes collaboration. The result? High-performing yet unconventional employees were penalized. A metric had replaced a conversation. And the culture suffered.
Jumbo Supermarkets also offers a dual lesson. Their initial rollout of an AI-based scheduling tool focused solely on logistics, ignoring the preferences and realities of their employees. The backlash was immediate: morale tanked, absenteeism rose, and engagement dwindled. But when they paused, listened, and redesigned the system to include employee input, the entire narrative changed. Cold AI had failed them. Warm AI saved them. (Jumbo Supermarkets, 2023)
Duolingo provides another revealing case. The language-learning app laid off a significant number of human contractors - primarily responsible for creating and reviewing educational content - following a pivot toward generative AI tools. The CEO emphasized the efficiency and cost-effectiveness of automated content production. However, users began noticing a decline in content quality, cultural nuance, and educational depth. Community backlash grew, and some educators voiced concern that automation had replaced expertise. In chasing profits and scale, Duolingo's leadership appeared to favor Cold AI metrics over human-centered outcomes. The result: diminished trust and questions about long-term value. It’s a lesson in how Cold AI can dilute mission-driven work when cost-cutting tops care. (FT, 2025)
The lesson in all these examples is evident: AI systems that disregard organizational culture, worker voice, and ethical limits might deliver short-term gains - but only at the cost of long-term resilience, innovation, and trust.
The Ethics of AI Are the Ethics of Leadership
AI ethics are a reflection of leadership ethics. It’s not about writing more code - it’s about writing better stories. Leaders must go beyond compliance to embrace accountability. That means building systems that can be audited, challenged, and evolved. Strong AI governance starts with cross-functional ethics boards - not as a symbolic gesture, but as a serious infrastructure for oversight. These boards should review datasets, approve deployments, and ensure that every AI product aligns with core values. More importantly, ethical governance must be integrated into everyday decisions - not isolated in reports or annual reviews. (Harvard Business Review, 2022)
Leadership in the age of AI means asking harder questions and having braver conversations. It’s about choosing transparency over secrecy, inclusion over assumption, and courage over complacency. The goal isn’t perfection. It’s progress with purpose.
Return on Intelligence (ROI): A New Leadership Metric
Traditional ROI - return on investment - measures inputs and outputs. But it doesn’t measure meaning. It doesn’t measure the emotional, ethical, or collective intelligence that determines long-term success. That’s why we need a new metric: Return on Intelligence (ROI).
ROI is about emotional intelligence. It asks whether your AI systems support belonging, psychological safety, and human well-being. It’s about ethical intelligence - ensuring that the systems reflect values, not just variables. And it’s about collective intelligence - harnessing the diverse perspectives of your organization to co-create the future.
British law firm Shoosmiths launched a Return on Intelligence initiative to boost AI literacy across its 1,500 staff. The firm set up a £1mn bonus pool to be shared if staff collectively use Microsoft Copilot, its chosen AI tool, at least 1mn times this financial year - roughly 670 prompts per person, or about three each working day. It’s Warm AI leadership in action: adopting a sandbox experimentation approach that includes the many, not the few. (FT, 2025)
Organizations that measure ROI look beyond the balance sheet. They measure impact, trust, and belonging. They understand that performance and purpose are not mutually exclusive - but mutually reinforcing. When you prioritize ROI, you’re not just building smarter systems. You’re building stronger Warm AI cultures.
Conclusion: The Future Is a Choice
AI is not destiny. It is a matter of leadership and design. The most critical question we can ask is not what AI will do to us - but what we will do with it. Will we use it to widen opportunity or deepen inequality? Will we use it to replace human judgment - or to scale human insights?
The lesson is that most companies are trapped between the panic zone and the complacency zone when it comes to Warm AI versus Cold AI. It doesn't help that many businesses are still hesitant to roll out AI, despite the clear productivity gains on offer. To avoid 'Artificial Ignorance,' leaders must 'do faster than doubt' by adopting a sandbox experimentation approach to using AI. In this era of perpetual change, not taking a risk is a risk, and the biggest risk is thinking too small. AI forces us to choose between an incremental future and a transformative one, powered by humans AND AI.
Warm AI is more than a design feature. It is a leadership decision. It is a cultural foundation. It is a declaration that technology must serve humanity, not the other way around. The next chapter won’t be written by algorithms. It will be written by leaders - those who understand that real intelligence is not artificial. It’s ethical, emotional, and deeply human.
The future isn’t just about technology and trends. It’s about mindsets and choices, too.
Top Five Takeaways
AI is not neutral - it reflects leadership values. Every AI system carries the fingerprints of its designers. Whether intentionally or not, biases, priorities, and ethical blind spots find their way into algorithms. Leaders must realize that deploying AI is a moral act, not a mechanical one. AI systems mirror the philosophy of those who create and govern them. If the philosophy is shallow or self-serving, the systems will be too.
Reflection Question: What values are baked into your organization’s AI systems today - and who decided them?
Warm AI drives engagement and innovation. When people feel included, consulted, and respected, trust grows - and so does creativity. Warm AI invites participation, makes decisions explainable, and nurtures a feedback culture. It transforms AI from a control tool into a collaboration partner. Innovation flourishes not because of the technology itself, but because of how the technology honors the people who use it.
Reflection Question: In what ways could your AI systems better engage the people they affect?
Cold AI, when unchecked, erodes trust. When AI becomes a tool of surveillance rather than support, fear takes root. Employees begin to second-guess their roles and hesitate to speak up. Systems designed to maximize efficiency may simultaneously minimize humanity. The short-term gains from Cold AI come at the cost of long-term cohesion.
Reflection Question: Where in your organization might Cold AI be doing more harm than good?
Return on Intelligence (ROI) is the future of measurement. Organizations that only track traditional ROI are missing the bigger picture. ROI invites a broader, more courageous measurement philosophy - one that includes trust, well-being, inclusion, and ethical resilience. It helps leaders gauge not only what their systems achieve, but also who their people become in the process.
Reflection Question: How are you currently measuring trust, inclusion, and ethical impact across AI initiatives?
The future of AI is a leadership decision. AI doesn’t shape society - leaders do. Technology offers tools. Leadership defines the blueprint. The most important AI decisions are not technical but moral, strategic, and cultural. The future will be built not just on code, but on character.
Reflection Question: What kind of legacy do you want your AI leadership to leave?