Thursday, October 9, 2025

Men and AI

Some anecdotes capture the dynamic between humans and AI:

A young software engineer once spent days teaching an AI program to compose music. The AI could generate technically perfect melodies, its harmonies mathematically flawless. Yet when the engineer played the AI’s compositions for friends, the room felt… empty. There was no tension, no surprise, no heartbeat—no soul. Frustrated, the engineer sat at a piano, improvising a simple tune with mistakes here and there. To everyone’s surprise, the tiny imperfections made the music come alive. The AI could calculate every note, but it could not replicate the human experience—the laughter, the hesitation, the joy, and the sorrow—that gives music its power.
This story illustrates a key truth: AI can mimic human skill, but it cannot replace the essence of being human. Creativity, emotion, and moral judgement spring from lived experience, not algorithms. Humans give meaning; AI gives efficiency. The two work best together when humans guide, feel, and decide, while AI handles the repetitive and data-driven tasks.

A second anecdote makes a similar point.

A famous chef entered a cooking contest against an AI-powered kitchen robot. The robot followed every recipe to perfection: precise weights, exact temperatures, flawless plating. When the judges tasted the dishes, they admitted the robot’s food was technically perfect—but lacked “heart.” The chef’s dish, a little messy, a bit too salty, and slightly burnt in spots, made the judges smile. “It tastes like someone actually cares,” they said. The robot could follow instructions, but only humans could infuse love and intuition into food.

A third anecdote comes from the world of literature.

A literature student asked an AI to write a poem about love. The AI produced a beautifully structured sonnet with perfect meter and rhyme. When read aloud, it sounded cold and clinical. Then the student scribbled a messy poem in their notebook, full of clichés, broken lines, and run-on sentences. The poem made the listeners laugh, cry, and sigh. The AI had mastered the form, but humans mastered the feel. Emotions cannot be calculated—they must be experienced.

In truth, the fear that AI will replace every human job and skill is largely misplaced. Artificial intelligence is indeed capable of automating repetitive, data-driven, and highly structured tasks far more efficiently than humans ever could. However, the essence of humanity—the capacity to feel, to imagine, to judge ethically, and to connect emotionally—remains firmly beyond the reach of algorithms. No matter how advanced a machine becomes, it does not experience empathy, moral conflict, or inspiration; it merely simulates them based on patterns of data.

The professions that rely on uniquely human strengths are therefore the ones that AI cannot truly replace. These include roles requiring critical thinking and moral judgement, such as judges, diplomats, and leaders. Creative professions that demand originality, like writers, film directors, artists, and inventors, also depend on human intuition and emotional depth that no algorithm can replicate. Likewise, careers grounded in empathy—such as psychologists, teachers, counsellors, and caregivers—require authentic human connection, not mere programmed responses.

Furthermore, leadership itself is a profoundly human art. Inspiring people, creating visions for the future, and uniting diverse minds under a shared purpose cannot be reduced to data analysis or predictive models. Professions that demand adaptability in unpredictable environments, such as doctors in emergency units, firefighters, and negotiators, also thrive on improvisation and human instinct. Finally, craftsmanship and spiritual guidance—fields that draw upon human touch, artistry, and the search for meaning—will forever belong to people, not machines.

AI will change the landscape of work, but will never eliminate the need for the human mind, heart, and soul. The future is not about humans versus AI—it is about humans working with AI, while preserving what makes us irreplaceably human.

There are certain human skills and professions that artificial intelligence will never be able to replace, no matter how sophisticated it becomes. These are the areas where the essence of humanity—emotion, intuition, creativity, and moral judgement—plays a role that machines cannot imitate. While AI can analyse patterns and execute tasks with extraordinary precision, it lacks the inner consciousness that allows people to care, imagine, and take responsibility for the consequences of their choices.

Professions rooted in empathy and emotional intelligence are among the most irreplaceable. Teachers, therapists, social workers, and caregivers embody a level of understanding and compassion that algorithms cannot reproduce. Similarly, creative fields such as art, writing, music, film-making, and design rely on originality and emotional resonance, both of which stem from uniquely human experience. Even when AI generates images or text, it merely recombines fragments of what already exists—it does not feel or mean what it creates.

Leadership is another domain that belongs to humans. Inspiring others, making ethical decisions in times of uncertainty, and creating a vision that unites people around shared values are acts that depend on conscience and charisma, not computation. The same applies to roles that demand adaptability, quick moral reasoning, and improvisation—like emergency doctors, firefighters, or crisis negotiators—where intuition often matters more than data.

There are crafts and spiritual vocations that draw upon human touch and inner life: the artist shaping clay with feeling, the chef crafting flavours from memory, the cleric guiding hearts toward meaning. These are acts of soul, not code.

AI can assist, accelerate, and even mimic aspects of human work, but it cannot replace the human spirit that gives work its purpose and warmth. The most valuable skills of the future will not be those that compete with machines, but those that express what it truly means to be human.

In The Age of Em (2016, Oxford University Press), economist Robin Hanson imagines a future where human minds can be emulated by machines. Yet, even within this speculative vision, Hanson highlights the enduring importance of human motivation, emotion, and social behaviour—elements that cannot be perfectly duplicated through code. The book ultimately suggests that while technology may transform work, it cannot replace the human essence that gives purpose to it.
Hanson explores a future dominated by brain emulations, or “Ems,” and he paints a world in which work is radically transformed by technology. However, Hanson argues that technology, no matter how sophisticated, cannot fully replace the human essence that gives work its meaning. His reasoning is rooted in the idea that value is not merely a function of efficiency or productivity; it is deeply tied to human preferences, social interactions, and the personal sense of purpose that individuals derive from their activities. Even in a society where Ems can perform tasks faster and more accurately than biological humans, the intrinsic satisfaction, ethical judgment, emotional engagement, and cultural significance of work remain anchored in human consciousness. Hanson’s argument highlights that technology can replicate performance but cannot replicate the subjective experiences and moral frameworks that give human work its broader significance. In other words, machines might do the work, but they cannot fully inhabit the human narrative that makes that work meaningful.

Hanson imagines a future in which human brains can be scanned and uploaded into computers, creating emulations—“Ems”—that can think, act, and work like their biological originals but at vastly accelerated speeds. In this scenario, almost all labour, from intellectual to creative, could potentially be outsourced to Ems. Yet Hanson stresses that the human element—the messy, rich, emotional, and culturally embedded aspects of human experience—cannot simply be digitized. He argues that AI or Ems, while capable of performing tasks with extreme efficiency, cannot reproduce the subjective consciousness, personal satisfaction, or ethical judgment that shape human lives.
One of his key points is that human work derives meaning from social and moral contexts. People don’t just work to produce outputs; they work to solve problems, collaborate, express themselves, and participate in communities. Even if Ems can mimic behaviour, they may lack the authentic intentionality and emotional engagement that make work “human.” In other words, AI might be able to replicate what humans do, but it cannot fully replicate why humans do it in a socially and morally resonant way. Hanson also considers the psychological consequences: a society dominated by Ems might be hyper-efficient, but it could feel alien or hollow to biological humans because the very framework of meaning is tied to human experience, not just productivity.
Hanson’s reflections implicitly suggest a boundary for AI: technological progress can transform and augment human work, but the essence of being human—the consciousness, emotions, and social embeddedness that make life meaningful—cannot be outsourced or uploaded. The human-AI relationship, then, is not simply about substitution, but about how humans and AI coexist, complement each other, and maintain the domains of value that are inherently human.

Hanson suggests that while advanced AI, particularly brain emulations or “Ems,” could drastically transform work, society, and productivity, there is a fundamental limit to what technology can replace: the human essence. He emphasises that meaning, purpose, and value in life are not solely functions of efficiency, speed, or output. Rather, they are deeply tied to human consciousness, emotions, social interactions, and moral reasoning.
Hanson warns that a world dominated by AI or Ems may be technologically impressive but could risk eroding the subjective and ethical dimensions that make life meaningful for humans. In other words, even if machines can perform all the tasks humans do—and perhaps do them better—they cannot replicate the human experience of intentionality, satisfaction, and cultural significance. His underlying message is that AI should be seen as a tool to complement human capacities, not as a substitute for the qualities that define humanity.

In "The Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence" (2017, Arcade Publishing), Richard Yonck explores the development of emotional AI and its potential to simulate empathy and emotional responses. However, he repeatedly stresses that genuine emotion and moral judgement remain uniquely human capacities. According to Yonck, even if AI learns to mimic compassion, it will never truly feel it—because emotion requires consciousness, something machines fundamentally lack.
Yonck explores the emerging field of emotional AI and its profound implications for human society. He argues that technology is no longer confined to cognitive tasks or data processing; it is increasingly capable of recognising, interpreting, and even responding to human emotions. Yonck warns that this development has both thrilling and unsettling consequences: machines could enhance human well-being, strengthen relationships, and transform industries like healthcare and education, yet they also risk manipulating, deceiving, or replacing aspects of human emotional life. The central message of the book is that as we develop emotionally intelligent machines, we must consciously shape the ethical, social, and personal frameworks that govern their use, ensuring they complement rather than diminish our humanity. Yonck emphasises that emotional intelligence is not merely a technical problem but a deeply human one, requiring reflection on our values, empathy, and the essence of connection.

In Reclaiming Conversation: The Power of Talk in a Digital Age (2015, Penguin Press), MIT professor Sherry Turkle argues that technology—and especially AI—has made us more connected but less capable of genuine human conversation. She contends that authentic dialogue, empathy, and moral reflection arise only through face-to-face interaction, not through programmed responses or digital convenience. Turkle’s work powerfully supports the view that emotional intelligence, ethical reasoning, and meaningful communication are distinctly human skills that no machine can truly reproduce. Her message is clear: as AI grows stronger, humanity must reclaim the art of conversation to preserve its moral and emotional depth.
Turkle argues that our growing dependence on digital communication—texts, social media, and instant messaging—is eroding our capacity for meaningful, face-to-face conversation. She emphasises that conversation is not just about exchanging information; it is the foundation for empathy, self-reflection, and deep human connection. Turkle warns that when we replace dialogue with digital interaction, we risk weakening our ability to understand others, manage emotions, and think critically. The central message of the book is a call to consciously reclaim the art of conversation: to prioritise presence, listening, and authentic dialogue in both personal and professional life, and thereby preserve the emotional richness and moral depth of human relationships.

In Homo Deus: A Brief History of Tomorrow (2016, HarperCollins), Yuval Noah Harari explores humanity’s next great project: the pursuit of godlike powers through artificial intelligence and biotechnology. He suggests that as machines become capable of outperforming humans in analytical and mechanical tasks, the question of what remains uniquely human grows ever more urgent. Harari argues that emotions, consciousness, and moral awareness—qualities that emerge not from data but from subjective experience—may be the last bastions of humanity in an age dominated by algorithms. While AI might know what we feel, it will never know how it feels. This, Harari concludes, is the irreducible essence of being human: the capacity for self-awareness, empathy, and meaning, which no machine can ever replicate.
Yuval Noah Harari explores the possible futures of humanity, particularly as artificial intelligence and biotechnology advance at an unprecedented pace. He argues that as humans gain the power to manipulate life, consciousness, and intelligence, our traditional roles, beliefs, and ethical frameworks may be radically challenged. Regarding AI, Harari warns that algorithms could surpass human intelligence in critical domains, making decisions faster, more accurately, and sometimes more effectively than humans. This could lead to a world where humans are no longer the most intelligent or influential agents, raising profound questions about agency, purpose, and inequality. The central message is that AI and related technologies have the potential to redefine what it means to be human, and society must proactively address the ethical, social, and philosophical dilemmas that emerge alongside these advances.

Artificial Intelligence in Education: The Power and Dangers of ChatGPT in the Classroom (2024), edited by Amina Al-Marzouqi, Said Salloum, Mohammed Al-Saidat, Ahmed Aburayya, and Babeet Gupta, offers a comprehensive examination of the integration of ChatGPT and artificial intelligence (AI) in educational settings. It delves into both the transformative potential and the inherent risks associated with AI technologies in the classroom.
The editors present a multifaceted analysis, highlighting how AI, particularly ChatGPT, can enhance personalised learning experiences, automate administrative tasks, and provide instant feedback to students. These advancements promise to revolutionise traditional pedagogical approaches, making education more accessible and tailored to individual needs. However, the book also addresses significant concerns, including ethical dilemmas, data privacy issues, and the potential for AI to perpetuate biases. It emphasises the necessity for responsible implementation, ensuring that AI serves as a complement to, rather than a replacement for, human educators.
Furthermore, the book underscores the importance of empirical research and a global perspective to understand the diverse impacts of AI across various educational contexts. By compiling insights from multiple disciplines, the authors provide a nuanced view of AI's role in education, advocating for a balanced approach that maximises benefits while mitigating risks.

ChatGPT offers several advantages in educational contexts. It enables personalised learning experiences by adapting to individual student needs, facilitating differentiated instruction. The AI can provide instant feedback, aiding in formative assessments and supporting students' learning processes. Additionally, ChatGPT can assist in automating administrative tasks, such as grading and content generation, thereby allowing educators to focus more on pedagogy. Its integration into learning management systems enhances accessibility and engagement, offering interactive and dynamic learning environments.
Despite its benefits, ChatGPT poses several challenges. One significant concern is the potential erosion of critical thinking skills, as students might rely on AI-generated responses without engaging in deeper cognitive processes. There is also the risk of academic dishonesty, with students using ChatGPT to complete assignments without proper understanding or attribution. Furthermore, the AI's outputs can sometimes be inaccurate or misleading, leading to misinformation. Ethical issues arise regarding data privacy and the potential biases embedded in AI algorithms, which may perpetuate existing societal inequalities. The over-reliance on AI tools could also diminish the role of human educators, affecting the teacher-student relationship and the development of social and emotional learning.
While ChatGPT holds the promise of revolutionising education through enhanced personalisation and efficiency, its implementation must be approached with caution. Educational stakeholders should ensure that AI tools are used responsibly, maintaining a balance between technological advancement and the preservation of essential human elements in teaching and learning.

The editors offer several recommendations to mitigate the dangers associated with ChatGPT in educational settings. They emphasise a multi-layered approach that combines policy, pedagogy, and technology to ensure AI is used responsibly.
Firstly, the editors recommend embedding AI literacy into the curriculum so that students not only use tools like ChatGPT but also understand their limitations, potential biases, and ethical considerations. By teaching students critical evaluation skills, educators can reduce the risk of over-reliance and misuse.
Secondly, they advise developing clear institutional policies on AI use, including guidelines for academic integrity, data privacy, and responsible AI deployment. Such policies help maintain accountability and protect both students and educators from ethical and legal pitfalls.
Thirdly, the editors suggest employing AI as a complementary tool rather than a replacement for human teachers. By keeping educators at the centre of teaching, schools can preserve the relational and social aspects of learning that AI cannot replicate.
Finally, ongoing monitoring and evaluation of AI implementation are encouraged. This includes auditing AI outputs for accuracy, fairness, and bias, and continuously refining practices based on empirical evidence and feedback from both students and teachers.
In essence, the editors advocate for a balanced, thoughtful integration of ChatGPT in classrooms—leveraging its power while actively mitigating its risks, and ensuring human guidance remains central to education.

Artificial intelligence, when used thoughtfully and ethically, has the potential to significantly enhance human learning, teaching, and professional productivity, rather than replace it. For students, AI can serve as a personalised tutor, helping them grasp difficult concepts, practise skills, and receive immediate feedback tailored to their level of understanding. This allows learners to focus on critical thinking and creativity, rather than merely memorising information. For teachers, AI can streamline administrative tasks, analyse student performance data, and suggest personalised interventions, freeing educators to devote more time to mentoring, discussion, and fostering emotional engagement in the classroom.

For workers and professionals, AI can act as an intelligent assistant that manages routine tasks, organises information, and generates insights from large datasets. This enables humans to concentrate on strategic decision-making, problem-solving, and innovation—areas that require intuition, judgement, and empathy. In fields such as healthcare, law, or research, AI can provide evidence-based recommendations or predictive models, but it is ultimately the human professional who evaluates, contextualises, and acts upon that information. The key, therefore, is to view AI as a collaborator rather than a replacement, augmenting human capabilities while preserving the uniquely human qualities of judgement, creativity, and moral responsibility.

Several scholarly works support this perspective. Artificial Intelligence in Education (2020) by Wayne Holmes, Maya Bialik, and Charles Fadel argues that AI can personalise learning experiences and enhance teacher effectiveness. Sherry Turkle’s Reclaiming Conversation (2015) emphasises the importance of maintaining genuine human interaction in digital and AI-mediated environments. Finally, Yuval Noah Harari in Homo Deus (2016) underscores that the future of human work lies in leveraging AI while nurturing our irreplaceable emotional, ethical, and creative capacities.

Here's an anecdote: a young coder asked AI to generate a joke. It produced a perfectly structured pun… that no one laughed at. Only when the coder added a quirky twist and a silly face did the room erupt in laughter, proving once again that humans bring the magic machines simply can’t calculate.

The development of AI is inseparable from the growth of human knowledge and skills. AI does not evolve in a vacuum—it relies on human intelligence to design, program, and refine it. Therefore, the more humans cultivate their understanding, creativity, and critical thinking, the more effectively AI can serve humanity rather than control it. To gain maximum benefit from AI, humans must actively engage with it: learning how to interpret its outputs, integrating AI insights into decision-making, and constantly questioning and improving the systems we build. In other words, humans must be the guides and curators of AI, not passive users.
Moreover, human creativity and critical thinking remain the ultimate tools for advancing the welfare of society. These uniquely human faculties allow us to solve complex problems, envision better futures, and create innovations that respect ethical and social boundaries. By applying creativity, humans can design AI applications that address pressing global challenges—such as climate change, healthcare accessibility, or education—while preserving human values. Critical thinking ensures that we evaluate AI outputs carefully, challenge biases, and make moral decisions that technology alone cannot make. Together, creativity and critical reasoning empower humans to use AI as a force for collective benefit, not mere efficiency or profit.

As we navigate the quirky, fast-paced world of AI, it’s clear that humans remain irreplaceable, not because machines are weak, but because our emotions, humour, and improvisation are beyond computation. AI can crunch numbers, write reports, and even compose music, but it can’t laugh at a bad pun, feel the thrill of discovery, or get goosebumps from a sunrise. Our curiosity and critical thinking transform technology from a cold, efficient tool into a partner that can help us imagine, create, and care in ways machines never could on their own.

Even in the workplace, classroom, or studio, human creativity is the secret ingredient that makes life vibrant. AI can assist, suggest, and optimise, but humans infuse meaning, empathy, and ethical judgement. From inventing new ideas to solving urgent problems, our minds and hearts remain central to shaping the world. Ultimately, technology works best when guided by human insight, and our role is to harness AI for collective benefit without losing our uniquely human spark. 

And yes, “the human touch” is profoundly different from “the AI touch.” The human touch represents empathy, intuition, and emotional depth — the subtle qualities that arise from consciousness and lived experience. When a human creates art, writes a story, or comforts another person, there is a warmth that transcends logic. It’s filled with imperfection, but also with meaning. The human touch is not about efficiency; it’s about connection — an understanding that comes from feeling, not programming.
By contrast, “the AI touch” reflects precision, speed, and pattern recognition. AI can imitate style, compose music, or analyse emotions, but it does so without truly feeling them. Its touch is calculated, not compassionate; intelligent, but not intuitive. AI can reproduce the rhythm of humanity, but not its heartbeat. The difference lies not in what they produce, but in what they carry — humans bring intention and soul, while AI brings logic and data. Together, they can create harmony, but apart, only one of them can truly make something feel alive. 

Finally, we close this discussion with an anecdote: A manager relied on AI to schedule meetings, optimise workflows, and send reminders. Everything ran smoothly… until a big crisis hit: a client needed urgent attention, and none of the AI-generated schedules made sense under pressure. The human manager stepped in, called a meeting on the fly, and resolved the issue. AI could organise, predict, and analyse, but it could not improvise when life got messy. Humans are still the pilots; AI is the copilot.
