Friday, March 21, 2025

A Little Bit about AI

"Once upon a time, in the prosperous kingdom of Konoha, there once ruled a monarch known as King Fibulus. His reign spanned two terms, during which he mastered the art of deception. Among his many talents, King Fibulus was particularly skilled at weaving tales about his illustrious academic achievements. He often boasted of graduating from Konoha’s most prestigious university—a claim that sent waves of admiration through his loyal subjects," Gareng starts to tell a story.
"But not everyone was convinced. Whispers began to circulate among the alumni of this esteemed institution. “Who is this King Fibulus?” they murmured. “We’ve never seen him in any lectures, nor at the cafeteria struggling with soggy noodles!” Even the professors scratched their heads, unable to recall a student with such royal charisma.
One day, a brave informatics and telematics scholar publicly declared, 'The king’s diploma is fake!' This proclamation was met with outrage—not from the people but from an army of newly discovered 'classmates' who swore they had shared study sessions and late-night ramen with Fibulus. To bolster his credibility, King Fibulus even staged a meeting with his supposed thesis advisor. Unfortunately, when asked about his thesis supervisor, he confidently stated a name that was entirely different from what was written on the document.
The university itself stepped in to defend its honour (and perhaps its funding and incumbency). 'The diploma is authentic,' they proclaimed loudly, though their voices trembled slightly. Critics who dared question this narrative faced severe consequences; two outspoken dissenters were thrown into the dungeon for 'spreading falsehoods.'
Years passed, and the controversy faded into obscurity—until a forensic expert unearthed irrefutable evidence proving that the king's diploma was indeed fabricated. The expert presented meticulous arguments and undeniable proof to support their claim. Yet, rather than admit fault, the university accused the expert of misleading the public.
Meanwhile, King Fibulus maintained an elegant silence. He neither confirmed nor denied the allegations, choosing instead to distribute rice while ignoring calls to present his original diploma. His loyal defenders continued their crusade, some out of genuine belief and others for reasons best left unspoken. As time went on, the question of whether their king had ever truly graduated remained unanswered, and, even more surprisingly, it turned out the prince had done the same thing. The tale of King Fibulus became a legend told to children as a cautionary story about truth and power. And so, in Konoha, one thing remained certain: lies could leave cracks in an empire too deep to repair."

"Now let's jump into our topic," Gareng moves on. "Artificial Intelligence (AI) refers to a branch of computer science focused on creating systems that can perform tasks typically requiring human intelligence. These tasks include learning, problem-solving, speech recognition, image recognition, and decision-making. Artificial Intelligence (AI) operates through a combination of algorithms, data, and computational power to simulate human-like intelligence.
Michael Negnevitsky in his Artificial Intelligence: A Guide to Intelligent Systems (2005, Pearson Education) says that philosophers have been trying for over two thousand years to understand and resolve two big questions of the universe: how does a human mind work, and can non-humans have minds? However, these questions are still unanswered.
Some philosophers have picked up the computational approach originated by computer scientists and accepted the idea that machines can do everything that humans can do. Others have openly opposed this idea, claiming that such highly sophisticated behaviour as love, creative discovery and moral choice will always be beyond the scope of any machine—and I support the latter.
The nature of philosophy allows for disagreements to remain unresolved. Engineers and scientists have already built machines that we can call ‘intelligent’. So what does the word ‘intelligence’ mean? According to the dictionary, there are two definitions: first, someone’s intelligence is their ability to understand and learn things; and second, intelligence is the ability to think and understand instead of doing things by instinct or automatically.
Thus, says Negnevitsky, according to the first definition, intelligence is the quality possessed by humans. But the second definition suggests a completely different approach and gives some flexibility; it does not specify whether it is someone or something that has the ability to think and understand. Now we should discover what thinking means. Let us consult our dictionary again.
Thinking is the activity of using your brain to consider a problem or to create an idea. So, in order to think, someone or something has to have a brain, or in other words, an organ that enables someone or something to learn and understand things, to solve problems and to make decisions. So we can define intelligence as ‘the ability to learn and understand, to solve problems and to make decisions’.

‘Can machines think?’ The very question that asks whether computers can be intelligent, or whether machines can think, came to us from the ‘dark ages’ of artificial intelligence (from the late 1940s). The goal of artificial intelligence (AI) as a science is to make machines do things that would require intelligence if done by humans. Therefore, the answer to the question was vitally important to the discipline. However, the answer is not a simple ‘Yes’ or ‘No’, but rather a vague or fuzzy one. Your everyday experience and common sense would have told you that. Some people are smarter in some ways than others. Sometimes we make very intelligent decisions but sometimes we also make very silly mistakes. Some of us deal with complex mathematical and engineering problems but are moronic in philosophy and history. Some people are good at making money, while others are better at spending it. As humans, we all have the ability to learn and understand, to solve problems and to make decisions; however, our abilities are not equal and lie in different areas. Therefore, we should expect that if machines can think, some of them might be smarter than others in some ways.

Negnevitsky then tells us the history of artificial intelligence, from the ‘Dark Ages’ to knowledge-based systems. Artificial intelligence as a science was founded by three generations of researchers.
The ‘Dark Ages’, or the birth of artificial intelligence (1943 – 56), began when the first work recognised in the field of artificial intelligence (AI) was presented by Warren McCulloch and Walter Pitts in 1943. McCulloch had degrees in philosophy and medicine from Columbia University and became the Director of the Basic Research Laboratory in the Department of Psychiatry at the University of Illinois. His research on the central nervous system resulted in the first major contribution to AI: a model of neurons of the brain.
The rise of artificial intelligence, or the era of great expectations (1956 – late 1960s) is characterised by tremendous enthusiasm, great ideas and very limited success. Only a few years before, computers had been introduced to perform routine mathematical calculations, but now AI researchers were demonstrating that computers could do more than that. It was an era of great expectations.

Unfulfilled promises, or the impact of reality (late 1960s – early 1970s) phase. From the mid-1950s, AI researchers were making promises to build all-purpose intelligent machines on a human-scale knowledge base by the 1980s, and to exceed human intelligence by the year 2000. By 1970, however, they realised that such claims were too optimistic. Although a few AI programs could demonstrate some level of machine intelligence in one or two toy problems, almost no AI projects could deal with a wider selection of tasks or more difficult real-world problems.
The technology of expert systems, or the key to success (early 1970s – mid-1980s) phase. Probably the most important development in the 1970s was the realisation that the problem domain for intelligent machines had to be sufficiently restricted. Previously, AI researchers had believed that clever search algorithms and reasoning techniques could be invented to emulate general, human-like, problem-solving methods. A general-purpose search mechanism could rely on elementary reasoning steps to find complete solutions and could use weak knowledge about the problem domain. However, when weak methods failed, researchers finally realised that the only way to deliver practical results was to solve typical cases in narrow areas of expertise by making large reasoning steps.
The DENDRAL program is a typical example of the emerging technology. DENDRAL was developed at Stanford University to analyse chemicals. The project was supported by NASA, because an unmanned spacecraft was to be launched to Mars and a program was required to determine the molecular structure of Martian soil, based on the mass spectral data provided by a mass spectrometer. Edward Feigenbaum (a former student of Herbert Simon), Bruce Buchanan (a computer scientist) and Joshua Lederberg (a Nobel prize winner in genetics) formed a team to solve this challenging problem.

How to make a machine learn, or the rebirth of neural networks (mid-1980s – onwards) phase. In the mid-1980s, researchers, engineers and experts found that building an expert system required much more than just buying a reasoning system or expert system shell and putting enough rules in it. Disillusion about the applicability of expert system technology even led to people predicting an AI ‘winter’ with severely squeezed funding for AI projects. AI researchers decided to take a new look at neural networks.
Evolutionary computation, or learning by doing (early 1970s – onwards) phase. Natural intelligence is a product of evolution. Therefore, by simulating biological evolution, we might expect to discover how living systems are propelled towards high-level intelligence. Nature learns by doing; biological systems are not told how to adapt to a specific environment – they simply compete for survival. The fittest species have a greater chance to reproduce and thereby pass their genetic material to the next generation.
The concept of genetic algorithms was introduced by John Holland in the early 1970s. He developed an algorithm for manipulating artificial ‘chromosomes’ (strings of binary digits), using such genetic operations as selection, crossover and mutation. Genetic algorithms are based on a solid theoretical foundation of the Schema Theorem.
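To make Holland's idea concrete, here is a minimal sketch in Python of a genetic algorithm that evolves binary 'chromosomes' using selection, crossover and mutation. The fitness function (simply counting the 1-bits), the population size and the rates are illustrative assumptions for this sketch, not an example taken from Holland or Negnevitsky.

```python
import random

# Toy genetic algorithm: evolve 16-bit chromosomes toward all 1s ("OneMax").
# All parameters below are illustrative assumptions.
CHROMOSOME_LEN = 16
POPULATION_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(chromosome):
    # Fitness is simply the number of 1-bits in the binary string.
    return sum(chromosome)

def select(population):
    # Tournament selection: the fitter of two random individuals is chosen.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    # Single-point crossover: swap the tails of the two parents.
    point = random.randint(1, CHROMOSOME_LEN - 1)
    return parent1[:point] + parent2[point:]

def mutate(chromosome):
    # Flip each bit with a small probability.
    return [1 - gene if random.random() < MUTATION_RATE else gene
            for gene in chromosome]

population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LEN)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POPULATION_SIZE)]

best = max(population, key=fitness)
print("best chromosome:", best, "fitness:", fitness(best))
```

After enough generations the fittest chromosomes are dominated by 1s, which is the whole point: the population 'learns by doing' through competition rather than by being told the answer.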

In the new era of knowledge engineering, or computing with words (late 1980s – onwards), neural network technology offers more natural interaction with the real world than systems based on symbolic reasoning. Neural networks can learn, adapt to changes in a problem’s environment, establish patterns in situations where rules are not known, and deal with fuzzy or incomplete information. However, they lack explanation facilities and usually act as a black box. The process of training neural networks with current technologies is slow, and frequent retraining can cause serious difficulties.
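As a small illustration of learning from examples rather than from hand-written rules, here is a single-neuron perceptron in Python trained on the logical AND function. The data, learning rate and number of epochs are illustrative assumptions for this sketch only.

```python
# A single-neuron perceptron learns the logical AND function from labelled
# examples. Data and hyperparameters are illustrative assumptions.
training_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for (x1, x2), target in training_data:
        # Fire (output 1) if the weighted sum exceeds the threshold of zero.
        prediction = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = target - prediction
        # Perceptron learning rule: nudge the weights toward the correct answer.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

for (x1, x2), target in training_data:
    output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
    print(f"{x1} AND {x2} -> {output} (expected {target})")
```

Notice that no rule for AND is ever written down; the weights simply drift until the examples are classified correctly, and the trained weights are not easy to read back as explicit rules, which hints at the 'black box' nature mentioned above.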

Now, where is knowledge engineering heading? Expert, neural and fuzzy systems have now matured and have been applied to a broad range of different problems, mainly in engineering, medicine, finance, business and management. Each technology handles the uncertainty and ambiguity of human knowledge differently, and each technology has found its place in knowledge engineering. They no longer compete; rather they complement each other. A synergy of expert systems with fuzzy logic and neural computing improves adaptability, robustness, fault-tolerance and speed of knowledge-based systems. Besides, computing with words makes them more ‘human’. It is now common practice to build intelligent systems using existing theories rather than to propose new ones, and to apply these systems to real-world problems rather than to ‘toy’ problems.

We live in the era of the knowledge revolution when the power of a nation is determined not by the number of soldiers in its army but by the knowledge it possesses. Science, medicine, engineering and business propel nations towards a higher quality of life, but they also require highly qualified and skilful people. We are now adopting intelligent machines that can capture the expertise of such knowledgeable people and reason like humans.

AI can be categorized into several types based on its capabilities. First, Narrow AI (Weak AI) is designed for a specific task, such as virtual assistants like Siri or Alexa, which can perform limited functions but do not possess general intelligence.
Second, General AI (Strong AI) is a theoretical form of AI that would possess the ability to understand, learn, and apply knowledge across various domains, similar to human intelligence. General AI is still largely a concept and has not yet been achieved.
Third, Superintelligent AI. This refers to an AI that surpasses human intelligence in all aspects. While it is a topic of speculation and debate, superintelligent AI does not currently exist.

How does AI work? According to Michael Negnevitsky, Artificial Intelligence (AI) is a fascinating field that seeks to replicate human intelligence in machines by leveraging computational power, algorithms, and structured knowledge. In his account, AI systems are built on three fundamental pillars: knowledge representation, reasoning, and learning.
The journey begins with knowledge representation, where AI systems organize information in structured formats that machines can understand. This knowledge can be represented using methods like semantic networks, decision trees, or rule-based systems. These structures allow AI to store and access information efficiently, forming the foundation of intelligent decision-making.
Once knowledge is represented, the system uses reasoning mechanisms to draw conclusions or make decisions based on the data it has. Logical reasoning enables AI to simulate thought processes similar to those of humans. For example, an AI system might analyze a set of rules and infer new facts or solve problems by applying logical deductions.
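A minimal sketch in Python of that idea: facts and if-then rules serve as the knowledge representation, and a forward-chaining loop keeps applying the rules until no new facts can be inferred. The facts and rules themselves are invented purely for illustration and are not taken from Negnevitsky's examples.

```python
# Toy rule-based system with forward-chaining inference.
# The facts and rules are illustrative assumptions.
facts = {"has_fever", "has_cough"}

# Each rule: if every condition is already a known fact, add the conclusion.
rules = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "needs_rest"),
    ({"needs_rest"}, "stay_home"),
]

changed = True
while changed:                      # repeat until nothing new can be inferred
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # derive a new fact by logical deduction
            changed = True

print(facts)  # eventually includes has_flu, needs_rest and stay_home
```

The loop mimics what the paragraph describes: starting from what it knows, the system applies its rules and infers new facts it was never given directly.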
However, reasoning alone is not sufficient for creating truly intelligent systems. This is where learning models come into play. Learning allows AI systems to adapt and improve over time by analyzing patterns in data. Negnevitsky highlights three key types of learning: Supervised Learning (the system is trained using labeled examples, where it learns to associate inputs with desired outputs); Unsupervised Learning (the system identifies patterns and relationships within unlabeled data, enabling it to group or cluster information); and Reinforcement Learning (the system learns through trial and error by interacting with an environment and receiving feedback in the form of rewards or penalties).
These learning methods enable AI systems to evolve dynamically, improving their accuracy and effectiveness as they process more data.
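Reinforcement learning, the trial-and-error case, can be sketched with a simple 'multi-armed bandit' agent in Python: it repeatedly picks one of three actions, receives a reward or nothing, and gradually learns which action pays off best. The reward probabilities and the exploration rate below are illustrative assumptions.

```python
import random

# Epsilon-greedy bandit: learn by trial and error which "arm" pays off most.
# Probabilities and epsilon are illustrative assumptions.
ARM_PROBABILITIES = [0.2, 0.5, 0.8]   # hidden reward chance of each arm
EPSILON = 0.1                         # how often the agent explores at random
estimates = [0.0, 0.0, 0.0]           # learned value estimate for each arm
counts = [0, 0, 0]

for step in range(5000):
    if random.random() < EPSILON:
        arm = random.randrange(len(ARM_PROBABILITIES))   # explore
    else:
        arm = estimates.index(max(estimates))            # exploit best guess so far
    reward = 1 if random.random() < ARM_PROBABILITIES[arm] else 0
    counts[arm] += 1
    # Update the running average reward of the chosen arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("learned estimates:", [round(v, 2) for v in estimates])
```

Nobody labels the 'right' answer for the agent; the feedback signal alone is enough for it to discover that the third arm is the most rewarding.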
Negnevitsky also emphasizes that AI systems are not static; they continuously refine their knowledge base by incorporating new information and adapting their reasoning processes. This ability to learn and evolve is what makes AI so powerful—allowing it to solve complex problems across various domains, from healthcare diagnostics to autonomous vehicles.
In essence, AI works by combining structured knowledge representation with logical reasoning and adaptive learning. These components interact seamlessly to create intelligent systems capable of solving problems, making decisions, and improving themselves over time.
According to Goodfellow, Bengio, and Courville’s Deep Learning (Adaptive Computation and Machine Learning series, 2016, MIT Press), AI works by leveraging deep neural networks that learn from vast amounts of data through structured training processes. By mimicking human cognitive functions, these systems can recognize patterns, make predictions, and adapt over time. This powerful approach has transformed industries and continues to push the boundaries of what machines can achieve.
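A rough sketch of that training process, assuming NumPy is available: a tiny network with one hidden layer is trained by gradient descent to learn XOR, a pattern that no single neuron can represent. The architecture, learning rate and number of epochs are illustrative choices for this sketch, not taken from the book.

```python
import numpy as np

# Tiny feed-forward network with one hidden layer, trained by gradient descent
# to learn XOR. All hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: propagate the error signal through both layers.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    # Gradient descent step on weights and biases.
    W2 -= 0.5 * hidden.T @ d_output
    b2 -= 0.5 * d_output.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0, keepdims=True)

# After training, the outputs should be close to the XOR targets 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

Deep learning scales this same recipe of forward pass, error signal and weight update to many layers and vast amounts of data.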
According to David L. Poole and Alan K. Mackworth’s Artificial Intelligence: Foundations of Computational Agents (Cambridge University Press, 2017), AI works by integrating perception, reasoning, and action into goal-driven agents. These agents learn iteratively, adapt to new information, and operate autonomously—mirroring the flexibility and problem-solving abilities of human intelligence, but grounded in computational principles.
So, AI works by leveraging large datasets, sophisticated algorithms, and computational power to mimic human intelligence in various tasks. Understanding how AI operates is crucial for effectively implementing it in different applications while addressing challenges related to ethics, bias, and transparency.

Despite its remarkable capabilities, AI remains dependent on humans in several critical ways. AI systems are created and programmed by humans: computer scientists and engineers design the algorithms, models, and systems that enable AI to learn and operate. Without human intervention, AI cannot evolve or function effectively.
AI requires data to learn and make decisions, and humans are responsible for collecting, cleaning, and organizing this data. The quality and diversity of the data directly impact the performance of AI systems. If the data is biased or incomplete, the results produced by AI will also be flawed.
Although AI can perform many tasks autonomously, human oversight is essential to ensure that the system operates correctly. Humans are needed to adjust AI models if the outcomes are unsatisfactory or if there are changes in context or environment.
Decisions about how and where to use AI often involve ethical considerations and policies that must be made by humans. This includes addressing issues related to privacy, security, and the societal impact of AI applications.
AI is frequently designed to interact with human users, such as virtual assistants, chatbots, or recommendation systems. These interactions require an understanding of context and nuances that humans provide during development.
While AI can automate many tasks and make data-driven decisions, it still relies on humans for development, oversight, ethical considerations, and interaction. Human involvement is crucial to ensure that AI is used effectively and responsibly. Through collaboration between humans and AI, we can achieve better outcomes and innovative solutions.

The age at which individuals should start interacting with Artificial Intelligence (AI) can vary depending on the context and purpose of using the technology. Interaction with AI in educational settings can begin at an early age or in childhood. AI can assist young children in learning through personalized and interactive applications. However, it is essential to ensure that this interaction remains balanced with social and emotional experiences gained from human interactions.
Teenagers (ages 13-18), particularly Generation Z (born between the mid-1990s and early 2010s), tend to be more comfortable interacting with AI systems. They view AI as an efficient tool for completing various tasks, both in educational and entertainment contexts. However, there is a risk of over-reliance on technology that needs to be monitored.
In young adulthood (ages 18-30), individuals often begin to interact with AI in professional settings. They need to understand how to leverage AI to enhance productivity and efficiency in the workplace. A positive attitude toward AI among this generation can facilitate broader technology adoption.

It is crucial to consider the ethical and social impacts of using AI across all age groups. While AI offers many benefits, empathetic and responsive human interaction remains essential to meet individuals' emotional needs.
Interacting with AI should ideally start at an early age but should be tailored to each individual's context and needs. Parental and educator oversight is vital to ensure that the use of AI provides benefits without diminishing the social and emotional interactions that are critical for children's and teenagers' development.

Now, what if AI is used at school? The integration of Artificial Intelligence (AI) in educational settings presents several challenges that need careful consideration.
Students may become overly dependent on AI for answers, which can hinder their ability to think critically and solve problems. This reliance can lead to a superficial understanding of subjects, as students might use AI-generated content without genuinely engaging with the material. Consequently, this could negatively impact their academic performance and learning progression.
The use of AI raises various ethical concerns, including:
  • Bias: AI systems may not fairly represent all groups, potentially leading to biased outcomes.
  • Plagiarism: Students might submit AI-generated content as their own, undermining academic integrity.
  • Copyright Issues: There are risks associated with using content that AI has copied from other sources without proper attribution.
  • Unequal Access: Not all students have equal access to AI tools, which could exacerbate existing inequalities in education.
AI systems often require access to personal data, raising concerns about privacy and data security. Schools must ensure that they protect students' information while using AI tools, as breaches could have serious repercussions.
Increased reliance on AI can reduce face-to-face interactions between students and teachers. This lack of human engagement may lead to feelings of isolation among students and hinder the development of social skills and emotional intelligence.
AI-generated content is only as good as the data it is trained on. Ensuring that the educational material provided by AI is accurate, up-to-date, and relevant is a significant challenge. There is also a risk of homogenization, where standardized content may overlook diverse perspectives and critical thinking.
The adoption of AI in schools can expand the attack surface for cyber threats, making educational institutions more vulnerable to attacks such as phishing or malware injections. Schools may lack the resources or expertise to adequately address these cybersecurity challenges.
As students rely more on AI for solutions, their critical thinking and problem-solving abilities may diminish. This trend could lead to a generation less equipped to tackle complex issues independently.

If humans were to stop thinking, innovating, and writing—thus failing to provide data for AI—several significant consequences could arise. AI relies heavily on data for training and improvement. Without new data input from human creativity and innovation, AI systems would stagnate. They would be unable to learn from new experiences or adapt to changing environments, leading to outdated models that may not perform effectively in real-world scenarios.
AI systems generate outputs based on existing data and patterns. If humans cease to create new ideas or content, AI will lack the foundational material needed to produce innovative or unique works. This could result in a homogenization of ideas, where AI merely replicates existing concepts without the spark of human creativity.
As reliance on AI grows, the skills necessary for critical thinking, problem-solving, and creativity may decline among humans. If people stop engaging in these cognitive processes, future generations might find it challenging to think independently or innovate, leading to a society that is less adaptable and dynamic.
The absence of human input could lead to ethical dilemmas in AI applications. For example, decisions made by AI without human oversight might not align with societal values or ethical standards. This could exacerbate issues like bias in algorithms or misuse of technology.
Without new data and ideas, society might become overly dependent on existing AI systems and algorithms, which can lead to vulnerabilities. For instance, if these systems fail or are compromised, the lack of alternative solutions could create significant challenges.
AI-generated content relies on the quality of the data it processes. If humans stop producing accurate and thoughtful content, AI might inadvertently propagate misinformation or outdated knowledge, further complicating public discourse and understanding.
A halt in human innovation and data provision would not only hinder the progress of AI but also have profound implications for society's creativity, critical thinking abilities, and ethical standards.

In the realm of thought, AI should be used by humans only as a tool, where human minds reside, to foster critical thinking, innovate and create, by embracing our nature as learners.
And from whom do we, humans, learn? The answer is clear: we draw from the Creator, Whose wisdom is so near. With Al-Qalam, the Pen, a sacred gift bestowed, teaching humanity that which it knew not, along the path we strode," Gareng concluded.
