Saturday, March 22, 2025

Challenges for the Civilian Leaders (1)

Here's a story: "In a bizarre twist of events that has left the journalism community scratching their heads, a prominent Indonesian magazine recently received a rather unusual package: a pig's head. Yes, you read that right! It seems that some individuals took 'sending a message' to a whole new level, opting for a delivery service that specializes in... well, unconventional gifts.
In response to this peculiar act of intimidation, the Chief of the Presidential Communications Office made headlines with his rather 'tasteful' advice. Instead of condemning the act or expressing solidarity with the journalists, he suggested that they should simply cook the pig’s head! Because nothing says 'we support freedom of the press' quite like a good old-fashioned barbecue, right?
Rumour has it that a new recipe book is in the works titled "Cooking with Carcasses: A Journalist's Guide to Intimidation."
Meanwhile, social media is ablaze with debates about whether this intimidation tactic is actually an innovative form of performance art. Critics argue that it’s an avant-garde commentary on the state of press freedom in Indonesia, while others simply want to know where they can get their hands on some of those pig's head tacos.
As Indonesia grapples with serious issues surrounding press freedom, it seems that some individuals prefer to serve up intimidation with a side of irony. While journalists continue to brave the storm and report on critical issues, one thing is for sure: they’ll never look at a pig’s head—or a rat—quite the same way again.
So here's to hoping that next time they receive a package, it’s filled with something a little less... intimidating—like a pizza or maybe even some flowers! After all, who doesn’t love a good slice of peace?"

Now let's move on.
Globally, the quality of democracy has declined. Reports from the Varieties of Democracy Institute indicate that, for the first time in more than two decades, the world has more closed autocracies than liberal democracies. Autocratization is under way in countries that were previously more democratic, such as India, Hungary, and Turkey. Leaders in these nations often use populist rhetoric to undermine democratic values and strengthen their control over institutions. Many authoritarian leaders exploit public dissatisfaction with political elites to legitimize their actions, often through repression of the opposition and control over information.
The global tendencies towards militarization and autocratization have become increasingly evident in recent years, significantly impacting the international order. There is a noticeable increase in tensions among major powers such as the United States, Russia, and China. Conflicts in Europe and the Middle East highlight the direct involvement of the U.S. and Russia, alongside the potential for open conflict in the Asia-Pacific region, particularly regarding Taiwan and the South China Sea. In response to China's growing influence, the U.S. has formed strategic alliances like the Quad and AUKUS, indicating that nations are preparing to confront rising military threats. The proliferation of advanced technology can exacerbate these situations, with terrorist groups also leveraging such advancements to strengthen their positions.

These two trends—militarization and autocratization—indicate that the world is currently in a phase of uncertainty and transition. With increasing rivalry among major powers and a decline in democratic values across many nations, the challenges to global stability are becoming more complex. Adjusting foreign policy and defence strategies is crucial for addressing these dynamics effectively.
The current global trend leans more towards militarization than democratization, although the situation varies by region. Many countries are boosting their military spending due to geopolitical tensions, such as the Russia-Ukraine war, tensions in the South China Sea, and security concerns in the Middle East. Major powers like the U.S., China, Russia, and India are investing heavily in advanced military technologies, including hypersonic weapons, artificial intelligence in warfare, and missile defence systems. The wars in Ukraine and Gaza, along with China-Taiwan tensions, indicate a growing reliance on military force in global diplomacy.
Some countries are experiencing democratic backsliding, with increasing government control over media, suppression of political opposition, and erosion of civil liberties. Even in democratic nations, challenges such as populism, political polarization, and distrust in institutions are growing. Coups and military takeovers remain a challenge in places like Myanmar and several African nations (e.g., Niger, Burkina Faso, Mali).

In Asia, military spending and geopolitical tensions are increasing, with authoritarianism rising in some nations. China shows increasing authoritarianism under Xi Jinping and heavy military expansion, particularly in the South China Sea and the Taiwan Strait. In India and Pakistan, military modernization continues, and border tensions remain high. Russia, a major global military power, is actively engaged in Ukraine while authoritarian rule deepens under Putin. Southeast Asia shows mixed trends: Myanmar is under military rule, but Indonesia, Malaysia, and the Philippines maintain democracy despite many challenges. Japan and South Korea are strengthening their militaries in response to North Korean threats and China’s rise, but remain strong democracies.
Despite challenges, grassroots movements in countries like Iran, Hong Kong, and Belarus continue to push for democratic reforms. Global organizations such as the UN and human rights groups advocate for transparency and democratic governance. Some nations are still conducting elections, even under difficult circumstances, such as Turkey, Pakistan, and Brazil.
In Africa, many nations are moving toward military rule or conflict, but democratic resistance exists. Countries like Niger, Mali, and Burkina Faso have seen military takeovers, reversing democratic progress. Some leaders stay in power for decades, like in Uganda, Cameroon, and Equatorial Guinea. Ongoing wars in Sudan, Ethiopia, and the Sahel region drive militarization. Ghana, South Africa, and Kenya remain relatively stable democracies.
In North America, democracy remains dominant, but militarization is increasing in security-related policies. In the United States, democracy remains strong, but political polarization is growing and military spending is the largest in the world. Canada maintains a strong democracy with no major military expansion. Mexico struggles with military involvement in drug wars and internal security.
In South America, democracy is holding, but authoritarian tendencies and militarized conflicts exist. In Brazil, Argentina, and Chile, democracies remain stable, though political instability exists. Venezuela and Nicaragua are becoming more authoritarian, with military-backed governments. In Colombia and Peru, ongoing armed conflicts with guerrilla groups and drug cartels keep the military active.
In Europe, militarization is increasing because of geopolitical tensions. While democracy is strong, military buildup is growing out of security concerns: NATO countries are increasing military spending and sending weapons to Ukraine because of the Russia-Ukraine war, and the EU and NATO are strengthening military alliances against potential Russian aggression. Hungary and Poland have shown democratic decline and authoritarian shifts, but most of Western Europe remains democratic.
The only demilitarized (neutral) zone is Antarctica, governed by the Antarctic Treaty, which bans military activity and promotes scientific cooperation. The continent has no permanent human population, so there are no democratic or military power struggles.
Australia has a strong democracy but is moving to strengthen its military. Partnering with the U.S. and U.K. in the AUKUS military alliance, Australia is increasing defence spending in response to China’s influence in the Indo-Pacific.

So, should we prepare our country to become a military state?
The world today is not marching toward an era dominated by pure military rule, but rather, it is engaged in a carefully calculated display of power. Countries no longer seek to expand their borders through traditional conquests, nor do they aim to install military regimes purely for the sake of control. Instead, military strength has become a tool—a lever used to secure economic advantages, project influence, and maintain internal order in an increasingly unstable world.
Meanwhile, some governments do not rely on military control over their own citizens but instead use their armies as a shield for economic security. China’s military buildup in the South China Sea is not an effort to start a war—it is an economic strategy, ensuring that vital shipping lanes, oil reserves, and trade routes remain under its influence. This is the new face of power: nations flexing their military muscles not to conquer but to protect their share of global wealth.
Even in places where military rule has taken over governance, the justification is often economic in nature. In Myanmar, the military did not seize power merely for dominance but to preserve the interests of the elite, ensuring that political instability did not threaten the wealth and industries they controlled. Political stability, or at least the appearance of it, has become a currency of its own, and when governments falter, the military steps in under the pretence of keeping order.
Beyond their borders, powerful nations use their military presence as a tool of negotiation rather than conflict. Russia’s actions in Ukraine, for example, are not just about territory—they are about securing leverage in energy markets, reshaping global alliances, and forcing the world to acknowledge its economic influence. War is no longer fought simply with bullets and bombs; it is fought with trade sanctions, military bases in strategic locations, and the ability to control resources that others depend on.
Thus, the world is not moving toward military rule but toward a show of force, where nations do not seek war but use the threat of war to shape economic and political outcomes. Military strength today is not an end in itself—it is a tool for securing power in a world where economic survival is just as fierce a battlefield as any warzone.

The world is in a constant struggle between seeking economic expansion and defending existing economic power. Every major action taken by governments—whether through diplomacy, military force, trade agreements, or technological advancements—is ultimately tied to economics.
Some nations, particularly emerging powers, are aggressively seeking economic growth. They expand their influence by securing natural resources, dominating trade routes, and investing in foreign markets. China, for instance, is not just building military bases—it is also constructing highways, railroads, and ports across Africa and Asia through its Belt and Road Initiative. While this appears to be an economic project, it also establishes long-term control over key markets and supply chains.
On the other hand, established powers like the United States and Europe are focused on defending their economic supremacy. They impose trade restrictions, set up military alliances, and regulate global finance to ensure they maintain control. The U.S. is not just concerned about China’s military—it is more worried about China’s ability to dominate technology, manufacturing, and trade. That is why we see economic wars fought through sanctions, tariffs, and restrictions on technology exports rather than through direct military confrontation.

Meanwhile, resource-rich but politically unstable countries, such as those in Africa and the Middle East, find themselves caught in the middle. Their economies are often targeted by foreign powers—whether through military intervention, economic partnerships, or corporate influence. Wars in these regions are rarely just about ideology—they are about who controls oil, gas, minerals, and trade routes.
Thus, the modern world order is built on the following dynamics: rising nations are seeking economic dominance; superpowers are defending their existing economic power; smaller nations are struggling to resist being exploited. And while the military is used as a tool in this process, the real battle is being fought over who controls the global economy.

If we analyze current global trends objectively, we can see that militarization is not necessarily the goal of the emerging world order—but it is becoming a dominant tool for shaping international power dynamics. While military expansion and displays of power are certainly prominent features of international relations, they are not necessarily the end goal of global governance. Instead, militarization appears to be a tool—a means by which nations seek to secure economic dominance, maintain strategic influence, and assert control over global resources.
Historically, military strength has always played a role in shaping the world order, but in modern times, this role has evolved. The Cold War era saw military buildup as a way to project ideological supremacy, whereas today, militarization is often linked to economic interests, technological advancements, and geopolitical positioning. Nations invest heavily in their military capabilities, not only for direct conflict but also to secure trade routes, protect energy resources, and influence global governance structures.
However, militarization is not the sole defining feature of the emerging world order. Economic power, technological innovation, and information control are arguably just as influential, if not more so. Countries like China and the United States, for example, use military strength as part of a broader strategy that includes economic diplomacy, cyber capabilities, and infrastructure development projects like China’s Belt and Road Initiative. In this context, the military serves as an instrument of state power rather than the ultimate objective.
Moreover, the interconnectedness of the global economy discourages full-scale military conflicts. Instead, nations engage in strategic competition through economic leverage, sanctions, and technological supremacy. The rise of artificial intelligence, cyber warfare, and space militarization further suggests that future conflicts may not be fought with conventional armies alone but through control over digital infrastructure, supply chains, and critical resources like rare earth minerals.
Thus, while militarization remains an important aspect of global power dynamics, it is not necessarily the primary goal of the world’s new order. Instead, the focus appears to be on maintaining control—over economies, technology, and governance structures—where military strength is just one of many tools used to shape the future. The real question is whether this balance will lead to a more stable global order or one marked by increased tensions and conflicts driven by the pursuit of power.

The struggle between seeking economic power and defending it is shaping the world in ways that go beyond traditional warfare. The battlefield has shifted from trenches and frontlines to boardrooms, financial markets, trade routes, and cyberspace. Nations no longer need to invade to control—they manipulate economies, disrupt supply chains, and weaponize trade to achieve their goals.
For countries aiming to rise in global influence, the strategy is one of expansion, investment, and resource control. The most aggressive player in this game today is China. It does not seek to dominate through war but through economic colonization, building infrastructure projects across Asia, Africa, and Latin America under the Belt and Road Initiative. This allows China to control key trade routes, gain influence over emerging markets, and secure long-term access to raw materials.
Other emerging economies, like India and Brazil, are also playing the long game. Their military presence is not as dominant, but they use technology, trade agreements, and regional influence to carve out their share of the global market. The goal is clear: economic power equals political power, and those who control global supply chains will dictate the future.
Even Russia, despite its reliance on military force in Ukraine, is deeply focused on economic leverage. Its dominance in energy exports gives it a powerful tool—when Europe imposed sanctions, Russia retaliated by cutting off gas supplies, sending shockwaves through global energy markets. This was not just military aggression; it was economic warfare designed to force the world to acknowledge Russia’s relevance.
While rising nations push forward, established powers like the United States and the European Union are on the defensive. Their economic control has been the foundation of the modern world order, and any challenge to it is met with fierce resistance.
The U.S. remains the world's dominant economy, but it now fights its battles through sanctions, technology restrictions, and financial systems rather than open war. It has blocked China from accessing advanced semiconductor technology, crippled Russia’s economy with financial restrictions, and used the dominance of the U.S. dollar as a tool to maintain control. This is why countries like China and Russia are now pushing to create alternative trade systems that bypass the dollar—because whoever controls global finance holds the real power.
Europe, on the other hand, is trying to defend its economy from both external and internal threats. The European Union’s reliance on Russian energy showed its vulnerability, forcing it to shift policies rapidly. Meanwhile, it struggles with internal fractures, as economic disparities between richer and poorer EU nations create tensions.
As nations continue this battle between economic expansion and economic defence, the world is shifting toward a multipolar order, where no single country will dominate as the U.S. did after the Cold War. Instead, power will be shared among multiple centres—China, the U.S., the EU, Russia, and rising regional powers like India and Brazil.

However, this competition will not be fought with military force alone. The most powerful nations will be those that control advanced technology (AI, semiconductors, and cybersecurity); dominate financial systems (currency influence and trade networks); and secure energy and food supplies for the future.
Militarization will remain a tool, but the real war is about who controls the global economy. Nations will continue to build armies, not necessarily to fight wars, but to ensure they can protect their economic interests in an era where power is shifting faster than ever before.

We are continuing with our discussion, but I would like to remind myself, and as input for you, that the battle for economic control is no longer limited to one industry—technology, finance, and energy are merging into a single global struggle. Whoever dominates AI will control automation, cybersecurity, and military tech. Whoever controls finance will dictate global trade and economic stability. Whoever leads in energy production will determine which nations thrive and which fall behind.
While the military remains a crucial tool, the real wars of the 21st century will be won in labs, financial markets, and power plants rather than on traditional battlefields. Countries are no longer simply building armies—they are building economic and technological empires, using military strength as a show of force to protect these assets.

Friday, March 21, 2025

A Little Bit about AI

"Once upon a time, in the prosperous kingdom of Konoha, there once ruled a monarch known as King Fibulus. His reign spanned two terms, during which he mastered the art of deception. Among his many talents, King Fibulus was particularly skilled at weaving tales about his illustrious academic achievements. He often boasted of graduating from Konoha’s most prestigious university—a claim that sent waves of admiration through his loyal subjects," Gareng starts to tell a story.
"But not everyone was convinced. Whispers began to circulate among the alumni of this esteemed institution. “Who is this King Fibulus?” they murmured. “We’ve never seen him in any lectures, nor at the cafeteria struggling with soggy noodles!” Even the professors scratched their heads, unable to recall a student with such royal charisma.
One day, a brave informatics and telematics scholar publicly declared, 'The king’s diploma is fake!' This proclamation was met with outrage—not from the people but from an army of newly discovered 'classmates' who swore they had shared study sessions and late-night ramen with Fibulus. To bolster his credibility, King Fibulus even staged a meeting with his supposed thesis advisor. Unfortunately, when asked about his thesis supervisor, he confidently stated a name that was entirely different from what was written on the document.
The university itself stepped in to defend its honour (and perhaps its funding and incumbency). 'The diploma is authentic,' they proclaimed loudly, though their voices trembled slightly. Critics who dared question this narrative faced severe consequences; two outspoken dissenters were thrown into the dungeon for 'spreading falsehoods.'
Years passed, and the controversy faded into obscurity—until a forensic expert unearthed irrefutable evidence proving that the king's diploma was indeed fabricated. The expert presented meticulous arguments and undeniable proof to support their claim. Yet, rather than admit fault, the university accused the expert of misleading the public.
Meanwhile, King Fibulus maintained an elegant silence. He neither confirmed nor denied the allegations, choosing instead to distribute rice while ignoring calls to present his original diploma. His loyal defenders continued their crusade, some out of genuine belief and others for reasons best left unspoken. As time went on, the question of whether their king had ever truly graduated remained unanswered, and what's even more surprising, it turns out the prince did the same thing. The tale of King Fibulus became a legend told to children as a cautionary story about truth and power. And so, in Konoha, one thing remained certain: lies could build cracks in an empire too deep to repair."

"Now let's jump into our topic," Gareng moves on. "Artificial Intelligence (AI) refers to a branch of computer science focused on creating systems that can perform tasks typically requiring human intelligence. These tasks include learning, problem-solving, speech recognition, image recognition, and decision-making. Artificial Intelligence (AI) operates through a combination of algorithms, data, and computational power to simulate human-like intelligence.
Michael Negnevitsky, in his Artificial Intelligence: A Guide to Intelligent Systems (Pearson Education, 2005), says that philosophers have been trying for over two thousand years to understand and resolve two big questions of the universe: how does a human mind work, and can non-humans have minds? However, these questions are still unanswered.
Some philosophers have picked up the computational approach originated by computer scientists and accepted the idea that machines can do everything that humans can do. Others have openly opposed this idea, claiming that such highly sophisticated behaviour as love, creative discovery and moral choice will always be beyond the scope of any machine—and I support the latter.
The nature of philosophy allows for disagreements to remain unresolved. Engineers and scientists have already built machines that we can call ‘intelligent’. So what does the word ‘intelligence’ mean? According to the dictionary, there are two definitions: first, someone’s intelligence is their ability to understand and learn things; second, intelligence is the ability to think and understand instead of doing things by instinct or automatically.
Thus, says Negnevitsky, according to the first definition, intelligence is the quality possessed by humans. But the second definition suggests a completely different approach and gives some flexibility; it does not specify whether it is someone or something that has the ability to think and understand. Now we should discover what thinking means. Let us consult our dictionary again.
Thinking is the activity of using your brain to consider a problem or to create an idea. So, in order to think, someone or something has to have a brain, or in other words, an organ that enables someone or something to learn and understand things, to solve problems and to make decisions. So we can define intelligence as ‘the ability to learn and understand, to solve problems and to make decisions’.

‘Can machines think?’ The very question that asks whether computers can be intelligent, or whether machines can think, came to us from the ‘dark ages’ of artificial intelligence (from the late 1940s). The goal of artificial intelligence (AI) as a science is to make machines do things that would require intelligence if done by humans. Therefore, the answer to the question was vitally important to the discipline. However, the answer is not a simple ‘Yes’ or ‘No’, but rather a vague or fuzzy one. Your everyday experience and common sense would have told you that. Some people are smarter in some ways than others. Sometimes we make very intelligent decisions but sometimes we also make very silly mistakes. Some of us deal with complex mathematical and engineering problems but are moronic in philosophy and history. Some people are good at making money, while others are better at spending it. As humans, we all can learn and understand, to solve problems and to make decisions; however, our abilities are not equal and lie in different areas. Therefore, we should expect that if machines can think, some of them might be smarter than others in some ways.

Negnevitsky then tells us the history of artificial intelligence, from the ‘Dark Ages’ to knowledge-based systems. Artificial intelligence as a science was founded by three generations of researchers.
The ‘Dark Ages’, or the birth of artificial intelligence (1943 – 56): the first work recognised in the field of artificial intelligence (AI) was presented by Warren McCulloch and Walter Pitts in 1943. McCulloch had degrees in philosophy and medicine from Columbia University and became the Director of the Basic Research Laboratory in the Department of Psychiatry at the University of Illinois. His research on the central nervous system resulted in the first major contribution to AI: a model of neurons of the brain.
The rise of artificial intelligence, or the era of great expectations (1956 – late 1960s) is characterised by tremendous enthusiasm, great ideas and very limited success. Only a few years before, computers had been introduced to perform routine mathematical calculations, but now AI researchers were demonstrating that computers could do more than that. It was an era of great expectations.

Unfulfilled promises, or the impact of reality (late 1960s – early 1970s) phase. From the mid-1950s, AI researchers were making promises to build all-purpose intelligent machines on a human-scale knowledge base by the 1980s, and to exceed human intelligence by the year 2000. By 1970, however, they realised that such claims were too optimistic. Although a few AI programs could demonstrate some level of machine intelligence in one or two toy problems, almost no AI projects could deal with a wider selection of tasks or more difficult real-world problems.
The technology of expert systems, or the key to success (early 1970s – mid-1980s) phase. Probably the most important development in the 1970s was the realisation that the problem domain for intelligent machines had to be sufficiently restricted. Previously, AI researchers had believed that clever search algorithms and reasoning techniques could be invented to emulate general, human-like, problem-solving methods. A general-purpose search mechanism could rely on elementary reasoning steps to find complete solutions and could use weak knowledge about the domain. However, when these weak methods failed, researchers finally realised that the only way to deliver practical results was to solve typical cases in narrow areas of expertise by making large reasoning steps.
The DENDRAL program is a typical example of the emerging technology. DENDRAL was developed at Stanford University to analyse chemicals. The project was supported by NASA, because an unmanned spacecraft was to be launched to Mars and a program was required to determine the molecular structure of Martian soil, based on the mass spectral data provided by a mass spectrometer. Edward Feigenbaum (a former student of Herbert Simon), Bruce Buchanan (a computer scientist) and Joshua Lederberg (a Nobel prize winner in genetics) formed a team to solve this challenging problem.
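To make this concrete, here is a minimal sketch in Python of the kind of forward-chaining, rule-based reasoning an expert system performs. This is not DENDRAL itself; the facts and rule names below are invented purely for illustration:

# A toy forward-chaining rule engine in the spirit of early expert systems.
# Each rule says: if every condition is a known fact, conclude something new.
rules = [
    ({"has_spectral_peak_44", "sample_is_organic"}, "contains_propane_fragment"),
    ({"contains_propane_fragment"}, "candidate_structure_alkane"),
]
facts = {"has_spectral_peak_44", "sample_is_organic"}

changed = True
while changed:  # keep applying rules until no new conclusion appears
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # one large, domain-specific reasoning step
            changed = True

print(facts)

Real systems of that era encoded hundreds of such rules elicited from human experts, but the control strategy was essentially this loop applied within a narrow domain.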

How to make a machine learn, or the rebirth of neural networks (mid-1980s – onwards) phase. In the mid-1980s, researchers, engineers and experts found that building an expert system required much more than just buying a reasoning system or expert system shell and putting enough rules in it. Disillusion about the applicability of expert system technology even led to people predicting an AI ‘winter’ with severely squeezed funding for AI projects. AI researchers decided to take a new look at neural networks.
Evolutionary computation, or learning by doing (early 1970s – onwards) phase. Natural intelligence is a product of evolution. Therefore, by simulating biological evolution, we might expect to discover how living systems are propelled towards high-level intelligence. Nature learns by doing; biological systems are not told how to adapt to a specific environment – they simply compete for survival. The fittest species have a greater chance to reproduce and thereby pass their genetic material to the next generation.
The concept of genetic algorithms was introduced by John Holland in the early 1970s. He developed an algorithm for manipulating artificial ‘chromosomes’ (strings of binary digits), using such genetic operations as selection, crossover and mutation. Genetic algorithms are based on a solid theoretical foundation of the Schema Theorem.
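As a hedged illustration of this cycle of selection, crossover and mutation, here is a minimal genetic algorithm in Python. The fitness function simply counts 1-bits in a chromosome, a classic toy problem; a real application would substitute its own fitness measure:

import random

LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 50, 0.01

def fitness(chrom):
    return sum(chrom)  # toy fitness: number of 1-bits in the chromosome

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # selection: fitter chromosomes are more likely to become parents
    weights = [fitness(c) + 1 for c in population]
    parents = random.choices(population, weights=weights, k=POP)
    next_generation = []
    for a, b in zip(parents[::2], parents[1::2]):
        point = random.randrange(1, LENGTH)  # one-point crossover
        for child in (a[:point] + b[point:], b[:point] + a[point:]):
            for i in range(LENGTH):
                if random.random() < MUTATION:
                    child[i] ^= 1  # mutation: rarely flip a bit
            next_generation.append(child)
    population = next_generation

print(max(fitness(c) for c in population))  # best chromosome found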

In the new era of knowledge engineering, or computing with words (late 1980s – onwards), neural network technology offers more natural interaction with the real world than systems based on symbolic reasoning. Neural networks can learn, adapt to changes in a problem’s environment, establish patterns in situations where rules are not known, and deal with fuzzy or incomplete information. However, they lack explanation facilities and usually act as a black box. The process of training neural networks with current technologies is slow, and frequent retraining can cause serious difficulties.
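To give a feel for what such learning means in practice, the following Python sketch trains a single artificial neuron by error correction on the logical AND function. Note that the learned weights offer no human-readable explanation, which is precisely the 'black box' limitation mentioned above:

# A single neuron learning by error correction (the perceptron rule).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0]
bias = 0.0
rate = 0.1  # learning rate

for _ in range(20):  # a few passes over the training data
    for (x1, x2), target in samples:
        output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - output
        w[0] += rate * error * x1  # nudge the weights toward the right answer
        w[1] += rate * error * x2
        bias += rate * error

print(w, bias)  # the 'knowledge' is just numbers, with no explanation attached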

Now, where is knowledge engineering heading? Expert, neural and fuzzy systems have now matured and have been applied to a broad range of different problems, mainly in engineering, medicine, finance, business and management. Each technology handles the uncertainty and ambiguity of human knowledge differently, and each technology has found its place in knowledge engineering. They no longer compete; rather they complement each other. A synergy of expert systems with fuzzy logic and neural computing improves adaptability, robustness, fault-tolerance and speed of knowledge-based systems. Besides, computing with words makes them more ‘human’. It is now common practice to build intelligent systems using existing theories rather than to propose new ones, and to apply these systems to real-world problems rather than to ‘toy’ problems.
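As a small taste of the 'computing with words' idea, here is a Python sketch of a fuzzy membership function: instead of a crisp true or false, a temperature belongs to the word 'hot' to a degree between 0 and 1. The thresholds are invented for illustration:

# Fuzzy membership: how strongly does a temperature belong to the set 'hot'?
# Below 25 degrees not at all, above 35 fully, with a linear ramp in between.
def hot(temperature):
    if temperature <= 25:
        return 0.0
    if temperature >= 35:
        return 1.0
    return (temperature - 25) / 10.0  # partial membership

for t in (20, 28, 32, 40):
    print(t, "degrees -> membership in 'hot':", hot(t))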

We live in the era of the knowledge revolution when the power of a nation is determined not by the number of soldiers in its army but the knowledge it possesses. Science, medicine, engineering and business propel nations towards a higher quality of life, but they also require highly qualified and skilful people. We are now adopting intelligent machines that can capture the expertise of such knowledgeable people and reason like humans.

AI can be categorized into several types based on its capabilities. First, Narrow AI (Weak AI) is designed for a specific task; virtual assistants like Siri or Alexa, for example, can perform limited functions but do not possess general intelligence.
Second, General AI (Strong AI) is a theoretical form of AI that would possess the ability to understand, learn, and apply knowledge across various domains, similar to human intelligence. General AI is still largely a concept and has not yet been achieved.
Third, Superintelligent AI refers to an AI that surpasses human intelligence in all aspects. While it is a topic of speculation and debate, superintelligent AI does not currently exist.

How does AI work? According to Michael Negnevitsky, Artificial Intelligence (AI) is a fascinating field that seeks to replicate human intelligence in machines by leveraging computational power, algorithms, and structured knowledge. He describes AI systems as resting on three fundamental pillars: knowledge representation, reasoning, and learning.
The journey begins with knowledge representation, where AI systems organize information in structured formats that machines can understand. This knowledge can be represented using methods like semantic networks, decision trees, or rule-based systems. These structures allow AI to store and access information efficiently, forming the foundation of intelligent decision-making.
Once knowledge is represented, the system uses reasoning mechanisms to draw conclusions or make decisions based on the data it has. Logical reasoning enables AI to simulate thought processes similar to those of humans. For example, an AI system might analyze a set of rules and infer new facts or solve problems by applying logical deductions.
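A small sketch can tie these two pillars together. The Python fragment below stores knowledge as a semantic network of (subject, relation, object) triples and reasons over it by inheriting properties along 'is_a' links; the nodes and properties are invented for illustration:

# A toy semantic network plus a simple inheritance-based reasoner.
triples = [
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
    ("animal", "needs", "food"),
]

def properties(node):
    # Collect a node's own facts plus everything inherited via 'is_a' links.
    facts, frontier = [], [node]
    while frontier:
        current = frontier.pop()
        for subject, relation, obj in triples:
            if subject == current:
                if relation == "is_a":
                    frontier.append(obj)  # climb the inheritance chain
                else:
                    facts.append((relation, obj))
    return facts

print(properties("canary"))  # infers that a canary can fly and needs food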
However, reasoning alone is not sufficient for creating truly intelligent systems. This is where learning models come into play. Learning allows AI systems to adapt and improve over time by analyzing patterns in data. Negnevitsky highlights three key types of learning: Supervised Learning (the system is trained using labeled examples, where it learns to associate inputs with desired outputs); Unsupervised Learning (the system identifies patterns and relationships within unlabeled data, enabling it to group or cluster information); and Reinforcement Learning (the system learns through trial and error by interacting with an environment and receiving feedback in the form of rewards or penalties).
These learning methods enable AI systems to evolve dynamically, improving their accuracy and effectiveness as they process more data.
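For instance, supervised learning in its simplest form fits in a few lines of Python. This nearest-neighbour classifier memorises made-up labelled points and labels a new input by finding the closest example:

# 1-nearest-neighbour: the simplest possible supervised learner.
train = [((1.0, 1.0), "small"), ((1.2, 0.9), "small"),
         ((4.0, 4.2), "large"), ((3.8, 4.1), "large")]  # invented examples

def predict(point):
    def squared_distance(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # the label of the closest memorised example wins
    return min(train, key=lambda example: squared_distance(example[0], point))[1]

print(predict((1.1, 1.0)))  # -> 'small'
print(predict((4.1, 4.0)))  # -> 'large'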
Negnevitsky also emphasizes that AI systems are not static; they continuously refine their knowledge base by incorporating new information and adapting their reasoning processes. This ability to learn and evolve is what makes AI so powerful—allowing it to solve complex problems across various domains, from healthcare diagnostics to autonomous vehicles.
In essence, AI works by combining structured knowledge representation with logical reasoning and adaptive learning. These components interact seamlessly to create intelligent systems capable of solving problems, making decisions, and improving themselves over time.
According to Goodfellow, Bengio, and Courville’s Deep Learning (Adaptive Computation and Machine Learning series, MIT Press, 2016), AI works by leveraging deep neural networks that learn from vast amounts of data through structured training processes. By mimicking human cognitive functions, these systems can recognize patterns, make predictions, and adapt over time. This powerful approach has transformed industries and continues to push the boundaries of what machines can achieve.
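As a rough sketch of that idea, rather than of any specific model from the book, here is a minimal two-layer neural network in Python (using numpy) that learns the XOR function by gradient descent; the layer sizes, random seed and learning rate are arbitrary choices:

import numpy as np

# A tiny two-layer network trained on XOR with plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer: 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer: 1 unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # backpropagate the error and adjust every weight a little
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = grad_out @ W2.T * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(output.round(2).ravel())  # typically approaches [0, 1, 1, 0]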
According to David L. Poole and Alan K. Mackworth’s Artificial Intelligence: Foundations of Computational Agents (Cambridge University Press, 2017), AI works by integrating perception, reasoning, and action into goal-driven agents. These agents learn iteratively, adapt to new information, and operate autonomously—mirroring the flexibility and problem-solving abilities of human intelligence, but grounded in computational principles.
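A toy perceive-reason-act loop conveys the flavour of such an agent. The one-dimensional 'world' and the goal below are invented purely for illustration:

# A goal-driven agent in a trivial world: a position on a number line.
position = 0
GOAL = 5

def perceive():
    return position  # sensing is trivial in this toy world

def decide(observation):
    # reasoning: compare the observed state with the goal
    if observation < GOAL:
        return "right"
    if observation > GOAL:
        return "left"
    return "stop"

while True:
    action = decide(perceive())
    if action == "stop":
        break
    position += 1 if action == "right" else -1  # acting changes the world

print("goal reached at", position)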
So, AI works by leveraging large datasets, sophisticated algorithms, and computational power to mimic human intelligence in various tasks. Understanding how AI operates is crucial for effectively implementing it in different applications while addressing challenges related to ethics, bias, and transparency.

Despite its remarkable capabilities, AI remains dependent on humans in several critical ways. AI systems are created and programmed by humans: computer scientists and engineers design the algorithms, models, and systems that enable AI to learn and operate. Without human intervention, AI cannot evolve or function effectively.
AI requires data to learn and make decisions, and humans are responsible for collecting, cleaning, and organizing this data. The quality and diversity of the data directly impact the performance of AI systems. If the data is biased or incomplete, the results produced by AI will also be flawed.
Although AI can perform many tasks autonomously, human oversight is essential to ensure that the system operates correctly. Humans are needed to adjust AI models if the outcomes are unsatisfactory or if there are changes in context or environment.
Decisions about how and where to use AI often involve ethical considerations and policies that must be made by humans. This includes addressing issues related to privacy, security, and the societal impact of AI applications.
AI is frequently designed to interact with human users, such as virtual assistants, chatbots, or recommendation systems. These interactions require an understanding of context and nuances that humans provide during development.
While AI can automate many tasks and make data-driven decisions, it still relies on humans for development, oversight, ethical considerations, and interaction. Human involvement is crucial to ensure that AI is used effectively and responsibly. Through collaboration between humans and AI, we can achieve better outcomes and innovative solutions.

The age at which individuals should start interacting with Artificial Intelligence (AI) can vary depending on the context and purpose of using the technology. Interaction with AI in educational settings can begin at an early age or in childhood. AI can assist young children in learning through personalized and interactive applications. However, it is essential to ensure that this interaction remains balanced with social and emotional experiences gained from human interactions.
Teenagers (ages 13-18), particularly Generation Z (born between the mid-1990s and early 2010s), tend to be more comfortable interacting with AI systems. They view AI as an efficient tool for completing various tasks, both in educational and entertainment contexts. However, there is a risk of over-reliance on technology that needs to be monitored.
In young adulthood (ages 18-30), individuals often begin to interact with AI in professional settings. They need to understand how to leverage AI to enhance productivity and efficiency in the workplace. A positive attitude toward AI among this generation can facilitate broader technology adoption.

It is crucial to consider the ethical and social impacts of using AI across all age groups. While AI offers many benefits, empathetic and responsive human interaction remains essential to meet individuals' emotional needs.
Interacting with AI should ideally start at an early age but should be tailored to each individual's context and needs. Parental and educator oversight is vital to ensure that the use of AI provides benefits without diminishing the social and emotional interactions that are critical for children's and teenagers' development.

Now, what if AI is used at school? The integration of Artificial Intelligence (AI) in educational settings presents several challenges that need careful consideration.
Students may become overly dependent on AI for answers, which can hinder their ability to think critically and solve problems. This reliance can lead to a superficial understanding of subjects, as students might use AI-generated content without genuinely engaging with the material. Consequently, this could negatively impact their academic performance and learning progression.
The use of AI raises various ethical concerns, including:
  • Bias: AI systems may not fairly represent all groups, potentially leading to biased outcomes.
  • Plagiarism: Students might submit AI-generated content as their own, undermining academic integrity.
  • Copyright Issues: There are risks associated with using content that AI has copied from other sources without proper attribution.
  • Unequal Access: Not all students have equal access to AI tools, which could exacerbate existing inequalities in education.
AI systems often require access to personal data, raising concerns about privacy and data security. Schools must ensure that they protect students' information while using AI tools, as breaches could have serious repercussions.
Increased reliance on AI can reduce face-to-face interactions between students and teachers. This lack of human engagement may lead to feelings of isolation among students and hinder the development of social skills and emotional intelligence.
AI-generated content is only as good as the data it is trained on. Ensuring that the educational material provided by AI is accurate, up-to-date, and relevant is a significant challenge. There is also a risk of homogenization, where standardized content may overlook diverse perspectives and critical thinking.
The adoption of AI in schools can expand the attack surface for cyber threats, making educational institutions more vulnerable to attacks such as phishing or malware injections. Schools may lack the resources or expertise to adequately address these cybersecurity challenges.
As students rely more on AI for solutions, their critical thinking and problem-solving abilities may diminish. This trend could lead to a generation less equipped to tackle complex issues independently.

If humans were to stop thinking, innovating, and writing—thus failing to provide data for AI—several significant consequences could arise. AI relies heavily on data for training and improvement. Without new data input from human creativity and innovation, AI systems would stagnate. They would be unable to learn from new experiences or adapt to changing environments, leading to outdated models that may not perform effectively in real-world scenarios.
AI systems generate outputs based on existing data and patterns. If humans cease to create new ideas or content, AI will lack the foundational material needed to produce innovative or unique works. This could result in a homogenization of ideas, where AI merely replicates existing concepts without the spark of human creativity.
As reliance on AI grows, the skills necessary for critical thinking, problem-solving, and creativity may decline among humans. If people stop engaging in these cognitive processes, future generations might find it challenging to think independently or innovate, leading to a society that is less adaptable and dynamic.
The absence of human input could lead to ethical dilemmas in AI applications. For example, decisions made by AI without human oversight might not align with societal values or ethical standards. This could exacerbate issues like bias in algorithms or misuse of technology.
Without new data and ideas, society might become overly dependent on existing AI systems and algorithms, which can lead to vulnerabilities. For instance, if these systems fail or are compromised, the lack of alternative solutions could create significant challenges.
AI-generated content relies on the quality of the data it processes. If humans stop producing accurate and thoughtful content, AI might inadvertently propagate misinformation or outdated knowledge, further complicating public discourse and understanding.
A halt in human innovation and data provision would not only hinder the progress of AI but also have profound implications for society's creativity, critical thinking abilities, and ethical standards.

In the realm of thought, where human minds reside, AI should be used by humans only as a tool: to foster critical thinking, to innovate and to create, embracing our nature as learners.
And from whom do we, humans, learn? The answer is clear: we draw from the Creator, Whose wisdom is so near. With Al-Qalam, the Pen, a sacred gift bestowed, teaching humanity that which it knew not, along the path we strode," Gareng concluded.
