Wednesday, January 18, 2023

The Thinkers: Acquaintance with Knowledge

"Graha told me, 'Philosophers have been studying knowledge for thousands of year. Plato’s 2,000-year- old theorizing and contemporary developmental psychological research agree: There’s an intimate connection between knowledge and true belief. There is a great deal of evidence from developmental psychology that the ability to recognize instances of beliefs and knowledge in others is a fundamental ability for children. In fact, this ability is essential to what psychologists call a child’s theory of mind—their ability to use reasoning about other people’s mental states to explain and predict their behavior. Children develop the ability to recognize the connection between a person’s sources of information and that person’s knowledge,'" Swara carried on when she came after saying Basmalah and Salaam.
"Graha added, 'There are at least three distinct kinds of knowledge,
Know-how is the knowledge that craftspeople have; they know how to pilot a ship or cure disease.
Knowledge-wh—knowledge-who, -what, -when, -where, or -why—is the knowledge that enables you to answer a who, what, when, where, or why question.
Knowledge-that, or propositional knowledge, is knowledge that a certain fact is true.
Most philosophical discussions of knowledge have traditionally focused on propositional, or factual, knowledge—knowledge-that.

The hunt for knowledge has never been easier. Hard questions can be answered with a few keystrokes. Our individual powers of memory, perception, and reasoning can be double-checked by distant friends and experts, with minimal effort. Past generations would marvel at the number of books within our reach.
But these new advantages don’t always protect us from an old problem: if knowledge is easy to get, so is mere opinion, and it can be hard to spot the difference. A website that looks trustworthy can be biased, world-renowned authorities can follow misleading evidence down the wrong track, and illusions can distort what we ourselves seem to see or remember. What at first seemed like knowledge can turn out to be something less than the real thing. Reflecting on the difficulty of enquiry, we can find ourselves wondering exactly what this real thing might be. What is knowledge? What is the difference between just thinking that something is true and actually knowing that it is? How are we able to know anything at all?
These questions are ancient ones, and the branch of philosophy dedicated to answering them—epistemology—has been active for thousands of years. Over the centuries, philosophers investigating knowledge have unearthed some strange puzzles and paradoxes. Philosophers have also developed some innovative solutions to these problems.

Knowledge is sometimes portrayed as a free-flowing impersonal resource: knowledge is said to be stored in databases and libraries, and exchanged through ‘the knowledge economy’, as information-driven commerce is sometimes called. Like many resources, knowledge can be acquired, used for diverse purposes, and lost—sometimes at great expense. But knowledge has a closer connection to us than resources like water or gold. Gold would continue to exist even if sentient life were wiped out in a catastrophe; the continued existence of knowledge, on the other hand, depends on the existence of someone who knows.
It’s tempting to identify knowledge with facts, but not every fact is an item of knowledge. Imagine shaking a sealed cardboard box containing a single coin. As you put the box down, the coin inside the box has landed either heads or tails: let’s say that’s a fact. But as long as no one looks into the box, this fact remains unknown; it is not yet within the realm of knowledge. Nor do facts become knowledge simply by being written down. If you write the sentence ‘The coin has landed heads’ on one slip of paper and ‘The coin has landed tails’ on another, then you will have written down a fact on one of the slips, but you still won’t have gained knowledge of the outcome of the coin toss. Knowledge demands some kind of access to a fact on the part of some living subject. Without a mind to access it, whatever is stored in libraries and databases won’t be knowledge, but just ink marks and electronic traces. In any given case of knowledge, this access may or may not be unique to an individual: the same fact may be known by one person and not by others. Common knowledge might be shared by many people, but there is no knowledge that dangles unattached to any subject. Unlike water or gold, knowledge always belongs to someone.
More precisely, we should say that knowledge always belongs to some individual or group: the knowledge of a group may go beyond the knowledge of its individual members. There are times when a group counts as knowing a fact just because this fact is known to every member of the group (‘The orchestra knows that the concert starts at 8 pm’). But we can also say that the orchestra knows how to play Beethoven’s entire Ninth Symphony, even if individual members know just their own parts. Or we can say that a rogue nation knows how to launch a nuclear missile even if there is no single individual of that nation who knows even half of what is needed to manage the launch. Groups can combine the knowledge of their members in remarkably productive (or destructive) ways.

Is there knowledge beyond the knowledge of human individuals and groups? What should we say about what is known by non-human animals? These questions threaten to pull us into difficult biological and theological debates. For this reason, most epistemologists start with the simpler case of the knowledge of a single human being. Knowledge, in the sense that matters here, is a link between a person and a fact.
There is something interesting here: how is ‘know’ different from the contrasting verb ‘think’? Everyday usage provides some clues. Consider the following two sentences:
Jill knows that her door is locked.
Bill thinks that his door is locked.
We immediately register a difference between Jill and Bill—but what is it? One factor that comes to mind has to do with the truth of the embedded claim about the door. If Bill just thinks that his door is locked, perhaps this is because Bill’s door is not really locked. Maybe he didn’t turn the key far enough this morning as he was leaving home. Jill’s door, however, must be locked for the sentence about her to be true: you can’t ordinarily say, ‘Jill knows that her door is locked, but her door isn’t locked.’ Knowledge links a subject to a truth. This feature of ‘knowing that’ is called factivity: we can know only facts, or true propositions. ‘To know that’ is not the only factive construction: others include ‘to realize that’, ‘to see that’, ‘to remember that’, ‘to prove that’. You can realize that your lottery ticket has won only if it really has won. One of the special features of ‘know’ is that it is the most general such verb, standing for the deeper state that remembering, realizing, and the rest all have in common. Seeing that the barn is on fire or proving that there is no greatest prime number are just two of the many ways of achieving knowledge.
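The factivity principle lends itself to a tiny formalization. Below is a minimal sketch in the Lean proof assistant (illustrative only: the 'Knows' predicate and its factivity axiom are assumptions standing in for a full theory of knowledge, not derived results):

```lean
-- Minimal sketch: model "knows" as a predicate on propositions, and
-- factivity as the assumption that knowledge entails truth.
axiom Knows : Prop → Prop
axiom factivity : ∀ p : Prop, Knows p → p

-- From "Jill knows that her door is locked" it follows that the door is locked.
example (doorIsLocked : Prop) (h : Knows doorIsLocked) : doorIsLocked :=
  factivity doorIsLocked h
```

On this schematic reading, ‘Jill knows that her door is locked, but her door isn’t locked’ asserts both Knows p and not-p, which the factivity assumption rules out; that is why the sentence cannot ordinarily be said.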

The dedicated link to truth is part of the essence of knowledge. By contrast, belief can easily link a subject to a false proposition. Knowledge has still further requirements, beyond truth and confidence. Someone who is very confident but for the wrong reasons would also fail to have knowledge. A father whose daughter is charged with a crime might feel utterly certain that she is innocent. But if his confidence has a basis in emotion rather than evidence (suppose he’s deliberately avoiding looking at any facts about the case), then even if he is right that his daughter is innocent, he may not really know that she is. But if a confidently held true belief is not enough for knowledge, what more needs to be added? This question turns out to be surprisingly difficult—indeed, too difficult to settle briefly.
Because truth is such an important feature in the essence of knowledge, something further should be said about it here. We’ll assume in what follows that truth is objective, or based in reality and the same for all of us. Most philosophers agree about the objectivity of truth, but there are some rebels who have thought otherwise. The Ancient Greek philosopher Protagoras (5th century BCE) held that knowledge is always of the true, but also that different things could be true for different people. Standing outdoors on a breezy summer day and feeling a bit sick, I could know that the wind is cold, while you know that it is warm. Protagoras didn’t just mean that I know that the wind feels cold to me, while you know that it feels warm to you—the notion that different people have different feelings is something that can be embraced by advocates of the mainstream view according to which truth is the same for everyone. (It could be a plain objective fact that the warm wind feels cold to a sick person.) Protagoras says something more radical: it is true for me that the wind really is cold and true for you that the wind is warm. In fact, Protagoras always understands truth as relative to a subject: some things are true-for-you; other things are true-for-your-best-friend or true-for-your-worst-enemy, but nothing is simply true.
Protagoras’s relativist theory of knowledge is intriguing, but hard to swallow, and perhaps even self-refuting. If things really are for each person as he sees them, then no one ever makes a mistake. It’s true for the hallucinating desert traveller that there really is an oasis ahead; it’s true for the person who makes an arithmetical error that seven and five add up to eleven. What if it later seems to you that you have made a mistake? If things always are as they appear, then it is true for you that you have made a mistake, even though appearances can never be misleading, so it should have been impossible for you to get things wrong in the first place. This is awkward. One Ancient Greek tactic for handling this problem involved a division of you-at-this-moment from you-a-moment-ago. Things are actually only true for you-right-now, and different things might be true-for-you-later.

Think of one of the most trivial and easily checked facts you know. For example, you know whether you are presently wearing shoes. Right? The sceptic would like you to reconsider. Couldn’t you be dreaming right now? If this is a dream, you could be lying barefoot in bed. Or you could be asleep on the commuter train, fully dressed.
Ancient Greece was in fact the birthplace of two distinct sceptical traditions, the Academic and the Pyrrhonian. Academic scepticism refers to the sceptical period of ancient Platonism dating from around 266 BCE, when Arcesilaus became scholarch of the Platonic Academy, until around 90 BCE, when Antiochus of Ascalon rejected scepticism, although individual philosophers, such as Favorinus and his teacher Plutarch, continued to defend it after this date. Unlike the existing school of sceptics, the Pyrrhonists, the Academics maintained that knowledge of things is impossible. Ideas or notions are never true; nevertheless, there are degrees of plausibility, and hence degrees of belief, which allow one to act. The school was characterized by its attacks on the Stoics, particularly their dogma that convincing impressions led to true knowledge. The most important Academics were Arcesilaus, Carneades, and Philo of Larissa. The most extensive ancient source of information about Academic scepticism is the Academica, written by the Academic sceptic philosopher Cicero.
The Pyrrhonian sceptical movement was named in honour of Pyrrho of Elis (c.360–270 BCE), who is known to us not through his own written texts—he left no surviving works—but through the reports of other philosophers and historians. As a young man, Pyrrho joined Alexander the Great’s expedition to India, where he is said to have enjoyed some exposure to Indian philosophy. On his return, Pyrrho started to attract followers, eventually becoming so popular that his home town honoured him with a statue and a proclamation that all philosophers could live there tax free. Pyrrho’s influence now reaches us mainly through the writings of his admirer Sextus Empiricus (c.160–210 CE), who drew sceptical ideas from a range of ancient sources to form the branch of scepticism now known as Pyrrhonism.
The old question of scepticism received some surprising new answers in the 20th century. A strangely simple approach was advanced by the English philosopher G. E. Moore in a public lecture in 1939. In answer to the question of how we could prove the reality of the external world, Moore simply held up his hands (saying, ‘Here is one hand, and here is another’), explained that they were external objects, and drew the logical conclusion that external objects actually exist. Moore considered this to be a fully satisfactory proof: from the premise that he had hands, and the further premise that his hands were external objects (or, as he elaborated, ‘things to be met with in space’), it clearly does follow that external things exist. The sceptic might, of course, complain that Moore did not really know that he had hands—but here Moore proposed shifting the burden of proof over to the sceptic. ‘How absurd it would be to suggest that I did not really know it, but only believed it, and that perhaps it was not the case!’ Moore insists that he knows that he has hands, but doesn’t even try to prove that he is right about this. After shrugging off the sceptic’s worries as absurd, Moore aims to explain why he won’t produce a proof that he has hands, and why we should still accept him as having knowledge on this point.
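Moore’s proof is short enough to be written out formally. Here is a hedged sketch in Lean, with each of Moore’s premises entered as an assumption (the names are illustrative, not Moore’s own):

```lean
-- Illustrative formalization of Moore's proof of an external world.
axiom Object : Type
axiom IsHand : Object → Prop
axiom IsExternal : Object → Prop  -- "a thing to be met with in space"

-- Premise 1: here is one hand.
axiom myHand : Object
axiom myHandIsAHand : IsHand myHand

-- Premise 2: hands are external things.
axiom handsAreExternal : ∀ o : Object, IsHand o → IsExternal o

-- Conclusion: external things exist.
theorem externalThingsExist : ∃ o : Object, IsExternal o :=
  ⟨myHand, handsAreExternal myHand myHandIsAHand⟩
```

The inference itself is plainly valid; as the text notes, the sceptic’s complaint is aimed at the premises, and at whether Moore really knows them.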
Even philosophers who are receptive to Moore’s suggestion that there is something wrong with the sceptic’s reasoning may feel unsatisfied with Moore’s plain and stubborn insistence on his common-sense knowledge. Some have tried to identify more precisely what mistake the sceptic is making, while also constructing a positive defence of our common-sense claims to knowledge. One major strategy was advanced by Bertrand Russell, a colleague of Moore’s at Cambridge. Russell grants one point to the sceptic right away: it is logically possible that all of our impressions (or ‘sense data’, to use Russell’s terminology) have their origin in something quite different from the real world we ordinarily take ourselves to inhabit. But in Russell’s approach to scepticism, now known as the ‘Inference to the Best Explanation’ approach, we can grant that point about logical possibility and still resist the sceptic. Russell argues that there is a large gap between admitting that something is logically possible and concluding that we can’t rationally rule it out: we have rational principles other than the rules of logic, narrowly conceived. In particular, Russell invokes the principle of simplicity: other things being equal, a simpler explanation is rationally preferred to a more complex one. It’s logically possible that all the sense data you ordinarily credit to your pet cat (meowing sounds, the sight and feel of fur, and so on) do not come from the source you expect. Perhaps these impressions issue from a succession of different creatures, or from a series of inexplicably consistent dreams or some other strange source. But the simplest hypothesis, according to Russell, is the one that you would most naturally believe: there is a single real animal whose periodic interactions with you cause the relevant cat-like impressions in the stream of your private experience. Just as it is rational for scientists to explain patterns in their data by appeal to simple laws, it is rational to explain patterns in our everyday experience by appeal to a simple world of lasting objects (the ‘real-world’ hypothesis).

In recent years, some philosophers have used tools from the philosophy of language in an attempt to attack scepticism more aggressively. The motivating thought behind this new approach (the ‘semantic approach’) is that we can find ammunition against the sceptic by looking closely at the way in which our words have meaning or link up to reality. In particular, these new arguments against scepticism have drawn on a movement in the philosophy of language known as Semantic Externalism, a movement that traces back to the work of Ruth Barcan Marcus, Saul Kripke, and Hilary Putnam in the 1960s and 1970s.
The key idea of Semantic Externalism is that words get their meanings not from the images or descriptions that individual speakers associate with those words in their minds (that would be ‘Semantic Internalism’), but from causal chains connecting us to things in the world around us.
For example, in Shakespeare’s time, water was thought to be an element; modern scientists now characterize it as the compound H2O. But even if the question ‘What is water?’ would be answered quite differently by Shakespeare, the modern scientist, and the average person on the street, we have all been interacting with the same substance. The Semantic Externalist contends that we can all refer to the same substance when we say ‘water’ exactly because our meaningful use of the word is anchored in our common causal contact with a particular substance. Because Shakespeare and the modern scientist have seen and tasted the same liquid, whatever they thought about its nature, we can now say that they mean the same thing when they use the word ‘water’. Sometimes the relevant causal chains must run through other speakers: no person alive today has met the late French emperor Napoleon, but we can still talk about him, as long as we pick up our use of the word ‘Napoleon’ through sources with the right kind of causal links back to the man himself. Semantic Externalism is especially useful in explaining how speakers with different (and conflicting) ideas about something can still talk about the same thing: when Jill says that Napoleon was very short, and Bill says that Napoleon was actually above average in height (knowing that the rumours of his small stature were started by his English enemies), they can be discussing the same person despite the differences in their mental images.
The best-known application of Semantic Externalism to scepticism is found in Hilary Putnam’s 1981 book Reason, Truth and History.

In many fields—literature, music, architecture—the label ‘Modern’ stretches back to the early 20th century. Philosophy is odd in starting its Modern period almost 400 years earlier. This oddity is explained in large measure by a radical 16th-century shift in our understanding of nature, a shift that also transformed our understanding of knowledge itself. On our Modern side of this line, thinkers as far back as Galileo Galilei (1564–1642) are engaged in research projects recognizably similar to our own. If we look back to the pre-Modern era, we see something alien: this era features very different ways of thinking about how nature worked, and how it could be known.
To sample the strange flavour of pre-Modern thinking, try the following passage from the Renaissance thinker Paracelsus (1493–1541), 'The whole world surrounds man as a circle surrounds one point. From this it follows that all things are related to this one point, no differently from an apple seed which is surrounded and preserved by the fruit … Everything that astronomical theory has profoundly fathomed by studying the planetary aspects and the stars … can also be applied to the firmament of the body.'
Thinkers in this tradition took the universe to revolve around humanity, and sought to gain knowledge of nature by finding parallels between us and the heavens, seeing reality as a symbolic work of art composed with us in mind.

By the 16th century, the idea that everything revolved around and reflected humanity was in danger, threatened by a number of unsettling discoveries, not least the proposal, advanced by Nicolaus Copernicus (1473–1543), that the earth was not actually at the centre of the universe. The old tradition struggled against the rise of the new. Faced with the news that Galileo’s telescopes had detected moons orbiting Jupiter, the traditionally minded scholar Francesco Sizzi argued that such observations were obviously mistaken. According to Sizzi, there could not possibly be more than seven ‘roving planets’ (or heavenly bodies other than the stars), given that there are seven holes in an animal’s head (two eyes, two ears, two nostrils and a mouth), seven metals, and seven days in a week.
Sizzi didn’t win that battle. It’s not just that we agree with Galileo that there are more than seven things moving around in the solar system. More fundamentally, we have a different way of thinking about nature and knowledge. We no longer expect there to be any special human significance to natural facts (‘Why seven planets as opposed to eight or 15?’) and we think knowledge will be gained by systematic and open-minded observations of nature rather than the sorts of analogies and patterns to which Sizzi appeals. However, the transition into the Modern era was not an easy one. The pattern-oriented ways of thinking characteristic of pre-Modern thought naturally appeal to meaning-hungry creatures like us. These ways of thinking are found in a great variety of cultures: in classical Chinese thought, for example, the five traditional elements (wood, water, fire, earth, and metal) are matched up with the five senses in a similar correspondence between the inner and the outer. As a further attraction, pre-Modern views often fit more smoothly with our everyday sense experience: naively, the earth looks to be stable and fixed while the sun moves across the sky, and it takes some serious discipline to convince oneself that the mathematically simpler models (like the sun-centred model of the solar system) are right.

Here’s something you probably took yourself to know: Mount Everest is the tallest mountain in the world. But if that fact about Everest didn’t come as news to you, here’s something you probably don’t know: how exactly you originally learned that fact (or any random trivia fact of that sort). According to psychologists who study memory, unless you had a pivotal life experience when you first heard that fact about Everest (like an earthquake hitting at the very moment it was mentioned in your first primary school geography lesson), you won’t remember which source you learned it from. In fact, if someone challenged your claim to know that Mount Everest is the tallest mountain in the world, you might not be able to say much to defend it. You could say that it feels to you like a familiar fact. The challenger could object that those feelings of familiarity can be deceptive. For many people around the world, the claim that Sydney is the capital of Australia feels like a well-known fact. Sometimes, feeling sure you are right accompanies actually being wrong.
Suppose that a person isn’t conscious of anything really justifying his claim that Mount Everest is the tallest mountain in the world. Could he still count as knowing that fact? Here philosophers split into two camps. The internalist camp says: If you really can’t think of any supporting evidence, you are in trouble. Your belief about Everest can’t count as knowledge if there is nothing accessible to you that supports it. It’s unlike your belief that you are now reading, which you yourself can justify by appeal to the experiences that you are now conscious of having; it’s also unlike your belief that there is no largest prime number, which you can justify by going through the steps of Euclid’s proof for yourself. Knowledge is grounded by your own experience and by your own capacity to reason. Internalists place a special emphasis on what you can do with resources that are available from the first-person perspective: if you can’t see for yourself why you should believe something, you don’t actually know it. The subject’s own awareness of good grounds is an essential part of what distinguishes knowing from lower states like guessing.
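The Euclid example deserves a brief aside, since it is one place where the internalist’s demand can actually be met: a subject can rehearse the proof first-personally. For readers who like to see such things checked mechanically, the result is already formalized in Lean’s Mathlib library (the import path below assumes a recent Mathlib version):

```lean
import Mathlib.Data.Nat.Prime.Basic  -- assumed Mathlib import path

-- Euclid's theorem: for every n there is a prime p with n ≤ p.
-- Hence there is no largest prime.
example (n : ℕ) : ∃ p, n ≤ p ∧ Nat.Prime p :=
  Nat.exists_infinite_primes n
```

The classical proof behind this lemma is the one the internalist has in mind: given any finite list of primes, multiply them together and add one; any prime factor of the result is a prime missing from the list.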
Meanwhile, according to the rival externalist camp, knowledge is a relationship between a person and a fact, and this relationship can be in place even when the person doesn’t meet the internalist’s demands for first-person access to supporting grounds. If it really is a fact that Mount Everest is the tallest mountain in the world, and if you really are related to that fact in the right way, then you know that Mount Everest is the tallest mountain in the world, even if you can’t explain your reasons for thinking this. Externalists are happy to grant that sometimes you not only know something but also have special first-person insight into exactly how you know it. But from an externalist perspective, that insight into how you know is an optional bonus, and not something that must generally accompany every single instance of knowledge. Externalists argue that always demanding insight into how we know risks setting off a vicious regress. On the internalist way of thinking, they note, you shouldn’t just have some random idea about how you know something, but should actually know how you know it (what is insight without knowledge?). But if knowledge always requires knowing how you know, then this second level of knowledge requires its own internalist guarantee (knowing how you know that you know), and so on. Suddenly you need infinite levels of insight to know the simplest fact. The internalist path threatens to lead us to scepticism, externalists suggest.
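The regress the externalists describe can be made precise. Here is a small illustrative sketch in Lean (the ‘kk’ principle is the internalist assumption under scrutiny, not a settled axiom): if knowing always required knowing that you know, one piece of knowledge would generate an infinite tower of levels.

```lean
-- Sketch of the regress: suppose knowledge always requires
-- knowledge that one knows (the "kk" assumption).
axiom K : Prop → Prop
axiom kk : ∀ p : Prop, K p → K (K p)

-- n-fold iteration of the knowledge operator: p, K p, K (K p), ...
def iterK : Nat → Prop → Prop
  | 0,     p => p
  | n + 1, p => K (iterK n p)

-- Knowing p then demands every level of the infinite tower.
theorem tower (p : Prop) (h : K p) : ∀ n, K (iterK n p) := by
  intro n
  induction n with
  | zero => exact h
  | succ n ih => exact kk _ ih
```

Whether this tower is genuinely vicious, or only an unproblematic infinity of consequences, is exactly what internalists and externalists dispute.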

In the realm of knowledge, many of our prized possessions come to us second-hand. We rely on others for our grasp of everything from the geography of distant places to mundane facts about the lives of our friends. If we couldn’t use others as sources, we would lose our grip on topics as diverse as ancient history (except what we could discover through our own personal archaeological expeditions) and celebrity weddings (unless we start getting invited). Testimony evidently expands our horizons: the challenge is in explaining exactly how (and how far). Does listening to other people—or reading what they have written—supply us with knowledge in a unique or distinctive way? Do we need special reasons to trust people in order to gain knowledge from them? What should we think about resources like Wikipedia, where most articles have multiple and anonymous authors?
At one extreme, some philosophers have argued that testimony never actually provides knowledge (John Locke will be our star example of this position). At the other end of the spectrum, some philosophers argue that testimony not only provides knowledge, but does so in a distinctive way. In this view, testimony is a special channel for receiving knowledge, a channel with the same basic status as sensory perception and reasoning (this type of position was embraced in classical Indian philosophy, and is now popular in Anglo-American theory as well).
When does testimony supply knowledge? Some philosophers say: ‘Never.’ To see why philosophers might be sceptical about testimonial knowledge, even if they aren’t sceptical about other kinds of knowledge, it first helps to clarify what we mean by ‘testimony’. In an act of testimony, someone tells you something—through speech, gestures, or writing—and the content of what they are telling you plays a special role in what you get out of the exchange. Even sceptics about testimonial knowledge can agree that ordinary perceptual knowledge can be generated by the event of hearing or reading what someone says. For example, imagine either seeing that someone has written ‘I have neat handwriting’ on a slip of paper, or hearing someone saying ‘I have a hoarse voice.’ If you can indeed see that the writing is neat or hear that the voice is hoarse, you come to know the truth of what is said or written. But your knowledge here is perceptual, rather than testimonial, because the content of what is written or said plays no special role in what you learn: the sentence ‘Smith got the job’ would work just as well to convey the beauty of the handwriting or the roughness of the voice. If you believe something on the basis of my testimony, you understand what I am saying, and take my word for it.
The main moderately positive position is reductionism: we do gain knowledge through testimony, but the knowledge-providing power of testimony is nothing special. Whether we are reading, listening, or watching someone’s gestures or sign language, we receive testimony through ordinary sense perception.

Some words are slippery. Every night, the word ‘tomorrow’ slides forward to pick out a different day of the week. ‘Here’ designates a different place depending on where you are standing. ‘I’ stands for someone different depending on who is speaking; and ‘this’ could be anything at all. Words like ‘big’ and ‘small’ are also tricky: a morbidly obese mouse is in some sense big, but in another sense still small. What about the verb ‘to know’? Is it possible that it also shifts around in some interesting way?
What the other words featured in the last paragraph have in common is context-sensitivity. It’s tempting to say that context-sensitive words keep changing their meaning, but that’s not exactly right. We don’t have to buy a new dictionary every day to keep up on what the word ‘tomorrow’ means. Rather than changing their meanings, context-sensitive words work like recipes that take input from a conversational context to settle what they stand for. Once the context is established, it should be clear exactly what ‘this’ indicates, or which day of the week is picked out by ‘yesterday’.
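The ‘recipe’ idea can be rendered as a toy model: treat a context-sensitive word not as a constant but as a function from the conversational context to a referent. A minimal sketch in Lean (all names here are hypothetical, for illustration only):

```lean
-- Toy model: a context supplies the inputs a context-sensitive word needs.
structure Context where
  speaker : String
  day     : Nat  -- days counted from some fixed origin

-- The recipe for 'I' selects the speaker of the context ...
def referentOfI (c : Context) : String := c.speaker

-- ... and the recipe for 'tomorrow' selects the day after the context's day.
def referentOfTomorrow (c : Context) : Nat := c.day + 1

-- The recipes themselves never change; only their outputs shift with context.
#eval referentOfTomorrow { speaker := "Jill", day := 100 }  -- 101
```

The dictionary entry corresponds to the fixed recipe; what varies from conversation to conversation is only the recipe’s output.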
‘Contextualism’ is the standard name for the view that words like ‘know’ and ‘realize’ are context-sensitive. Contextualism grew out of a theory of knowledge launched in the early 1970s, the ‘Relevant Alternatives’ theory of knowledge. Advocates of that theory say that knowing always involves grasping some kind of contrast.

How much do we know about knowledge before we start to study it systematically? We don’t seem to start completely empty-handed. Philosophers have a special reason to hope that we have something at the outset, some talent for spotting genuine cases of knowledge: instincts or intuitions about particular cases are supposed to support some philosophical theories over others. If you felt that the person looking at the broken clock (a stopped clock that happens to be showing the right time) didn’t know the time, that feeling was taken to work as a reason for you to reject the classical analysis of knowledge. But what enabled you to judge that case the way you did? And can you tell whether your way of judging—whether or not others share it—is really getting it right? These questions have recently sparked fresh empirical and philosophical work on our intuitions about knowledge.
The word ‘intuition’ may suggest some mystical power of insight, but intuitions about knowledge are a feature of everyday life. Focus for a moment on the difference between these two claims: (1) ‘Lee thinks that he is being followed.’ (2) ‘Lee knows that he is being followed.’ There’s a significant difference here, but the choice to say one thing rather than the other doesn’t usually involve deliberate calculation. You can feel that someone knows something (or doesn’t know it) without calling to mind any explicit theory of knowledge: this kind of feeling is an intuition. There’s a name for the natural capacity that generates instinctive feelings about knowledge and other mental states: mindreading. The word is popularly used for a stage performer’s trick, but here it names an everyday ability: as we attribute mental states to others and register how well they grasp their environment, we become better able to predict how they will act and interact with us. Without a capacity for mindreading, we’d be stuck looking at surface patterns of moving limbs and facial features; mindreading gives us access to deeper states within a person. Whether we are trying to coordinate or compete with others, it helps enormously to know what they want and know, and whether they are friendly, angry, or impatient. We don’t always get it right: it’s possible to mistake what someone knows or wants, or to be misled by a skilled deceiver, but our daily social navigation is so effective that it comes as a surprise when we occasionally misread a situation.

The mindreading abilities of human beings are better than those of any other species on earth. It’s not surprising that mindreading comes to have its own specialized area in the adult brain: although we can do it effortlessly, mindreading involves some complex calculations. In this respect it has something in common with face recognition, which also involves very rapid and effortless calculations, and is also highly specialized within one specific area of the adult brain. What is the relationship between what someone wants, notices, pretends, and plans to do when he is seen? Navigating around these patterns is a non-trivial task. If we are able to decide, in the course of a casual conversation, whether to describe someone as just thinking or really knowing something, we can do this without conscious calculation in part because we have specialized brain resources devoted to the task of tracking mental states.
There are natural limitations to our mindreading equipment. One limit is a simple capacity limit on how many nested levels of mental state we can represent. Here’s a deeper limitation that is specific to mindreading: we have a tendency to be self-centred. More precisely, we suffer from a bias called ‘egocentrism’, which makes it difficult for us to override our own perspective when we are evaluating others who know less about their situation than we do.
It’s not entirely clear why we have so much trouble subtracting from our own special knowledge when trying to represent or evaluate other perspectives, but we do know that the egocentric bias is very robust: some biases can be suppressed if you forewarn people about them, or if you give cash incentives for better performance, but egocentrism sticks with us even under those conditions. Epistemologists have wondered whether this limitation on our natural capacity to see other perspectives could be playing some role in the pattern of intuitions motivating contextualism. Once I am thinking about tricky alternatives like disguised donkeys, I will have trouble evaluating the perspective of a naive zoo visitor: even if I’m explicitly aware that she isn’t thinking about those strange possibilities, egocentrism can drive me to evaluate her as though she is. Whether or not this works as a strategy for explaining the intuitions behind contextualism, epistemologists can profit from a better understanding of the natural mechanisms behind our intuitions about the presence and absence of knowledge. If some intuitions can be shown to arise from natural limitations or biases in our mindreading capacity, we can handle them with special caution as we construct our theories of knowledge.

Some philosophers have argued that our situation here is hopeless. You can check whether your wristwatch is running fast or slow by comparing it to the National Research Council atomic clock, but there’s no obvious counterpart for checking the accuracy of your intuitions about knowledge. If you’ve had trouble coming up with a smooth theory of knowledge that explains all of your instinctive feelings about particular cases, you may suspect that some of these feelings are illusions. But which ones? Philosopher Robert Cummins contends that we could only sort out which intuitions about knowledge were the right ones if we had some independent intuition-free access to the nature of knowledge itself. But if we had that kind of direct access we wouldn’t have to fumble with intuitions about particular cases to grasp the nature of knowledge: ‘If you know enough to start fixing problems with philosophical intuition, you already know enough to get along without it.’ Cummins concludes that philosophers should never rely on intuitions about cases in formulating theories of knowledge.
That’s a very pessimistic response. The parallel move in the perceptual domain would be to say that we can’t start fixing problems with our visual impressions of colour, and figuring out which ones are illusory, until we have an independent, vision-free way of accessing colour. Although we are starting to develop technologies that can sort out colour signals without reliance on the human eye—photometers can measure the colours on that diagram of the Hermann Grid and won’t get confused about the corner spots—we didn’t actually wait until we had that equipment to start sorting out the illusions from the accurate impressions. We have used a variety of techniques over time to figure out which impressions are right, including double-checking impressions in different contexts or from different angles. Our understanding of vision has evolved alongside our understanding of the nature of light and colour.
Meanwhile, even without the epistemological equivalent of a photometer to detect states of knowledge, our investigation of intuitions about knowledge can also evolve alongside our investigation of knowledge itself. As we move towards a better understanding of our knowledge-spotting instincts, fitting them into a broader picture of our psychology, we become better able to sort out which intuitions should count. In turn, a sharper philosophical picture of knowledge itself will help advance our understanding of the nature of those instincts. Tools other than intuitions can also be used to tackle the problem of knowledge. We can work to develop internally consistent theories of knowledge that fit with our broader theories of human language, logic, science, and learning. We can refine our rough intuitive sense of the conditions that make knowledge possible by building mathematical models of individual and group knowledge. We can compare the strengths of existing philosophical theories generated in a range of historical periods and across different cultures. Some philosophical claims about knowledge have turned out to be confused or self-undermining, but other findings about knowledge, like its special connection with truth, have stood the test of time. If we do not know in advance which of our methods will be best suited to deliver further insight into the nature of knowledge, this is in part because we still do not fully understand what knowledge is.'"

It was time to leave, and before she departed, Swara said, "So, how you think matters a lot, because it affects all the work you do throughout your day. It is time to focus on the way you are thinking. Changing the way you think is not only about being more optimistic but about giving your mind the breathing room it needs to grow and expand."
Swara's echoes began to weaken, and she slowly disappeared, humming,

Without going out of your door
You can know all things on earth
Without looking out of your window
You can know the ways of heaven
The farther one travels
The less one knows
The less one really knows

Arrive without traveling
See all without looking
Do all without doing *)

"And Allah knows best."
Citations & References:
- Jennifer Nagel, Knowledge: A Very Short Introduction, Oxford University Press
- Joseph H. Shieber, PhD, Theories of Knowledge: How to Think about What You Know, The Great Courses
*) "The Inner Light" written by George Harrison. Actually it was taken from Laozi's Dao De Jing stanza 47, that the mind is the primary resident of its abode; mental clarity is essential for the proper control and utilization of the senses. When desires are suppressed, the mind controls its officers, the senses.