June 24, 2010


Whatever view one takes about these matters (with the possible exception of extreme externalism), indirect perception obviously requires some understanding (knowledge? justification? belief?) of the general relationship between the fact one comes to know (that ‘a’ is ‘F’) and the facts (that ‘b’ is ‘G’) that enable one to know it. And it is this requirement on background knowledge or understanding that leads to questions about the possibility of indirect perceptual knowledge. Is it really knowledge? The first question is inspired by sceptical doubts about whether we can ever know the connecting facts in question. How is it possible to learn, to acquire knowledge of, the connecting facts knowledge of which is necessary to see, by b’s being ‘G’, that ‘a’ is ‘F’? These connecting facts do not appear to be perceptually knowable. Quite the contrary, they appear to be general truths knowable (if knowable at all) by inductive inference from past observations. And if one is sceptical about obtaining knowledge in this indirect, inductive way, one is, perforce, sceptical about the existence of the kind of indirect knowledge, including indirect perceptual knowledge of the sort described, that depends on it.


Even if one puts aside such sceptical questions, however, there remains a legitimate concern about the perceptual character of this kind of knowledge. If one sees that ‘a’ is ‘F’ by seeing that ‘b’ is ‘G’, is one really seeing that ‘a’ is ‘F’? Isn’t perception merely a part ~ and, from an epistemological standpoint, the less significant part ~ of the process whereby one comes to know that ‘a’ is ‘F’? One must, it is true, see that ‘b’ is ‘G’, but this is only one of the premises needed to reach the conclusion (knowledge) that ‘a’ is ‘F’. There is also the background knowledge that is essential to the process. If we think of a theory as any factual proposition, or set of factual propositions, that cannot itself be known in some direct observational way, we can express this worry by saying that indirect perception is always theory-loaded: seeing (indirectly) that ‘a’ is ‘F’ is only possible if the observer already has knowledge of (justification for, belief in) some theory, the theory ‘connecting’ the fact one comes to know (that ‘a’ is ‘F’) with the fact (that ‘b’ is ‘G’) that enables one to know it.

This, of course, reverses the standard foundationalist picture of human knowledge. Instead of theoretical knowledge depending on, and being derived from, perception, perception (of the indirect sort) presupposes prior knowledge.

Foundationalists are quick to point out that this apparent reversal in the structure of human knowledge is only apparent. Our indirect perception of facts depends on theory, yes, but this merely shows that indirect perceptual knowledge is not part of the foundation. To reach the kind of perceptual knowledge that lies at the foundation, we need to look at a form of perception that is purified of all theoretical elements. This, then, will be perceptual knowledge pure and direct. No background knowledge or assumptions about connecting regularities are needed in direct perception because the known facts are presented directly and immediately and not (as in indirect perception) on the basis of some other facts. In direct perception all the justification (needed for knowledge) is right there in the experience itself.

What, then, about the possibility of perceptual knowledge pure and direct, the possibility of coming to know, on the basis of sensory experience, that ‘a’ is ‘F’, where this does not require assumptions or knowledge that has a source outside the experience itself? Where is this epistemological ‘pure gold’ to be found?

There are, basically, two views about the nature of direct perceptual knowledge (coherentists would deny that any of our knowledge is basic in this sense). These views (following traditional nomenclature) can be called ‘direct realism’ and ‘representationalism’ or ‘representative realism’. A representationalist restricts direct perceptual knowledge to objects of a very special sort: ideas, impressions, or sensations, sometimes called sense-data ~ entities in the mind of the observer. One directly perceives a fact, e.g., that ‘b’ is ‘G’, only when ‘b’ is a mental entity of some sort ~ a subjective appearance or sense-datum ~ and ‘G’ is a property of this datum. Knowledge of these sensory states is supposed to be certain and infallible. These sensory facts are, so to speak, right up against the mind’s eye. One cannot be mistaken about these facts, for these facts are, in reality, facts about the way things appear to be, and one cannot be mistaken about the way things appear to be. Normal perception of external conditions, then, turns out to be (always) a type of indirect perception. One ‘sees’ that there is a tomato in front of one by seeing that the appearances (of the tomato) have a certain quality (reddish and bulgy) and inferring (this is typically said to be automatic and unconscious), on the basis of certain background assumptions ~ e.g., that there typically is a tomato in front of one when one has experiences of this sort ~ that there is a tomato in front of one. All knowledge of objective reality, then, even what common sense regards as the most direct perceptual knowledge, is based on an even more direct knowledge of the appearances.

For the representationalist, then, perceptual knowledge of our physical surroundings is always theory-loaded and indirect. Such perception is ‘loaded’ with the theory that there is some regular, uniform correlation between the way things appear (known in the perceptually direct way) and the way things actually are (known, if known at all, in the perceptually indirect way).

The second view, direct realism, refuses to restrict direct perceptual knowledge to an inner world of subjective experience. Though the direct realist is willing to concede that much of our knowledge of the physical world is indirect, however direct and immediate it may sometimes feel, some perceptual knowledge of physical reality is direct. What makes it direct is that such knowledge is not based on, nor in any way dependent on, other knowledge and belief. The justification needed for the knowledge is right there in the experience itself.

To understand the way this is supposed to work, consider an ordinary example. ‘S’ identifies a banana (learns that it is a banana) by noting its shape and colour ~ perhaps even tasting and smelling it (to make sure it’s not wax). In this case the perceptual knowledge that it is a banana is (the direct realist admits) indirect, dependent on S’s perceptual knowledge of its shape, colour, smell, and taste. ‘S’ learns that it is a banana by seeing that it is yellow, banana-shaped, etc. Nonetheless, S’s perception of the banana’s colour and shape is not indirect. ‘S’ does not see that the object is yellow, for example, by seeing, knowing, or believing anything more basic ~ either about the banana or anything else, e.g., his own sensations of the banana. ‘S’ has learned to identify such features, of course, but what ‘S’ learned to do is not an inference, even an unconscious inference, from other things he believes. What ‘S’ acquired was a cognitive skill, a disposition to believe of yellow objects he saw that they were yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S’s identificatory successes will depend on his operating in certain special conditions, of course. ‘S’ will not, perhaps, be able to visually identify yellow objects in drastically reduced lighting, at funny viewing angles, or when afflicted with certain nervous disorders. But these facts about when ‘S’ can see that something is yellow do not show that his perceptual knowledge (that ‘a’ is yellow) in any way depends on a belief (let alone knowledge) that he is in such special conditions. It merely shows that direct perceptual knowledge is the result of exercising a skill, an identificatory skill, that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane. He needs normal conditions to do what he has learned to do.
So also with individuals who have developed perceptual (cognitive) skills. They need normal conditions to do what they have learned to do ~ to see, for example, that something is yellow. But they do not, any more than the basketball player, have to know they are in these conditions to do what being in these conditions enables them to do.

This means, of course, that for the direct realist, direct perceptual knowledge is fallible and corrigible. Whether ‘S’ sees that ‘a’ is ‘F’ depends on his being caused to believe that ‘a’ is ‘F’ in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then ‘S’ sees (hence, knows) that ‘a’ is ‘F’. If they aren’t, he doesn’t. Whether or not ‘S’ knows depends, then, not on what else, if anything, ‘S’ believes, but on the circumstances in which ‘S’ comes to believe. This being so, this type of direct realism is a form of externalism. Direct perception of objective facts, pure perceptual knowledge of external events, is made possible because what is needed, by way of justification, for such knowledge has been reduced. Background knowledge ~ and, in particular, the knowledge that the experience suffices for knowing ~ is not needed.

Realism in any area of thought is the doctrine that certain entities allegedly associated with that area are indeed real. Common-sense realism ~ sometimes called ‘realism’, without qualification ~ says that ordinary things like chairs and trees and people are real. Scientific realism says that theoretical posits like electrons and fields of force and quarks are equally real. And psychological realism says mental states like pains and beliefs are real. Realism can be upheld ~ and opposed ~ in all such areas, as it can with differently or more finely drawn provinces of discourse: for example, with discourse about colours, about the past, about possibility and necessity, or about matters of moral right and wrong. The realist in any such area insists on the reality of the entities in question in the discourse.

If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition, for they are legion. Some opponents deny that there are any distinctive posits associated with the area of discourse under dispute: a good example is the emotivist doctrine that moral discourse does not posit values but serves only, like applause and exclamation, to express feelings. Other opponents deny that the entities posited by the relevant discourse exist, or, at least, exist independently of our thinking about them: here the standard example is idealism. And others again insist that the entities associated with the discourse in question are tailored to our human capacities and interests and, to that extent, are as much a product of invention as a matter of discovery.

Nevertheless, one use of terms such as ‘looks’, ‘seems’, and ‘feels’ is to express opinion. ‘It looks as if the Labour Party will win the next election’ expresses an opinion about the party’s chances and does not describe a particular kind of perceptual experience. We can, however, use such terms to describe perceptual experience divorced from any opinion to which the experience may incline us. A straight stick half in water looks bent, and does so to people completely familiar with this illusion who have, therefore, no inclination to hold that the stick is in fact bent. Such uses of ‘looks’, ‘seems’, ‘tastes’, etc. are commonly called ‘phenomenological’.

The act/object theory holds that the sensory experiences recorded by sentences employing the phenomenological senses of these terms are a matter of being directly acquainted with something which actually bears the apparent property: when something looks red to me, I am acquainted with a red expanse (in my visual field); when something tastes bitter to me, I am directly acquainted with a sensation with the property of being bitter, and so on. (If you do not understand the term ‘directly acquainted’, stick a pin into your finger. The relation you will then bear to your pain, as opposed to the relation of concern you might bear to another’s pain when told about it, is an instance of direct acquaintance in the intended sense.)

The act/object account of sensory experience combines with various considerations traditionally grouped under the head of the argument from illusion to provide arguments for representative realism ~ or, more precisely, for the clause in it that contends that our sensorily derived information about the world comes indirectly, that what we are most directly acquainted with is not an aspect of the world but an aspect of our mental sensory responses to it. Consider, for instance, the aforementioned refractive illusion, that of a straight stick in water looking bent. The act/object account holds that in this case we are directly acquainted with a bent shape. This shape, so the argument runs, cannot be the stick, since the stick is straight, and thus must be a mental item, commonly called a sense-datum. And, in general, sense-data ~ visual, tactual, etc. ~ are held to be the objects of direct acquaintance. Perhaps the most striking use of the act/object analysis to bolster representative realism turns on what modern science tells us about the fundamental nature of the physical world. Modern science tells us that the objects of the physical world around us are literally made up of enormously many, widely separated, tiny particles whose nature can be given in terms of a small number of properties like mass, charge, spin and so on. (These properties are commonly called the primary qualities. The distinction between primary and secondary qualities is a metaphysical distinction between qualities which really belong to objects in the world and qualities which only appear to belong to them, or which human beings only believe to belong to them, because of the effects those objects produce in human beings, typically through the sense organs ~ qualities, that is to say, that do not hold in nature itself, but are produced in or contributed by human beings in their interaction with a world which really contains only atoms of certain kinds in a void.)
To think that some objects in the world are coloured, or sweet or bitter, is, on this view, to attribute to objects qualities which they do not actually possess. Some of the qualities we impute to objects ~ colour, sweetness, bitterness ~ are simply not possessed by them. But, of course, that is not how the objects look to us, not how they present themselves to our senses. They look continuous and coloured. What, then, can these coloured expanses with which we are directly acquainted be, other than mental sense-data?

Two objections dominate the literature on representative realism. One goes back to Berkeley (1685-1753): that representative realism leads straight to scepticism about the external world. The other is that the act/object account of sensory awareness is to be rejected in favour of an adverbial account.

Traditional representative realism is a ‘veil of perception’ doctrine, in Bennett’s (1971) phrase. Locke’s (1632-1704) idea was that the physical world was revealed by science to be in essence colourless, odourless, tasteless and silent, and that we perceive it by, to put it metaphorically, throwing a veil over it by means of our senses. It is the veil we see, in the strictest sense of ‘see’. This does not mean that we do not really see the objects around us. It means that we see an object in virtue of seeing the veil ~ the sense-data ~ causally related in the right way to that object. An obvious question to ask, therefore, is what justifies us in believing that there is anything behind the veil; and if we are somehow justified in believing that there is something behind the veil, how can we be confident of what it is like?

One intuition that lies at the heart of the realist’s account of objectivity is that, in the last analysis, the objectivity of a belief is to be explained by appeal to the independent existence of the entities it concerns: epistemological objectivity, that is, is to be analysed in terms of ontological notions of objectivity. A judgement or belief is objective, in the epistemological sense, if and only if it stands in some specified relation to an independently existing, determinate reality. Frege (1848-1925), for example, believed that arithmetic could comprise objective knowledge only if the numbers it refers to, the propositions it consists of, the functions it employs, and the truth-values it aims at, are all mind-independent entities. And conversely, within a realist framework, to show that the members of a given class of judgements are merely subjective, it is sufficient to show that there exists no independent reality that those judgements characterize or refer to.

Thus, it is often argued that if values are not part of the fabric of the world, then moral subjectivism is inescapable. For the realist, then, the epistemological notion of objectivity is to be elucidated by appeal to the existence of determinate facts, objects, properties, events and the like, which exist or obtain independently of any cognitive access we may have to them. And one of the strongest impulses towards Platonic realism ~ the theoretical commitment to the existence of abstract objects like sets, numbers, and propositions ~ stems from the widespread belief that only if such things exist in their own right can we allow that logic, arithmetic and science are indeed objective. Though Platonist realism in a sense accounts for mathematical knowledge, it postulates such a gulf between both the ontology and the epistemology of science and that of mathematics that realism is often said to make the applicability of mathematics in natural science into an inexplicable mystery.

This picture is rejected by anti-realists. The possibility that our beliefs and theories are objectively true is not, according to them, capable of being rendered intelligible by invoking the nature and existence of reality as it is in and of itself. If our conception of epistemological objectivity is minimal, requiring only ‘presumptive universality’, then alternative, non-realist analyses of it can seem possible ~ and even attractive. Such analyses have construed the objectivity of an arbitrary judgement as a function of its coherence with other judgements, of its possession of grounds that warrant it, of its conformity to the a priori rules that constitute understanding, of its verifiability (or falsifiability), or of its permanent presence in the mind of God. One intuition common to a variety of different anti-realist theories is that for our assertions to be objective, for our beliefs to comprise genuine knowledge, those assertions and beliefs must be, among other things, rational, justifiable, coherent, communicable and intelligible. But it is hard, the anti-realist claims, to see how such properties as these can be explained by appeal to entities as they are in and of themselves. On the contrary, according to most forms of anti-realism, it is only on the basis of ontologically subjective notions like ‘the way reality seems to us’, ‘the evidence that is available to us’, ‘the criteria we apply’, ‘the experience we undergo’ or ‘the concepts we have acquired’ that the epistemological objectivity of our beliefs can possibly be explained.

Internalists hold that the reasons by which a belief is justified must be accessible in principle to the subject holding that belief. Externalists deny this requirement, proposing that it makes knowing too difficult to achieve in most normal contexts. The internalist-externalist debate is sometimes also viewed as a debate between those who think that knowledge can be naturalized (externalists) and those who do not (internalists). Naturalists hold that the evaluative notions used in epistemology can be explained in terms of non-evaluative concepts ~ for example, that justification can be explained in terms of something like reliability. They deny a special normative realm of language that is theoretically different from the kinds of concepts used in factual scientific discourse. Non-naturalists deny this and hold to an essential difference between the normative and the factual: the former can never be derived from or constituted by the latter. So internalists tend to think of reason and rationality as non-explicable in natural, descriptive terms, whereas externalists think such an explanation is possible.

The sceptic uses an argumentative strategy to show that we do not genuinely have knowledge and that we should therefore suspend judgement. But, unlike the sceptic, many other philosophers maintain that more than one of the alternatives is acceptable and can constitute genuine knowledge. Many philosophers have invoked hypothetical sceptics in their work to explore the nature of knowledge. These philosophers did not doubt that we have knowledge, but thought that by testing knowledge as severely as one can, one gets clearer about what counts as knowledge and greater insight results. Hence there are underlying differences in what counts as knowledge for the sceptic and for these other philosophers. Traditional epistemology has been occupied with this kind of debate, which led to a dogmatism. Various types of beliefs were proposed as candidates for sceptic-proof knowledge ~ for example, those beliefs regarded by many as immune to doubt. What they all had in common was the assumption that empirical knowledge begins with the data of the senses, that this is safe from scepticism, and that a further superstructure of knowledge is to be built on this firm basis.

It might well be observed that this reply to scepticism fares better as a justification for believing in the existence of external objects than as a justification of the views we have about their nature. It is incredible that nothing independent of us is responsible for the manifest patterns displayed by our sense-data, but granting this leaves open many possibilities about the nature of the hypothesized external reality. Direct realists often make much of the apparent advantage that their view has on the question of the nature of the external world. The fact of the matter is, though, that it is much harder to arrive at tenable views about the nature of external reality than it is to defend the view that there is an external reality of some kind or other. The history of human thought about the nature of the external world is littered with what are now seen (with the benefit of hindsight) to be egregious errors ~ the four-element theory, phlogiston, the crystal spheres, vitalism, and so on. It can hardly be an objection to a theory that it makes the question of the nature of external reality much harder than the question of its existence.

The way we talk about sensory experience certainly suggests an act/object view. When something looks thus and so in the phenomenological sense, we naturally describe the nature of our sensory experience by saying that we are acquainted with a thus and so ‘given’. But suppose that this is a misleading grammatical appearance, engendered by the linguistic propriety of forming complete, putatively referring expressions like ‘the bent shape in my visual field’, and that there is no more a bent shape in existence for the representative realist to contend to be a mental sense-datum than there is a bad limp in existence when someone has, as we say, a bad limp. When someone has a bad limp, they limp badly; similarly, according to the adverbial theorist, when, as we naturally put it, I am aware of a bent shape, we would better express the way things are by saying that I sense bent-shape-ly. What the act/object theorist analyses as a feature of the object of experience, the adverbial theorist analyses as a mode of sensing which gives the nature of the sensory experience. (The decision between the act/object and adverbial theories is a hard one.)

In its best-known form, the adverbial theory of experience proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example,

(1) Rod is experiencing a pink square

is rewritten as:

(2) Rod is experiencing (pink square)-ly

This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (1) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbialization of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception), coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience which is being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier ~ and, in particular, as a modifier of a verb. If this is so, it is perhaps appropriate to regard it as a special kind of adverb at the semantic level.

According to the act/object analysis of experience, then, every experience with content involves an object of experience to which the subject is related by an act of awareness in the event of experiencing that object. The objects of experience are supposed to be whatever it is that the experiences represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties and, more commonly, as private mental objects with sensory qualities, objects which may not exist or have any form of being. Finally, in the terms of representative realism, the objects of perception of which we are ‘directly aware’ are objects of experience.

As noted above, representative realism is traditionally allied with the act/object theory. But the debate between representative realism and direct realism can also be approached in terms of information processing; Mackie (1976) argues that Locke (1632-1704) can be read as approaching it in this way. Consider watching a game on television. My senses, in particular my eyes and ears, ‘tell’ me that Carlton is winning. What makes this possible is the existence of a long and complex causal chain of electromagnetic radiation, running from the game through the television cameras, various cables, and the television screen to my eyes. Each stage of this process carries information about preceding stages, in the sense that the way things are at a given stage depends on the way things are at preceding stages. Otherwise the information would not be transferred from the game to my brain. There needs to be a systematic covariance between the state of my brain and the state of the game, and this cannot obtain unless it obtains between intermediate members of the long causal chain. For instance, if the state of my retina did not systematically covary with the state of the television screen before me, my optic nerve would have, so to speak, nothing to go on to tell my brain about the screen, and my brain would in turn have nothing to go on to tell me about the game. There is no ‘information at a distance’.

Of the many stages in this transmission of information between game and brain, only a few are ones I am perceptually aware of. Much of what happens between game and brain I am quite ignorant about; some of what happens I know about from books; but some of what happens I am perceptually aware of: the images on the screen. I am also perceptually aware of the game ~ otherwise I could not be said to watch the game on television. Now my perceptual awareness of the game depends on my perceptual awareness of the screen. The former goes by means of the latter. In saying this I am not saying that I go through some sort of internal monologue like ‘Such and such images on the screen are moving thus and thus; therefore, Carlton is attacking the goal’. Indeed, if you suddenly covered the screen with a cloth and asked me (1) to report on the images, and (2) to report on the game, I might well find it easier to report on the game than on the images. But that does not mean that my awareness of the game does not go by way of my awareness of the images on the screen. It shows that I am more interested in the game than in the screen, and so am storing beliefs about it in preference to beliefs about the screen.

We can now see how to elucidate representative realism independently of the debate between act/object and adverbial theorists about sensory experience. Our initial statement of representative realism talked of the information acquired in perceiving an object being most immediately about the perceptual experience caused in us by the object, and only derivatively about the object itself. On the act/object, sense-data approach, what is held to make this true is the fact that what we are immediately aware of is a mental sense-datum. But instead, representative realists can put their view this way: just as awareness of the game goes by means of awareness of the screen, so awareness of the screen goes by way of awareness of experience; and in general, when subjects perceive objects, their perceptual awareness always goes by way of awareness of experience.

Why believe such a view? Because of the point we referred to earlier: the picture of the world provided by our senses is so very different from any picture provided by modern science. So different, in fact, that it is hard to grasp what might be meant by insisting that we are in epistemologically direct contact with the world.

An argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument, which must be distinguished carefully. Some differ in their premisses (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). It is important to distinguish the different versions of direct realism that might be taken to be vulnerable to familiar facts about the possibility of perceptual illusion.

A crude statement of direct realism would be that we sometimes directly perceive physical objects and their properties: we do not always perceive physical objects by perceiving something else, e.g., a sense-datum. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually perceiving something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-data theorists should want. At least, many of the philosophers who objected to direct realism would prefer to express what they were objecting to in terms of a technical and philosophically controversial concept such as acquaintance. Using such a notion, we could define direct realism this way: in veridical experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.

We know things by experiencing them, and knowledge of acquaintance (Russell changed the preposition to 'by') is epistemically prior to and has a relatively higher degree of epistemic justification than knowledge about things. Indeed, sensation has 'the one great value of trueness or freedom from mistake'.

A thought (using that term broadly, to mean any mental state) constituting knowledge of acquaintance with a thing is more or less causally proximate to sensations caused by that thing, while a thought constituting knowledge about the thing is more or less causally distant, being separated from the thing and experience of it by processes of attention and inference. At the limit, if a thought is maximally of the acquaintance type, it is the first mental state occurring in the perceptual causal chain originating in the object to which the thought refers, i.e., it is a sensation. The things we have knowledge of acquaintance with include ordinary objects in the external world, such as the Sun.

Grote contrasted the imagistic thoughts involved in knowledge of acquaintance with things with the judgements involved in knowledge about things, suggesting that the latter but not the former are contentful mental states. Elsewhere, however, he suggested that every thought capable of constituting knowledge of or about a thing involves a form, idea, or what we might call conceptual propositional content, referring the thought to its object. Whether contentful or not, thoughts constituting knowledge of acquaintance with a thing are relatively indistinct, although this indistinctness does not imply incommunicability. By contrast, thoughts constituting knowledge about a thing are relatively distinct, as a result of 'the application of notice or attention' to the 'confusion or chaos' of sensation. Grote did not have an explicit theory of reference, the relation by which a thought is of or about a specific thing. Nor did he explain how thoughts can be more or less indistinct.

Helmholtz (1821-94) held unequivocally that all thoughts capable of constituting knowledge, whether 'knowledge which has to do with notions' or 'mere familiarity with phenomena', are judgements or, we may say, have conceptual propositional contents. Where Grote saw a difference between distinct and indistinct thoughts, Helmholtz found a difference between precise judgements which are expressible in words and equally precise judgements which, in principle, are not expressible in words, and so are not communicable.

James (1842-1910), however, made a genuine advance over Grote and Helmholtz by analysing the reference relation holding between a thought and the specific thing of or about which it is knowledge. In fact, he gave two different analyses. On both analyses, a thought constituting knowledge about a thing refers to and is knowledge about 'a reality, whenever it actually or potentially terminates in' a thought constituting knowledge of acquaintance with that thing. The two analyses differ in their treatments of knowledge of acquaintance. On James's first analysis, reference in both sorts of knowledge is mediated by causal chains. A thought constituting pure knowledge of acquaintance with a thing refers to and is knowledge of 'whatever reality it directly or indirectly operates on and resembles'. The concepts of a thought 'operating on' a thing or 'terminating in' another thought are causal, but where Grote found chains of efficient causation connecting thought and referent, James found teleology and final causes. On James's later analysis, the reference involved in knowledge of acquaintance with a thing is direct: a thought constituting knowledge of acquaintance with a thing has that thing as a constituent, and the thing and the experience of it are identical.

James further agreed with Grote that pure knowledge of acquaintance with things, e.g., sensory experience, is epistemically prior to knowledge about things. While all thoughts about things are fallible, and their justification is augmented by their mutual coherence, James was unclear about the precise epistemic status of knowledge of acquaintance. At times, thoughts constituting pure knowledge of acquaintance are said to possess 'absolute veritableness' and 'the maximal conceivable truth', suggesting that such thoughts are genuinely cognitive and that they provide an infallible epistemic foundation. At other times, such thoughts are said not to bear truth-values, suggesting that 'knowledge' of acquaintance is not genuine knowledge at all, but only a non-cognitive necessary condition of genuine knowledge, that is to say, of knowledge about things.

What is more, Russell (1872-1970) agreed with James that knowledge of things by acquaintance 'is essentially simpler than any knowledge of truths, and logically independent of knowledge of truths', and that the mental states involved when one is acquainted with things do not have propositional contents. Russell's reasons seem to have been similar to James's: conceptually unmediated reference to particulars is necessary for understanding any proposition mentioning a particular and, if scepticism about the external world is to be avoided, some particulars must be directly perceived. Russell vacillated about whether or not the absence of propositional content renders knowledge by acquaintance incommunicable.

Russell agreed with James that different accounts should be given of reference as it occurs in knowledge by acquaintance and in knowledge about things, and that in the former case reference is direct. But Russell objected on a number of grounds to James's causal account of the indirect reference involved in knowledge about things. Instead, Russell gave a descriptional rather than a causal analysis of that sort of reference: a thought is about a thing when the content of the thought involves a definite description uniquely satisfied by the thing referred to. Accordingly, he preferred to speak of knowledge of things by description rather than of knowledge about things.

Russell advanced beyond Grote and James by explaining how thoughts can be more or less articulate and explicit. If one is acquainted with a complex thing without being aware of or acquainted with its complexity, the knowledge one has by acquaintance with that thing is vague and inexplicit. Reflection and analysis can lead one to distinguish constituent parts of the object of acquaintance and to obtain progressively more distinct, explicit, and complete knowledge about it.

Because one can interpret the relation of acquaintance or awareness as one that is not epistemic, i.e., not a kind of propositional knowledge, it is important to distinguish such views, read as ontological theses, from a view one might call epistemological direct realism: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. Direct realism proper is a view about what the objects of perception are. It is a type of realism, since it assumes that these objects exist independently of any mind that might perceive them; it thereby rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being a 'direct' realism rules out those views, defended under the rubric of 'critical realism' or 'representative realism', in which there is some non-physical intermediary ~ usually called a 'sense-datum' or a 'sense impression' ~ that must first be perceived or experienced in order to perceive the object that exists independently of this perception. According to critical realists, such an intermediary need not be perceived 'first' in a temporal sense, but it is a necessary ingredient which suggests to the perceiver an external reality, or which offers the occasion on which to infer the existence of such a reality. Direct realism, however, denies the need for any recourse to mental go-betweens in order to explain our perception of the physical world.

This reply on the part of the direct realist does not, of course, serve to refute the global sceptic, who claims that, since our perceptual experience could be just as it is without there being any real properties at all, we have no knowledge of any such properties. But no view of perception alone is sufficient to refute such global scepticism. For such a refutation we must go beyond a theory of how best to explain our perception of physical objects, and defend a theory that best explains how we obtain knowledge of the world.

The external world, as philosophers have used the term, is not some distant planet external to Earth. Nor is the external world, strictly speaking, a world. Rather, the external world consists of all those objects and events which exist external to perceivers. So the table across the room is part of the external world, and so is the room itself, and so is its brown colour and roughly rectangular shape. Similarly, if the table falls apart when a heavy object is placed on it, the event of its disintegration is a part of the external world.

One object external to and distinct from any given perceiver is any other perceiver. So, relative to one perceiver, every other perceiver is a part of the external world. However, another way of understanding the external world results if we think of the objects and events external to and distinct from every perceiver. So conceived, the set of all perceivers makes up a vast community, with all of the objects and events external to that community making up the external world. It is this latter conception that will be our primary concern, and we will suppose that perceivers are entities which occupy physical space, if only because they are partly composed of items which take up physical space.

What, then, is the problem of the external world? Certainly it is not whether there is an external world; this much is taken for granted. Instead, the problem is an epistemological one which, in rough approximation, can be formulated by asking whether and, if so, how a person gains knowledge of the external world. So understood, the problem seems to admit of an easy solution: there is knowledge of the external world, which persons acquire primarily by perceiving the objects and events which make up the external world.

However, many philosophers have found this easy solution problematic. Moreover, the very statement of the problem of the external world itself will be altered once we consider the main theses against the easy solution.

One way in which the easy solution has been further articulated is in terms of epistemological direct realism. This theory is realist in so far as it claims that objects and events in the external world, along with many of their various features, exist independently of and are generally unaffected by perceivers and the acts of perception in which they engage. And the theory is epistemologically direct since it also claims that in perception people often, and typically, acquire immediate non-inferential knowledge of objects and events in the external world. It is on this latter point that it is thought to face serious problems.

The main reason for this is that knowledge of objects in the external world seems to be dependent on some other knowledge, and so would not qualify as immediate and non-inferential. It is claimed that I do not gain immediate non-inferential perceptual knowledge that there is a brown and rectangular table before me, because I would not know such a proposition unless I knew that something then appeared brown and rectangular. Hence, knowledge of the table is dependent upon knowledge of how it appears. Alternatively expressed, if there is knowledge of the table at all, it is indirect knowledge, secured only if the proposition about the table may be inferred from propositions about appearances. If so, epistemological direct realism is false.

This argument suggests a new way of formulating the problem of the external world:

Problem of the external world (first formulation): can one have knowledge of propositions about objects and events in the external world based upon propositions which describe how the external world appears, i.e., upon appearances?

Unlike our original formulation of the problem of the external world, this formulation does not admit of an easy solution. Instead, it has seemed to many philosophers that it admits of no solution at all, so that scepticism regarding the external world is the only remaining alternative.

A second articulation of the easy solution is perceptual direct realism. This theory is realist in just the way described earlier, but it adds that objects and events in the external world are typically directly perceived, as are many of their features such as their colours, shapes, and textures.

Often perceptual direct realism is developed further by simply adding epistemological direct realism to it. Such an addition is supported by claiming that direct perception of objects in the external world provides us with immediate non-inferential knowledge of such objects. Although, seen in this way, perceptual direct realism is supposed to support epistemological direct realism, strictly speaking they are independent doctrines: one might consistently, perhaps even plausibly, hold one without also accepting the other.

Direct perception is perception which is not dependent on some other perception. The main opposition to the claim that we directly perceive external objects comes from indirect or representative realism. That theory holds that whenever an object in the external world is perceived, some other object is also perceived, namely a sensum ~ a phenomenal entity of some sort. Further, one would not perceive the external object if one were to fail to perceive the sensum. In this sense the sensum is a perceived intermediary, and the perception of the external object is dependent on the perception of the sensum. For such a theory, perception of the sensum is direct, since it is not dependent on some other perception, while perception of the external object is indirect. More generally, for the indirect realist, all directly perceived entities are sensa. On the other hand, those who accept perceptual direct realism claim that perception of objects in the external world is typically direct, since that perception is not dependent on any perceived intermediaries such as sensa.

It has often been supposed, however, that the argument from illusion suffices to refute all forms of perceptual direct realism. The argument from illusion is actually a family of different arguments rather than one argument. Perhaps the most familiar argument in this family begins by noting that objects appear differently to different observers, and even to the same observers on different occasions or in different circumstances. For example, a round dish may appear round to a person viewing it from directly above and elliptical to another viewing it from one side. As one changes position, the dish will appear to have still different shapes, more and more elliptical in some cases, closer and closer to round in others. In each such case, it is argued, the observer directly sees an entity with that apparent shape. Thus, when the dish appears elliptical, the observer is said to see directly something which is elliptical. Certainly this elliptical entity is not the top surface of the dish, since that is round. This elliptical entity, a sensum, is thought to be wholly distinct from the dish.

In seeing the dish from straight above, it appears round, and it might be thought that one then directly sees the dish rather than a sensum. But here, too, relativity sets in: the dish will appear different in size as one is placed at different distances from it. So even if in all of these cases the dish appears round, it will also appear to have many different diameters. Hence, in these cases as well, the observer is said directly to see some sensum, and not the dish.

This argument concerning the dish can be generalized in two ways. First, more or less the same argument can be mounted for all other cases of seeing, and across the full range of sensible qualities ~ textures and colours in addition to shapes and sizes. Second, one can utilize related relativity arguments for other sense modalities. With the argument thus completed, one will have reached the conclusion that in all cases of non-hallucinatory perception, the observer directly perceives a sensum, and not an external physical object. Presumably in cases of hallucination a related result holds, so that one reaches the fully general result that in all cases of perceptual experience, what is directly perceived is a sensum or group of sensa, and not an external physical object. Perceptual direct realism, therefore, is deemed false.

Yet even if perceptual direct realism is refuted, this by itself does not generate a problem of the external world. We need to add the premise that if no person ever directly perceives an external physical object, then no person ever gains immediate non-inferential knowledge of such objects. Armed with this additional premise, we can conclude that if there is knowledge of external objects, it is indirect and based upon immediate knowledge of sensa. We can then formulate the problem of the external world in another way:

Problem of the external world (second formulation): can one have knowledge of propositions about objects and events in the external world based upon propositions about directly perceived sensa?

It is worth noting the difference between the two formulations of the problem of the external world. On the first formulation, we have knowledge of the external world only if propositions about objects and events in the external world are inferable from propositions about appearances.

Some philosophers have thought that if analytical phenomenalism were true, the situation would be different. Analytical phenomenalism is the doctrine that every proposition about objects and events in the external world is fully analysable into, and thus is equivalent in meaning to, a group of propositions about appearances. The number of propositions about appearances making up the analysis of any single proposition about an object or event in the external world would likely be enormous, perhaps indefinitely many. Nevertheless, analytical phenomenalism might seem to be of help in securing the required inferences from propositions about appearances to propositions about objects and events in the external world. In fact, since there are indefinitely many propositions about appearances in the analysis of each proposition about objects and events in the external world, the inference is apt to be inductive rather than deductive, even granting the truth of analytical phenomenalism. Moreover, most of the propositions about appearances into which we might hope to analyse propositions about the external world would be complex subjunctive conditionals, such as that expressed by 'If I were to seem to see something red, round and spherical, and if I were to seem to try to taste what I seem to see, then most likely I would seem to taste something sweet and slightly tart'. But propositions about appearances of this complex sort will not typically be immediately known. And thus knowledge of propositions about objects and events in the external world will not generally be based upon immediate knowledge of such propositions about appearances.

Consider the appearances expressed by 'I seem to see something red, round, and spherical' and 'I seem to taste something sweet and slightly tart'. To infer cogently from these propositions to that expressed by 'There is an apple before me', we need additional information, such as that expressed by 'Apples generally cause visual appearances of redness, roundness, and spherical shape, and gustatory appearances of sweetness and tartness'. With this additional information, the inference is a good one, and it is likely to be true that there is an apple there, relative to those premisses. The cogency of the inference, however, depends squarely on the additional premiss; relative only to the stated propositions about appearances, it is not highly probable that there is an apple there.

Moreover, there is good reason to think that analytical phenomenalism is false, for each proposed translation of a proposition about an object or event in the external world into propositions about appearances has failed. Nor is inductive inference of obvious help here. Enumerative induction is of no use, for that is an inference from premisses about observed objects in a certain class having some properties 'F' and 'G' to unobserved objects in the same class having those same properties; propositions about appearances, however, concern items of a quite different sort from the external objects and events that are our target. So the most likely inductive inference to consider is a causal one: we infer from certain effects, described by propositions about appearances, to their likely causes, described by propositions about external objects and events. But here, too, the inference is apt to prove problematic. In evaluating the claim that such an inference constitutes a legitimate and independent argument form, one must ask whether it is a contingent fact that most phenomena have explanations, and that the simplest explanation is usually the correct one. It is difficult to avoid the conclusion that, if this is true, it is an empirical fact about ourselves which we could discover only by an inference to the best explanation.

Defenders of indirect realism have sometimes appealed to an inference to the best explanation to justify propositions about objects and events in the external world: we might say that the best explanation of the appearances is that they are caused by external objects. However, even if this is true, as no doubt it is, it is unclear how establishing this general hypothesis helps justify specific propositions about objects and events in the external world, such as that these particular appearances are caused by a red apple.

The point here is a general one: cogent inductive inferences from propositions about appearances to propositions about objects and events in the external world are available only with some added premiss expressing the requisite causal relation, or perhaps some other premiss describing some other sort of correlation between appearances and external objects. So there is no reason to think that indirect knowledge of the external world can be secured on the basis of propositions about appearances alone; and epistemological direct realism has already been denied. Since deductive and inductive inferences from propositions about appearances to propositions about objects and events in the external world seem to exhaust the options, no solution to the first formulation of the problem is at hand. So unless some solution is found, it would appear that scepticism concerning knowledge of the external world is the most reasonable position to take.

If the argument leading to the second formulation is sound, we may conclude that if there is knowledge of external objects, it is indirect and based upon immediate knowledge of sensa. Can one, then, have knowledge of propositions about objects and events in the external world based upon propositions about directly perceived sensa? Broadly speaking, there are two alternative theories here: perceptual indirect realism and perceptual phenomenalism. In contrast to indirect realism, perceptual phenomenalism rejects realism outright and holds instead that (1) physical objects are collections of sensa, (2) in all cases of perception at least one sensum is directly perceived, and (3) to perceive a physical object one directly perceives some of the sensa which are constituents of the collection making up that object.



Proponents of each of these positions try to answer this question in different ways, and we may ask whether either is the better able to do so. The answer has seemed to most philosophers to be 'no', for in general indirect realists and phenomenalists have relied on strategies we have already considered and rejected.

In thinking about the possibilities here, we need to bear in mind that the basis for the inference consists of propositions which describe presently directly perceived sensa. Indirect realists typically claim that the inference from presently directly perceived sensa to external objects is an inductive one, specifically a causal inference from effects to causes. An inference of this sort will be perfectly cogent provided we can use a premiss which specifies that physical objects of a certain type are causally correlated with sensa of the sort currently directly perceived. Such a premiss will itself be justified, if at all, solely on the basis of propositions describing presently directly perceived sensa, for, on the indirect realist's view, one never directly perceives the causes of sensa. So, if one knows that, say, apples typically cause such-and-such visual sensa, one knows this only indirectly on the basis of knowledge of sensa. But no group of propositions about directly perceived sensa by itself supports any inference to causal correlations of this sort. Consequently, indirect realists are in no position to show that propositions about external objects can be derived, by causal inference, from propositions which describe presently directly perceived sensa.

Phenomenalists have often supported their position, in part, by noting the difficulties facing indirect realism, but phenomenalism is no better off in this regard. Phenomenalists construe physical objects as collections of sensa. So, to infer a proposition about a physical object is to infer a proposition about a collection from propositions about constituent members of the collection; this is an inference, although not a causal one. Nonetheless, the inference in question will require a premiss that such-and-such directly perceived sensa are constituents of some collection 'C', where 'C' is some physical object such as an apple. The problem comes with trying to justify such a premiss. To do this, one will need some plausible account of what is meant by claiming that physical objects are collections of sensa. To explicate this idea, however, phenomenalists have typically turned to analytical phenomenalism: physical objects are collections of sensa in the sense that propositions about physical objects are analysable into propositions about sensa. And analytical phenomenalism, we have seen, has been discredited.

If neither formulation of the problem, in terms of propositions about appearances or in terms of propositions about directly perceived sensa, can be easily solved, then scepticism about the external world is a doctrine we would be forced to adopt. One might even say that it is here that we locate the real problem of the external world: 'How can we avoid being forced into accepting scepticism?'

One way of avoiding scepticism is to question the arguments which lead to the two formulations of the problem. The crucial question is whether any part of the argument from illusion really forces us to abandon perceptual direct realism. To help see that the answer is 'no', we may note that a key premiss in the relativity argument links how something appears with direct perception: the fact that the dish appears elliptical is supposed to entail that one directly perceives something which is elliptical. But is there such an entailment? Certainly we do not think that the proposition expressed by 'The book appears worn and dusty and more than two hundred years old' entails that the observer directly perceives something which is worn and dusty and more than two hundred years old. And there are countless other examples like this one, where we will resist the inference from a property 'F' appearing to someone to the claim that 'F' is instantiated in some entity.

Proponents of the argument from illusion might complain that the inference they favour works only for certain adjectives, specifically for adjectives referring to non-relational sensible qualities such as colour, taste, shape, and the like. Such a move, however, requires an argument which shows why the inference works in these restricted cases and fails in all others. No such argument has ever been provided, and it is difficult to see what it might be.

If the argument from illusion is defused, the major threat facing perceptual direct realism is removed, and with it the threat to the claim that we gain knowledge of objects and events in the external world primarily by perceiving them. Hence, there will no longer be any real motivation for holding that scepticism concerning knowledge of the external world is the most reasonable position to take. Of course, even if perceptual direct realism is reinstated, this does not by itself answer the main argument against epistemological direct realism, namely that knowledge of objects in the external world seems to be dependent on some other knowledge, and so would not qualify as immediate and non-inferential. That problem might arise even for one who accepts perceptual direct realism. But there is reason to be suspicious of the argument that one would not know that one is seeing something blue if one failed to know that something looked blue. In this sense there is a dependence of the former on the latter; what is not clear is whether the dependence is epistemic or semantic. It is the latter if, in order to understand what it is to see something blue, one must also understand what it is for something to look blue. This may be true even when the belief that one is seeing something blue is not epistemically dependent on or based upon the belief that something looks blue. Merely claiming that there is a dependence relation does not discriminate between epistemic and semantic dependence. Moreover, there is reason to think the dependence is not epistemic. For in general, observers rarely have beliefs about how objects appear, but this fact does not impugn their knowledge that they are seeing, e.g., blue objects.

It was, nonetheless, the Greeks who began writing about history in the 5th century BC. Herodotus and Thucydides wrote long works that stressed eyewitness evidence, the multiple causes of events, and judgments about people's motives. Thucydides, followed by Aristotle, developed political science by analysing how states operated. Hellenistic Greek writers made history more personal and began composing biographies.

The enduring legacy of ancient Greece lies in the brilliance of its ideas and the depth of its literature and art. The greatest ancient evidence of their value is that the Romans, who conquered the Greeks in war, were themselves overcome by admiration for Greek cultural achievements. The first Roman literature, for example, was Homer's Odyssey translated into Latin. Greek art, architecture, philosophy, and religion also inspired Roman artists and thinkers, who used them as starting points for developing their own style of work. All educated Romans learned to read and speak Greek and studied Greek models in rhetoric. Stoicism became the most popular Roman philosophy of life.

Arab philosophers, mathematicians, and scientists who became the leading thinkers of medieval times studied the works of Aristotle and other Greek sources intensely. During the European Renaissance from the 14th to the 16th centuries, people from many walks of life read Greek literature and history. Writing in the late 16th and early 17th centuries, English playwright William Shakespeare based dramas on ancient Greek biographies. Modern playwrights still find inspiration for new works in Athenian drama. Many modern public buildings, such as the United States Supreme Court in Washington, DC, imitate Greek temple architecture. Although the founders of the United States rejected Athenian democracy as too direct and radical, they enshrined democratic equality as a basic principle. It was the ancient Greeks who proved that democracy could be the foundation of a stable government. Pride in the cultural accomplishments of ancient Greece contributed to a feeling of ethnic unity when the modern nation of Greece was carved out of the Ottoman Empire. That pride still characterizes modern Greece and makes it a fierce defender of the Hellenic heritage.

Reliance on logic, allegiance to democratic principles, unceasing curiosity about what lies beneath the surface of things, a healthy respect for the dangers of arrogant overconfidence, and a love of beauty in stories and art remain important components of Western civilization. Ancient Greece contributed all of these things.

Philosophy is a rational and critical inquiry into basic principles. Philosophy is often divided into four main branches: metaphysics, the investigation of ultimate reality; epistemology, the study of the origins, validity, and limits of knowledge; ethics, the study of the nature of morality and judgment; and aesthetics, the study of the nature of beauty in the fine arts.

The School of Athens (1510-1511) by Italian Renaissance painter Raphael adorns a room in the Vatican Palace. The artist depicts several philosophers of classical antiquity and portrays each with a distinctive gesture, conveying complex ideas in simple images. In the centre of the composition, Plato and Aristotle dominate the scene. Plato points upward to the world of ideas, where he believes knowledge lies, whereas Aristotle holds his forearm parallel to the earth, stressing observation of the world around us as the source of understanding. In addition, Raphael draws comparisons with his illustrious contemporaries, giving Plato the face of the Renaissance genius Leonardo da Vinci, and Heraclitus, who rests his elbow on a large marble block, the face of the Renaissance sculptor Michelangelo. Euclid, bending down at the right, resembles the Renaissance architect Bramante. Raphael paints his own portrait on the young man in a black beret at the far right. In accordance with Renaissance ideas, artists belong to the ranks of the learned and the fine arts have the stature and merit of the written word.

As used originally by the ancient Greeks, the term philosophy meant the pursuit of knowledge for its own sake. Philosophy comprised all areas of speculative thought and included the arts, sciences, and religion. As special methods and principles were developed in the various areas of knowledge, each area acquired its own philosophical aspect, giving rise to the philosophy of art, of science, and of religion. The term philosophy is often used popularly to mean a set of basic values and attitudes toward life, nature, and society; thus the phrase ‘philosophy of life.’ Because the lines of distinction between the various areas of knowledge are flexible and subject to change, the definition of the term philosophy remains a subject of controversy.

Western philosophy from Greek antiquity to the present is surveyed in the remainder of this article.

Western philosophy is generally considered to have begun in ancient Greece as speculation about the underlying nature of the physical world. In its earliest form, it was indistinguishable from natural science. The writings of the earliest philosophers no longer exist, except for a few fragments cited by Aristotle in the 4th century BC and by other writers of later times.

The first philosopher of historical record was Thales, who lived in the 6th century BC in Miletus, a metropolis on the Ionian coast of Asia Minor. Thales, who was revered by later generations as one of the Seven Wise Men of Greece, was interested in astronomical, physical, and meteorological phenomena. His scientific investigations led him to speculate that all natural phenomena are different forms of one fundamental substance, which he believed to be water because he thought evaporation and condensation to be universal processes. Anaximander, a disciple of Thales, maintained that the first principle from which all things evolve is an intangible, invisible, infinite substance that he called apeiron, ‘the boundless.’ This substance, he maintained, is eternal and indestructible. Out of its ceaseless motion the more familiar substances, such as warmth, cold, earth, air, and fire, continuously evolve, generating in turn the various objects and organisms that make up the recognizable world.

The third great Ionian philosopher of the 6th century BC, Anaximenes, returned to Thales’s assumption that the primary substance is something familiar and material, but he claimed it to be air rather than water. He believed that the changes things undergo could be explained in terms of rarefaction (thinning) and condensation of air. Thus Anaximenes was the first philosopher to explain differences in quality in terms of differences in size or quantity, a method fundamental to physical science.

Overall, the Ionian school made the initial radical step from mythological to scientific explanation of natural phenomena. It discovered the important scientific principles of the permanence of substance, the natural evolution of the world, and the reduction of quality to quantity.

The 6th-century-BC Greek mathematician and philosopher Pythagoras was not only an influential thinker, but also a complex personality whose doctrines addressed the spiritual as well as the scientific. The following is a collection of short excerpts from studies of Pythagorean teachings and from anecdotes about Pythagoras written by later Greek thinkers, such as the philosopher Aristotle, the historians Herodotus and Diodorus Siculus, and the biographer Diogenes Laërtius.

About 530 BC at Croton (now Crotona), in southern Italy, the philosopher Pythagoras founded a school of philosophy that was more religious and mystical than the Ionian school. It fused the ancient mythological view of the world with the developing interest in scientific explanation. The system of philosophy that became known as Pythagoreanism combined ethical, supernatural, and mathematical beliefs with many ascetic rules, such as obedience, silence, and simplicity of dress and possessions. The Pythagoreans taught and practiced a way of life based on the belief that the soul is a prisoner of the body, is released from the body at death, and migrates into a succession of different kinds of animals before reincarnation into a human being. For this reason Pythagoras taught his followers not to eat meat. Pythagoras maintained that the highest purpose of humans should be to purify their souls by cultivating intellectual virtues, refraining from sensual pleasures, and practicing special religious rituals. The Pythagoreans, having discovered the mathematical laws of musical pitch, inferred that planetary motions produce a ‘music of the spheres,’ and developed a ‘therapy through music’ to bring humanity in harmony with the celestial spheres. They identified science with mathematics, maintaining that all things are made up of numbers and geometrical figures. They made important contributions to mathematics, musical theory, and astronomy.

Heraclitus of Ephesus, who was active around 500 BC, continued the search of the Ionians for a primary substance, which he claimed to be fire. He noticed that heat produces changes in matter, and thus anticipated the modern theory of energy. Heraclitus maintained that all things are in a state of continuous flux, that stability is an illusion, and that only change and the law of change, or Logos, are real. The Logos doctrine of Heraclitus, which identified the laws of nature with a divine mind, developed into the pantheistic theology of Stoicism. (Pantheism is the belief that God and material substance are one, and that divinity is present in all things.)

In the 5th century BC, Parmenides founded a school of philosophy at Elea, a Greek colony on the Italian peninsula. Parmenides took a position opposite from that of Heraclitus on the relation between stability and change. Parmenides maintained that the universe, or the state of being, is an indivisible, unchanging, spherical entity and that all reference to change or diversity is self-contradictory. According to Parmenides, all that exists has no beginning and has no end and is not subject to change over time. Nothing, he claimed, can be truly asserted except that ‘being is.’ Zeno of Elea, a disciple of Parmenides, tried to prove the unity of being by arguing that the belief in the reality of change, diversity, and motion leads to logical paradoxes. The paradoxes of Zeno became famous intellectual puzzles that philosophers and logicians of all subsequent ages have tried to solve. The concern of the Eleatics with the problem of logical consistency laid the basis for the development of the science of logic.
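
Zeno’s best-known puzzle, the dichotomy, argues that a runner can never complete a course: before covering the whole distance one must first cover half of it, then half of the remainder, and so on without end, so that infinitely many stretches must be traversed. A modern gloss, offered here only as an illustration and certainly not available to the Eleatics, notes that the infinitely many stretches sum to a finite whole, a convergent geometric series:

```latex
\[
  \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  \;=\; \sum_{n=1}^{\infty} \frac{1}{2^{n}}
  \;=\; 1 .
\]
```

Whether this arithmetical fact dissolves the paradox or merely restates it is itself a long-standing philosophical dispute.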

The speculation about the physical world begun by the Ionians was continued in the 5th century BC by Empedocles and Anaxagoras, who developed a philosophy replacing the Ionian assumption of a single primary substance with an assumption of a plurality of such substances. Empedocles maintained that all things are composed of four irreducible elements: air, water, earth, and fire, which are alternately combined and separated by two opposite forces, love and strife. By that process the world evolves from chaos to form and back to chaos again, in an eternal cycle. Empedocles regarded the eternal cycle as the proper object of religious worship and criticized the popular belief in personal deities, but he failed to explain the way in which the familiar objects of experience could develop out of elements that are totally different from them. Anaxagoras therefore suggested that all things are composed of very small particles, or ‘seeds,’ which exist in infinite variety. To explain the way in which these particles combine to form the objects that constitute the familiar world, Anaxagoras developed a theory of cosmic evolution. He maintained that the active principle of this evolutionary process is a world mind that separates and combines the particles. His concept of elemental particles led to the development of an atomic theory of matter.

It was a natural step from pluralism to atomism, the theory that all matter is composed of tiny, indivisible particles differing only in simple physical properties such as size, shape, and weight. This step was taken in the 4th century BC by Leucippus and his more famous associate Democritus, who is generally credited with the first systematic formulation of an atomic theory of matter. The fundamental assumption of Democritus’s atomic theory is that matter is not infinitely divisible but is composed of numerous indivisible particles that are too small for human senses to detect. His conception of nature was thoroughly materialistic (focussed on physical aspects of matter), explaining all natural phenomena in terms of the number, shape, and size of atoms. He thus reduced the sensory qualities of things, such as warmth, cold, taste, and odour, to quantitative differences among atoms, that is, to differences measurable in amount or size. The higher forms of existence, such as plant and animal life and even human thought, were explained by Democritus in these purely physical terms. He applied his theory to psychology, physiology, theory of knowledge, ethics, and politics, thus presenting the first comprehensive statement of deterministic materialism, a theory claiming that all aspects of existence rigidly follow, or are determined by, physical laws.

Toward the end of the 5th century BC, a group of travelling teachers called Sophists became famous throughout Greece. The Sophists played an important role in developing the Greek city-states from agrarian monarchies into commercial democracies. As Greek industry and commerce expanded, a class of newly rich, economically powerful merchants began to wield political power. Lacking the education of the aristocrats, they sought to prepare themselves for politics and commerce by paying the Sophists for instruction in public speaking, legal argument, and general culture. Although the best of the Sophists made valuable contributions to Greek thought, the group as a whole acquired a reputation for deceit, insincerity, and demagoguery. Thus the word sophistry has come to signify these moral faults.

The famous maxim of Protagoras, one of the leading Sophists, that ‘man is the measure of all things,’ is typical of the philosophical attitude of the Sophist school. Protagoras claimed that individuals have the right to judge all matters for themselves. He denied the existence of an objective (demonstrable and impartial) knowledge, arguing instead that truth is subjective in the sense that different things are true for different people and there is no way to prove that one person’s beliefs are objectively correct and another’s are incorrect. Protagoras asserted that natural science and theology are of little or no value because they have no impact on daily life, and he concluded that ethical rules need be followed only when it is to one’s practical advantage to do so.

Socrates was a Greek philosopher and teacher who lived in Athens, Greece, in the 400s BC. He profoundly altered Western philosophical thought through his influence on his most famous pupil, Plato, who passed on Socrates’s teachings in his writings known as dialogues. Socrates taught that every person has full knowledge of ultimate truth contained within the soul and needs only to be spurred to conscious reflection in order to become aware of it. His criticism of injustice in Athenian society led to his prosecution and a death sentence for allegedly corrupting the youth of Athens.

Perhaps the greatest philosophical personality in history was Socrates, who lived from 469 to 399 BC. Socrates left no written work and is known through the writings of his students, especially those of his most famous pupil, Plato. Socrates maintained a philosophical dialogue with his students until he was condemned to death and took his own life. Unlike the Sophists, Socrates refused to accept payment for his teachings, maintaining that he had no positive knowledge to offer except the awareness of the need for more knowledge. He concluded that, in matters of morality, it is best to seek out genuine knowledge by exposing false pretensions. Ignorance is the only source of evil, he argued, so it is improper to act out of ignorance or to accept moral instruction from those who have not proven their own wisdom. Instead of relying blindly on authority, we should unceasingly question our own beliefs and the beliefs of others in order to seek out genuine wisdom.

Greek philosopher Socrates chose to die rather than cease teaching his philosophy, declaring that ‘no evil can happen to a good man, either in life or after death.’ In 399 BC Socrates was accused and convicted of impiety and moral corruption of the youth of Athens, Greece. At his trial, he presented a justification of his life. The substance of his speech was recorded by Greek philosopher Plato, a disciple of Socrates, in Plato’s Apology.

Socrates taught that every person has full knowledge of ultimate truth contained within the soul and needs only to be spurred to conscious reflection to become aware of it. In Plato’s dialogue Meno, for example, Socrates guides an untutored slave to the formulation of the Pythagorean theorem, thus demonstrating that such knowledge is innate in the soul, rather than learned from experience. The philosopher’s task, Socrates believed, was to provoke people into thinking for themselves, rather than to teach them anything they did not already know. His contribution to the history of thought was not a systematic doctrine but a method of thinking and a way of life. He stressed the need for analytical examination of the grounds of one’s beliefs, for clear definitions of basic concepts, and for a rational and critical approach to ethical problems.
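
The geometric fact the slave is led to in the Meno is that the square built on the diagonal of a given square has twice its area, a special case of the Pythagorean theorem. In modern notation (an anachronistic convenience, not Plato’s own), for a square of side \(s\) with diagonal \(d\):

```latex
\[
  d^{2} = s^{2} + s^{2} = 2s^{2},
  \qquad\text{so}\qquad
  d = s\sqrt{2} .
\]
```

The square erected on the diagonal therefore has exactly twice the area of the original square, which is the conclusion Socrates elicits by questioning alone.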

Plato, one of the most famous philosophers of ancient Greece, was the first to use the term philosophy, which means ‘love of wisdom.’ Born around 428 BC, Plato investigated a wide range of topics. Chief among his ideas was the theory of forms, which proposed that objects in the physical world merely resemble perfect forms in the ideal world, and that only these perfect forms can be the object of true knowledge. The goal of the philosopher, according to Plato, is to know the perfect forms and to instruct others in that knowledge.

Plato, who lived from about 428 to 347 BC, was a more systematic and positive thinker than Socrates, but his writings, particularly the earlier dialogues, can be regarded as a continuation and elaboration of Socratic insights. Like Socrates, Plato regarded ethics as the highest branch of knowledge; he stressed the intellectual basis of virtue, identifying virtue with wisdom. This view led to the so-called Socratic paradox that, as Socrates asserts in the Protagoras, ‘no man does evil voluntarily.’ (Aristotle later noticed that such a conclusion allows no place for moral responsibility.) Plato also explored the fundamental problems of natural science, political theory, metaphysics, theology, and theory of knowledge, and developed ideas that became permanent elements in Western thought.

Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues (the dialectical method, used most famously by his teacher Socrates) has led to difficulties in interpreting some of the finer points of his thought. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.

The basis of Plato’s philosophy is his theory of Ideas, also known as the doctrine of Forms. The theory of Ideas, which is expressed in many of his dialogues, particularly the Republic and the Parmenides, divides existence into two realms, an ‘intelligible realm’ of perfect, eternal, and invisible Ideas, or Forms, and a ‘sensible realm’ of concrete, familiar objects. Trees, stones, human bodies, and other objects that can be known through the senses are for Plato unreal, shadowy, and imperfect copies of the Ideas of tree, stone, and the human body. He was led to this apparently bizarre conclusion by his high standard of knowledge, which required that all genuine objects of knowledge be described without contradiction. Because all objects perceived by the senses undergo change, an assertion made about such objects at one time will not be true at a later time. According to Plato, these objects are therefore not completely real. Thus, beliefs derived from experience of such objects are vague and unreliable, whereas the principles of mathematics and philosophy, discovered by inner meditation on the Ideas, constitute the only knowledge worthy of the name. In the Republic, Plato described humanity as imprisoned in a cave and mistaking shadows on the wall for reality; he regarded the philosopher as the person who penetrates the world outside the cave of ignorance and achieves a vision of the true reality, the realm of Ideas. Plato’s concept of the Absolute Idea of the Good, which is the highest Form and includes all others, has been a main source of pantheistic and mystical religious doctrines in Western culture.

What is the nature of knowledge? And of ignorance? The 4th-century-BC Greek philosopher Plato used the myth, or allegory, of the cave to illustrate the difference between genuine knowledge and opinion or belief. This distinction is at the heart of one of Plato’s most important works, The Republic. In the first part of the myth of the cave, excerpted here, Plato constructs a dialogue in which he considers the difficult transition from belief based on appearances to true understanding founded in reality.

Plato’s theory of Ideas and his rationalistic view of knowledge formed the foundation for his ethical and social idealism. The realm of eternal Ideas provides the standards or ideals according to which all objects and actions should be judged. The philosophical person, who refrains from sensual pleasures and searches instead for knowledge of abstract principles, finds in these ideals the basis for personal behaviour and social institutions. Personal virtue consists in a harmonious relation among the three parts of the soul: reason, emotion, and desire. Social justice likewise consists in harmony among the classes of society. The ideal state of a sound mind in a sound body requires that the intellect control the desires and passions, as the ideal state of society requires that the wisest individuals rule the pleasure-seeking masses. Truth, beauty, and justice coincide in the Idea of the Good, according to Plato; therefore, art that expresses moral values is the best art. In his rather conservative social program, Plato supported the censorship of art forms that he believed corrupted the young and promoted social injustice.

A student of ancient Greek philosopher Plato, Aristotle shared his teacher’s reverence for human knowledge but revised many of Plato’s ideas by emphasizing methods rooted in observation and experience. Aristotle surveyed and systematized nearly all the extant branches of knowledge and provided the first ordered accounts of biology, psychology, physics, and literary theory. In addition, Aristotle invented the field known as formal logic, pioneered zoology, and addressed virtually every major philosophical problem known during his time. Known to medieval intellectuals as simply ‘the Philosopher,’ Aristotle is possibly the greatest thinker in Western history and, historically, perhaps the single greatest influence on Western intellectual development.

Aristotle, who began study at Plato’s Academy at age 17 in 367 BC, was the most illustrious pupil of Plato, and ranks with his teacher among the most profound and influential thinkers of the Western world. After studying for many years at Plato’s Academy, Aristotle became the tutor of Alexander the Great. He later returned to Athens to found the Lyceum, a school that, like Plato’s Academy, remained for centuries one of the great centres of learning in Greece. In his lectures at the Lyceum, Aristotle defined the basic concepts and principles of many of the sciences, such as logic, biology, physics, and psychology. In founding the science of logic, he developed the theory of deductive inference, a process for drawing conclusions from accepted premises by means of logical reasoning. His theory is exemplified by the syllogism (a deductive argument having two premises and a conclusion), and he also formulated a set of rules for scientific method.
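
The classic illustration of such a deductive argument is the Barbara-form syllogism: all men are mortal; Socrates is a man; therefore Socrates is mortal. As a sketch only, the inference can be rendered in the Lean proof assistant (the names `Being`, `Man`, and `Mortal` are modern conveniences, not Aristotle’s):

```lean
-- A two-premise syllogism checked mechanically.
-- Premise 1: every man is mortal.
-- Premise 2: Socrates is a man.
-- Conclusion: Socrates is mortal.
variable (Being : Type) (Man Mortal : Being → Prop) (socrates : Being)

example (h₁ : ∀ x, Man x → Mortal x) (h₂ : Man socrates) :
    Mortal socrates :=
  h₁ socrates h₂
```

The point of the formalization is simply that the conclusion follows from the premises by their form alone, which is what Aristotle’s logic was designed to capture.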

In his metaphysical theory, Aristotle criticized Plato’s theory of Forms. Aristotle argued that forms could not exist by themselves but existed only in particular things, which are composed of both form and matter. He understood substances as matter organized by a particular form. Humans, for example, are composed of flesh and blood arranged to shape arms, legs, and the other parts of the body.

Nature, for Aristotle, is an organic system of things whose forms make it possible to arrange them into classes comprising species and genera. Each species, he believed, has a form, purpose, and mode of development in terms of which it can be defined. The aim of science is to define the essential forms, purposes, and modes of development of all species and to arrange them in their natural order in accordance with their complexities of form, the main levels being the inanimate, the vegetative, the animal, and the rational. The soul, for Aristotle, is the form of the body, and humans, whose rational souls are a higher form than the souls of other terrestrial species, are the highest species of perishable things. The heavenly bodies, composed of an imperishable substance, or ether, and moved eternally in perfect circular motion by God, are still higher in the order of nature. This hierarchical classification of nature was adopted by many Christian, Jewish, and Muslim theologians in the Middle Ages as a view of nature consistent with their religious beliefs.

Aristotle’s political and ethical philosophy similarly developed out of a critical examination of Plato’s principles. The standards of personal and social behaviour, according to Aristotle, must be found in the scientific study of the natural tendencies of individuals and societies rather than in a heavenly or abstract realm of pure forms. Less insistent therefore than Plato on a rigorous conformity to absolute principles, Aristotle regarded ethical rules as practical guides to a happy and well-rounded life. His emphasis on happiness, as the active fulfilment of natural capacities, expressed the attitude toward life held by cultivated Greeks of his time. In political theory, Aristotle agreed with Plato that a monarchy ruled by a wise king would be the ideal political structure, but he also recognized that societies differ in their needs and traditions and believed that a limited democracy is usually the best compromise. In his theory of knowledge, Aristotle rejected the Platonic doctrine that knowledge is innate and insisted that it can be acquired only by generalization from experience. He interpreted art as a means of pleasure and intellectual enlightenment rather than an instrument of moral education. His analysis of Greek tragedy has served as a model of literary criticism.

From the 4th century BC to the rise of Christian philosophy in the 4th century AD, Epicureanism, Stoicism, Skepticism, and Neoplatonism were the main philosophical schools in the Western world. Interest in natural science declined steadily during this period, and these schools concerned themselves mainly with ethics and religion. This was also a period of intense intercultural contact, and Western philosophers were influenced by ideas from Buddhism in India, Zoroastrianism in Persia, and Judaism in Palestine.

Greek philosopher Epicurus was a prolific author and creator of an ethical philosophy based upon the achievement of pleasure and happiness. However, he viewed pleasure as the absence of pain and removal of the fear of death. This bust of Epicurus, a Roman copy of a Greek original, is in the Palazzo Nuovo in Rome, Italy.

In 306 BC Epicurus founded a philosophical school in Athens. Because his followers met in the garden of his home they became known as philosophers of the garden. Epicurus adopted the atomistic physics of Democritus, but he allowed for an element of chance in the physical world by assuming that the atoms sometimes swerve in unpredictable ways, thus providing a physical basis for a belief in free will. The overall aim of Epicurus’s philosophy was to promote happiness by removing the fear of death. He maintained that natural science is important only if it can be applied in making practical decisions that help humans achieve the maximum amount of pleasure, which he identified with gentle motion and the absence of pain. The teachings of Epicurus are preserved mainly in the philosophical poem De Rerum Natura (On the Nature of Things) written by the Roman poet Lucretius in the 1st century BC. Lucretius contributed greatly to the popularity of Epicureanism in Rome.

Emperor Marcus Aurelius ruled the Roman Empire from 161 to 180. His reign was marked by epidemics and frequent wars along the empire’s frontiers. A champion of the poor, Marcus Aurelius reduced the tax burden while founding schools, hospitals, and orphanages. A Stoic, Marcus Aurelius believed that a moral life leads to tranquillity and that moderation and acceptance improve the quality of one’s life.

The Stoic school, founded in Athens about 310 BC by Zeno of Citium, developed out of the earlier movement of the Cynics, who rejected social institutions and material (worldly) values. Stoicism became the most influential school of the Greco-Roman world, producing such remarkable writers and personalities as the Greek slave and philosopher Epictetus in the 1st century AD and the 2nd-century Roman emperor Marcus Aurelius, who was noted for his wisdom and nobility of character. The Stoics taught that one can achieve freedom and tranquillity only by becoming insensitive to material comforts and external fortune and by dedicating oneself to a life of virtue and wisdom. They followed Heraclitus in believing the primary substance to be fire and in worshipping the Logos, which they identified with the energy, law, reason, and providence (divine guidance) found throughout nature. The Stoics argued that nature was a system designed by the divinities and believed that humans should strive to live in accordance with nature. The Stoic doctrine that each person is part of God and that all people form a universal family helped break down national, social, and racial barriers and prepare the way for the spread of Christianity. The Stoic doctrine of natural law, which makes human nature the standard for evaluating laws and social institutions, had an important influence on Roman and later Western law.

Roman emperor and philosopher Marcus Aurelius (121-180 AD) recorded principles of Stoic philosophy in his work, Meditations, which is essentially a notebook of jottings, covering a wide range of subjects. These extracts demonstrate the influence of Stoicism, the predominant philosophy of the time, with its emphasis on the virtues of wisdom, courage, justice, and temperance, which free the soul from passion and desire.

The school of Skepticism, which continued the Sophist criticisms of objective knowledge, dominated Plato’s Academy in the 3rd century BC. The Skeptics discovered, as had Zeno of Elea, that logic is a powerful critical device, capable of destroying any positive philosophical view, and they used it skilfully. Their fundamental assumption was that humanity cannot attain knowledge or wisdom concerning reality, and they therefore challenged the claims of scientists and philosophers to investigate the nature of reality. Like Socrates, the Skeptics insisted that wisdom consisted in awareness of the extent of one’s own ignorance. The Skeptics concluded that the way to happiness lies in a complete suspension of judgment. They believed that suspending judgment about the things of which one has no true knowledge creates tranquillity and fulfilment. As an extreme example of this attitude, it is said that Pyrrho, one of the most noted Skeptics, refused to change direction when approaching the edge of a cliff and had to be diverted by his students to save his life.

During the 1st century AD the Jewish-Hellenistic philosopher Philo of Alexandria combined Greek philosophy, particularly Platonic and Pythagorean ideas, with Judaism in a comprehensive system that anticipated Neoplatonism and Jewish, Christian, and Muslim mysticism. Philo insisted that the nature of God so far transcended (surpassed) human understanding and experience as to be indescribable; he described the natural world as a series of stages of descent from God, terminating in matter as the source of evil. He advocated a religious state, or theocracy, and was one of the first to interpret the Old Testament for the Gentiles.

Neoplatonism, one of the most influential philosophical and religious schools and an important rival of Christianity, was founded in the 3rd century AD by Ammonius Saccas and his more famous disciple Plotinus. Plotinus based his ideas on the mystical and poetic writings of Plato, the Pythagoreans, and Philo. The main function of philosophy, for him, is to prepare individuals for the experience of ecstasy, in which they become one with God. God, or the One, is beyond rational understanding and is the source of all reality. The universe emanates from the One by a mysterious process of overflowing of divine energy in successive levels. The highest levels form a trinity of the One; the Logos, which contains the Platonic Forms; and the World Soul, which gives rise to human souls and natural forces. The farther things emanate from the One, according to Plotinus, the more imperfect and evil they are and the closer they approach the limit of pure matter. The highest goal of life is to purify oneself of dependence on bodily comforts and, through philosophical meditation, to prepare oneself for an ecstatic reunion with the One. Neoplatonism exerted a strong influence on medieval thought.

During the decline of Greco-Roman civilization, Western philosophers turned their attention from the scientific investigation of nature and the search for worldly happiness to the problem of salvation in another and better world. By the 3rd century AD, Christianity had spread to the more educated classes of the Roman Empire. The religious teachings of the Gospels were combined by the Fathers of the Church with many of the philosophical concepts of the Greek and Roman schools. Of particular importance were the First Council of Nicaea in 325 and the Council of Ephesus in 431, which drew upon metaphysical ideas of Aristotle and Plotinus to establish important Christian doctrines about the divinity of Jesus and the nature of the Trinity.

Saint Augustine, born in what is now Souk-Ahras, Algeria, in AD 354, brought a systematic method of philosophy to Christian theology. Augustine taught rhetoric in the ancient cities of Carthage, Rome, and Milan before his Christian baptism in 387. His discussions of the knowledge of truth and of the existence of God drew from the Bible and from the philosophers of ancient Greece. A vigorous advocate of Roman Catholicism, Augustine developed many of his doctrines while attempting to resolve theological conflicts with Donatism and Pelagianism, two heretical Christian movements.

The process of reconciling the Greek emphasis on reason with the emphasis on religious emotion in the teachings of Christ and the apostles found eloquent expression in the writings of Saint Augustine during the late 4th and early 5th centuries. He developed a system of thought that, through subsequent amendments and elaborations, eventually became the authoritative doctrine of Christianity. Largely as a result of his influence, Christian thought was Platonic in spirit until the 13th century, when Aristotelian philosophy became dominant. Augustine argued that religious faith and philosophical understanding are complementary rather than opposed and that one must ‘believe in order to understand and understand in order to believe.’ Like the Neoplatonists, he considered the soul a higher form of existence than the body and taught that knowledge consists in the contemplation of Platonic ideas as abstract notions apart from sensory experience and anything physical or material.

Saint Augustine, an influential theologian and writer in the Western Church, wrote The City of God in the 5th century. In the following excerpt from the final book, or chapter, of this work, Augustine addressed a number of theological issues, including free will and the resurrection of the faithful. He asserted that God did not deprive people of their free will even when they turned to sin because it was preferable to ‘bring good out of evil than to prevent the evil from coming into existence.’ Augustine believed that the human body would rise after death, transformed into ‘the newness of the spiritual body’ and in paradise these new beings would ‘rest and see, see and love, love and praise.’

Platonic philosophy was combined with the Christian concept of a personal God who created the world and predestined (determined in advance) its course, and with the doctrine of the fall of humanity, requiring the divine incarnation in Christ. Augustine attempted to provide rational understanding of the relation between divine predestination and human freedom, the existence of evil in a world created by a perfect and all-powerful God, and the nature of the Trinity. Late in his life Augustine came to a pessimistic view about original sin, grace, and predestination: the ultimate fates of humans, he decided, are predetermined by God in the sense that some people are granted divine grace to enter heaven and others are not, and human actions and choices cannot explain the fates of individuals. This view was influential throughout the Middle Ages and became even more important during the Reformation of the 16th century when it inspired the doctrine of predestination put forth by Protestant theologian John Calvin.

Augustine conceived of history as a dramatic struggle between the good in humanity, as expressed in loyalty to the ‘city of God,’ or community of saints, and the evil in humanity, as embodied in the earthly city with its material values. His view of human life was pessimistic, asserting that happiness is impossible in the world of the living, where even with good fortune, which is rare, awareness of approaching death would mar any tendency toward satisfaction. He believed further that without the religious virtues of faith, hope, and charity, which require divine grace to be attained, a person cannot develop the natural virtues of courage, justice, temperance, and wisdom. His analyses of time, memory, and inner religious experience have been a source of inspiration for metaphysical and mystical thought.

Saint Augustine, a theologian and scholar of the Roman Catholic Church, did not embrace Christianity until he was more than 30 years old, but soon after his conversion his writings became influential in a number of theological debates. Author Henry Chadwick explores several works by Augustine, including his interpretation of Genesis, the doctrine of the trinity, and his attempts to reconcile biblical views with the prevailing scientific and philosophical views of the time.

The only major contribution to Western philosophy in the three centuries following the death of Augustine in AD 430 was made by the 6th-century Roman statesman Boethius, who revived interest in Greek and Roman philosophy, particularly Aristotle’s logic and metaphysics. In the 9th century the Irish monk John Erigena developed a pantheistic interpretation of Christianity, identifying the divine Trinity with the One, Logos, and World Soul of Neoplatonism and maintaining that both faith and reason are necessary to achieve the ecstatic union with God.

Even more significant for the development of Western philosophy was the early 11th-century Muslim philosopher Avicenna. His work modifying Aristotelian metaphysics introduced a distinction important to later philosophy between essence (the fundamental qualities that make a thing what it is - the tree-ness of a tree, for example) and existence (being, or living reality). He also demonstrated how it is possible to combine the biblical view of God with Aristotle’s philosophical system. Avicenna’s writings on logic, mathematics, physics, and medicine remained influential for centuries.

The 12th-century scholar Peter Abelard was one of the most famous theologians and philosophers of his time. In 1117 he began tutoring Héloïse, the niece of a French cleric. Abelard and Héloïse soon became secret lovers, but were forced to separate after being discovered by Héloïse’s uncle. The two lovers retired to monasteries, and although they kept in touch by writing, they did not see each other again.

In the 11th century a revival of philosophical thought began as a result of the increasing contact between different parts of the Western world and the general reawakening of cultural interests that culminated in the Renaissance. The works of Plato, Aristotle, and other Greek thinkers were translated by Arab scholars and brought to the attention of philosophers in Western Europe. Muslim, Jewish, and Christian philosophers interpreted and clarified these writings in an effort to reconcile philosophy with religious faith and to provide rational grounds for their religious beliefs. Their labours established the foundations of Scholasticism.

Boethius was a 6th-century Roman philosopher and statesman. He wrote The Consolation of Philosophy (c. 523) while in prison, awaiting execution. The book portrayed philosophy - that is, a system of thought that would sustain Boethius in his tribulations - as a woman. In a dialogue with Boethius, she told him that everyone’s life is transient and filled with problems. But through philosophy, she said, one can achieve serenity despite life’s reversals and misfortunes.

Scholastic thought was less interested in discovering new facts and principles than in demonstrating the truth of existing beliefs. Its method was therefore dialectical (based upon logical argument), and its intense concern with the logic of argument led to important developments in logic as well as theology. The Scholastic philosopher Saint Anselm of Canterbury adopted Augustine’s view of the complementary relation between faith and reason and combined Platonism with Christian theology. Supporting the Platonic theory of Ideas, Anselm argued in favour of the separate existence of universals, or common properties of things - the properties Avicenna had called essences. He thus established the position of logical realism - an assertion that universals and other ideas exist independently of our awareness of them - on one of the most vigorously disputed issues of medieval philosophy.

Regarded as the outstanding Jewish philosopher of the Middle Ages, Maimonides was born in Córdoba, Spain, in 1135. In one of the greatest works of Jewish religious philosophy, Guide for the Perplexed (1190?; trans. 1881-1885), Maimonides addresses the nature of God and creation, free will, and the dilemma of good and evil. He advocates an allegorical, rather than a literal, interpretation of biblical depictions of God’s nature and actions, a view that was controversial at the time. In the following excerpts from the treatise’s introduction and conclusion, Maimonides explains the purpose of his writing and discusses the intellectual love of God, which he considers the perfect form of worship. Richard S. Sarason

The contrary view, known as nominalism, was formulated by the Scholastic philosopher Roscelin, who maintained that only individual, solid objects exist and that the universals, forms, and ideas, under which particular things are classified, constitute mere sounds or names, rather than intangible substances. When he argued that the Trinity must consist of three separate beings, his views were deemed heretical and he was forced to recant in 1092. The French Scholastic theologian Peter Abelard, whose tragic love affair with Héloïse in the 12th century is one of the most memorable romantic stories in medieval history, proposed a compromise between realism and nominalism known as conceptualism, according to which universals exist in particular things as properties and outside of things as concepts in the mind. Abelard maintained that revealed religion—religion based on divine revelation, or the word of God—must be justified by reason. He developed an ethics based on personal conscience that anticipated Protestant thought.

Saint Thomas Aquinas was a leading theologian and philosopher during the so-called golden age of scholasticism in the 13th century. The following passage from his Summa Theologica provides a good example of the scholastic method, which began with a question, then assembled arguments on the question that were eventually reconciled. Aquinas draws upon many authorities for his discussion, ranging from 4th-century BC Greek philosopher Aristotle to Christian theologian Saint Augustine (AD 354-430).

The Spanish-Arab jurist and physician Averroës, the most noted Muslim philosopher of the Middle Ages, made Aristotelian science and philosophy a powerful influence on medieval thought with his lucid and scholarly commentaries on the works of Aristotle. He earned himself the title ‘the Commentator’ among the many Scholastics who came to regard Aristotle as ‘the Philosopher.’ Averroës attempted to overcome the contradictions between Aristotelian philosophy and revealed religion by distinguishing between two separate systems of truth, a scientific body of truths based on reason and a religious body of truths based on revelation. His view that reason takes precedence over religion led to his exile in 1195. Averroës’s so-called double-truth doctrine influenced many Muslim, Jewish, and Christian philosophers; it was rejected, however, by many others, and became an important issue in medieval philosophy.

Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy. Author Anthony Kenny examines the complexities of Aquinas’s concepts of substance and accident.

The Jewish rabbi and physician Moses Maimonides, one of the greatest figures in Judaic thought, followed his contemporary Averroës in uniting Aristotelian science with religion but rejected the view that both of two conflicting systems of ideas can be true. In his Guide for the Perplexed (1190?) Maimonides attempted to provide a rational explanation of Judaic doctrine and defended religious beliefs (such as the belief in the creation of the world) that conflicted with Aristotelian science only when he was convinced that decisive evidence was lacking on either side.

The medieval theologian Saint Thomas Aquinas made a major contribution to philosophy through his interpretation of the relationship between the natural world and the divine. In this key passage from one of his most important works, Summa Theologica, he presents his views on the power - and limits - of human reason. His ideas reflect the optimism and cultural achievements of the High Middle Ages.

Abelard, Averroës, and Maimonides were each accused of blasphemy because their views conflicted with religious beliefs of the time. The 13th century, however, saw a series of philosophers who would come to be venerated as saints. The Italian Scholastic philosopher Saint Bonaventure combined Platonic and Aristotelian principles and introduced the concept of substantial form, or nonmaterial substance, to account for the immortality of the soul. Bonaventure’s view tended toward pantheistic mysticism in making the aim of philosophy the ecstatic union with God.

This painting of the Arab philosopher Averroës appears in a series of paintings by Italian painter Andrea da Firenze in the church of Santa Maria Novella in Florence, Italy. The commentaries of Averroës on the philosophy of Aristotle had enormous influence on Christian and Jewish thinkers in Europe during the Middle Ages.

The 13th-century German Scholastic philosopher Saint Albertus Magnus was the first Christian philosopher to endorse and interpret the entire system of Aristotelian thought. He studied and admired the writings of the Muslim and Jewish Aristotelians and wrote commentaries on Aristotle in which he attempted to reconcile Aristotle’s thought with Christian teachings. He also took a great interest in the natural science of his day. The 13th-century English monk Roger Bacon, one of the first Scholastics to take an interest in experimental science, realized that a great deal remained to be learned about nature. He criticized the deductive method of his contemporaries and their reliance on past authority, and called for a new method of inquiry based on controlled observation.

The scientist Roger Bacon was a major advocate of experimental science during the 13th century. Bacon was condemned and imprisoned for his beliefs.

The most important medieval philosopher was Saint Thomas Aquinas, a Dominican monk who was born in Italy in 1225 and later studied under Albertus Magnus in Germany. Aquinas combined Aristotelian science and Augustinian theology into a comprehensive system of thought that later became the authoritative philosophy of the Roman Catholic Church. He wrote on every known subject in philosophy and science, and his major works, Summa Theologica and Summa Contra Gentiles, in which he presents a persuasive and systematic structure of ideas, still constitute a powerful influence on Western thought. His writings reflect the renewed interest of his time in reason, nature, and worldly happiness, together with its religious faith and concern for salvation.

During the 13th century, Saint Thomas Aquinas sought to reconcile Aristotelian philosophy with Augustinian theology. Aquinas employed both reason and faith in the study of metaphysics, moral philosophy, and religion. While Aquinas accepted the existence of God on faith, he offered five proofs of God’s existence to support such a belief.

Saint Augustine (354-430) was the greatest of the Latin Fathers and one of the most eminent Western Doctors of the Church.

Augustine was born on November 13, 354, in Tagaste, Numidia (now Souk-Ahras, Algeria). His father, Patricius (died about 371), was a pagan (later converted to Christianity), but his mother, Monica, was a devout Christian who laboured untiringly for her son's conversion and who was canonized by the Roman Catholic church. Augustine was educated as a rhetorician in the former North African cities of Tagaste, Madaura, and Carthage. Between the ages of 15 and 30, he lived with a Carthaginian woman whose name is unknown; in 372 she bore him a son, whom he named Adeodatus, which is Latin for ‘the gift of God.’

Inspired by the philosophical treatise Hortensius, by the Roman orator and statesman Marcus Tullius Cicero, Augustine became an earnest seeker after truth. He considered becoming a Christian, but experimented with several philosophical systems before finally entering the church. For nine years, from 373 until 382, he adhered to Manichaeism, a Persian dualistic philosophy then widely current in the Western Roman Empire. With its fundamental principle of conflict between good and evil, Manichaeism at first seemed to Augustine to correspond to experience and to furnish the most plausible hypothesis upon which to construct a philosophical and ethical system. Moreover, its moral code was not very strict; Augustine later recorded in his Confessions: ‘Give me chastity and continence, but not just now.’ Disillusioned by the impossibility of reconciling certain contradictory Manichaeist doctrines, Augustine abandoned this philosophy and turned to skepticism.

About 383 Augustine left Carthage for Rome, but a year later he went on to Milan as a teacher of rhetoric. There he came under the influence of the philosophy of Neoplatonism and met the bishop of Milan, St. Ambrose, then the most distinguished ecclesiastic in Italy. Augustine presently was attracted again to Christianity. At last one day, according to his own account, he seemed to hear a voice, like that of a child, repeating, ‘Take up and read.’ He interpreted this as a divine exhortation to open the Scriptures and read the first passage he happened to see. Accordingly, he opened to Romans 13:13-14, where he read: ‘ . . . not in revelry and drunkenness, not in debauchery and licentiousness, not in quarrelling and jealousy. But put on the Lord Jesus Christ, and make no provision for the flesh, to gratify its desires.’ He immediately resolved to embrace Christianity. Along with his natural son, he was baptized by Ambrose on Easter Eve in 387. His mother, who had rejoined him in Italy, rejoiced at this answer to her prayers and hopes. She died soon afterward in Ostia.

He returned to North Africa and was ordained in 391. He became bishop of Hippo (now Annaba, Algeria) in 395, an office he held until his death. It was a period of political and theological unrest, for while the barbarians pressed in upon the empire, even sacking Rome itself in 410, schism and heresy also threatened the church. Augustine threw himself wholeheartedly into the theological battle. Besides combatting the Manichaean heresy, Augustine engaged in two great theological conflicts. One was with the Donatists, a sect that held the sacraments invalid unless administered by sinless ecclesiastics. The other conflict was with the Pelagians, followers of a contemporary British monk who denied the doctrine of original sin. In the course of this conflict, which was long and bitter, Augustine developed his doctrines of original sin and divine grace, divine sovereignty, and predestination. The Roman Catholic church has found special satisfaction in the institutional or ecclesiastical aspects of the doctrines of St. Augustine; Roman Catholic and Protestant theology alike are largely based on their more purely theological aspects. John Calvin and Martin Luther, leaders of the Reformation, were both close students of Augustine.

Augustine's doctrine stood between the extremes of Pelagianism and Manichaeism. Against Pelagian doctrine, he held that human spiritual disobedience had resulted in a state of sin that human nature was powerless to change. In his theology, men and women are saved by the gift of divine grace; against Manichaeism he vigorously defended the place of free will in cooperation with grace. Augustine died at Hippo, August 28, 430. His feast day is August 28.

The place of prominence held by Augustine between the Fathers and Doctors of the Church is comparable to that of St. Paul among the apostles. As a writer, Augustine was prolific, persuasive, and a brilliant stylist. His best-known work is his autobiographical Confessions (circa 400), recounting his early life and conversion. In his great Christian apologia The City of God (413-26), Augustine formulated a theological philosophy of history. Ten of the 22 books of this work are devoted to polemic against paganism. The remaining 12 books trace the origin, progress, and destiny of the church and establish it as the proper successor to paganism. In 428 Augustine wrote the Retractions, in which he registered his final verdict upon his earlier books, correcting whatever his maturer judgment held to be misleading or wrong. His other writings include the Epistles, of which 270 are in the Benedictine edition, variously dated between 386 and 429; his treatises On Free Will (388-95), On Christian Doctrine (397), On Baptism: Against the Donatists (400), On the Trinity (400-16), and On Nature and Grace (415); and Homilies upon several books of the Bible.

Saint Thomas Aquinas, sometimes called the Angelic Doctor and the Prince of Scholastics (1225-1274), was an Italian philosopher and theologian whose works have made him the most important figure in Scholastic philosophy and one of the leading Roman Catholic theologians.

Aquinas was born of a noble family in Roccasecca, near Aquino, and was educated at the Benedictine monastery of Monte Cassino and at the University of Naples. He joined the Dominican order while still an undergraduate in 1243, the year of his father's death. His mother, opposed to Thomas's affiliation with a mendicant order, confined him to the family castle for more than a year in a vain attempt to make him abandon his chosen course. She released him in 1245, and Aquinas then journeyed to Paris to continue his studies. He studied under the German Scholastic philosopher Albertus Magnus, following him to Cologne in 1248. Because Aquinas was heavyset and taciturn, his fellow novices called him Dumb Ox, but Albertus Magnus is said to have predicted that ‘this ox will one day fill the world with his bellowing.’

Aquinas was ordained a priest about 1250, and he began to teach at the University of Paris in 1252. His first writings, primarily summaries and amplifications of his lectures, appeared two years later. His first major work was Scripta Super Libros Sententiarum (Writings on the Books of the Sentences, 1256?), which consisted of commentaries on an influential work concerning the sacraments of the church, known as the Sententiarum Libri Quatuor (Four Books of Sentences), by the Italian theologian Peter Lombard.

In 1256 Aquinas was awarded a doctorate in theology and appointed professor of philosophy at the University of Paris. Pope Alexander IV (reigned 1254-1261) summoned him to Rome in 1259, where he acted as adviser and lecturer to the papal court. Returning to Paris in 1268, Aquinas immediately became involved in a controversy with the French philosopher Siger de Brabant and other followers of the Islamic philosopher Averroës.

To understand the crucial importance of this controversy for Western thought, it is necessary to consider the context in which it occurred. Before the time of Aquinas, Western thought had been dominated by the philosophy of Saint Augustine, the Western church's great Father and Doctor of the 4th and 5th centuries, who taught that in the search for truth people must depend upon spiritual insight rather than sense experience. Early in the 13th century the major works of Aristotle were made available in a Latin translation, accompanied by the commentaries of Averroës and other Islamic scholars. The vigour, clarity, and authority of Aristotle's teachings restored confidence in empirical knowledge and gave rise to a school of philosophers known as Averroists. Under the leadership of Siger de Brabant, the Averroists asserted that philosophy was independent of revelation.

Averroism threatened the integrity and supremacy of Roman Catholic doctrine and filled orthodox thinkers with alarm. To ignore Aristotle, as interpreted by the Averroists, was impossible; to condemn his teachings was ineffectual. He had to be reckoned with. Albertus Magnus and other scholars had attempted to deal with Averroism, but with little success. Aquinas succeeded brilliantly.

Reconciling the Augustinian emphasis upon the human spiritual principle with the Averroist claim of autonomy for knowledge derived from the senses, Aquinas insisted that the truths of faith and those of sense experience, as presented by Aristotle, are fully compatible and complementary. Some truths, such as that of the mystery of the incarnation, can be known only through revelation, and others, such as that of the composition of material things, only through experience; still others, such as that of the existence of God, are known through both equally. All knowledge, Aquinas held, originates in sensation, but sense data can be made intelligible only by the action of the intellect, which elevates thought toward the apprehension of such immaterial realities as the human soul, the angels, and God. To reach understanding of the highest truths, those with which religion is concerned, the aid of revelation is needed. Aquinas's moderate realism placed the universals firmly in the mind, in opposition to extreme realism, which posited their independence of human thought. He admitted a foundation for universals in existing things, however, in opposition to nominalism and conceptualism.

Aquinas first suggested his mature position in the treatise De Unitate Intellectus Contra Averroistas (1270), which has been translated into English as The Trinity and the Unicity of the Intellect (1946). This work turned the tide against his opponents, who were condemned by the church.

Aquinas left Paris in 1272 and proceeded to Naples, where he organized a new Dominican school. In March 1274, while travelling to the Council of Lyon, to which he had been commissioned by Pope Gregory X, Aquinas fell ill. He died on March 7 at the Cistercian monastery of Fossanova.

Aquinas was canonized by Pope John XXII in 1323 and proclaimed a Doctor of the Church by Pope Pius V in 1567.

More successfully than any other theologian or philosopher, Aquinas organized the knowledge of his time in the service of his faith. In his effort to reconcile faith with intellect, he created a philosophical synthesis of the works and teachings of Aristotle and other classic sages; of Augustine and other church fathers; of Averroës, Avicenna, and other Islamic scholars; of Jewish thinkers such as Maimonides and Solomon ben Yehuda ibn Gabirol; and of his predecessors in the Scholastic tradition. This synthesis he brought into line with the Bible and Roman Catholic doctrine.

Thirteenth-century Italian philosopher and theologian Saint Thomas Aquinas attempted to synthesize Christian belief with a broad range of human knowledge, embracing diverse sources such as Greek philosopher Aristotle and Islamic and Jewish scholars. His thought exerted lasting influence on the development of Christian theology and Western philosophy. Author Anthony Kenny examines the complexities of Aquinas’s concepts of substance and accident.

Aquinas's accomplishment was immense; his work marks one of the few great culminations in the history of philosophy. After Aquinas, Western philosophers could choose only between humbly following him and striking off in some altogether different direction. In the centuries immediately following his death, the dominant tendency, even among Roman Catholic thinkers, was to adopt the second alternative. Interest in Thomist philosophy began to revive, however, toward the end of the 19th century. In the encyclical Aeterni Patris (Of the Eternal Father, 1879), Pope Leo XIII recommended that St. Thomas's philosophy be made the basis of instruction in all Roman Catholic schools. Pope Pius XII, in the encyclical Humani Generis (Of the Human Race, 1950), affirmed that the Thomist philosophy is the surest guide to Roman Catholic doctrine and discouraged all departures from it. Thomism remains a leading school of contemporary thought. Among the thinkers, Roman Catholic and non-Roman Catholic alike, who have operated within the Thomist framework have been the French philosophers Jacques Maritain and Étienne Gilson.

St. Thomas was an extremely prolific author, and about 80 works are ascribed to him. The two most important are Summa Contra Gentiles (1261-1264) and Summa Theologica (1265-1273). Summa Contra Gentiles, which has been translated into English as On the Truth of the Catholic Faith (1956), is a closely reasoned treatise intended to persuade intellectual Muslims of the truth of Christianity. Summa Theologica, which has been republished frequently in Latin and vernacular editions under its Latin title, was written in three parts (on God, on the moral life, and on Christ) and was intended to set forth Christian doctrine for beginners. The last part remained unfinished at his death.

Scholasticism began as the philosophic and theological movement that attempted to use natural human reason, in particular, the philosophy and science of Aristotle, to understand the supernatural content of Christian revelation. It was dominant in the medieval Christian schools and universities of Europe from about the middle of the 11th century to about the middle of the 15th century. The ultimate ideal of the movement was to integrate into an ordered system both the natural wisdom of Greece and Rome and the religious wisdom of Christianity. The term Scholasticism is also used in a wider sense to signify the spirit and methods characteristic of this period of thought or any similar spirit and attitude toward learning found in other periods of history. The term Scholastic, which originally designated the heads of the medieval monastic or cathedral schools from which the universities developed, finally came to be applied to anyone teaching philosophy or theology in such schools or universities.

Scholastic thinkers held a wide variety of doctrines in both philosophy and theology. What gives unity to the whole Scholastic movement are the common aims, attitudes, and methods generally accepted by all its members. The chief concern of the Scholastics was not to discover new facts but to integrate the knowledge already acquired separately by Greek reasoning and Christian revelation. This concern is one of the most characteristic differences between Scholasticism and modern thought since the Renaissance.

The basic aim of the Scholastics determined certain common attitudes, the most important of which was their conviction of the fundamental harmony between reason and revelation. The Scholastics maintained that because the same God was the source of both types of knowledge and truth was one of his chief attributes, he could not contradict himself in these two ways of speaking. Any apparent opposition between revelation and reason could be traced either to an incorrect use of reason or to an inaccurate interpretation of the words of revelation. Because the Scholastics believed that revelation was the direct teaching of God, it possessed for them a higher degree of truth and certitude than did natural reason. In apparent conflicts between religious faith and philosophic reasoning, faith was thus always the supreme arbiter; the theologian's decision overruled that of the philosopher. After the early 13th century, Scholastic thought emphasized more the independence of philosophy within its own domain. Nonetheless, throughout the Scholastic period, philosophy was called the servant of theology, not only because the truth of philosophy was subordinated to that of theology, but also because the theologian used philosophy to understand and explain revelation.

This attitude of Scholasticism stands in sharp contrast to the so-called double-truth theory of the Spanish-Arab philosopher and physician Averroës. His theory assumed that truth was accessible to both philosophy and Islamic theology but that only philosophy could attain it perfectly. The so-called truths of theology served, hence, as imperfect imaginative expressions for the common people of the authentic truth accessible only to philosophy. Averroës maintained that philosophic truth could even contradict, at least verbally, the teachings of Islamic theology.

As a result of their belief in the harmony between faith and reason, the Scholastics attempted to determine the precise scope and competence of each of these faculties. Many early Scholastics, such as the Italian ecclesiastic and philosopher St. Anselm, did not clearly distinguish the two and were overconfident that reason could prove certain doctrines of revelation. Later, at the height of the mature period of Scholasticism, the Italian theologian and philosopher St. Thomas Aquinas worked out a balance between reason and revelation. Scholastics after Aquinas, however, beginning with the Scottish theologian and philosopher John Duns Scotus, restricted more and more the domain of truths capable of being proved by reason and insisted that many doctrines previously thought to have been proved by philosophy had to be accepted on the basis of faith alone. One reason for this restriction was that Scholastics applied the requirements for scientific demonstration, as first specified in Aristotle's Organon, much more rigorously than previous philosophers had done. These requirements were so strict that Aristotle himself was rarely able to apply them fully beyond the realm of mathematics. It was this trend that led finally to the loss of confidence in natural human reason and philosophy that is characteristic of the early Renaissance and of the first Protestant religious reformers, such as Martin Luther.

Another common attitude among Scholastics was their great respect for the so-called authorities in both philosophy and theology. These authorities were the great philosophers of Greece and Rome and the early Fathers of the Church. The medieval Scholastics educated themselves to think and write only by intensive study of these ancient authors, whose culture and learning had been so much richer than their own. After they had reached their full maturity of thought and had begun to create original works of philosophy, they continued the practice of quoting authorities to lend weight to their own opinions, even though the latter were reached, in many cases, quite independently. Later critics concluded from this practice that the Scholastics were mere compilers or repeaters of their authorities. As a matter of fact, the mature Scholastics, including Aquinas and Duns Scotus, were extremely flexible and independent in their use of the texts of the ancients; frequently, in order to bring the texts into harmony with their own positions, they gave interpretations that were difficult to reconcile with the ancients' intentions. The appeal to authority was often little more than a stylistic ornament for beginning or ending the exposition of the commentator's own opinions and was intended to show that the commentator's views were in continuity with the past and not mere novelties. Novelty and originality of thought were not sought deliberately by any of the Scholastics but were rather underplayed as much as possible.

The Scholastics considered Aristotle the chief authority in philosophy, calling him simply the Philosopher. The early Christian prelate and theologian St. Augustine was their principal authority in theology, subordinate only to the Bible and the official councils of the church. The Scholastics adhered most closely and uncritically to authority in accepting Aristotle's opinions in the empirical sciences, such as physics, astronomy, and biology. Their uncritical acceptance of Aristotle's scientific views produced a serious weakness in Scholasticism and was one of the principal reasons for its scornful rejection by scientists during the Renaissance and later.

One of the principal methods of Scholasticism was the use of the logic and philosophic vocabulary of Aristotle in teaching, demonstration, and discussion. Another important method was the practice of teaching a text by means of a commentary by some accepted authority. In philosophy, this authority was usually Aristotle. In theology, the principal texts were the Bible and the Sententiarum Libri Quatuor (Four Books of Sentences) by the 12th-century Italian theologian and prelate Peter Lombard, a collection of the opinions of the early Fathers of the Church on problems of theology. The early Scholastics began by adhering closely to the text on which they were commenting. Gradually, as the practice of critical reading developed their own powers of thinking, they began to introduce many supplementary commentaries on points, known as disputed questions, which either were not covered or were not adequately solved by the text itself. Beginning in the 13th century these supplementary commentaries, embodying the personal thought of the teachers, became the largest and most important part of the commentaries, with the result that literal explanation of the text was reduced to a mere fraction of each commentary.

Closely allied with the commentaries on disputed questions was the technique of discussion by means of public disputation. Every professor in a medieval university was required to appear several times a year before the assembled faculty and students in a disputation, defending crucial points of his own teaching against all persons who challenged them. The forms of Aristotelian logic were employed in both defence and attack. In the 13th century the public disputation became a flexible educational tool for stimulating, testing, and communicating the progress of thought in philosophy and theology. After the middle of the 14th century, however, the vitality of public disputation declined, and it became a rigid formalism. Disputants became concerned less with real content and more with fine points of logic and minute subtleties of thought. This degraded form of disputation did much to give Scholasticism a bad reputation during the Renaissance and later; consequently, many modern thinkers have considered it mere pedantic logical formalism.

During the 13th century, Saint Thomas Aquinas sought to reconcile Aristotelian philosophy with Augustinian theology. Aquinas employed both reason and faith in the study of metaphysics, moral philosophy, and religion. While Aquinas accepted the existence of God on faith, he offered five proofs of God’s existence to support such a belief.

The outstanding Scholastics of the 11th and 12th centuries included Anselm, the French philosopher, theologian, and teacher of logic Peter Abelard, and the philosopher and clergyman Roscelin, who founded the school of philosophy known as nominalism. Among Jewish thinkers of the same period, the rabbi, philosopher, and physician Maimonides attempted to reconcile Aristotelian philosophy with divine revelation, as understood in Judaism, in a spirit similar to that of the Christian Scholastics. The Scholastics of the so-called golden age of the 13th century included Aquinas and the German philosopher St. Albertus Magnus, both of the Dominican order; the English monk and philosopher Roger Bacon, the Italian prelate and theologian St. Bonaventure, and Duns Scotus, all of the Franciscan order; and the Belgian secular priest Henry of Ghent (1217?-1293). Nominalism became the dominant school of philosophy in the 14th century, when Scholasticism began to decline. The most important nominalist was the English philosopher William of Ockham, a great logician who attacked all the philosophic systems of the preceding Scholastics and maintained that natural reason and philosophy had a much more restricted field of operation than his predecessors had held to be the case.

A brilliant but brief revival of Scholasticism, especially in the field of theology, took place in Spain in the 16th century, chiefly among the Dominicans, as exemplified by the Spanish theologian Francisco de Vitoria, and the Jesuits, as exemplified by the Spanish theologian and philosopher Francisco Suárez. A more widespread revival was launched by Pope Leo XIII in 1879 with the purpose of reconsidering, in the light of modern needs, the great Scholastic systems of the 13th century, especially that of Aquinas, and of incorporating in a modern reformulation of those systems all the genuine contributions of modern thought. This revival, which has often been called neo-Scholasticism, is one of the established currents of contemporary thought. The principal exponents of neo-Scholasticism include the French philosopher and diplomat Jacques Maritain and the French philosopher and historian of philosophy Étienne Henri Gilson.

Aristotle's works were lost in the West after the decline of Rome. During the 9th century AD, Arab scholars introduced Aristotle, in Arabic translation, to the Islamic world (see Islam). The 12th-century Spanish-Arab philosopher Averroës is the best known of the Arabic scholars who studied and commented on Aristotle. In the 13th century, the Latin West renewed its interest in Aristotle's work, and Saint Thomas Aquinas found in it a philosophical foundation for Christian thought. Church officials at first questioned Aquinas's use of Aristotle; in the early stages of its rediscovery, Aristotle's philosophy was regarded with some suspicion, largely because his teachings were thought to lead to a materialistic view of the world. Nevertheless, the work of Aquinas was accepted, and the later philosophy of scholasticism continued the philosophical tradition based on Aquinas's adaptation of Aristotelian thought.

Aquinas made many important investigations into the philosophy of religion, including an extremely influential study of the attributes of God, such as omnipotence, omniscience, eternity, and benevolence. He also provided a new account of the relationship between faith and reason, arguing against the Averroists that the truths of faith and the truths of reason cannot conflict but rather apply to different realms. The truths of natural science and philosophy are discovered by reasoning from facts of experience, whereas the tenets of revealed religion (the doctrine of the Trinity, the creation of the world, and other articles of Christian dogma) are beyond rational comprehension, although not inconsistent with reason, and must be accepted on faith. The metaphysics, theory of knowledge, ethics, and politics of Aquinas were derived mainly from Aristotle, but he added the Augustinian virtues of faith, hope, and charity and the goal of eternal salvation through grace to Aristotle's naturalistic ethics with its goal of worldly happiness.

The most important critics of Thomistic philosophy (adherence to the theories of Aquinas) were the 13th-century Scottish theologian John Duns Scotus and 14th-century English Scholastic William of Ockham. Duns Scotus developed a subtle and highly technical system of logic and metaphysics, but because of the fanaticism of his followers the name Duns later ironically became a symbol of stupidity in the English word dunce. Scotus rejected the attempt of Aquinas to reconcile rational philosophy with revealed religion. He maintained, in a modified version of the double-truth doctrine of Averroës, that all religious beliefs are matters of faith, except for the belief in the existence of God, which he regarded as logically provable. Against the view of Aquinas that God acts in accordance with his rational nature, Scotus argued that the divine will is prior to the divine intellect and creates, rather than follows, the laws of nature and morality, thus implying a stronger notion of free will than that of Aquinas. On the issue of universals, Scotus developed a new compromise between realism and nominalism, accounting for the difference between individual objects and the forms that these objects exemplify as a logical rather than a real distinction.

William of Ockham formulated the most radically nominalistic criticism of the Scholastic belief in intangible, invisible things such as forms, essences, and universals. He maintained that such abstract entities are merely references of words to other words rather than to actual things. His famous rule, known as Ockham’s razor—which said that one should not assume the existence of more things than are logically necessary—became a fundamental principle of modern science and philosophy.

In the 15th and 16th centuries a revival of scientific interest in nature was accompanied by a tendency toward pantheistic mysticism—that is, finding God in all things. The Roman Catholic prelate Nicholas of Cusa anticipated the work of the Polish astronomer Nicolaus Copernicus in his suggestion that the Earth moved around the Sun, thus displacing humanity from the centre of the universe; he also conceived of the universe as infinite and identical with God. The Italian philosopher Giordano Bruno, who similarly identified the universe with God, developed the philosophical implications of the Copernican theory. Bruno’s philosophy influenced subsequent intellectual forces that led to the rise of modern science and to the Reformation.

The word modern in philosophy originally meant ‘new,’ distinguishing a new historic era both from antiquity and from the intervening Middle Ages. Many things had occurred in the intellectual, religious, political, and social life of Europe to justify the belief of 16th- and 17th-century thinkers in the genuinely new character of their times. The explorations of the world; the Protestant Reformation, with its emphasis on individual faith; the rise of commercial urban society; and the dramatic appearance during the Renaissance of new ideas in all areas of culture stimulated the development of a new philosophical world-view.

The medieval view of the world as a hierarchical order of beings created and governed by God was supplanted by the mechanistic picture of the world as a vast machine, the parts of which move in accordance with strict physical laws, without purpose or will. In this view of the universe, known as Mechanism, science took precedence over spirituality, and the surrounding physical world that we experience and observe received as much, if not more, attention than the world to come. The aim of human life was no longer conceived as preparation for salvation in the next world, but rather as the satisfaction of people’s natural desires. Political institutions and ethical principles ceased to be regarded as reflections of divine command and came to be seen as practical devices created by humans.

The human mind itself seemed an inexhaustible reality, on a par with the physical reality of matter. Modern philosophers had the task of defining more clearly the essence of mind and of matter, and of reasoning about the relation between the two. Individuals ought to see for themselves, they believed, and study the ‘book of Nature,’ and in every case search for the truth with their own reason.

Since the 15th century modern philosophy has been marked by a continuing interaction between systems of thought based on a mechanistic, materialistic interpretation of the universe and those founded on a belief in human thought as the only ultimate reality. This interaction has reflected the increasing effect of scientific discovery and political change on philosophical speculation.

This painting from the 19th century depicts Italian scientist Galileo at the Vatican in Rome in the 17th century. Galileo was forced to stand trial for his belief in Copernicanism, or the idea that Earth moves around the Sun. The Roman Catholic Church forced Galileo to publicly denounce Copernicanism and spend the rest of his life under house arrest.

In the new philosophical climate, experience and reason became the sole standards of truth. The first great spokesperson for the new philosophy was the English philosopher and statesman Francis Bacon, who denounced reliance on authority and verbal argument and criticized Aristotelian logic as useless for the discovery of new laws. Bacon called for a new scientific method based on reasoned generalization from careful observation and experiment. He was the first to formulate rules for this new method of drawing conclusions, now known as inductive inference (see Induction).

English philosopher and statesman Sir Francis Bacon’s philosophical treatise, Novum Organum (1620), is regarded as an important contribution to scientific methodology. In this work Bacon advanced the necessity of experimentation and accurate observation. Writing in aphorisms (concise statements of principle), Bacon outlined four types of false notions or methods that impede the ability to study nature impartially. He labelled these notions the Idols of the Tribe, the Idols of the Cave, the Idols of the Market-place, and the Idols of the Theatre. Novum Organum greatly influenced the later empiricists, including English philosopher John Locke.

The work of Italian physicist and astronomer Galileo was of even greater importance in the development of a new world-view. Galileo brought attention to the importance of applying mathematics to the formulation of scientific laws. This he accomplished by creating the science of mechanics, which applied the principles of geometry to the motions of bodies. The success of mechanics in discovering reliable and useful laws of nature suggested to Galileo and to later scientists that all nature is designed in accordance with mechanical laws.

The struggle between the Roman Catholic Church and 17th-century Italian physicist and astronomer Galileo has become symbolic of the clash between authority and intellectual freedom, but Galileo himself did not foresee any conflict. Using one of the first telescopes, Galileo found evidence to support the Copernican theory that the Earth and the other planets revolved around the Sun. Galileo believed that his scientific findings fell far outside the theological realm. Author Stillman Drake explores Galileo’s shock and disbelief as his disagreement with the church escalated.

These great changes of the 15th and 16th centuries brought about two intellectual crises that profoundly affected Western civilization. First, the decline of Aristotelian science called into question the methods and foundations of the sciences. This decline came about for a number of reasons including the inability of Aristotelian principles to explain new observations in astronomy. Second, new attitudes toward religion undermined religious authority and gave agnostic and atheistic ideas a chance to be heard.

French philosopher and mathematician René Descartes (1596-1650) is sometimes called the father of modern philosophy. In 1649 Descartes was invited by Queen Christina of Sweden to Stockholm to instruct the queen in philosophy. Although treated well by the queen, he was unaccustomed to the cold of Swedish winters and died of pneumonia the following year.

During the 17th century French mathematician, physicist, and philosopher René Descartes attempted to resolve both crises. He followed Bacon and Galileo in criticizing existing methods and beliefs, but whereas Bacon had argued for an inductive method based on observed facts, Descartes made mathematics the model for all science. Descartes championed the truth contained in the ‘clear and distinct ideas’ of reason itself. The advance toward knowledge was from one such truth to another, as in mathematical reasoning. Descartes believed that by following his rationalist method, one could establish first principles (fundamental underlying truths) for all knowledge—about man, the world, and even God.

The 17th-century French scientist and mathematician René Descartes was also one of the most influential thinkers in Western philosophy. Descartes stressed the importance of skepticism in thought and proposed the idea that existence had a dual nature: one physical, the other mental. The latter concept, known as Cartesian dualism, continues to engage philosophers today. This passage from Discourse on Method (first published in his Philosophical Essays in 1637) contains a summary of his thesis, which includes the celebrated phrase ‘I think, therefore I am.’

Descartes resolved to reconstruct all human knowledge on an absolutely certain foundation by refusing to accept any belief, even the belief in his own existence, until he could prove it to be necessarily true. In his so-called dream argument, he argued that our inability to prove with certainty when we are awake and when we are dreaming makes most of our knowledge uncertain. Ultimately he concluded that the first thing of whose existence one can be certain is oneself as a thinking being. This conclusion forms the basis of his well-known argument, ‘Cogito, ergo sum’ (‘I think, therefore I am’). He also argued that, in pure thought, one has a clear conception of God and can demonstrate that God exists. Descartes argued that secure knowledge of the reality of God allowed him to overcome his earlier doubts about knowledge and science.

French thinker René Descartes applied rigorous scientific methods of deduction to his exploration of philosophical questions. Descartes is probably best known for his pioneering work in philosophical skepticism. Author Tom Sorell examines the concepts behind Descartes’s work Meditationes de Prima Philosophia (1641; Meditations on First Philosophy), focussing on its unconventional use of logic and the reactions it aroused.

Despite his mechanistic outlook, Descartes accepted the traditional religious doctrine of the immortality of the soul and maintained that mind and body are two distinct substances, thus exempting mind from the mechanistic laws of nature and providing for freedom of the will. His fundamental separation of mind and body, known as dualism, raised the problem of explaining how two such different substances as mind and body can affect each other, a problem he was unable to solve and one that has remained a concern of philosophy ever since. Descartes’s thought launched an era of speculation in metaphysics as philosophers made a determined effort to overcome dualism, the belief in the irreconcilable difference between mind and matter, and obtain unity. The separation of mind and matter is also known as Cartesian dualism, after Descartes.

The 17th-century English philosopher Thomas Hobbes, in his effort to attain unity, asserted that matter is the only real substance. He constructed a comprehensive system of metaphysics that provided a solution to the mind-body problem by reducing mind to the internal motions of the body. He also argued that there is no contradiction between human freedom and causal determinism—the view that every act is determined by a prior cause. Both, according to Hobbes, work in accordance with the mechanical laws that govern the universe.

Seventeenth-century English philosopher Thomas Hobbes’s view of human nature is often characterized as deeply pessimistic. In the famous phrase from Leviathan (1651), Hobbes’s best-known work, the life of man is ‘solitary, poor, nasty, brutish, and short.’ This excerpt from a study of Hobbes’s work by author Richard Tuck makes it clear, however, that Hobbes developed his theory of human morality and social relations from the humanist tradition prevalent among intellectuals of his time.

In his ethical theory Hobbes derived the rules of human behaviour from the law of self-preservation and justified egoistic action as the natural human tendency. In his political theory he maintained that government and social justice are artificial creations based on social contract (voluntary agreement between people and their government) and maintained by force. In his most famous work, Leviathan (1651), Hobbes justified political authority on the basis that self-interested people who existed in a terrifying ‘state of nature’ - that is, without a ruler - would seek to protect themselves by forming a political commonwealth that had rules and regulations. He concluded that absolute monarchy is the most effective means of preserving peace.

A member of the rationalist school of philosophy, Baruch Spinoza pursued knowledge through deductive reasoning rather than induction from sensory experience. Spinoza applied the theoretical method of mathematics to other realms of inquiry. Following the format of Euclid’s Elements, Spinoza’s Ethics organized morality and religion into definitions, axioms, and postulates.

Whereas Hobbes tried to oppose Cartesian dualism by reducing mind to matter, the 17th-century Dutch philosopher Baruch Spinoza attempted to reduce matter to divine spiritual substance. He constructed a remarkably precise and rigorous system of philosophy that offered new solutions to the mind-body problem and to the conflict between religion and science. Like Descartes, Spinoza maintained that the entire structure of nature can be deduced from a few basic definitions and axioms, on the model of Euclidean geometry. However, Spinoza believed that Descartes’s theory of two substances created an insoluble problem of the way in which mind and bodies interact. He concluded that the ultimate substance is God and that God, substance, and nature are identical. Thus he supported the pantheistic view that all things are aspects or modes of God (see Pantheism).

Dutch philosopher Baruch Spinoza is regarded as the foremost Western proponent of pantheism, the belief that God and nature are one and the same. This idea is the central thesis of Spinoza’s most famous and influential work, the Ethica Ordine Geometrico Demonstrata (Ethics Demonstrated with Geometrical Order), published in 1677. Author Roger Scruton examines Spinoza’s assertion that God is the ‘substance’ of everything.

Spinoza’s solution to the mind-body problem explained the apparent interaction of mind and body by regarding them as two forms of the same substance, which exactly parallel each other, thus seeming to affect each other but not really doing so. Spinoza’s ethics, like the ethics of Hobbes, was based on materialistic psychology according to which individuals are motivated only by self-interest. But in contrast to Hobbes, Spinoza concluded that rational self-interest coincides with the interest of others.

English philosopher John Locke explained his theory of empiricism, a philosophical doctrine holding that all knowledge is based on experience, in An Essay Concerning Human Understanding (1690). Locke believed the human mind to be a blank slate at birth that gathered all its information from its surroundings - starting with simple ideas and combining these simple ideas into more complex ones. His theory greatly influenced education in Great Britain and the United States. Locke believed that education should begin in early childhood and should proceed gradually as the child learns increasingly complex ideas.

English philosopher John Locke responded to the challenge of Cartesian dualism by supporting a commonsense view that the corporeal (bodily or material) and the spiritual are simply two parts of nature that remain always present in human experience. He made no attempt rigorously to define these parts of nature or to construct a detailed system of metaphysics that attempted to explain them; Locke believed that such philosophical aims were impossible to carry out and thus pointless. Against the rationalism of Descartes and Spinoza, who believed in the ability to achieve knowledge through reasoning and logical deduction, Locke continued the empiricist tradition begun by Bacon and embraced by Hobbes. The empiricists believed that knowledge came from observation and sense perceptions rather than from reason alone.

In 1690 Locke gave empiricism a systematic framework with the publication of his Essay Concerning Human Understanding. Of particular importance was Locke’s redirection of philosophy away from the study of the physical world and toward the study of the human mind. In so doing he made epistemology, the study of the nature of knowledge, the principal concern of philosophy in the 17th and 18th centuries. In his own theory of the mind Locke attempted to reduce all ideas to simple elements of experience, but he distinguished sensation and reflection as sources of experience, sensation providing the material for knowledge of the external world, and reflection the material for knowledge of the mind.

Locke greatly influenced the skepticism of later British thinkers, such as George Berkeley and David Hume, by recognizing the vagueness of the concepts of metaphysics and by pointing out that inferences about the world outside the mind cannot be proved with certainty. His ethical and political writings had an equally great influence on subsequent thought. During the late 18th century the founders of the modern school of utilitarianism, which makes happiness for the largest possible number of people the standard of right and wrong, drew heavily on the writings of Locke. His defence of constitutional government, religious tolerance, and natural human rights influenced the development of liberal thought during the late 18th century in France and the United States as well as in Great Britain.

Efforts to resolve the dualism of mind and matter, a problem first raised by Descartes, continued to engage philosophers during the 17th and 18th centuries. The division between science and religious belief also occupied them. Here the aim was to preserve the essentials of faith in God while at the same time defending the right to think freely. One view, called Deism, saw God as the cause of the great mechanism of the world, a view more in harmony with science than with traditional religion. Natural science at this time was striding ahead, relying on sense perception as well as reason, and thereby discovering the universal laws of nature and physics. Such empirical (observation-based) knowledge appeared to be more certain and valuable than philosophical knowledge based upon reason alone.

After Locke philosophers became more sceptical about achieving knowledge that they could be certain was true. Some thinkers who despaired of finding a resolution to dualism embraced skepticism, the doctrine that true knowledge, other than what we experience through the senses, is impossible. Others turned to increasingly radical theories of being and knowledge. Among them was German philosopher Immanuel Kant, probably the most influential of all because he set Western philosophy on a new path that it still follows today. Kant’s view that knowledge of the world is dependent upon certain innate categories or ideas in the human mind is known as idealism.

Conventionalism is any theory that magnifies the role of decisions, or free selection from among equally possible alternatives, in order to show that what appears to be objective or fixed by nature is in fact an artefact of human convention, similar to conventions of etiquette, grammar, or law. Thus one might suppose that moral rules owe more to social convention than to anything imposed from outside, or that supposedly inexorable necessities are in fact the shadow of our linguistic conventions. The disadvantage of conventionalism is that it must show that alternative, equally workable conventions could have been adopted. If we hold, for example, that some ethical norm such as respect for promises or property is conventional, we ought to be able to show that human needs would have been equally well satisfied by a system involving a different norm, and this may be hard to establish.

There is also a convention, suggested by Paul Grice (1913-88), directing participants in conversation to pay heed to an accepted purpose or direction of the exchange. Contributions made without paying this attention are liable to be rejected for reasons other than straightforward falsity: something true but unhelpful or inappropriate may meet with puzzlement or rejection. We can thus never infer, from the fact that it would be inappropriate to say something in some circumstance, that what would be said, were we to say it, would be false. This inference was frequently made in ordinary language philosophy, it being argued, for example, that since we do not normally say ‘there seems to be a barn there’ when there is unmistakably a barn there, it is false that on such occasions there seems to be a barn there.

There are two main views on the nature of theories. According to the ‘received view’, theories are partially interpreted axiomatic systems; according to the semantic view, a theory is a collection of models (Suppe, 1974). However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . .) and their meanings. An influential proposal is that this relationship is best understood by attempting to provide a ‘truth definition’ for the language, which will involve stating the effect that terms and structures of different kinds have on the truth-conditions of sentences containing them.

An axiom is a proposition laid down as one from which we may begin, an assertion taken as fundamental, at least for the branch of enquiry in hand. The axiomatic method is that of defining a theory by a set of such propositions together with ‘proof procedures’. A famous puzzle, due to Charles Lutwidge Dodgson (1832-98), better known as Lewis Carroll, concerns how a proof ever gets started. Suppose I have as premises (1) p and (2) p ➞ q. Can I infer q? Only, it seems, if I am sure of (3) (p & (p ➞ q)) ➞ q. Can I then infer q? Only, it seems, if I am sure of (4) ((p & (p ➞ q)) & ((p & (p ➞ q)) ➞ q)) ➞ q. For each new axiom (N) I need a further axiom (N + 1) telling me that the set so far implies q, and the regress never stops. The usual solution is to treat a system as containing not only axioms, but also rules of inference, allowing movement from the axioms. The rule ‘modus ponens’ allows us to pass from the first two premises to q. Carroll’s puzzle shows that it is essential to distinguish these two theoretical categories, although there may be choice about which propositions to put in which category.
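Carroll’s moral, that modus ponens must function as a rule of inference rather than as a further premise, can be sketched in a few lines of code. The encoding of implications as tuples and the function name are illustrative assumptions, not anything drawn from the text:

```python
# A minimal sketch of modus ponens treated as a rule of inference:
# given p and ("implies", p, q), the rule licenses adding q.
# Atoms are strings; implications are ("implies", antecedent, consequent).

def modus_ponens(premises):
    """Close a set of premises under repeated application of modus ponens."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            if isinstance(f, tuple) and f[0] == "implies" and f[1] in derived:
                if f[2] not in derived:
                    derived.add(f[2])  # the rule yields the consequent
                    changed = True
    return derived

premises = {"p", ("implies", "p", "q"), ("implies", "q", "r")}
print("r" in modus_ponens(premises))  # two applications of the rule yield r
```

The regress dissolves because the rule is applied to formulae rather than stated as one of them: no formula of the form (3) or (4) ever needs to be added.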

This type of theory (axiomatic) usually emerges as a body of (supposed) truths that are not neatly organized, making the theory difficult to survey or study as a whole. The axiomatic method is an idea for organizing a theory (Hilbert, 1970): one tries to select from among the supposed truths a small number from which all the others can be seen to be deductively inferable. This makes the theory rather more tractable since, in a sense, all the truths are contained in those few. In a theory so organized, the few truths from which all the others are deductively inferred are called axioms. The further thought is that, just as algebraic and differential equations, which were used to study mathematical and physical processes, could themselves be made mathematical objects, so axiomatic theories, which are means of representing physical processes and mathematical structures, could be made objects of mathematical investigation.

On the traditional view (as in Leibniz, 1704), many philosophers had the conviction that all truths, or all truths about a particular domain, followed from a few principles. These principles were taken to be either metaphysically prior or epistemologically prior, or both. In the first sense, they were taken to be entities of such a nature that what exists is ‘caused’ by them. When the principles were taken as epistemologically prior, that is, as axioms, they were taken to be epistemologically privileged, e.g., self-evident, not needing to be demonstrated, or (again, inclusive ‘or’) to be such that all truths do follow from them (by deductive inferences). Gödel (1984) showed, by treating axiomatic theories as themselves mathematical objects, that mathematics, and even a small part of mathematics, elementary number theory, could not be completely axiomatized: more precisely, any class of axioms such that we could effectively decide, of any proposition, whether or not it was in the class, would be too small to capture all of the truths.

The use of a model to test for the consistency of an axiomatized system is older than modern logic. Descartes’s algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar mappings had been used by mathematicians in the 19th century, for example to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: the study of interpretations of formal systems. Proof theory, by contrast, studies relations of deducibility between formulae of a system, as defined purely syntactically, that is, without reference to the intended interpretation of the calculus. But once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to ones that are false under the same interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence, written:

{A1 . . . An} ⊨ B

which holds if B is true in all interpretations in which A1 . . . An are true. The central questions for a calculus will be whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only tautologies. There are many axiomatizations of the propositional calculus that are consistent and complete. Gödel proved in 1929 that the first-order predicate calculus is complete: any formula that is true under every interpretation is a theorem of the calculus.
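For the propositional case the semantic notions just defined, validity as truth under every interpretation and ⊨ as truth-preservation across interpretations, can be checked by brute force over truth-value assignments. This is an illustrative sketch; the encoding of formulas as nested tuples is an assumption of the example:

```python
from itertools import product

# Formulas: atoms are strings; compounds are ("not", f), ("and", f, g),
# ("implies", f, g). An interpretation assigns True/False to each atom.

def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(x) for x in f[1:]))

def ev(f, v):
    """Evaluate formula f under interpretation v (a dict of truth-values)."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not ev(f[1], v)
    if op == "and":
        return ev(f[1], v) and ev(f[2], v)
    if op == "implies":
        return (not ev(f[1], v)) or ev(f[2], v)
    raise ValueError(op)

def interpretations(letters):
    for bits in product([True, False], repeat=len(letters)):
        yield dict(zip(letters, bits))

def valid(f):
    """True iff f holds under every interpretation (a tautology)."""
    return all(ev(f, v) for v in interpretations(sorted(atoms(f))))

def entails(premises, conclusion):
    """{A1..An} |= B: B true in every interpretation making all Ai true."""
    letters = sorted(set().union(atoms(conclusion),
                                 *(atoms(p) for p in premises)))
    return all(ev(conclusion, v)
               for v in interpretations(letters)
               if all(ev(p, v) for p in premises))

# (p & (p -> q)) -> q is valid, and {p, p -> q} |= q.
taut = ("implies", ("and", "p", ("implies", "p", "q")), "q")
print(valid(taut), entails(["p", ("implies", "p", "q")], "q"))
```

For the propositional calculus this exhaustive check is decidable; Gödel’s 1929 completeness result concerns the predicate calculus, where no such finite enumeration of interpretations exists.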

The propositional calculus is the logical calculus whose expressions are letters representing sentences or propositions, and constants representing operations on those propositions to produce others of greater complexity. The operations include conjunction, disjunction, material implication and negation (although these need not be primitive). Propositional logic was partially anticipated by the Stoics but reached maturity only with the work of Frege, Russell, and Wittgenstein.

A propositional function is a concept introduced by Frege: a function taking a number of names as arguments and delivering one proposition as its value. The idea is that ‘x loves y’ is a propositional function, which yields the proposition ‘John loves Mary’ from those two arguments (in that order). A propositional function is therefore roughly equivalent to a property or relation. In Principia Mathematica, Russell and Whitehead take propositional functions to be the fundamental kind of function, since the theory of descriptions could be taken as showing that other expressions denoting functions are incomplete symbols.
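As a toy illustration (the names here are hypothetical, not from the text), a propositional function behaves like an ordinary function from arguments to a proposition:

```python
# 'x loves y' as a function: filling both argument places yields a proposition.
def loves(x, y):
    return f"{x} loves {y}"

print(loves("John", "Mary"))  # -> John loves Mary
```

Applied to ‘John’ and ‘Mary’ in that order, the function delivers the proposition ‘John loves Mary’; with the arguments reversed it delivers a different proposition, which is why order matters for relations.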

Truth and falsity are the two classical truth-values that a statement, proposition, or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true; if the world is not that way, the statement is false. Statements may be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central norm governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme, as may the issue of whether falsity is the only way of failing to be true.

A presupposition is any suppressed premise or background framework of thought necessary to make an argument valid, or a position tenable. More formally, a presupposition has been defined as a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus, if ‘p’ presupposes ‘q’, ‘q’ must be true for ‘p’ to be either true or false. In the theory of knowledge of Robin George Collingwood (1889-1943), any propositions capable of truth or falsity stand on a bed of ‘absolute presuppositions’ which are not properly capable of truth or falsity, since a system of thought will contain no way of approaching such a question. It was suggested by Peter Strawson (1919-2006), in opposition to Russell’s theory of ‘definite descriptions’, that ‘there exists a King of France’ is a presupposition of ‘the King of France is bald’, the latter being neither true nor false if there is no King of France. It is, however, a little unclear whether the idea is that no statement at all is made in such a case, or whether a statement is made but fails to be either true or false. The former option preserves classical logic, since we can still say that every statement is either true or false, but the latter does not, since in classical logic the law of ‘bivalence’ holds: every statement is either true or false. The introduction of presupposition therefore means that either a third truth-value is found, ‘intermediate’ between truth and falsity, or classical logic is preserved, but it becomes impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth or falsity without knowing more than the formation rules of the language. Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, examples like the one given are equally well handled by regarding the overall sentence as false when the existence claim fails.

If a proposition is true it is said to take the truth-value true, and if false the truth-value false. The idea behind the term is the analogy between assigning a propositional variable one or other of these values, as in a formula of the propositional calculus, and assigning an object as the value of some other variable. Logics with intermediate values are called many-valued logics. A truth-function of a number of propositions or sentences is a function of them whose own truth-value depends only on the truth-values of the constituents. Thus (p & q) is a combination whose truth-value is true when ‘p’ is true and ‘q’ is true, and false otherwise; ¬p is a truth-function of ‘p’, false when ‘p’ is true and true when ‘p’ is false. The way in which the value of the whole is determined by the combination of values of the constituents is presented in a truth table.
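The two truth-functions just described, (p & q) and ¬p, can be laid out in the tabular form the text mentions. A minimal sketch (the function name is an assumption of the example):

```python
from itertools import product

# Build the truth table for conjunction (p & q) and negation (not p):
# one row per assignment of truth-values to the constituents p and q.
def truth_table():
    rows = []
    for p, q in product([True, False], repeat=2):
        rows.append((p, q, p and q, not p))
    return rows

print(f"{'p':<7}{'q':<7}{'p & q':<8}{'not p'}")
for p, q, conj, neg in truth_table():
    print(f"{p!s:<7}{q!s:<7}{conj!s:<8}{neg!s}")
```

The table makes visible exactly the claim in the text: the conjunction is true only on the row where both constituents are true, and the negation column simply reverses the value of ‘p’.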

Truths of fact, by contrast, cannot be reduced to any identity, and our only way of knowing them is a posteriori, by reference to the facts of the empirical world.

A proposition is knowable a priori if it can be known without experience of the specific course of events in the actual world. It may, however, be allowed that some experience is required to acquire the concepts involved in an a priori proposition. Something is knowable only a posteriori if it cannot be known a priori. The distinction marks one of the fundamental problem areas of epistemology. The category of a priori propositions is highly controversial, since it is not clear how pure thought, unaided by experience, can give rise to any knowledge at all, and it has always been a concern of empiricism to deny that it can. The two great areas in which it seems to do so are logic and mathematics, so empiricists have commonly tried to show either that these are not areas of real, substantive knowledge, or that, in spite of appearances, the knowledge we have in these areas is actually dependent on experience. The former line tries to show that such knowledge is trivial or analytic, a matter of notation or conventions of language. The latter approach is particularly associated with Quine, who denies any significant split between propositions traditionally thought of as a priori and other deeply entrenched beliefs that occur in our overall view of the world.

Another contested category is that of a priori concepts, supposed to be concepts that cannot be ‘derived’ from experience but which are presupposed in any mode of thought about the world: time, substance, causation, number, and self are candidates. The need for such concepts, and the nature of the substantive a priori knowledge to which they give rise, is the central concern of Kant’s Critique of Pure Reason.

Likewise, since their denial does not involve a contradiction, truths of fact are merely contingent: they hold of the actual world, but not of every possible one. Some examples are ‘Caesar crossed the Rubicon’ and ‘Leibniz was born in Leipzig’, as well as propositions expressing correct scientific generalizations. In Leibniz’s view truths of fact rest on the principle of sufficient reason: for every such truth there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God. The foundation of his thought is the conviction that to each individual there corresponds a complete notion, knowable only to God, from which is deducible all the properties possessed by the individual at each moment in its history. It is contingent that God actualizes the individual that meets such a concept, but his doing so is explicable by the principle of ‘sufficient reason’, whereby God had to actualize just that possibility in order for this to be the best of all possible worlds. This thesis was subsequently lampooned by Voltaire (1694-1778) in Candide, although Leibniz himself was prepared to take refuge in ignorance on such questions as the nature of the soul, or the way to reconcile evil with divine providence.

The principle of sufficient reason is sometimes described as the principle that nothing can be so without there being a reason why it is so. But the reason has to be of a particularly potent kind: eventually it has to ground contingent facts in necessities, and in particular in the reason an omnipotent and perfect being would have for actualizing one possibility rather than another. Among the consequences of the principle is Leibniz’s relational doctrine of space, since if space were an infinite box there could be no reason for the world to be at one point in it rather than another, and God’s placing it at any one point would violate the principle. In Abelard (1079-1142), as in Leibniz, the principle eventually forces the recognition that the actual world is the best of all possible worlds, since anything else would be inconsistent with the creative power that actualizes possibilities.

If truth consists in concept containment, then it seems that all truths are analytic and hence necessary; and if they are all necessary, surely they are all truths of reason. Leibniz’s answer is that not every truth can be reduced to an identity in a finite number of steps; in some instances revealing the connection between subject and predicate concepts would require an infinite analysis. While this may entail that we cannot prove such a proposition a priori, it does not appear to show that the proposition could have been false. Intuitively, it seems a better ground for supposing that it is a necessary truth of a special sort. A related question arises from the idea that truths of fact depend on God’s decision to create the best world: if it is part of the concept of this world that it is best, how could its existence be other than necessary? One answer is that its existence is only hypothetically necessary: it follows from God’s decision to create this world. But God is necessary, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.

Eliminativism is the view that the terms in which we think of some area are sufficiently infected with error for it to be better to abandon them than to continue to try to give coherent theories of their use. Eliminativism should be distinguished from scepticism, which claims that we cannot know the truth about some area; eliminativism claims rather that there are no truths there to be known, in the terms in which we currently think. An eliminativist about theology simply counsels abandoning the terms or discourse of theology, and that will include abandoning worries about the extent of theological knowledge.

Eliminativists in the philosophy of mind counsel abandoning the whole network of terms (mind, consciousness, self, qualia) that usher in the problems of mind and body. Sometimes the argument for doing this is that we should wait for a supposed future understanding of ourselves, based on cognitive science and better than any our current mental descriptions provide; sometimes it is supposed that physicalism shows that no mental description of ourselves could possibly be true.

Whereas Greek scepticism centred on the value of enquiry and questioning, scepticism is now the denial that knowledge or even rational belief is possible, either about some specific subject-matter, e.g., ethics, or in any area whatsoever. Classically, scepticism springs from the observation that the best methods in some area seem to fall short of giving us contact with the truth, e.g., there is a gulf between appearance and reality, and it frequently cites the conflicting judgements that our methods deliver, with the result that questions of truth become undecidable.

Sceptical tendencies emerged in the 14th-century writings of Nicholas of Autrecourt. His criticisms of any certainty beyond the immediate deliverances of the senses and basic logic, and in particular of any knowledge of either intellectual or material substances, anticipate the later scepticism of Bayle and Hume. The latter distinguishes between Pyrrhonistic or excessive scepticism, which he regarded as unlivable, and the more mitigated scepticism that accepts everyday or commonsense beliefs (not as the deliverances of reason, but as due more to custom and habit), while remaining duly wary of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by ancient scepticism from Pyrrho through to Sextus Empiricus. Although the phrase ‘Cartesian scepticism’ is sometimes used, Descartes himself was not a sceptic, but in the method of doubt uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes himself trusts a category of ‘clear and distinct’ ideas, not far removed from the phantasia kataleptiké of the Stoics.

Scepticism should not be confused with relativism, which is a doctrine about the nature of truth, and may be motivated by trying to avoid scepticism. Nor is it identical with eliminativism, which counsels abandoning an area of thought altogether, not because we cannot know the truth, but because there are no truths capable of being framed in the terms we use.

Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This is eventually found in the celebrated ‘Cogito ergo sum’: I think, therefore I am. By locating the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes rigorously, and rightly, sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses invokes a ‘clear and distinct perception’ of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, ‘to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit’.

In his own time Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems of the nature of the causal connection between the two. It also gives rise to the problem, insoluble in its own terms, of other minds. Descartes’s notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes’s thought here, as reflected in Leibniz, is that the qualities of sense experience have no resemblance to qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension there can be no empty space or ‘void’; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).

Although Descartes’s epistemology, theory of mind, and theory of matter have been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

The Cartesian ego is the self conceived as Descartes presents it in the first two Meditations: aware only of its own thoughts, and capable of disembodied existence, neither situated in a space nor surrounded by others. This is the pure self of ‘I-ness’ that we are tempted to imagine as a simple, unique thing that makes up our essential identity. Descartes’s view that he could keep hold of this nugget while doubting everything else is criticized by Lichtenberg and Kant, and most subsequent philosophers of mind.

Descartes holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions because there is no way to deny justifiably that our senses are being stimulated by some cause (an evil spirit, for example) which is radically different from the objects that we normally think affect our senses.

He also points out that the senses (sight, hearing, touch, etc.) are often unreliable, and ‘it is prudent never to trust entirely those who have deceived us even once’; he cites such instances as the straight stick that looks bent in water, and the square tower that looks round from a distance. This argument from illusion has not, on the whole, impressed commentators, and some of Descartes’s contemporaries pointed out that since such errors come to light as a result of further sensory information, it cannot be right to cast wholesale doubt on the evidence of the senses. But Descartes regarded the argument from illusion as only the first stage in a softening-up process which would ‘lead the mind away from the senses’. He admits that there are some cases of sense-based belief about which doubt would be insane, e.g., the belief that ‘I am sitting here by the fire, wearing a winter dressing gown’.

Descartes came to realize that there was nothing in this view of nature that could explain or provide a foundation for the mental, or for what we know from direct experience to be distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent analytic geometry.

A scientific understanding of these ideas could be derived, said Descartes, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

Epistemology is the theory of knowledge. Its central questions include the origin of knowledge; the place of experience in generating knowledge, and the place of reason in doing so; the relationship between knowledge and certainty, and between knowledge and the impossibility of error; the possibility of universal scepticism; and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning.

Foundationalism was associated with the ancient Stoics, and in the modern era with Descartes (1596-1650), who located his foundations in the ‘clear and distinct’ ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, much as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether, and to flirt with the coherence theory of truth. It is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.

Still, in spite of these concerns, there remains the problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts, a problem that began with Plato’s view in the ‘Theaetetus’ that knowledge is true belief plus some logos. A contrasting approach is naturalized epistemology: the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, as proof against scepticism, or even as apt to yield the truth. Naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for ‘external’ or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.

The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too great a confidence in the possibility of a purely a priori ‘first philosophy’, or viewpoint beyond that of the working practitioners, from which their best efforts can be measured as good or bad. Such standpoints now seem to many philosophers to be a fancy; the more modest tasks actually adopted at various historical stages of investigation into different areas aim not so much at criticism as at systematization of the presuppositions of a particular field at a particular time. There is still a role for local methodological disputes within the community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific, but logic and philosophy will not, on the modern view, provide an independent arsenal of weapons for such battles, which indeed often come to seem more like political bids for ascendancy within a discipline.

This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin’s theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At least once, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.

Chance can influence the outcome at each stage: first, in the creation of a genetic mutation; second, in whether the bearer lives long enough to show its effects; third, in chance events that influence the individual’s actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As Harvard biologist Stephen Jay Gould has so vividly expressed it, were the process run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.
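The fourth point above, that even a favoured gene can be eliminated by sheer happenstance, can be illustrated with a toy drift simulation. Every number here (population size, selective advantage, generation count, number of trials) is an illustrative assumption, not a claim about any real population:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def gene_survives(pop_size=50, advantage=1.1, generations=100):
    # One copy of a favoured allele enters the population; each
    # generation its count is resampled binomially, weighted by
    # its selective advantage over the common allele.
    count = 1
    for _ in range(generations):
        p = count * advantage / (count * advantage + (pop_size - count))
        count = sum(random.random() < p for _ in range(pop_size))
        if count == 0:
            return False   # eliminated by chance despite being favoured
        if count == pop_size:
            return True    # fixed in the population
    return count > 0

trials = [gene_survives() for _ in range(200)]
losses = trials.count(False)
print(losses)  # in most runs the favoured gene is lost to chance
```

Despite a 10 percent fitness advantage, the majority of runs end with the gene eliminated, which is the point of the passage: selection biases the odds but does not dictate the outcome.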

We often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean ‘Does natural selection always take the best path for the long-term welfare of a species?’, the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean ‘Does natural selection create every adaptation that would be valuable?’, the answer, again, is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. The mere usefulness of a trait does not guarantee that it will evolve.

The three major components of the model of natural selection are variation, selection, and retention. According to Darwin’s theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected, while those that do not are not. In the modern theory of evolution, genetic mutations provide the blind variations: blind in the sense that variations are not influenced by the effects they would have (the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism); the environment provides the filter of selection; and reproduction provides the retention. Fitness is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have features better adapted. Evolutionary epistemology applies this blind-variation and selective-retention model to the growth of scientific knowledge and to human thought processes overall.
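The blind-variation, selection, and retention loop can be sketched as a minimal program. Everything concrete here (bit-string genomes, a count-the-ones fitness function, the population size and mutation rate) is an illustrative assumption; only the three-stage structure of the loop comes from the model described above:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def fitness(genome):
    # Toy fitness: the number of 1-bits stands in for adaptedness.
    return sum(genome)

def evolve(pop_size=20, genome_len=10, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Variation (blind): each offspring is a mutated copy of a
        # parent; mutations occur without regard to their effects.
        offspring = [[bit ^ (random.random() < mutation_rate) for bit in g]
                     for g in population]
        # Selection: the environment filters; fitter variants survive.
        pool = population + offspring
        pool.sort(key=fitness, reverse=True)
        # Retention: survivors are carried into the next generation.
        population = pool[:pop_size]
    return population

final = evolve()
best = max(fitness(g) for g in final)
print(best)
```

Note that no individual mutation "knows" whether it will help; improvement emerges only from the selection-and-retention filter, which is the feature the analogy to epistemic trial and error trades on.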

The parallel between biological evolution and conceptual or ‘epistemic’ evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology treats biological evolution as the main cause of the growth of knowledge. On this view, called the ‘evolution of cognitive mechanisms program’ by Bradie (1986) and the ‘Darwinian approach to epistemology’ by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms that guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990).

On the analogical version of evolutionary epistemology, called the ‘evolution of theories program’ by Bradie (1986) and the ‘Spencerian approach’ (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the (partial) fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.

Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version of evolutionary epistemology begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the metaphorical version does not require the truth of biological evolution: It simply draws on biological evolution as a source for the model of natural selection. For this version of evolutionary epistemology to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if Creationism is the correct theory of the origin of species.

Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions simply come from psychology and cognitive science rather than from evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that ‘if one is expanding knowledge beyond what one knows, one has no choice but to explore without the benefit of wisdom’, i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one’s knowledge beyond what one knows, one must proceed to something that is not already known; but, more interestingly, it also makes the synthetic claim that when expanding one’s knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified. The central claim of evolutionary epistemology is synthetic, not analytic. If it were analytic, rival epistemologies would be self-contradictory, which they are not. Campbell is right that evolutionary epistemology does have the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).

Two important issues arise in the literature. The first involves ‘realism’: what metaphysical commitment does an evolutionary epistemologist have to make? The second involves progress: according to evolutionary epistemology, does knowledge develop toward a goal? With respect to realism, many evolutionary epistemologists endorse what is called ‘hypothetical realism’, a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Others have argued that evolutionary epistemologists must give up the ‘truth-tropic’ sense of progress because a natural selection model is, in essence, non-teleological; as an alternative, following Kuhn (1970), a non-teleological account of scientific progress can be embraced in company with evolutionary epistemology.

Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978, 613-16, and Ruse, 1986, ch. 2). Stein and Lipton (1990) have argued, however, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves the products of blind variation and selective retention. Further, Stein and Lipton argue that heuristics are analogous to biological pre-adaptations: evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The constraint on epistemic variation is, on this view, not a source of disanalogy, but the source of a more articulated account of the analogy.

Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986, and Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, and those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of this hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate, or if our non-innate beliefs were not the result of blind variation. An appeal to biological blindness is therefore not a legitimate way to produce a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).

Although it is a relatively new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.

What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.

For example, Armstrong (1973) proposed that a belief of the form ‘This perceived object is ‘F’’ is (non-inferential) knowledge if and only if the belief is a completely reliable sign that the perceived object is ‘F’; that is, the fact that the object is ‘F’ contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘χ’ and perceived object ‘y’, if ‘χ’ has those properties and believes that ‘y’ is ‘F’, then ‘y’ is ‘F’. (Dretske (1981) offers a rather similar account, in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is ‘F’.)

Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is ‘globally’ and ‘locally’ reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.

Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.

According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute, but relative to certain standards (Dretske, 1981, and Cohen, 1988). That is to say, in order to know a proposition, our evidence need not eliminate all the alternatives to that proposition; rather, we can know a proposition if our evidence eliminates all the relevant alternatives, where the set of relevant alternatives (a proper subset of the set of all alternatives) is determined by some standard. Moreover, according to the relevant alternatives view, the standards determine that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic’s alternatives does not lead to a sceptical result. Since knowledge requires only the elimination of the relevant alternatives, the relevant alternatives view preserves both strands in our thinking about knowledge. Knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.

The interesting thesis that counts as a causal theory of justification (in the sense of ‘causal theory’ intended here) is this: a belief is justified just in case it was produced by a type of process that is ‘globally’ reliable, that is, one whose propensity to produce true beliefs (definable, to a good approximation, as the proportion of the beliefs it produces, or would produce, that are true) is sufficiently great.
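Read as a bare proportion, the global-reliability condition admits a very short statement in code. The threshold value and the recorded history of outputs are illustrative assumptions of this sketch; the criterion itself says only that the proportion of true beliefs produced must be ‘sufficiently great’:

```python
def global_reliability(outputs):
    # Proportion of a process's belief-outputs that are true,
    # approximating the process's propensity to produce truths.
    if not outputs:
        raise ValueError("no outputs to assess")
    return sum(outputs) / len(outputs)

def justified(outputs, threshold=0.9):
    # On this criterion, a belief is justified iff the process
    # that produced it is reliable enough (threshold is arbitrary).
    return global_reliability(outputs) >= threshold

# A process that yielded 19 true beliefs and 1 false one.
history = [True] * 19 + [False]
print(justified(history))  # True: 0.95 >= 0.9
```

The questions the text raises next, about how to delimit the process and how to type it, correspond to deciding which outputs belong in `history` at all; the arithmetic is the easy part.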

This proposal will be adequately specified only when we are told (i) how much of the causal history of a belief counts as part of the process that produced it, (ii) which of the many types to which the process belongs is the relevant type for purposes of assessing its reliability, and (iii) relative to which world or worlds the reliability of the process type is to be assessed: the actual world, the closest worlds containing the case being considered, or something else. Let us look at the answers suggested by Goldman, the leading proponent of a reliabilist account of justification.

(1) Goldman (1979, 1986) takes the relevant belief-producing process to include only the proximate causes internal to the believer. So, for instance, when I recently believed that the telephone was ringing, the process that produced the belief, for purposes of assessing reliability, includes just the causal chain of neural events from the stimulus in my ears inward, and the other concurrent brain states on which the production of the belief depended; it does not include any external events, such as the telephone’s ringing, or the sound waves travelling between it and my ears, or any earlier decisions I made that were responsible for my being within hearing distance of the telephone at that time. It does seem intuitively plausible that the process on which a belief’s justification depends should be restricted to events internal to the believer and proximate to the belief. Why? Goldman does not tell us. One answer that some philosophers might give is that a belief’s being justified at a given time can depend only on facts directly accessible to the believer’s awareness at that time (for, if a believer ought to hold only beliefs that are justified, she must be able to tell at any given time what beliefs would then be justified for her). However, this cannot be Goldman’s answer, because he wishes to include in the relevant process neural events that are not directly accessible to consciousness.

(2) Once the reliabilist has told us how to delimit the process producing a belief, he needs to tell us which of the many types to which it belongs is the relevant type. Consider, for example, the process that produces your current belief that you see a book before you. One very broad type to which that process belongs would be specified by ‘coming to a belief as to something one perceives as a result of activation of the nerve endings in some of one’s sense-organs’. A narrower type to which that same process belongs would be specified by ‘coming to a belief as to what one sees as a result of activation of the nerve endings in one’s retinas’. A still narrower type would be given by inserting in the last specification a description of a particular pattern of activation of the retina’s particular cells. Which of these or other types to which the token process belongs is the relevant type for determining whether the type of process that produced your belief is reliable?

If we select a type that is too broad, we will count as having the same degree of justification various beliefs that intuitively seem to have different degrees of justification. Thus the broadest type we specified for your belief that you see a book before you applies also to perceptual beliefs where the object seen is far away and seen only briefly, though such beliefs are intuitively less justified. On the other hand, if we are allowed to select a type that is as narrow as we please, then we can make it out that an obviously unjustified but true belief was produced by a reliable type of process. For example, suppose I see a blurred shape through the fog far off in a field and unjustifiedly, but correctly, believe that it is a sheep: if we include enough details about my retinal image in specifying the type of the visual process that produced that belief, we can specify a type likely to have only that one instance and therefore to be 100 percent reliable. Goldman conjectures (1986) that the relevant process type is ‘the narrowest type that is causally operative’. Presumably, a feature of the process producing a belief is causally operative in producing it just in case, had some alternative feature been present instead, the process would not have led to that belief. (We need to say ‘some’ here rather than ‘any’, because, for example, when I see an oak tree, the particular shape of my retinal image is causally operative in producing my belief that what I see is a tree, even though there are alternative shapes, for example pine-ish or birch-ish ones, that would have produced the same belief.)

(3) Should the justification of a belief in a hypothetical, non-actual example turn on the reliability of the belief-producing process in the possible world of the example? That leads to the implausible result that in a world run by a Cartesian demon (a powerful being who causes the other inhabitants of the world to have rich and coherent sets of perceptual and memory impressions that are all illusory), the perceptual and memory beliefs of the other inhabitants are all unjustified, for they are produced by processes that are, in that world, quite unreliable. If we say instead that it is the reliability of the processes in the actual world that matters, we get the equally undesired result that if the actual world is a demon world then our perceptual and memory beliefs are all unjustified.

Goldman’s solution (1986) is that the reliability of the process types is to be gauged by their performance in ‘normal’ worlds, that is, worlds consistent with ‘our general beliefs about the world . . . about the sorts of objects, events and changes that occur in it’. This gives the intuitively right results for the problem cases just considered, but it implies an implausible relativity in justification. If there are people whose general beliefs about the world are very different from mine, then there may, on this account, be beliefs that I can correctly regard as justified (ones produced by processes that are reliable in what I take to be a normal world) but that they can correctly regard as not justified.

However these questions about the specifics are dealt with, there are reasons for questioning the basic idea that the criterion for a belief’s being justified is its being produced by a reliable process. First, doubt about the sufficiency of the reliabilist criterion is prompted by a sort of example that Goldman himself uses for another purpose. Suppose that being in brain-state ‘B’ always causes one to believe that one is in brain-state ‘B’. Here the reliability of the belief-producing process is perfect, but ‘we can readily imagine circumstances in which a person goes into brain-state ‘B’ and therefore has the belief in question, though this belief is by no means justified’ (Goldman, 1979). Doubt about the necessity of the condition arises from the possibility that one might know that one has strong justification for a certain belief and yet that knowledge is not what actually prompts one to believe. For example, I might be well aware that, having read the weather bureau’s forecast that it will be much hotter tomorrow, I have ample reason to be confident that it will be hotter tomorrow; yet I irrationally refuse to believe it until Wally tells me that he feels in his joints that it will be hotter tomorrow. Here what prompts me to believe does not justify my belief, but my belief is nevertheless justified by my knowledge of the weather bureau’s prediction and of its evidential force: I can appeal to it to rebut any charge that I ought not to be holding the belief. Indeed, given my justification, and given that there is nothing untoward about the weather bureau’s prediction, my belief, if true, can be counted knowledge. This sort of example raises doubt as to whether any causal condition, be it a reliable process or something else, is necessary for either justification or knowledge.

Philosophers and scientists alike have often held that the simplicity or parsimony of a theory is one reason, all else being equal, to view it as true. This goes beyond the unproblematic idea that simpler theories are easier to work with and have greater aesthetic appeal.

One theory is more parsimonious than another when it postulates fewer entities, processes, changes or explanatory principles; the simplicity of a theory depends on essentially the same considerations, though parsimony and simplicity are obviously not the same. What makes one theory simpler or more parsimonious than another demands clarification before the justification of these methodological maxims can be addressed.

If we set this descriptive problem to one side, the major normative problem is as follows: what reason is there to think that simplicity is a sign of truth? Why should we accept a simpler theory instead of its more complex rivals? Newton and Leibniz thought that the answer was to be found in a substantive fact about nature. In the ‘Principia’, Newton laid down as his first Rule of Reasoning in Philosophy that ‘nature does nothing in vain . . . for Nature is pleased with simplicity and affects not the pomp of superfluous causes’. Leibniz hypothesized that the actual world obeys simple laws because God’s taste for simplicity influenced his decision about which world to actualize.

The tragedy of the Western mind, described by Koyré, is a direct consequence of the stark Cartesian division between mind and world. We discovered the ‘certain principles of physical reality’, said Descartes, ‘not by the prejudices of the senses, but by the light of reason, and which thus possess so great evidence that we cannot doubt of their truth’. Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.

The most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world or between the realm of matter and the realm of pure mind or spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. The same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.

Nicolas Copernicus, Galileo, Johannes Kepler, and Isaac Newton were all inheritors of a cultural tradition in which ontological dualism was a primary article of faith. Hence the idealization of mathematics as a source of communion with God, which dates from Pythagoras, provided a metaphysical foundation for the emerging natural sciences. This explains why the creators of classical physics believed that doing physics was a form of communion with the geometrical and mathematical forms resident in the perfect mind of God. This view would survive in a modified form in what is now known as Einsteinian epistemology, and accounts in no small part for the reluctance of many physicists to accept the epistemology associated with the Copenhagen Interpretation.

At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, on its own epistemological terms, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, entirely unnecessary.

Laplace is recognized for eliminating not only the theological component of classical physics but the ‘entire metaphysical component’ as well. The epistemology of science requires, he said, that we proceed by inductive generalizations from observed facts to hypotheses that are ‘tested by observed conformity of the phenomena’. What was unique about Laplace’s view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace’s view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities.

As this view of hypotheses and of the truths of nature as quantities was extended in the nineteenth century to the mathematical description of phenomena like heat, light, electricity, and magnetism, Laplace’s assumptions about the actual character of scientific truths seemed correct. This progress suggested that if we could remove all thoughts about the ‘nature of’ or the ‘source of’ phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Comte, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.

The seventeenth-century view of physics as a philosophy of nature, or as natural philosophy, was displaced by the view of physics as an autonomous science that was ‘the science of nature’. This view, which was premised on the doctrine of positivism, promised to subsume all of nature under a mathematical analysis of entities in motion and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. In the history of science, the irony is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.

Epistemology since Hume and Kant has drawn back from this theological underpinning. Indeed, the very idea that nature is simple (or uniform) has come in for a critique. The view has taken hold that a preference for simple and parsimonious hypotheses is purely methodological: It is constitutive of the attitude we call ‘scientific’ and makes no substantive assumption about the way the world is.

A variety of otherwise diverse twentieth-century philosophers of science have attempted, in different ways, to flesh out this position. Two examples must suffice here (see Hesse, 1969, for summaries of other proposals). Popper (1959) holds that scientists should prefer highly falsifiable (improbable) theories: he tries to show that simpler theories are more falsifiable. Quine (1966), in contrast, sees a virtue in theories that are highly probable: he argues for a general connection between simplicity and high probability.

Both these proposals are global. They attempt to explain why simplicity should be part of the scientific method in a way that spans all scientific subject matters. No assumption about the details of any particular scientific problem serves as a premiss in Popper’s or Quine’s arguments.

Newton and Leibniz thought that the justification of parsimony and simplicity flows from the hand of God; Popper and Quine try to justify these methodological maxims without assuming anything substantive about the way the world is. In spite of these differences in approach, the two have something in common. They assume that all uses of parsimony and simplicity in the separate sciences can be encompassed in a single justifying argument. Recent developments in confirmation theory suggest that this assumption should be scrutinized. Good (1983) and Rosenkrantz (1977) have emphasized the role of auxiliary assumptions in mediating the connection between hypotheses and observations. Whether a hypothesis is well supported by some observations, or whether one hypothesis is better supported than another by those observations, crucially depends on empirical background assumptions about the inference problem. The same view applies to the idea of prior probability (or prior plausibility). If one hypothesis is preferred over another even though they are equally supported by current observations, this must be due to an empirical background assumption.

Principles of parsimony and simplicity mediate the epistemic connection between hypotheses and observations. Perhaps these principles are able to do this because they are surrogates for an empirical background theory. It is not that there is one background theory presupposed by every appeal to parsimony; this has the quantifier order backwards. Rather, the suggestion is that each parsimony argument is justified only to the degree that it reflects an empirical background theory about the subject matter. Once this theory is brought out into the open, the principle of parsimony is entirely dispensable (Sober, 1988).

This ‘local’ approach to the principles of parsimony and simplicity resurrects the idea that they make sense only if the world is one way rather than another. It rejects the idea that these maxims are purely methodological. How defensible this point of view is will depend on detailed case studies of scientific hypothesis evaluation and on further developments in the theory of scientific inference.

An inference is a (perhaps very complex) act of thought by virtue of which (1) one passes from a set of one or more propositions or statements to a further proposition or statement, and (2) it appears that the latter is true if the former is or are. This psychological characterization has recurred throughout the literature under more or less inessential variations. Desiring a better characterization of inference is natural. Yet attempts to provide one by constructing a fuller psychological explanation fail to capture the grounds on which an inference is objectively valid, a point elaborately made by Gottlob Frege. Attempts to understand the nature of inference through the device of representing inferences by formal-logical calculations or derivations (1) leave us puzzled about the relation of formal-logical derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations inferences? Are not informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein.

Coming up with an adequate characterization of inference, and even working out what would count as an adequate characterization here, is by no means a resolved philosophical problem.

Rules of inference give rise, as Lewis Carroll showed, to a Zeno-like problem of how a ‘proof’ ever gets started. Suppose I have as premises (i) ‘p’ and (ii) p ➝ q. Can I infer ‘q’? Only, it seems, if I am sure of (iii) (p & (p ➝ q)) ➝ q. Can I then infer ‘q’? Only, it seems, if I am sure of (iv) (p & (p ➝ q) & ((p & (p ➝ q)) ➝ q)) ➝ q. For each new premise (N) I need a further premise (N + 1) telling me that the set so far implies ‘q’, and the regress never stops. The usual solution is to treat a system as containing not only axioms, but also rules of inference, allowing movement from the axioms. The rule of modus ponens allows us to pass from the first two premises to ‘q’. Carroll’s puzzle shows that distinguishing these two theoretical categories is essential, although there may be choice about which theses to put in which category.
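The moral of Carroll’s regress can be made concrete in a toy sketch (all names here are illustrative): modus ponens is implemented as a rule that operates on the premises from outside, rather than being added to the premise list as one more formula.

```python
# Toy proof system illustrating Carroll's point: modus ponens is a *rule*
# applied to formulas, not a further premise in the list.
# Representation: atoms are strings; a conditional "p -> q" is the tuple
# ("->", p, q).

def modus_ponens(premises):
    """Return the premises closed under one application of modus ponens."""
    derived = set(premises)
    for f in premises:
        # If f is a conditional whose antecedent is among the premises,
        # the rule licenses its consequent directly -- no extra premise
        # "(p & (p -> q)) -> q" is ever needed.
        if isinstance(f, tuple) and f[0] == "->" and f[1] in premises:
            derived.add(f[2])
    return derived

premises = {"p", ("->", "p", "q")}
print("q" in modus_ponens(premises))  # the rule yields q from (i) and (ii)
```

The design point is exactly Carroll’s: the licence to move from ‘p’ and ‘p ➝ q’ to ‘q’ lives in the function, i.e. in a different theoretical category from the formulas it manipulates.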

Traditionally, a proposition that is not a conditional is called categorical, as with simple affirmative and negative propositions. Modern opinion is wary of the distinction, since what appears categorical may vary with the choice of a primitive vocabulary and notation. Apparently categorical propositions may also turn out to be disguised conditionals: ‘X is intelligent’ (categorical?) may be equivalent to ‘if X is given a range of tasks, she does them better than many people’ (conditional?). The problem is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.

A necessary condition must be distinguished from a sufficient one: if ‘p’ is a necessary condition of ‘q’, then ‘q’ cannot be true unless ‘p’ is true; if ‘p’ is a sufficient condition of ‘q’, then the truth of ‘p’ guarantees the truth of ‘q’. Thus steering well is a necessary condition of driving in a satisfactory manner, but it is not sufficient, for one can steer well but drive badly for other reasons. Confusion may result if the distinction is not heeded. For example, the statement that ‘A’ causes ‘B’ may be interpreted to mean that ‘A’ is itself a sufficient condition for ‘B’, or that it is only a necessary condition for ‘B’, or perhaps a necessary part of a total sufficient condition. Lists of conditions to be met for satisfying some administrative or legal requirement frequently attempt to give individually necessary and jointly sufficient sets of conditions.
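The asymmetry between the two conditions can be checked mechanically over a set of cases; the following sketch uses the driving example, with invented case data and function names.

```python
# p is sufficient for q (over the listed cases): whenever p holds, q holds.
# p is necessary for q (over the listed cases): q never holds without p.

def sufficient(cases, p, q):
    return all(q(c) for c in cases if p(c))

def necessary(cases, p, q):
    return all(p(c) for c in cases if q(c))

# Hypothetical cases for the text's example: steering well is necessary
# but not sufficient for driving satisfactorily.
cases = [
    {"steers_well": True,  "drives_well": True},
    {"steers_well": True,  "drives_well": False},  # steers well, drives badly
    {"steers_well": False, "drives_well": False},
]
steer = lambda c: c["steers_well"]
drive = lambda c: c["drives_well"]

print(necessary(cases, steer, drive))   # True
print(sufficient(cases, steer, drive))  # False: case 2 is a counterexample
```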

A conditional, moreover, is any proposition of the form ‘if p then q’. The condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is material implication, which says merely that either ‘not-p’ or ‘q’. Stronger conditionals include elements of modality, corresponding to the thought that ‘if p is true then q must be true’. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether conditionals are better treated semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning with surface differences arising from other implicatures.
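Material implication, being the weakest conditional, is a pure truth function; a minimal sketch of its truth table:

```python
# Material implication: "if p then q" is false only when p is true and q
# false; it is equivalent to (not p) or q.

def implies(p, q):
    return (not p) or q

# Full truth table for the antecedent p and consequent q.
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
```

Note that the table is exactly why material implication is ‘weak’: the conditional comes out true whenever the antecedent is false, with no modal connection between ‘p’ and ‘q’ required.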

It follows from the definition of ‘strict implication’ that a necessary proposition is strictly implied by any proposition, and that an impossible proposition strictly implies any proposition. If strict implication corresponds to ‘q follows from p’, then this means that a necessary proposition follows from anything at all, and anything at all follows from an impossible proposition. This is a problem if we wish to distinguish between valid and invalid arguments with necessary conclusions or impossible premises.

The Humean problem of induction supposes that there is some property ‘A’ characterizing an observational or experimental situation, and that out of a large number of observed instances of ‘A’, some fraction m/n (possibly equal to 1) have also been instances of some logically independent property ‘B’. Suppose further that the attendant circumstances not specified in these descriptions have been varied to a substantial degree, and that there is no collateral information available concerning the frequency of ‘B’s among ‘A’s or concerning causal or nomological connections between instances of ‘A’ and instances of ‘B’.

In this situation, an ‘enumerative’ or ‘instantial’ inductive inference would move from the premise that m/n of observed ‘A’s are ‘B’s to the conclusion that approximately m/n of all ‘A’s are ‘B’s. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of ‘A’s should be taken to include not only unobserved ‘A’s and future ‘A’s, but also possible or hypothetical ‘A’s. (An alternative conclusion would concern the probability or likelihood of the next observed ‘A’ being a ‘B’.)
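The enumerative schema amounts to a very simple calculation; the sample below is invented purely for illustration.

```python
from fractions import Fraction

# Enumerative induction as in the schema above: from "m/n of observed A's
# are B's", project that approximately m/n of all A's are B's.

def inductive_projection(observed):
    """observed: list of booleans, True where the observed A was also a B."""
    m, n = sum(observed), len(observed)
    return Fraction(m, n)  # the projected proportion m/n

sample = [True] * 7 + [False] * 3     # 7 of 10 observed A's were B's
print(inductive_projection(sample))   # projected to all A's, observed or not
```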

The traditional or Humean problem of induction, often referred to simply as ‘the problem of induction’, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premises are true ‒ or even that their chances of truth are significantly enhanced?

Hume’s discussion of this issue deals explicitly only with cases where all observed ‘A’s are ‘B’s, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: inductive inferences are not rationally justified, but are instead the result of an essentially a-rational process, custom or habit. Hume (1711-76) challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma (sometimes referred to as ‘Hume’s fork’) to show that there can be no such reasoning.

Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas or ‘experimental’, i.e., empirical, reasoning concerning matters of fact or existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is no contradiction to suppose that ‘the course of nature may change’, that an order observed in the past will not continue into the future. But it cannot be the latter, since any empirical argument would appeal to the success of such reasoning in previous experience, and the justifiability of generalizing from experience is precisely what is at issue, so that any such appeal would be question-begging. Hence, Hume concludes that there can be no such reasoning (1748).

An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premise, in which case the issue is obviously how such a premise can be justified. Hume’s argument is then that no such justification is possible: the principle cannot be justified a priori, because it is not contradictory to deny it, nor a posteriori, because any appeal to its having held in past experience would obviously beg the question.

The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume’s argument, namely, that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find another sort of justification for induction. Such responses fall into two main categories: (i) pragmatic justifications or ‘vindications’ of induction, mainly developed by Hans Reichenbach (1891-1953), and (ii) ordinary language justifications of induction, whose most important proponent is Peter Frederick Strawson (1919- ). In contrast, some philosophers still attempt to reject Hume’s dilemma by arguing either (iii) that, contrary to appearances, induction can be inductively justified without vicious circularity, or (iv) that an a priori justification of induction is possible after all.

(1) Reichenbach’s view is that induction is best regarded not as a form of inference, but rather as a ‘method’ for arriving at posits regarding, e.g., the proportion of ‘A’s that are ‘B’s. Such a posit is not a claim asserted to be true, but is instead an intellectual wager analogous to a bet made by a gambler. Understood in this way, the inductive method says that one should posit that the observed proportion is, within some measure of approximation, the true proportion, and then continually correct that initial posit as new information comes in.

The gambler’s bet is normally an ‘appraised posit’, i.e., he knows the chances or odds that the outcome on which he bets will actually occur. In contrast, the inductive bet is a ‘blind posit’: we do not know the chances that it will succeed or even that success is possible. What we are gambling on when we make such a bet is the value of a certain proportion in the independent world, which Reichenbach construes as the limit of the observed proportion as the number of cases increases to infinity. Nevertheless, we have no way of knowing that there is even such a limit, and no way of knowing that the proportion of ‘A’s that are ‘B’s converges in the end on some stable value rather than varying at random. If we cannot know that this limit exists, then we obviously cannot know that we have any definite chance of finding it.

What we can know, according to Reichenbach, is that if there is a truth of this sort to be found, the inductive method will eventually find it. That this is so is an analytic consequence of Reichenbach’s account of what it is for such a limit to exist. The only way that the inductive method of making an initial posit and then refining it in light of new observations can fail eventually to arrive at the true proportion is if the series of observed proportions never converges on any stable value, which means that there is no truth to be found concerning the proportion of ‘A’s that are ‘B’s. Thus, induction is justified, not by showing that it will succeed, or indeed that it has any definite likelihood of success, but only by showing that it will succeed if success is possible. Reichenbach’s claim is that no more than this can be established for any method, and hence that induction gives us our best chance for success, our best gamble in a situation where there is no alternative to gambling.
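Reichenbach’s method of positing and correcting can be sketched as a simulation. The ‘true’ limiting frequency of 0.3 below is an assumed toy value, not anything fixed by the text; the point is only that the sequence of revised posits tracks whatever limit the observed proportions have, if they have one.

```python
import random

# Sketch of Reichenbach's inductive method: posit the observed proportion
# and keep correcting it as each new case comes in. The method's only
# guarantee is conditional: *if* the observed proportions converge to a
# limit, the posits converge to that same limit.
random.seed(0)
true_freq = 0.3          # assumed toy value for the limiting frequency
posits = []
successes = 0
for n in range(1, 5001):
    successes += random.random() < true_freq  # observe one more A; is it a B?
    posits.append(successes / n)              # revised posit after n cases

print(round(posits[-1], 2))  # later posits settle near the limiting frequency
```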

This pragmatic response to the problem of induction faces several serious problems. First, there are indefinitely many other ‘methods’ for arriving at posits for which the same sort of defence can be given: methods that yield the same results as the inductive method in the long run but differ arbitrarily in the short run. Despite the efforts of others, it is unclear that there is any satisfactory way to exclude such alternatives, in order to avoid the result that any arbitrarily chosen short-term posit is just as reasonable as the inductive posit. Second, even if there is a truth of the requisite sort to be found, the inductive method is only guaranteed to find it, or even to come within any specifiable distance of it, in the indefinite long run. Any actual application of inductive results, however, takes place in the short run, making the relevance of the pragmatic justification to actual practice uncertain. Third, and most important, it needs to be emphasized that Reichenbach’s response to the problem simply accepts the claim of the Humean sceptic that an inductive premise never provides the slightest reason for thinking that the corresponding inductive conclusion is true. Reichenbach himself is quite candid on this point, but this does not alleviate the intuitive implausibility of saying that we have no more reason for thinking that our scientific and commonsense inductive conclusions are true than, to use Reichenbach’s own analogy (1949), a blind man wandering in the mountains who feels an apparent trail with his stick has for thinking that following it will lead him to safety.

An approach to induction resembling Reichenbach’s, in claiming that particular inductive conclusions are posits or conjectures rather than the conclusions of cogent inferences, is offered by Popper. However, Popper’s view is even more overtly sceptical: it amounts to saying that all that can ever be said in favour of the truth of an inductive claim is that the claim has been tested and not yet been shown to be false.

(2) The ordinary language response to the problem of induction has been advocated by many philosophers. Strawson claims that the question whether induction is justified or reasonable makes sense only if it tacitly involves the demand that inductive reasoning meet the standards appropriate to deductive reasoning, i.e., that the inductive conclusion be shown to follow deductively from the inductive premise. Such a demand cannot, of course, be met, but only because it is illegitimate: inductive and deductive reasoning are simply fundamentally different kinds of reasoning, each possessing its own autonomous standards, and there is no reason to demand or expect that one of these kinds meet the standards of the other. Whereas, if induction is assessed by inductive standards, the only ones that are appropriate, then it is obviously justified.

The problem here is to understand what this allegedly obvious justification of induction amounts to. In his main discussion of the point (1952), Strawson claims that it is an analytic truth that believing a conclusion for which there is strong evidence is reasonable, and an analytic truth that inductive evidence of the sort captured by the schema presented earlier constitutes strong evidence for the corresponding inductive conclusion, thus apparently yielding the analytic conclusion that believing a conclusion for which there is inductive evidence is reasonable. Nevertheless, he also admits, indeed insists, that the claim that inductive conclusions will be true in the future is contingent, empirical, and may turn out to be false (1952). Thus, the notion of reasonable belief and the correlative notion of strong evidence must apparently be understood in ways that have nothing to do with likelihood of truth, presumably by appeal to the standards of reasonableness and strength of evidence that are accepted by the community and are embodied in ordinary usage.

Understood in this way, Strawson’s response to the problem of induction does not speak to the central issue raised by Humean scepticism: the issue of whether the conclusions of inductive arguments are likely to be true. It amounts to saying merely that if we reason in this way, we can correctly call ourselves ‘reasonable’ and our evidence ‘strong’, according to our accepted community standards. Nevertheless, on the underlying issue of whether following these standards is a good way to find the truth, the ordinary language response appears to have nothing to say.

(3) The main attempts to show that induction can be justified inductively have concentrated on showing that such a defence can avoid circularity. Skyrms (1975) formulates perhaps the clearest version of this general strategy. The basic idea is to distinguish different levels of inductive argument: a first level in which induction is applied to things other than arguments; a second level in which it is applied to arguments at the first level, arguing that they have been observed to succeed so far and hence are likely to succeed in general; a third level in which it is applied in the same way to arguments at the second level; and so on. Circularity is allegedly avoided by treating each of these levels as autonomous and justifying the argument at each level by appeal to an argument at the next level.

One problem with this sort of move is that even if circularity is avoided, the movement to higher and higher levels will clearly eventually fail simply for lack of evidence: a level will be reached at which there have not been enough successful inductive arguments to provide a basis for inductive justification at the next higher level, and if this is so, then the whole series of justifications collapses. A more fundamental difficulty is that the epistemological significance of the distinction between levels is obscure. If the issue is whether reasoning in accord with the original schema offered above ever provides a good reason for thinking that the conclusion is likely to be true, then it still seems question-begging, even if not flatly circular, to answer this question by appeal to another argument of the same form.

(4) The idea that induction can be justified on a purely a priori basis is in one way the most natural response of all: it alone treats an inductive argument as an independently cogent piece of reasoning whose conclusion can be seen rationally to follow, although perhaps only with probability, from its premise. Such an approach has, however, only rarely been advocated (Russell, 1913, and BonJour, 1986), and is widely thought to be clearly and demonstrably hopeless.

Many of the reasons for this pessimistic view depend on general epistemological theses about the possibility or nature of a priori cognition. Thus if, as Quine alleges, there is no a priori justification of any kind, then obviously a priori justification for induction is ruled out. Or if, as more moderate empiricists have claimed, a priori knowledge must be analytic, then again an a priori justification for induction seems to be precluded, since the claim that if an inductive premise is true, then the conclusion is likely to be true does not fit the standard conceptions of analyticity. A consideration of these matters is beyond the scope of the present discussion.

There are, however, two more specific and quite influential reasons for thinking that an a priori approach is impossible that can be briefly considered. First, there is the assumption, originating in Hume, but since adopted by very many others, that an a priori defence of induction would have to involve ‘turning induction into deduction’, i.e., showing, per impossibile, that the inductive conclusion follows deductively from the premise, so that it is a formal contradiction to accept the latter and deny the former. However, it is unclear why an a priori approach need be committed to anything this strong. It would be enough if it could be argued that it is unlikely that such a premise should be true and the corresponding conclusion false.

Second, Reichenbach defends his view that pragmatic justification is the best that is possible by pointing out that a completely chaotic world, in which there is simply no true conclusion to be found as to the proportion of ‘A’s that are ‘B’s, is neither impossible nor unlikely from a purely a priori standpoint, the suggestion being that therefore there can be no a priori reason for thinking that such a conclusion is true. Nevertheless, that a chaotic world is a priori neither impossible nor unlikely in the absence of further evidence does not show that such a world is not a priori unlikely relative to further evidence: a world containing such-and-such regularity might be a priori somewhat likely relative to an occurrence of a long-run pattern of evidence in which a certain stable proportion of observed ‘A’s are ‘B’s, an occurrence, it might be claimed, that would be highly unlikely in a chaotic world (BonJour, 1986).

Goodman’s ‘new riddle of induction’ asks us to suppose that before some specific time ‘t’ (perhaps the year 2000) we observe a large number of emeralds (property A) and find them all to be green (property B). We proceed to reason inductively and conclude that all emeralds are green. Goodman points out, however, that we could have drawn a quite different conclusion from the same evidence. If we define the term ‘grue’ to mean ‘green if examined before t and blue if examined after t’, then all of our observed emeralds will also be grue. A parallel inductive argument will yield the conclusion that all emeralds are grue, and hence that all those examined after the year 2000 will be blue. Presumably the first of these conclusions is genuinely supported by our observations and the second is not. Nevertheless, the problem is to say why this is so and to impose some further restriction upon inductive reasoning that will permit the first argument and exclude the second.

The obvious alternative suggestion is that ‘grue’-like predicates do not correspond to genuine, purely qualitative properties in the way that ‘green’ and ‘blue’ do, and that this is why inductive arguments involving them are unacceptable. Goodman, however, claims to be unable to make clear sense of this suggestion, pointing out that the relations of formal definability are perfectly symmetrical: ‘grue’ may be defined in terms of ‘green’ and ‘blue’, but ‘green’ can equally well be defined in terms of ‘grue’ and ‘bleen’ (where ‘bleen’ means ‘blue if examined before t and green if examined after t’).
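Goodman’s symmetry point can be made concrete in a short sketch, assuming the cutoff year 2000 mentioned earlier; the function names and the predicate ‘bleen’ (the blue/green mirror of ‘grue’) are standard in discussions of the riddle, but the encoding here is illustrative.

```python
# The grue/bleen symmetry: each pair of predicates is definable from the
# other, so formal definability alone cannot mark 'green' as privileged.
T = 2000  # the cutoff year 't' used in the text

def grue(colour, year):
    # green if examined before t, blue thereafter
    return colour == ("green" if year < T else "blue")

def bleen(colour, year):
    # blue if examined before t, green thereafter
    return colour == ("blue" if year < T else "green")

def green_via_grue(colour, year):
    # 'green' reconstructed from grue/bleen: grue before t, bleen after t
    return grue(colour, year) if year < T else bleen(colour, year)

# The reconstruction agrees with plain 'green' in every case.
for colour in ("green", "blue"):
    for year in (1999, 2001):
        assert green_via_grue(colour, year) == (colour == "green")
```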

The grue paradox demonstrates the importance of categorization. Even though all emeralds in our evidence class are grue, we ought not to infer that all emeralds are grue. For ‘grue’ is unprojectible, and cannot transmit credibility from known to unknown cases. Only projectible predicates are suitable for induction. Goodman considers entrenchment the key to projectibility: having a long history of successful projection, ‘green’ is entrenched; lacking such a history, ‘grue’ is not. A hypothesis is projectible, Goodman suggests, only if its predicates (or suitably related ones) are much better entrenched than its rivals’. Past successes do not guarantee future ones, so induction remains a risky business. The rationale for favouring entrenched predicates is pragmatic. Of the possible projections from our evidence class, the one that fits with past practices enables us to utilize our cognitive resources best. Its prospects of being true are no worse than its competitors’, and its cognitive utility is greater.

Turning to a broader understanding of induction: the term is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative argument, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling us that Fa, Fb, Fc . . ., where a, b, c are all of some kind G, it is inferred that G’s from outside the sample, such as future G’s, will be F, or perhaps that all G’s are F. In this way, having been deceived by this person and that, children may infer that everyone is a deceiver. Different but similar inferences run from the past possession of a property by some object to the same object’s future possession of the same property, or from the constancy of some law-like pattern in events and states of affairs to its future constancy. All objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.

The rational basis of any such inference was challenged by Hume, who believed that induction presupposed belief in the uniformity of nature, but that this belief has no defence in reason, and merely reflects a habit or custom of the mind. Hume was not thereby sceptical about the propriety of induction itself, but only about the role of reason in explaining or justifying it. Trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties for which the inference is plausible (often called projectable properties) from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both common sense and science pay attention to such factors as variations within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on.

Nevertheless, the fundamental problem remains that any experience shows us only events occurring within a very restricted part of a vast spatial and temporal order about which we then come to believe things.

Connected with this is confirmation theory: the study of the measure to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is Gottfried Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the 20th century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific. The principal developments were due to Rudolf Carnap (1891-1970), culminating in his ‘Logical Foundations of Probability’ (1950). Carnap’s idea was that the measure needed would be the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared to the number in which the evidence itself holds: the probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared to the total range of possibilities left by the evidence. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement. It therefore demands that we can put a measure on the ‘range’ of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone.
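Carnap’s idea can be put schematically. The display below is a reconstruction of the familiar textbook form, with m a measure over the logically possible states of affairs (the details of Carnap’s particular choices of measure are omitted):

```latex
c(h, e) \;=\; \frac{m(h \wedge e)}{m(e)}
```

That is, the degree of confirmation of hypothesis h by evidence e is the measure of the possibilities in which both theory and evidence hold, divided by the measure of the possibilities in which the evidence holds. The difficulty just mentioned is precisely that of defining the measure m.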

Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what counts as a plausible extension of scientific knowledge at a given time.

A paradox arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Speaking somewhat loosely, a paradox is a compelling argument from unacceptable premises to an unacceptable conclusion; more strictly, a paradox is specified to be a sentence that is true if and only if it is false. A characteristic object lesson would be: ‘The displayed sentence is false.’

Seeing that this sentence is false if true, and true if false, is easy. A paradox, in either of the senses distinguished, presents an important philosophical challenge. Epistemologists are especially concerned with various paradoxes having to do with knowledge and belief. For example, the Knower paradox is an argument that begins with apparently impeccable premisses about the concepts of knowledge and inference and derives an explicit contradiction. The origin of the reasoning is the ‘surprise examination paradox’: a teacher announces that there will be a surprise examination next week. A clever student argues that this is impossible. ‘The test cannot be on Friday, the last day of the week, because it would not be a surprise: we would know the day of the test on Thursday evening. This means we can also rule out Thursday, for after we learn that no test has been given by Wednesday, we would know the test is on Thursday or Friday, and we would already know that it is not on Friday by the previous reasoning. The remaining days can be eliminated in the same manner.’

This puzzle has over a dozen variants. The first was probably invented by the Swedish mathematician Lennart Ekbom in 1943. Although the first few commentators regarded the reverse-elimination argument as cogent, every writer on the subject since 1950 agrees that the argument is unsound. The controversy has been over the proper diagnosis of the flaw.

Initial analyses of the student’s argument tried to lay the blame on a simple equivocation. Their failure led to more sophisticated diagnoses. The general format has been assimilation to better-known paradoxes. One tradition casts the surprise examination paradox as a self-referential problem, fundamentally akin to the Liar, the paradox of the Knower, or Gödel’s incompleteness theorem. Along these lines, Kaplan and Montague (1960) distilled the following ‘self-referential’ paradox, the Knower. Consider the sentence:

(S) The negation of this sentence is known (to be true).

Suppose that (S) is true. Then its negation is known and hence true. However, if its negation is true, then (S) must be false. Therefore (S) is false, or what comes to the same thing, the negation of (S) is true.

This paradox and its accompanying reasoning are strongly reminiscent of the Liar Paradox, which (in one version) begins by considering the sentence ‘This sentence is false’ and derives a contradiction. Versions of both arguments using axiomatic formulations of arithmetic and Gödel numbering to achieve the effect of self-reference yield important meta-theorems about what can be expressed in such systems. Roughly, these are to the effect that no predicate definable in formalized arithmetic can have the properties we demand of truth (Tarski’s Theorem) or of knowledge (Montague, 1963).

These meta-theorems still leave us with a problem: if we add to these formalized languages predicates intended to express the concepts of knowledge (or truth) and inference, as one might do if a logic of these concepts is desired, then the sentences expressing the leading principles of the Knower Paradox will be true.

Explicitly, the assumptions about knowledge and inference are:

(1) If ‘A’ is known, then A.

(2) (1) is known.

(3) If ‘B’ is correctly inferred from ‘A’, and ‘A’ is known, then ‘B’ is known.

To give an absolutely explicit derivation of the paradox by applying these principles to (S), we must add (contingent) assumptions to the effect that certain inferences have been performed. Still, as we go through the argument of the Knower, these inferences are in fact performed. Even if we can somehow restrict such principles and construct a consistent formal logic of knowledge and inference, the paradoxical argument as expressed in natural language still demands some explanation.
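The derivation can be set out semi-formally. What follows is a reconstruction, not Kaplan and Montague’s own formulation; ‘K’ is a knowledge predicate and corner quotes stand in for Gödel-style self-reference:

```latex
\begin{align*}
&\text{(i)}\quad S \leftrightarrow K(\ulcorner \neg S \urcorner)
  && \text{self-reference}\\
&\text{(ii)}\quad \text{Suppose } S.\ \text{Then } K(\ulcorner \neg S \urcorner),
  \text{ so by principle (1), } \neg S && \text{contradiction}\\
&\text{(iii)}\quad \text{Hence } \neg S,\ \text{correctly inferred from known premises}\\
&\text{(iv)}\quad \text{By principles (2) and (3), } K(\ulcorner \neg S \urcorner)\\
&\text{(v)}\quad \text{So } S \text{ by (i), contradicting (iii)}
\end{align*}
```

The contradiction between (iii) and (v) uses nothing beyond (1)-(3) and the contingent assumptions that the inferences in question have been carried out.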

The usual proposals for dealing with the Liar often have their analogues for the Knower, e.g., that there is something wrong with self-reference, or that knowledge (or truth) is properly a predicate of propositions and not of sentences. The replies that show that some of these are not adequate are often parallel to those for the Liar paradox. In addition, one can try here what seems to be an adequate solution for the Surprise Examination Paradox, namely the observation that ‘new knowledge can drive out old knowledge’, but this does not seem to work on the Knower (Anderson, 1983).

There are a number of paradoxes of the Liar family. The simplest example is the sentence ‘This sentence is false’, which must be false if it is true, and true if it is false. One suggestion is that the sentence fails to say anything, but sentences that fail to say anything are at least not true. In that case, we consider the sentence ‘This sentence is not true’, which, if it fails to say anything, is not true, and hence true after all (this kind of reasoning is sometimes called the strengthened Liar). Other versions of the Liar introduce pairs of sentences, as in a slogan on the front of a T-shirt saying ‘The sentence on the back of this T-shirt is false’, and one on the back saying ‘The sentence on the front of this T-shirt is true’. It is clear that each sentence individually is well formed, and were it not for the other, might have said something true. So any attempt to dismiss the paradox by saying that the sentences involved are meaningless will face problems.

Even so, the two approaches that have some hope of adequately dealing with this paradox are ‘hierarchy’ solutions and ‘truth-value gap’ solutions. According to the first, knowledge is structured into ‘levels’. It is argued that there is not one coherent notion expressed by the verb ‘knows’, but rather a whole series of notions: knows0, knows1, and so on (perhaps into the transfinite). Stated in terms of predicates expressing such ‘ramified’ concepts and properly restricted, (1)-(3) lead to no contradictions. The main objections to this procedure are that the meaning of these levels has not been adequately explained, and that the idea of such subscripts, even implicit, in a natural language is highly counterintuitive. The ‘truth-value gap’ solution takes sentences such as (S) to lack truth-value: they are neither true nor false, because they do not express propositions. This defeats a crucial step in the reasoning used in the derivation of the paradoxes. Kripke (1975) has developed this approach in connection with the Liar, and Asher and Kamp (1986) have worked out some details of a parallel solution to the Knower. The principal objection is that ‘strengthened’ or ‘super’ versions of the paradoxes tend to reappear when the solution itself is stated.

Since the paradoxical deduction uses only the properties (1)-(3), and since the argument is formally valid, any notions that satisfy these conditions will lead to a paradox. Thus, Grim (1988) notes that (1)-(3) may be read as concerning the predicate ‘is known by an omniscient God’, and concludes that there is no coherent single notion of omniscience. Thomason (1980) observes that with some different conditions, analogous reasoning about belief can lead to paradoxical consequences.

Overall, it looks as if we should conclude that knowledge and truth are ultimately intrinsically ‘stratified’ concepts. It would seem that we must simply accept the fact that these (and similar) concepts cannot be assigned any one fixed level, finite or infinite. Still, the meaning of this idea certainly needs further clarification.

Famous families of paradoxes include the ‘semantic paradoxes’ and ‘Zeno’s paradoxes’. At the beginning of the 20th century, Russell’s paradox and other set-theoretical paradoxes led to the complete overhaul of the foundations of set theory, while the ‘Sorites paradox’ has led to the investigation of the semantics of vagueness and of fuzzy logics.

There is, however, the question of to what extent analysis can be informative. This is the question that gives rise to what philosophers have traditionally called ‘the’ paradox of analysis. Thus, consider the following proposition:

(1) To be an instance of knowledge is to be an instance of justified true belief not essentially grounded in any falsehood.

(1), if true, illustrates an important type of philosophical analysis. For convenience of exposition, I will assume (1) is a correct analysis. The paradox arises from the fact that if the concept of justified true belief not essentially grounded in any falsehood is the analysans of the concept of knowledge, it would seem that they are the same concept, and hence that:

(2) To be an instance of knowledge is to be an instance of

knowledge would also have to be true and, in fact, would have to be the same proposition as (1). But then how can (1) be informative when (2) is not? This is what is called the first paradox of analysis. Classical writings on analysis suggest a second paradox of analysis (Moore, 1942):

(3) An analysis of the concept of being a brother is that to be a

brother is to be a male sibling. If (3) is true, it would seem that the concept of being a brother would have to be the same concept as the concept of being a male sibling, and that:

(4) An analysis of the concept of being a brother is that to be a brother is to be a brother

would also have to be true and, in fact, would have to be the same proposition as (3). Yet (3) is true and (4) is false.

Both these paradoxes rest upon the assumptions that analysis is a relation between concepts, rather than one involving entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are the same concept. Both these assumptions are explicit in Moore, but some of Moore’s remarks hint at a solution: that a statement of an analysis is a statement partly about the concept involved and partly about the verbal expression used to express it. He says he thinks a solution of this sort is bound to be right, but fails to suggest one because he cannot see a way in which the analysis can be even partly about the expression (Moore, 1942).

Elsewhere, one such way has been proposed as a solution to the second paradox: explicating (3) as:

(5) An analysis is given by saying that the verbal expression ‘χ is a brother’ expresses the same concept as is expressed by the conjunction of the verbal expressions ‘χ is male’ when used to express the concept of being male and ‘χ is a sibling’ when used to express the concept of being a sibling. (Ackerman, 1990).

An important point about (5) is as follows. Stripped of its philosophical jargon (‘analysis’, ‘concept’, ‘χ is a . . . ‘), (5) seems to state the sort of information generally stated in a definition of the verbal expression ‘brother’ in terms of the verbal expressions ‘male’ and ‘sibling’, where this definition is designed to draw upon listeners’ antecedent understanding of the verbal expressions ‘male’ and ‘sibling’, and thus to tell listeners what the verbal expression ‘brother’ really means, instead of merely providing the information that two verbal expressions are synonymous without specifying the meaning of either one. Thus, this solution to the second paradox seems to make the sort of analysis that gives rise to the paradox a matter of specifying the meaning of a verbal expression in terms of separate verbal expressions already understood, and saying how the meanings of these separate, already-understood verbal expressions are combined. This corresponds to Moore’s intuitive requirement that an analysis should both specify the constituent concepts of the analysandum and tell how they are combined. But is this all there is to philosophical analysis?

To answer this question, we must note that, in addition to there being two paradoxes of analysis, there are two types of analysis that are relevant here. (There are also other types of analysis, such as reformatory analysis, where the analysans is intended to improve on and replace the analysandum. But since reformatory analysis involves no commitment to conceptual identity between analysans and analysandum, it does not generate a paradox of analysis and so will not concern us here.) One way to recognize the difference between the two types of analysis concerning us here is to focus on the difference between the two paradoxes. This can be done by means of the Frege-inspired sense-individuation condition, which is the condition that two expressions have the same sense if and only if they can be interchanged ‘salva veritate’ whenever used in propositional attitude contexts. If the expressions for the analysans and the analysandum in (1) met this condition, (1) and (2) would not raise the first paradox; but the second paradox arises regardless of whether the expressions for the analysans and the analysandum meet this condition. The second paradox is a matter of the failure of such expressions to be interchangeable salva veritate in sentences involving such contexts as ‘an analysis is given thereof’. Thus, a solution (such as the one offered above) that is aimed only at such contexts can solve the second paradox. This is clearly not so for the first paradox, however, which will apply to all pairs of propositions expressed by sentences in which expressions for pairs of analysantia and analysanda raising the first paradox are interchanged. For example, consider the following proposition:

(6) Mary knows that some cats lack tails.

It is possible for John to believe (6) without believing:

(7) Mary has justified true belief, not essentially grounded in any falsehood, that some cats lack tails.

Yet this possibility clearly does not mean that the proposition that Mary knows that some cats lack tails is partly about language.

One approach to the first paradox is to argue that, despite the apparent epistemic inequivalence of (1) and (2), the concept of justified true belief not essentially grounded in any falsehood is still identical with the concept of knowledge (Sosa, 1983). Another approach is to argue that, in the sort of analysis raising the first paradox, the analysans and analysandum are concepts that are different but that bear a special epistemic relation to each other. Elsewhere, such an approach has been developed, with the suggestion that this analysans-analysandum relation has the following facets:

(a) The analysans and analysandum are necessarily coextensive, i.e., necessarily every instance of one is an instance of the other.

(b) The analysans and analysandum are knowable a priori to be coextensive.

(c) The analysandum is simpler than the analysans, a condition whose necessity is recognized in classical writings on analysis, such as Langford, 1942.

(d) The analysans does not have the analysandum as a constituent.

Condition (d) rules out circularity. But since many valuable quasi-analyses are partly circular, e.g., knowledge is justified true belief supported by known reasons not essentially grounded in any falsehood, it seems best to distinguish between full analysis, for which (d) is a necessary condition, and partial analysis, for which it is not.

These conditions, while necessary, are clearly insufficient. The basic problem is that they apply to many pairs of concepts that do not seem closely enough related epistemologically to count as analysans and analysandum, such as the concept of being 6 and the concept of being the fourth root of 1296. Accordingly, a solution should draw upon what actually seems epistemologically distinctive about analyses of the sort under consideration, which is a certain way they can be justified. This is by the philosophical example-and-counterexample method, which in general terms goes as follows. ‘J’ investigates the analysis of K’s concept ‘Q’ (where ‘K’ can, but need not, be identical to ‘J’) by setting ‘K’ a series of armchair thought experiments, i.e., presenting ‘K’ with a series of simple described hypothetical test cases and asking ‘K’ questions of the form ‘If such-and-such were the case, would this count as a case of Q?’ ‘J’ then contrasts the descriptions of the cases to which ‘K’ answers affirmatively with the descriptions of the cases to which ‘K’ does not, and ‘J’ generalizes upon these descriptions to arrive at the concepts (if possible not including the analysandum) and their mode of combination that constitute the analysans of K’s concept ‘Q’. Since ‘J’ need not be identical with ‘K’, there is no requirement that ‘K’ himself be able to perform this generalization, to recognize its result as correct, or even to understand the analysans that is its result. This is reminiscent of Walton’s observation that one can simply recognize a bird as a swallow without realizing just what features of the bird (beak, wing configuration, etc.) form the basis of this recognition. (The philosophical significance of this way of recognizing is discussed in Walton, 1972.) ‘K’ answers the questions based solely on whether the described hypothetical cases strike him as cases of ‘Q’. ‘J’ observes certain strictures in formulating the cases and questions. He makes the cases as simple as possible, to minimize the possibility of confusion and to minimize the likelihood that ‘K’ will draw upon his philosophical theories (or quasi-philosophical views, if he is philosophically unsophisticated) in answering the questions. If two cases give conflicting results, the conflict should, other things being equal, be resolved in favour of the simpler case. ‘J’ makes the series of described cases wide-ranging and varied, with the aim of having it be a complete series, where a series is complete if and only if no case that is omitted is such that, if included, it would change the analysis arrived at. ‘J’ does not, of course, use as a test-case description anything complicated and general enough to express the analysans. There is no requirement that the described hypothetical test cases be formulated only in terms of what can be observed. Moreover, using described hypothetical situations as test cases enables ‘J’ to frame the questions in such a way as to rule out extraneous background assumptions to a degree; thus, even if ‘K’ correctly believes that all and only P’s are R’s, the question of whether the concepts of P, R, or both enter the analysans of his concept ‘Q’ can be investigated by asking him such questions as ‘Suppose (even if it seems preposterous to you) that you were to find out that there was a P that was not an R. Would you still consider it a case of Q?’
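The partition-and-generalize shape of this method can be caricatured in code. What follows is a hypothetical sketch only: the function name, the representation of cases as feature sets, and the toy ‘brother’ examples are invented for illustration, and the sketch deliberately simplifies away everything philosophically interesting (in particular, modes of combination).

```python
# Hypothetical sketch of the example-and-counterexample method.
# J presents K with described test cases (here, sets of features) and
# records K's verdict on whether each counts as a case of Q. J then
# generalizes: the candidate analysans is the set of features common
# to all affirmative cases, provided no rejected case satisfies it.

def candidate_analysans(cases):
    """cases: list of (features, verdict) pairs; features a frozenset."""
    positives = [f for f, v in cases if v]
    negatives = [f for f, v in cases if not v]
    if not positives:
        return None
    common = frozenset.intersection(*positives)
    # If some case K rejects nonetheless satisfies the candidate,
    # the generalization fails and the series of cases must be widened.
    if any(common <= neg for neg in negatives):
        return None
    return common

# Toy run for 'brother': K affirms the male siblings, denies the rest.
cases = [
    (frozenset({"male", "sibling"}), True),
    (frozenset({"male"}), False),
    (frozenset({"sibling"}), False),
    (frozenset({"male", "sibling", "tall"}), True),
]
print(candidate_analysans(cases))  # the features 'male' and 'sibling'
```

Note how the toy run already respects one of the strictures above: the cases are varied enough that the accidental feature ‘tall’ drops out of the intersection.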

Taking all this into account, the fifth necessary condition for this sort of analysans-analysandum relation is as follows:

(e) If ‘S’ is the analysans of ‘Q’, the proposition that necessarily all and only instances of ‘S’ are instances of ‘Q’ can be justified by generalizing from intuitions about the correct answers to questions of the sort indicated about a varied and wide-ranging series of simple described hypothetical situations.

An antinomy arises when we are able to argue for, or demonstrate, both a proposition and its contradictory. Roughly speaking, a contradictory of a proposition ‘p’ is one that can be expressed in the form ‘not-p’, or, if ‘p’ can be expressed in the form ‘not-q’, then a contradictory is one that can be expressed in the form ‘q’. Thus, e.g., if ‘p’ is 2 + 1 = 4, then 2 + 1 ≠ 4 is the contradictory of ‘p’, for

2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). If ‘p’ is 2 + 1 ≠ 4, then 2 + 1 = 4 is a contradictory of ‘p’, since 2 + 1 ≠ 4 can itself be expressed in the form not (2 + 1 = 4). In general, mutually contradictory propositions can be expressed in the forms ‘r’ and ‘not-r’. The Principle of Contradiction says that mutually contradictory propositions cannot both be true and cannot both be false. Thus, by this principle, since if ‘p’ is true, ‘not-p’ is false, no proposition ‘p’ can be at once true and false (otherwise both ‘p’ and its contradictory would be true, and both would be false). In particular, for any predicate ‘P’ and object ‘χ’, it cannot be that ‘P’ is at once true of ‘χ’ and false of ‘χ’. This is the classical formulation of the principle of contradiction. In an antinomy, however, we cannot at present fault either demonstration. We would hope eventually ‘to solve the antinomy’ by managing, through careful thinking and analysis, to fault one or both of the demonstrations.

Many paradoxes are an easy source of antinomies. For example, Zeno gave some famous logical-cum-mathematical arguments that might be interpreted as demonstrating that motion is impossible. But our eyes, as it were, demonstrate motion (exhibit moving things) all the time. Where did Zeno go wrong? Where do our eyes go wrong? If we cannot readily answer at least one of these questions, then we are in antinomy. In the ‘Critique of Pure Reason’, Kant gave demonstrations of the same kind (in the Zeno example they were obviously not of the same kind) of both, e.g., that the world has a beginning in time and space, and that the world has no beginning in time or space. He argues that both demonstrations are at fault because they proceed on the basis of ‘pure reason’ unconditioned by sense experience.

At this point we turn to the theory of experience. It is not possible to define experience in an illuminating way; however, we know what experiences are through acquaintance with some of our own, e.g., a visual experience of an after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface, rough or smooth, or might be part of a dream, or the product of a vivid sensory imagination). The essential feature of experience is that it feels a certain way, that there is something that it is like to have it. We may refer to this feature of an experience as its ‘character’.

Another core feature of the sorts of experiences with which we are concerned is that they have representational ‘content’. (Unless otherwise indicated, ‘experience’ will be reserved for experiences with such content.) The most obvious cases of experiences with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but we do so more commonly by means of perceptual verbs combined with noun phrases specifying their contents, as in ‘Macbeth saw a dagger’. This is, however, ambiguous between the perceptual claim ‘There was a (material) dagger in the world that Macbeth perceived visually’ and ‘Macbeth had a visual experience of a dagger’ (the reading with which we are concerned, as it might be afforded by imagination or hallucination).

As in the case of other mental states and events with content, it is important to distinguish between the properties that an experience ‘represents’ and the properties that it ‘possesses’. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property that it possesses, and it may even do so in virtue of possessing that property, as when a rapidly changing (complex) experience represents something as changing rapidly. However, this is the exception and not the rule.

Which properties can be directly represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., colour and shape in the case of visual experience, and apparent shape, surface texture, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes the pure data of experience to serve as logically certain foundations for knowledge. On this view the immediate objects of perceptual awareness are sense-data, such items as colour patches and shapes, which are usually supposed distinct from the surfaces of physical objects. Qualities of sense-data are supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain, and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual field when conditions of perception change, whereas physical objects remain constant.

Others, who do not think that this wish can be satisfied and who are more impressed with the role of experience in providing animals with ecologically significant information about the world around them, claim that sense experiences represent properties, characteristics and kinds that are much richer and more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire; we do not smell only odours, but also food and filth. There is no space here to examine the factors relevant to deciding between these alternatives. Yet this suggests that character and content are not really distinct, and that there is a close tie between them. For one thing, the relative complexity of the character of a sense experience places limitations upon its possible content, e.g., a tactile experience of something touching one’s left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Moreover, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences, e.g., the sort of gustatory experience that we have when eating chocolate would not represent chocolate unless it was normally caused by chocolate. Granting a contingent tie between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.

Character and content are none the less irreducibly different, for the following reasons. (a) There are experiences that completely lack content, e.g., certain bodily pleasures. (b) Not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance. (c) Experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different. (d) The content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may come to have the content ‘singing bird’ only after the subject has learned something about birds.

According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one ‘phenomenological’ and the other ‘semantic’.

In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us, be it an individual thing, an event, or a state of affairs.

The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following. (i) Simple attributions of experience, e.g., ‘Rod is experiencing something that is not really square but appears square’, seem to be relational. (ii) We appear to refer to objects of experience and to attribute properties to them, e.g., ‘The after-image that John experienced was certainly odd’. (iii) We appear to quantify over objects of experience, e.g., ‘Macbeth saw something that his wife did not see’.

The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data -private mental entities that actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property, e.g., redness, without representing it as having any subordinate determinate property, e.g., any specific shade of red, a sense-datum may have a determinable property without having any determinate property subordinate to it. Even more disturbing, sense-data may have contradictory properties, since experiences can have contradictory contents. A case in point is the waterfall illusion: if you stare at a waterfall for a minute and then immediately fixate on a nearby rock, you are likely to have an experience of the rock’s moving upward while it remains in the same place. The sense-datum theorist must either deny that there are such experiences or admit contradictory objects.

These problems can be avoided by treating objects of experience as properties. This, however, fails to do justice to the appearances, for experience seems not to present us with bare properties but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience with objects of perception in the case of experiences that constitute perception.

According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences none the less appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experiences, as in the work of G. E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. Sense-datum theorists hold that objects of perception (of which we are ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’). Meinongians, however, may treat objects of perception as existing objects of experience. Still, most philosophers will feel that the Meinongian’s acceptance of impossible objects is too high a price to pay for these benefits.

A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)

In view of the above problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but is none the less answerable. The seemingly relational structure of attributions of experience is a challenge dealt with below in connection with the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves and quantification over experiences tacitly typed according to content. Thus, ‘The after-image that John experienced was green’ becomes ‘John’s after-image experience was an experience of green’, and ‘Macbeth saw something that his wife did not see’ becomes ‘Macbeth had a visual experience that his wife did not have’.

Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions, e.g., Susy’s experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand, or, if she does not acquire this belief, with a disposition to acquire it that has somehow been blocked.

This position has attractions. It does full justice to the cognitive content of experience, and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is completely undermined by its failure to accommodate the fact that experiences have a felt character that cannot be reduced to their content, as noted above.

The adverbial theory is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience that does not require objects of experience. Unfortunately, the oddities of explicit adverbializations of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound intuitions, and there is reason to believe that an effective development of the theory (which is only hinted at here) is possible.

The relevant intuitions are (1) that when we say that someone is experiencing ‘an A’, or has an experience ‘of an A’, we are using this content-expression to specify the type of thing that the experience is especially apt to fit, (2) that doing this is a matter of saying something about the experience itself (and perhaps about the normal causes of like experiences), and (3) that there is no good reason to suppose that doing this involves describing an object of which the experience is an experience. Thus the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.

Perhaps the most important criticism of the adverbial theory is the ‘many-property problem’, according to which the theory does not have the resources to distinguish between, e.g.,

(1) Frank has an experience of a brown triangle

and:

(2) Frank has an experience of brown and an experience of a triangle.

which is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience that is both brown and triangular, while that of (2) allows for the possibility of two objects of experience, one brown and the other triangular. Note, however, that (1) is equivalent to:

(1*) Frank has an experience of something’s being both brown and triangular.

And (2) is equivalent to:

(2*) Frank has an experience of something’s being brown and an experience of something’s being triangular,

and the difference between these can be explained quite simply in terms of logical scope without invoking objects of experience. The adverbialist may use this to answer the many-property problem by arguing that the phrase ‘a brown triangle’ in (1) does the same work as the clause ‘something’s being both brown and triangular’ in (1*). This is perfectly compatible with the view that it also has the ‘adverbial’ function of modifying the verb ‘has an experience of’, for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there be something both brown and triangular before Frank).

A final position that should be mentioned is the state theory, according to which a sense experience of an ‘A’ is an occurrent, non-relational state of the kind that the subject would be in when perceiving an ‘A’. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Here it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt the adverbial theory as a means of developing their intuitions.

Sense-data, taken literally, are whatever is given by the senses. But in response to the question of what exactly is so given, sense-data theories posit private showings in the consciousness of the subject. In the case of vision this would be a kind of inner picture show which itself only indirectly represents aspects of the external world. The view has been widely rejected as implying that we really only see extremely thin coloured pictures interposed between our mind’s eye and reality. Modern approaches to perception tend to reject any conception of the eye as a camera or lens, simply responsible for producing private images, and stress the active life of the subject in the world as the determinant of experience.

Nevertheless, the argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument that must be distinguished carefully. Some of these distinctions centre on the content of the premisses (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). Let us begin by distinguishing the importantly different versions of direct realism which one might take to be vulnerable to familiar facts about the possibility of perceptual illusion.

A crude statement of direct realism might go as follows: in perception, we sometimes directly perceive physical objects and their properties; we do not always perceive physical objects by perceiving something else, e.g., a sense-datum. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are not direct realists would admit that it is a mistake to describe people as actually ‘perceiving’ something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-datum theorists should want. At least some of the philosophers who object to direct realism would prefer to express what they are objecting to in terms of a technical (and philosophically controversial) concept such as ‘acquaintance’. Using such a notion, we could define direct realism this way: in ‘veridical’ experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects. 
The expressions ‘knowledge by acquaintance’ and ‘knowledge by description’, and the distinction they mark between knowing ‘things’ and knowing ‘about’ things, are generally associated with Bertrand Russell (1872-1970). Russell held that scientific philosophy required analysing many objects of belief as ‘logical constructions’ or ‘logical fictions’, and the programme of analysis that this inaugurated dominated the subsequent philosophy of logical atomism and the work of other philosophers. In Russell’s ‘The Analysis of Mind’, the mind itself is treated, in a fashion reminiscent of Hume, as no more than the collection of neutral perceptions or sense-data that make up the flux of conscious experience and that, looked at another way, also make up the external world (neutral monism); ‘An Inquiry into Meaning and Truth’ (1940) represents a more empirical approach to the problem. Yet philosophers have perennially investigated this and related distinctions using varying terminology.

This is a distinction in our ways of knowing things, highlighted by Russell and forming a central element in his philosophy after the discovery of the theory of ‘definite descriptions’. A thing is known by acquaintance when there is direct experience of it. It is known by description if it can only be described as a thing with such-and-such properties. In everyday parlance, I might know my spouse and children by acquaintance, but know someone as ‘the first person born at sea’ only by description. However, for a variety of reasons Russell shrinks the area of things that can be known by acquaintance until eventually only current experience, perhaps my own self, and certain universals or meanings qualify; anything else is known only as the thing that has such-and-such qualities.

Because one can interpret the relation of acquaintance or awareness as one that is not ‘epistemic’, i.e., not a kind of propositional knowledge, it is important to distinguish the above views, read as ontological theses, from a view one might call ‘epistemological direct realism’: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. The realism of the view lies in its claim that these objects exist independently of any mind that might perceive them; it thereby rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being ‘direct’ realism rules out those views defended under the rubric of ‘critical realism’ or ‘representative realism’, on which there is some non-physical intermediary -usually called a ‘sense-datum’ or a ‘sense impression’ -that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is ‘immediately’, rather than ‘mediately’, perceived. What relevance does illusion have for these two forms of direct realism?

The fundamental premiss of the argument from illusion is the thesis that things can appear to be other than they are. Thus, for example, a straight stick immersed in water looks bent; a penny viewed from a certain perspective looks elliptical; something yellow placed under red fluorescent light looks red. In all of these cases, one version of the argument goes, it is implausible to maintain that what we are directly acquainted with is the real nature of the object in question. Indeed, it is hard to see how we can be said to be aware of the physical object at all. In the above illusions the things we were aware of actually were bent, elliptical and red, respectively. But, by hypothesis, the physical objects lacked these properties. Thus, we were not aware of the physical objects themselves.

So far, if the argument is relevant to any of the direct realisms distinguished above, it seems relevant only to the claim that in all sense experience we are directly acquainted with parts or constituents of physical objects. After all, even if in illusion we are not acquainted with physical objects, their surfaces, or their constituents, why should we conclude anything about the nature of our relation to the physical world in veridical experience?

We are supposed to discover the answer to this question by noticing the similarities between illusory experience and veridical experience and by reflecting on what makes illusion possible at all. Illusion can occur because the nature of the illusory experience is determined not just by the nature of the object perceived, but also by other conditions, both external and internal. But all of our sensations are subject to these causal influences, and it would be gratuitous and arbitrary to select from the indefinitely many and subtly different perceptual experiences some special ones as those that get us in touch with the ‘real’ nature of the physical world. Red fluorescent light affects the way things look, but so does sunlight. Water refracts light, but so does air. We have no unmediated access to the external world.

Still, why should we conclude that we are aware of something other than a physical object in experience? Why should we not conclude that to be aware of a physical object is just to be appeared to by that object in a certain way? In its best-known form the adverbial theory proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example,

(A) Rod is experiencing a coloured square.

is rewritten as:

Rod is experiencing (coloured square)-ly.

This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (A) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbialization of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception) coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience that is being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier and, in particular, as a modifier of a verb. It is, if you like, a special kind of adverb at the semantic level.

At this point, it might be profitable to move from considering the possibility of illusion to considering the possibility of hallucination. Instead of comparing paradigmatic veridical perception with illusion, let us compare it with complete hallucination. For any experience or sequence of experiences we take to be veridical, we can imagine qualitatively indistinguishable experiences occurring as part of a hallucination. For those who like their philosophical arguments spiced with a touch of science, we can imagine that our brains were surreptitiously removed in the night and, unbeknown to us, are being stimulated by a neurophysiologist so as to produce the very sensations that we would normally associate with a trip to the Grand Canyon. If we now ask what we are aware of in this complete hallucination, it is obvious that we are not aware of physical objects, their surfaces, or their constituents. Nor can we even construe the experience as one of an object’s appearing to us in a certain way. It is, after all, a complete hallucination, and the objects we take to exist before us are simply not there. But if we compare the hallucinatory experience with the qualitatively indistinguishable veridical experience, must we not conclude that it would be ad hoc to suppose that in veridical experience we are aware of something radically different from what we are aware of in hallucinatory experience? Again, it might help to reflect on our belief that the immediate cause of hallucinatory experience and veridical experience might be the very same brain event, and it is surely implausible to suppose that the effects of this same cause are radically different -acquaintance with physical objects in the case of veridical experience; something else in the case of hallucinatory experience.

This version of the argument from hallucination would seem to address straightforwardly the ontological versions of direct realism. The argument is supposed to convince us that the ontological analysis of sensation in both veridical and hallucinatory experience should give us the same results, but in the hallucinatory case there is no plausible physical object, constituent of a physical object, or surface of a physical object with which to identify the object of experience. With an additional premiss we would also get an argument against epistemological direct realism. That premiss is that in a vivid hallucinatory experience we might have precisely the same justification for believing (falsely) what we do about the physical world as we have in the analogous, phenomenologically indistinguishable, veridical experience. But our justification for believing that there is a table before us in the course of a vivid hallucination of a table is surely not non-inferential in character. It certainly is not, if non-inferential justification is supposed to consist in some unproblematic access to the fact that makes our belief true -by hypothesis the table does not exist. But if the justification that the hallucinatory experience gives us is the same as the justification we get from the parallel veridical experience, then we should not describe the veridical experience as giving us non-inferential justification for believing in the existence of physical objects. In both cases we should say that we believe what we do about the physical world on the basis of what we know directly about the character of our experience.

In this brief space, I can only sketch some of the objections that might be raised against arguments from illusion and hallucination. That being said, let us begin with a criticism that accepts most of the presuppositions of the arguments. Even if the possibility of hallucination establishes that in some experience we are not acquainted with constituents of physical objects, it is not clear that it establishes that we are never acquainted with a constituent of physical objects. Suppose, for example, that we decide that in both veridical and hallucinatory experience we are acquainted with sense-data. At least some philosophers have tried to identify physical objects with ‘bundles’ of actual and possible sense-data.

To establish inductively that sensations are signs of physical objects one would have to observe a correlation between the occurrence of certain sensations and the existence of certain physical objects. But to observe such a correlation in order to establish a connection, one would need independent access to physical objects and, by hypothesis, this one cannot have. If one further adopts the verificationist’s stance that the ability to comprehend is parasitic on the ability to confirm, one can easily be driven to Hume’s conclusion:

Let us chase our imagination to the heavens, or to the utmost limits of the universe; we never really advance a step beyond ourselves, nor can conceive any kind of existence, but those perceptions, which have appear’d in that narrow compass. This is the universe of the imagination, nor have we any idea but what is there produced. (Hume, 1739-40, pp. 67-8)

If one reaches such a conclusion but wants to maintain the intelligibility and verifiability of the assertion about the physical world, one can go either the idealistic or the phenomenalistic route.

On this view, hallucinatory experience is non-veridical precisely because the sense-data one is acquainted with in hallucination do not bear the appropriate relations to other actual and possible sense-data. But if such a view were plausible, one could agree that one is acquainted with the same kind of thing in veridical and non-veridical experience but insist that there is still a sense in which in veridical experience one is acquainted with constituents of a physical object.

A different sort of objection to the argument from illusion or hallucination concerns its use in drawing sceptical conclusions not stressed in the above discussion. I mention this objection partly to underscore an important feature of the argument. At least some philosophers (Hume, for example) have used the rejection of direct realism as a step on the road to an argument for general scepticism with respect to the physical world. Once one abandons epistemological direct realism, one faces an uphill battle in indicating how one can legitimately make the inference from sensation to physical objects. But philosophers who appeal to the existence of illusion and hallucination to develop an argument for scepticism can be accused of advancing an epistemically self-defeating argument. One could justifiably infer sceptical conclusions from the existence of illusion and hallucination only if one justifiably believed that such experiences exist; but if one is justified in believing that illusion exists, one must be justified in believing at least some facts about the physical world (for example, that straight sticks look bent in water). The key point to stress in replying to such arguments is that, strictly speaking, the philosophers in question need only appeal to the ‘possibility’ of vivid illusion and hallucination. Although it would have been psychologically more difficult to come up with arguments from illusion and hallucination if we did not believe that we actually had such experiences, I take it that most philosophers would argue that the possibility of such experiences is enough to establish difficulties with direct realism. Indeed, if one looks carefully at the argument from hallucination discussed earlier, one sees that it nowhere makes any claims about actual cases of hallucinatory experience.

Another reply to the attack on epistemological direct realism focuses on the implausibility of claiming that there is any process of ‘inference’ wrapped up in our beliefs about the physical world. Even if it is possible to give a phenomenological description of the subjective character of sensation, doing so requires a special sort of skill that most people lack. Our perceptual beliefs about the physical world are surely direct, at least in the sense that they are unmediated by any sort of conscious inference from premisses describing something other than a physical object. The appropriate reply to this objection, however, is simply to acknowledge the relevant phenomenological fact and point out that the philosopher attacking epistemological direct realism is attacking a claim about the nature of our justification for believing propositions about the physical world. Such philosophers need make no claim at all about the causal genesis of such beliefs.

As mentioned, proponents of the arguments from illusion and hallucination have often intended them to establish the existence of sense-data, and many philosophers have attacked the so-called sense-datum inference presupposed in some statements of the argument. When the stick looked bent, the penny looked elliptical and the yellow object looked red, the sense-datum theorist wanted to infer that there was something bent, elliptical and red, respectively. But such an inference is surely suspect. Usually, we do not infer that because something appears to have a certain property, there is something that has that property. If I say that Jones looks like a doctor, I surely would not want anyone to infer that there must actually be someone there who is a doctor. In assessing this objection, it will be important to distinguish different uses of words like ‘appears’ and ‘looks’. At least sometimes, to say that something looks F is merely to express a tentative commitment to its being F, and the sense-datum inference from an F ‘appearance’ in this sense to an actual F would be hopeless. However, we also use the ‘appears’/‘looks’ terminology to describe the phenomenological character of our experience, and the inference might be more plausible when the terms are used this way. Still, it does seem that the arguments from illusion and hallucination will not by themselves constitute strong evidence for the sense-datum theory. Even if one concludes that there is something common to both the hallucination of a red thing and a veridical visual experience of a red thing, one need not describe the common constituent as an awareness of something red. The adverbial theorist would prefer to construe the common experiential state as ‘being appeared to redly’, a technical description intended only to convey the idea that the state in question need not be analysed as relational in character. 
Those who opt for an adverbial theory of sensation need to make good the claim that their artificial adverbs can be given a sense that is not parasitic upon an understanding of the adjectives transformed into adverbs. Still other philosophers might try to reduce the common element in veridical and non-veridical experience to some kind of intentional state, more like belief or judgement. The idea here is that the only thing common to the two experiences is the fact that in both one spontaneously takes there to be present an object of a certain kind.

The objections so far considered are all stated within the general framework presupposed by proponents of the arguments from illusion and hallucination. A great many contemporary philosophers, however, are uncomfortable with the very intelligibility of the concepts needed even to make sense of the theories under attack. Thus at least some who object to the argument from illusion do so not because they defend direct realism; rather, they think there is something confused about all this talk of direct awareness or acquaintance. Contemporary externalists, for example, usually insist that we understand epistemic concepts by appeal to nomological connections. On such a view the closest thing to direct knowledge would probably be knowledge that is reliably produced without the mediation of other beliefs. If we understand direct knowledge this way, it is not clear how the phenomena of illusion and hallucination would be relevant to the claim that, on at least some occasions, our judgements about the physical world are reliably produced by processes that do not take as their input beliefs about something else.

The expressions ‘knowledge by acquaintance’ and ‘knowledge by description’, and the distinction they mark between knowing ‘things’ and knowing ‘about’ things, are now generally associated with Bertrand Russell. However, John Grote and Hermann von Helmholtz had earlier and independently marked the same distinction, and William James adopted Grote’s terminology in his investigation of the distinction. Philosophers have perennially investigated this and related distinctions using varying terminology. Grote introduced the distinction by noting that natural languages ‘distinguish between these two applications of the notion of knowledge, the one being of the Greek γνῶναι, ‘noscere’, ‘kennen’, ‘connaître’, the other being ‘wissen’, ‘savoir’ (Grote, 1865). On Grote’s account, the distinction is a matter of degree, and there are three dimensions of variability: epistemic, causal and semantic.

We know things by experiencing them, and knowledge of acquaintance (Russell changed the preposition to ‘by’) is epistemically prior to and has a relatively higher degree of epistemic justification than knowledge about things. Indeed, sensation has ‘the one great value of trueness or freedom from mistake’ (1900, p. 206).

A thought (using that term broadly, to mean any mental state) constituting knowledge of acquaintance with a thing is more or less causally proximate to sensations caused by that thing, while a thought constituting knowledge about the thing is more or less distant causally, being separated from the thing and experience of it by processes of attention and inference. At the limit, if a thought is maximally of the acquaintance type, it is the first mental state occurring in a perceptual causal chain originating in the object to which the thought refers, i.e., it is a sensation. The things presented to us in sensation, and of which we have knowledge of acquaintance, include ordinary objects in the external world, such as the sun.

Grote contrasted the imaginative thoughts involved in knowledge of acquaintance with things with the judgements involved in knowledge about things, suggesting that the latter but not the former are contentual, representing a specified state of affairs. Elsewhere, however, he suggested that every thought capable of constituting knowledge of or about a thing involves a form, idea, or what we might call conceptual propositional content, referring the thought to its object. Whether contentual or not, thoughts constituting knowledge of acquaintance with a thing are relatively indistinct, although this indistinctness does not imply incommunicability. On the other hand, thoughts constituting knowledge about things are relatively distinct, as a result of ‘the application of notice or attention’ to the ‘confusion or chaos’ of sensation (1900). Grote did not have an explicit theory of reference, the relation by which a thought is ‘of’ or ‘about’ a specific thing. Nor did he explain how thoughts can be more or less indistinct.

Helmholtz held unequivocally that all thoughts capable of constituting knowledge, whether ‘knowledge that has to do with Notions’ (Wissen) or ‘mere familiarity with phenomena’ (Kennen), are judgements or, we may say, have conceptual propositional contents. Where Grote saw a difference between distinct and indistinct thoughts, Helmholtz found a difference between precise judgements that are expressible in words and equally precise judgements that, in principle, are not expressible in words, and so are not communicable (Helmholtz, 1962). As it happened, James was influenced by Helmholtz and, especially, by Grote (James, 1975). Adopting the latter’s terminology, James agreed with Grote that the distinction between knowledge of acquaintance with things and knowledge about things involves a difference in the degree of vagueness or distinctness of thoughts, though he, too, said little to explain how such differences are possible. At one extreme is knowledge of acquaintance with people and things, and with sensations of colour, flavour, spatial extension, temporal duration, effort and perceptible difference, unaccompanied by knowledge about these things. Such pure knowledge of acquaintance is vague and inexplicit. Movement away from this extreme, by a process of notice and analysis, yields a spectrum of less vague, more explicit thoughts constituting knowledge about things.

All the same, the distinction was not merely a relative one for James, as he was more explicit than Grote in not imputing content to every thought capable of constituting knowledge of or about things. At the extreme where a thought constitutes pure knowledge of acquaintance with a thing, there is a complete absence of conceptual propositional content in the thought, which is a sensation, feeling or percept; this absence renders the thought incommunicable. James’s reasons for positing an absolute discontinuity between pure knowledge of acquaintance and knowledge about things seem to have been that any theory adequate to the facts about reference must allow that some reference is not conceptually mediated, that conceptually unmediated reference is necessary if there are to be judgements at all about things and, especially, if there are to be judgements about relations between things, and that any theory faithful to the common person’s ‘sense of life’ must allow that some things are directly perceived.

James made a genuine advance over Grote and Helmholtz by analysing the reference relation holding between a thought and the specific thing of or about which it is knowledge. In fact, he gave two different analyses. On both analyses, a thought constituting knowledge about a thing refers to and is knowledge about ‘a reality, whenever it actually or potentially ends in’ a thought constituting knowledge of acquaintance with that thing (1975). The two analyses differ in their treatments of knowledge of acquaintance. On James’s first analysis, reference in both sorts of knowledge is mediated by causal chains. A thought constituting pure knowledge of acquaintance with a thing refers to and is knowledge of ‘whatever reality it directly or indirectly operates on and resembles’ (1975). The concepts of a thought ‘operating on’ a thing or ‘terminating in’ another thought are causal, where Grote had found teleology and final causes. On James’s later analysis, the reference involved in knowledge of acquaintance with a thing is direct: a thought constituting knowledge of acquaintance with a thing either is that thing or has that thing as a constituent, and the thing and the experience of it are identical (1975, 1976).

James further agreed with Grote that pure knowledge of acquaintance with things, i.e., sensory experience, is epistemologically prior to knowledge about things. While the epistemic justification involved in knowledge about things rests on the foundation of sensation, all thoughts about things are fallible, and their justification is augmented by their mutual coherence. James was unclear about the precise epistemic status of knowledge of acquaintance. At times, thoughts constituting pure knowledge of acquaintance are said to possess ‘absolute veritableness’ (1890) and ‘the maximal conceivable truth’ (1975), suggesting that such thoughts are genuinely cognitive and that they provide an infallible epistemic foundation. At other times, such thoughts are said not to bear truth-values, suggesting that ‘knowledge’ of acquaintance is not genuine knowledge at all, but only a non-cognitive necessary condition of genuine knowledge, knowledge about things (1976). Russell understood James to hold the latter view.

Russell agreed with Grote and James on the following points: First, knowing things involves experiencing them. Second, knowledge of things by acquaintance is epistemically basic and provides an infallible epistemic foundation for knowledge about things. (Like James, Russell vacillated about the epistemic status of knowledge by acquaintance, and it eventually was replaced at the epistemic foundation by the concept of noticing.) Third, knowledge about things is more articulate and explicit than knowledge by acquaintance with things. Fourth, knowledge about things is causally removed from knowledge of things by acquaintance, by processes of reflection, analysis and inference (1911, 1913, 1959).

But Russell also held that the term ‘experience’ must not be used uncritically in philosophy, on account of the ‘vague, fluctuating and ambiguous’ meaning of the term in its ordinary use. The precise concept found by Russell ‘in the nucleus of this uncertain patch of meaning’ is that of direct occurrent experience of a thing, and he used the term ‘acquaintance’ to express this relation, though he used that term technically, and not with all its ordinary meaning (1913). Nor did he undertake to give a constitutive analysis of the relation of acquaintance, though he allowed that it may not be unanalysable, and did characterize it as a generic concept. If the use of the term ‘experience’ is restricted to expressing the determinate core of the concept it ordinarily expresses, then we do not experience ordinary objects in the external world, as we commonly think and as Grote and James held we do. In fact, Russell held, one can be acquainted only with one’s sense-data (i.e., particular colours, sounds, etc.), one’s occurrent mental states, universals, logical forms and, perhaps, oneself.

Russell agreed with James that knowledge of things by acquaintance ‘is essentially simpler than any knowledge of truths, and logically independent of knowledge of truths’ (1912, 1929). The mental states involved when one is acquainted with things do not have propositional contents. Russell’s reasons here seem to have been similar to James’s: conceptually unmediated reference to particulars is necessary for understanding any proposition mentioning a particular (e.g., 1918-19), and, if scepticism about the external world is to be avoided, some particulars must be directly perceived (1911). Russell vacillated about whether or not the absence of propositional content renders knowledge by acquaintance incommunicable.

Russell agreed with James that different accounts should be given of reference as it occurs in knowledge by acquaintance and in knowledge about things, and that in the former case reference is direct. But Russell objected on a number of grounds to James’s causal account of the indirect reference involved in knowledge about things. In its place, Russell gave a descriptive analysis of such reference: a thought is about a thing when the content of the thought involves a definite description uniquely satisfied by the thing referred to. Indeed, he preferred to speak of knowledge of things by description, rather than knowledge about things.

Russell advanced beyond Grote and James by explaining how thoughts can be more or less articulate and explicit. If one is acquainted with a complex thing without being aware of or acquainted with its complexity, the knowledge one has by acquaintance with that thing is vague and inexplicit. Reflection and analysis can lead one to distinguish constituent parts of the object of acquaintance and to obtain progressively more comprehensive, explicit and complete knowledge about it (1913, 1918-19, 1950, 1959).

There are apparent facts to be explained about the distinction between knowing things and knowing about things. Knowledge about things is essentially propositional knowledge, where the mental states involved refer to specific things. This propositional knowledge can be more or less comprehensive, can be justified inferentially and on the basis of experience, and can be communicated. Knowing things, on the other hand, involves experience of things. This experiential knowledge provides an epistemic basis for knowledge about things, and in some sense is difficult or impossible to communicate, perhaps because it is more or less vague.

If one is unconvinced by James’s and Russell’s reasons for holding that experience of, and reference to, things is at least sometimes direct, it may seem preferable to join Helmholtz in asserting that knowing things and knowing about things both involve propositional attitudes. To do so would at least allow one the advantages of unified accounts of the nature of knowledge (propositional knowledge would be fundamental) and of the nature of reference (indirect reference would be the only kind). The two kinds of knowledge might yet be importantly different if the mental states involved have different sorts of causal origins in the thinker’s cognitive faculties, involve different sorts of propositional attitudes, and differ in other constitutive respects relevant to the relative vagueness and communicability of the mental states.

Foundationalism is, in most formulations, a view concerning the ‘structure’ of the system of justified belief possessed by a given individual. Such a system is divided into ‘foundation’ and ‘superstructure’, so related that beliefs in the latter depend on beliefs in the former for their justification but not vice versa. However, the view is sometimes stated in terms of the structure of ‘knowledge’ rather than of justified belief. If knowledge is true justified belief (plus, perhaps, some further condition), one may think of knowledge as exhibiting a Foundationalist structure by virtue of the justified belief it involves. In any event, the doctrine will be construed here as primarily concerning justified belief, though we may feel free to move between talk of justified belief and talk of knowledge where the context permits.

The first step toward a more explicit statement of the position is to distinguish between ‘mediate’ (indirect) and ‘immediate’ (direct) justification of belief. To say that a belief is mediately justified is to say that it is justified by some appropriate relation to other justified beliefs, i.e., by being inferred from other justified beliefs that provide adequate support for it, or, alternatively, by being based on adequate reasons. Thus, if my reason for supposing that you are depressed is that you look listless, speak in an unaccustomed flat tone of voice, exhibit no interest in things you are usually interested in, etc., then my belief that you are depressed is justified, if at all, by being adequately supported by my justified beliefs that you look listless, speak in a flat tone of voice. . . .

A belief is immediately justified, on the other hand, if its justification is of another sort, e.g., if it is justified by being based on experience or if it is ‘self-justified’. Thus my belief that you look listless may not be based on anything else I am justified in believing, but just on the way you look to me. And my belief that 2 + 3 = 5 may be justified not because I infer it from something else I justifiably believe, but simply because it seems obviously true to me.

In these terms we can put the thesis of Foundationalism by saying that all mediately justified beliefs owe their justification, ultimately, to immediately justified beliefs. To get a more detailed idea of what this amounts to, it will be useful to consider the most important argument for Foundationalism, the regress argument. Consider a mediately justified belief that ‘p’ (we are using lowercase letters as dummies for belief contents). It is, by hypothesis, justified by its relation to one or more other justified beliefs, ‘q’ and ‘r’. Now what justifies each of these, e.g., ‘q’? If it too is mediately justified, that is because it is suitably related to one or more further justified beliefs, e.g., ‘s’. By virtue of what is ‘s’ justified? If it is mediately justified, the same problem arises at the next stage. To avoid both circularity and an infinite regress, we are forced to suppose that in tracing back this chain we arrive at one or more immediately justified beliefs that stop the regress, since their justification does not depend on any further justified belief.

According to the infinite regress argument for Foundationalism, if every justified belief could be justified only by inferring it from some further justified belief, there would have to be an infinite regress of justifications: Because there can be no such regress, there must be justified beliefs that are not justified by appeal to some further justified belief. Instead, they are non-inferentially or immediately justified; they are basic or foundational, the ground on which all our other justified beliefs are to rest.

Variants of this ancient argument have persuaded and continue to persuade many philosophers that the structure of epistemic justification must be foundational. Aristotle recognized that if we are to have knowledge of the conclusion of an argument on the basis of its premisses, we must know the premisses. But if knowledge of a premise always required knowledge of some further proposition, then in order to know the premise we would have to know each proposition in an infinite regress of propositions. Since this is impossible, there must be some propositions that are known, but not by demonstration from further propositions: There must be basic, non-demonstrable knowledge, which grounds the rest of our knowledge.

Foundationalist enthusiasm for regress arguments often overlooks the fact that they have also been advanced on behalf of scepticism, relativism, fideism, contextualism and Coherentism. Sceptics agree with foundationalists both that there can be no infinite regress of justifications and that, nevertheless, there would have to be one if every justified belief could be justified only inferentially, by appeal to some further justified belief. But sceptics think all genuine justification must be inferential in this way; the foundationalist’s talk of immediate justification merely obscures the lack of any rational justification properly so-called. Sceptics conclude that none of our beliefs is justified. Relativists follow essentially the same pattern of sceptical argument, concluding that our beliefs can only be justified relative to the arbitrary starting assumptions or presuppositions either of an individual or of a form of life.

Regress arguments are not limited to epistemology. In ethics there is Aristotle’s regress argument (in the ‘Nicomachean Ethics’) for the existence of a single end of rational action. In metaphysics there is Aquinas’s regress argument for an unmoved mover: If every mover were itself in motion, there would have to be an infinite sequence of movers, each moved by a further mover; since there can be no such sequence, there is an unmoved mover. A related argument has recently been given to show that not every state of affairs can have an explanation or cause of the sort posited by principles of sufficient reason, and that such principles are therefore false, for reasons having to do with their own concepts of explanation (Post, 1980; Post, 1987).

Foundationalism has been presented here as a view concerning the structure ‘that is in fact exhibited’ by the justified beliefs of ‘a particular person’, but the position has sometimes been construed in ways that deviate from each of these phrases. Thus, it is sometimes taken to characterize the structure of ‘our knowledge’ or ‘scientific knowledge’, rather than the structure of the cognitive system of an individual subject. As for the other phrase, Foundationalism is sometimes thought of as concerned with how knowledge (justified belief) is acquired or built up, rather than with the structure of what a person finds herself with at a certain point. Thus some people think of scientific inquiry as starting with the recording of observations (immediately justified observational beliefs), and then inductively inferring generalizations. Again, Foundationalism is sometimes thought of not as a description of the finished product or of the mode of acquisition, but rather as a proposal for how the system could be reconstructed, an indication of how it could all be built up from immediately justified foundations. This last would seem to be the kind of Foundationalism we find in Descartes. However, Foundationalism is most usually thought of in contemporary Anglo-American epistemology as an account of the structure actually exhibited by an individual’s system of justified belief.

It should also be noted that the term is used with a deplorable looseness in contemporary literary circles, and even in certain corners of the philosophical world, to refer to anything from realism (the view that reality has a definite constitution regardless of how we think of it or what we believe about it) to various kinds of ‘absolutism’ in ethics, politics, or wherever, and even to the truism that truth is stable (if a proposition is true, it stays true).

Since Foundationalism holds that all mediate justification rests on immediately justified beliefs, we may divide variations in forms of the view into those that have to do with the immediately justified beliefs, the ‘foundations’, and those that have to do with the modes of derivation of other beliefs from these, how the ‘superstructure’ is built up. The most obvious variation of the first sort has to do with what modes of immediate justification are recognized. Many treatments, both pro and con, are parochially restricted to one form of immediate justification - self-evidence, self-justification (self-warrant), justification by a direct awareness of what the belief is about, or whatever. It is then unwarrantedly assumed by critics that disposing of that one form will dispose of Foundationalism generally (Alston, 1989, ch. 3). The emphasis historically has been on beliefs that simply ‘record’ what is directly given in experience (Lewis, 1946) and on self-evident propositions (Descartes’ ‘clear and distinct perceptions’ and Locke’s ‘perception of the agreement and disagreement of ideas’). But self-warrant has also recently received a great deal of attention (Alston, 1989), and there is also a reliabilist version according to which a belief can be immediately justified just by being acquired by a reliable belief-forming process that does not take other beliefs as inputs (BonJour, 1985, ch. 3).

Foundationalisms also differ as to what further constraints, if any, are put on foundations. Historically, it has been common to require of the foundations of knowledge that they exhibit certain ‘epistemic immunities’, as we might put it: immunity from error, refutation or doubt. Thus Descartes, along with many other seventeenth- and eighteenth-century philosophers, took it that any knowledge worthy of the name would be based on cognitions the truth of which is guaranteed (infallible), that were maximally stable, immune from ever being shown to be mistaken (incorrigible), and concerning which no reasonable doubt could be raised (indubitable). Hence the search in the ‘Meditations’ for a divine guarantee of our faculty of rational intuition. Criticisms of Foundationalism have often been directed at these constraints (Lehrer, 1974; Will, 1974; both responded to in Alston, 1989). It is important to realize that a position that is Foundationalist in a distinctive sense can be formulated without imposing any such requirements on foundations.

There are various ways of distinguishing types of Foundationalist epistemology by use of the variations we have been enumerating. Plantinga (1983) has put forward an influential characterization of ‘classical Foundationalism’, specified in terms of limitations on the foundations. He construes this as a disjunction of ‘ancient and medieval Foundationalism’, which takes foundations to comprise what is self-evident and ‘evident to the senses’, and ‘modern Foundationalism’, which replaces ‘evident to the senses’ with ‘incorrigible’, a term that in practice was taken to apply only to beliefs about one’s present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what is variously called ‘strong’ or ‘extreme’ Foundationalism and ‘moderate’, ‘modest’ or ‘minimal’ Foundationalism, with the distinction depending on whether various epistemic immunities are required of foundations. Finally, there is the distinction between ‘simple’ and ‘iterative’ Foundationalism (Alston, 1989), depending on whether it is required of a foundation only that it be immediately justified, or whether it is also required that the higher-level belief that the former belief is immediately justified be itself immediately justified. Alston suggests that the plausibility of the stronger requirement stems from a ‘level confusion’ between beliefs on different levels.

The classic opposition is between Foundationalism and Coherentism. Coherentism denies any immediate justification. It deals with the regress argument by rejecting ‘linear’ chains of justification and, in effect, taking the total system of belief to be epistemically primary. A particular belief is justified to the extent that it is integrated into a coherent system of belief. More recently, pragmatists in the tradition of John Dewey have developed a position known as contextualism, which avoids ascribing any overall structure to knowledge. Questions concerning justification can only arise in a particular context, defined in terms of assumptions that are simply taken for granted, though they can be questioned in other contexts, where other assumptions will be privileged.

Foundationalism can be attacked both in its commitment to immediate justification and in its claim that all mediately justified beliefs ultimately depend on the former. Though it is the latter that is the position’s weakest point, most of the critical fire has been directed at the former. As pointed out above, much of this criticism has been directed against some particular form of immediate justification, ignoring the possibility of other forms. Thus, much anti-foundationalist artillery has been directed at the ‘myth of the given’, the idea that facts or things are ‘given’ to consciousness in a pre-conceptual, pre-judgemental mode, and that beliefs can be justified on that basis (Sellars, 1963). The most prominent general argument against immediate justification is a ‘level ascent’ argument, according to which whatever is taken to immediately justify a belief can do so only if the subject is justified in supposing that the putative justifier does so; hence the justification depends on a higher-level belief after all (BonJour, 1985). In reply, it can be said that we lack adequate support for any such higher-level requirement for justification, and that if it were imposed we would be launched on an infinite regress, for a similar requirement would hold equally for the higher-level belief that the original justifier was efficacious.
