Uncommon Descent Serving The Intelligent Design Community

Human Consciousness

(From In the Beginning … ):

For the layman, it is the last step in evolution that is the most difficult to explain. You may be able to convince him that natural selection can explain the appearance of complicated robots, who walk the Earth and write books and build computers, but you will have a harder time convincing him that a mechanical process such as natural selection could cause those robots to become conscious. Human consciousness is in fact the biggest problem of all for Darwinism, but it is hard to say anything “scientific” about consciousness, since we don’t really know what it is, so it is also perhaps the least discussed.

Nevertheless, one way to appreciate the problem it poses for Darwinism or any other mechanical theory of evolution is to ask the question: is it possible that computers will someday experience consciousness? If you believe that a mechanical process such as natural selection could have produced consciousness once, it seems you can’t say it could never happen again, and it might happen faster now, with intelligent designers helping this time. In fact, most Darwinists probably do believe it could and will happen—not because they have a higher opinion of computers than I do: everyone knows that in their most impressive displays of “intelligence,” computers are just doing exactly what they are told to do, nothing more or less. They believe it will happen because they have a lower opinion of humans: they simply dumb down the definition of consciousness, and say that if a computer can pass a “Turing test,” and fool a human at the keyboard in the next room into thinking he is chatting with another human, then the computer has to be considered to be intelligent, or conscious. With the right software, my laptop may already be able to pass a Turing test, and convince me that I am Instant Messaging another human. If I type in “My cat died last week” and the computer responds “I am saddened by the death of your cat,” and if I’m pretty gullible, that might convince me that I’m talking to another human. But if I look at the software, I might find something like this:

if (verb == 'died')
    fprintf(1, 'I am saddened by the death of your %s', noun)
end

I’m pretty sure there is more to human consciousness than this, and even if my laptop answers all my questions intelligently, I will still doubt there is “someone” inside my Intel processor who experiences the same consciousness that I do, and who is really saddened by the death of my cat, though I admit I can’t prove that there isn’t.
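The canned-response pattern the passage describes can be sketched in a few lines of Python. This is a hypothetical toy for illustration only, not any real chat program; the `respond` function and its rule table are invented here:

```python
# Toy keyword-template responder, in the spirit of the MATLAB-style
# snippet above: match a keyword, fill a template, feign sympathy.
# All rules and names are hypothetical, for illustration only.
RULES = {
    "died": "I am saddened by the death of your {noun}",
    "love": "Tell me more about your {noun}",
}

def respond(sentence):
    words = sentence.lower().rstrip(".!?").split()
    for i, word in enumerate(words):
        if word in RULES:
            # Crudely treat the word just before the keyword as the noun.
            noun = words[i - 1] if i > 0 else "that"
            return RULES[word].format(noun=noun)
    return "Please go on."

print(respond("My cat died last week"))  # I am saddened by the death of your cat
```

The point of the sketch is the same as the passage's: the "sympathy" is a string template, selected and filled mechanically, with no one inside the processor to feel it.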

I really don’t know how to argue with people who believe computers could be conscious. About all I can say is: what about typewriters? Typewriters also do exactly what they are told to do, and have produced some magnificent works of literature. Do you believe that typewriters can also be conscious?

And if you don’t believe that intelligent engineers could ever cause machines to attain consciousness, how can you believe that random mutations could accomplish this?

Comments
KF, Yes, I agree. The courtroom is a perfect example, where more information leads to more certain conclusions, which may still be left to the benefit of the doubt in certain circumstances. However, the basis for law in the first place, and all the arguments involved, appeal to the law of non-contradiction. Your color example shows that the process may not be exact, but conclusions can be made based on the revealed facts that one thing and not another occurred, or that one person and not another was responsible, based on sufficient evidence. Science seems to operate on the same principle, although with different data sets; so does philosophy, and so does mathematics. CannuckianYankee
Vivid & GP: Thanks for the kind words. G PS: CY, the true logical opposite of white is not black, but not-white. Any shade of green, red, yellow or cyan will do. "Black and white thinking" is a strawman caricature argument, as you rightly point out. kairosfocus
BarryR "There is no certainty expected anywhere in the process." Of course there's no certainty. But the court is in session because something happened, not nothing. I.e., the court's intent is to get as close as possible to the facts of the something that happened. In other words, there is an indisputable fact behind the court's decision to convene, and that fact is that something as opposed to nothing happened. The court is there to determine whether someone is responsible for the something that happened. Behind the something is a reality that is a certain way. That we can't be certain of the way that reality is does not mean that it is not a reality - it simply indicates that we don't know, and that's what we're there to find out beyond a reasonable doubt. We can be more certain with more facts presented, but we can't know for certain. I'm getting the impression that you think the way we perceive reality is the only way reality is - that our perceptions determine the way it is. And with that, it would be difficult to make a decision as to what reality is, because we would be free to change our perception on issues in order to change the reality. With regard to the law of non-contradiction - it's not an issue of whether we see things as black or white. It's an issue of having a basis for anything that we can call truth. If we have no basis for truth, then we have no truth. That's not black-or-white thinking, but logical thinking. To believe that it's anything other than that is a contradiction. CannuckianYankee
KF: I surely join vivid in my appreciation. gpuccio
KF, Just want to thank you for all the contributions you make to this forum. Vivid vividbleau
GP: The Euler equation is hands down the most beautiful equation in all mathematics. And, it is so utterly astonishing, a surprise that comes out of nowhere, almost.
Zero, the empty space on the abacus;
1, the first number and the number of identity;
e, which governs so many kinds of growth and decay;
i, the strangest number of all: the "imaginary" root of a negative number;
pi, the ratio of circumference to diameter of the "perfect" geometrical figure.
All of them, tied up in one equation. If you needed a signature of the Ultimate Mathematician who built a world that embeds mathematics and number, here it is. G kairosfocus
kairosfocus @412, you have, once again, provided invaluable insights about the relationship between our rational minds and the rational universe that our minds apprehend. StephenB
Thus we have material 'particles' being 'destroyed' by teleportation of their transcendent information... A more convincing proof that transcendent information is foundational to material reality would be hard to imagine! Further evidence is abundant that the transcendent universal constants do not arise from any purported material basis and may in fact be called transcendent 'information' constants. As well, Euler's number finds striking correlation to reality. The following verse and video are very interesting since, with the discovery of the Cosmic Microwave Background Radiation, the universe is found to actually be a circular sphere:

Proverbs 8:26-27 "While as yet He had not made the earth or the fields, or the primeval dust of the world. When He prepared the heavens, I was there, when He drew a circle on the face of the deep."

Euler's Number - God Created Mathematics - video
http://www.metacafe.com/watch/4003905

The following website has the complete working out of the math of Pi and e in the Bible, in the Hebrew and Greek languages respectively, for Genesis 1:1 and John 1:1:
http://www.biblemaths.com/pag03_pie/

Michael Denton - Mathematical Truths Are Transcendent And Beautiful - Square root of -1 is built into the fabric of reality - video
http://www.metacafe.com/watch/4003918

I find it extremely strange that the enigmatic Euler's number would find such striking correlation to reality. In pi we have correlation to the 'sphere of the universe' as revealed by the Cosmic Background Radiation, and pi also correlates to the finely-tuned 'geometric flatness' within the 'sphere of the universe' that has now been found. In e we have the fundamental constant used for ascertaining exponential growth in math, which strongly correlates to the fact that space-time is 'expanding/growing equally' in all places of the universe.

In the square root of -1 we have what is termed an 'imaginary number', which was proposed to help solve equations like x^2 + 1 = 0 back in the 17th century; yet now, as Michael Denton pointed out in the preceding video, it is found that the square root of -1 is required to explain the behavior of quantum mechanics in this universe. The correlation of Euler's number to the foundational characteristics of how this universe is constructed and operates points overwhelmingly to a transcendent Intelligence, with a capital I, which created this universe! It should also be noted that these universal constants, pi, e, and the square root of -1, were at first thought to be completely transcendent of any material basis; to find that these transcendent constants of Euler's number in fact 'govern' material reality, in such a fundamental way, should be enough to send shivers down any mathematician's spine. The following video is very interesting for revealing how difficult it was for mathematicians to actually 'prove' that mathematics was true:

Georg Cantor - The Mathematics Of Infinity - video
http://www.metacafe.com/watch/4572335
entire video: BBC - Dangerous Knowledge (Parts 1-10)
http://www.youtube.com/watch?v=Cw-zNRNcF90

As you can see, somewhat, from the preceding video, mathematics cannot be held to be 'true' unless an assumption of a highest transcendent infinity is held to be true. A highest infinity which Cantor, and even Godel, held to be God.

Thus the following formal proof, which was referred to at the end of the preceding video, shows that math cannot be held to be consistently true unless the highest infinity of God is held to be consistently true as a starting assumption:

Gödel's Incompleteness: The #1 Mathematical Breakthrough of the 20th Century
Excerpt: Gödel's Incompleteness Theorem says: "Anything you can draw a circle around cannot explain itself without referring to something outside the circle - something you have to assume to be true but cannot prove 'mathematically' to be true."
http://www.cosmicfingerprints.com/blog/incompleteness/

THE GOD OF THE MATHEMATICIANS - DAVID P. GOLDMAN - August 2010
Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel's critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes.
http://www.faqs.org/periodicals/201008/2080027241.html

The following site is an easy-to-use and easy-to-understand interactive website that takes the user through what is termed 'Presuppositional apologetics'. The website clearly shows that our use of the laws of logic, mathematics, science and morality cannot be accounted for unless we believe in a God who guarantees our perceptions and reasoning are trustworthy in the first place.

Proof That God Exists - easy to use interactive website
http://www.proofthatgodexists.org/index.php
bornagain77
Of note: An ion is an atom or molecule in which the total number of electrons is not equal to the total number of protons, giving it a net positive or negative electrical charge.

Ions have been teleported successfully for the first time by two independent research groups
Excerpt: In fact, copying isn't quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable - it is enforced by the laws of quantum mechanics, which stipulate that you can't 'clone' a quantum state. In principle, however, the 'copy' can be indistinguishable from the original (that was destroyed).
http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp

Atom takes a quantum leap - 2009
Excerpt: Ytterbium ions have been 'teleported' over a distance of a metre... "What you're moving is information, not the actual atoms," says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
http://www.freerepublic.com/focus/news/2171769/posts
bornagain77
Seeing that Euler's Identity was referenced, as well as the fact that someone made the absurd claim that material is primary to determining reality in math, I would like to answer as strongly as possible that there is no 'material' basis to reality. All reality in fact reduces to information, or what I prefer to call "The Word" of John 1:1.

notes: It is very interesting to note in the following paper that the quantum state of a photon is actually defined as 'infinite information':

Explaining Information Transfer in Quantum Teleportation - Armond Duwell, University of Pittsburgh
Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1) --- Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of information is required to specify a (photon) qubit, an infinite amount of information must be transferred to teleport.
http://www.cas.umt.edu/phil/faculty/duwell/DuwellPSA2K.pdf

The following studies verified the violation of the first law of thermodynamics that I had suspected in quantum teleportation:

How Teleportation Will Work
Excerpt: In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. --- As predicted, the original photon no longer existed once the replica was made.
http://science.howstuffworks.com/teleportation1.htm

Quantum Teleportation - IBM Research Page
Excerpt: "it would destroy the original (photon) in the process"
http://www.research.ibm.com/quantuminfo/teleportation/

Unconditional Quantum Teleportation - abstract
Excerpt: This is the first realization of unconditional quantum teleportation where every state entering the device is actually teleported.
http://www.sciencemag.org/cgi/content/abstract/282/5389/706

Of note: conclusive evidence for the violation of the First Law of Thermodynamics is firmly found in the preceding experiment when coupled with the complete displacement of the infinite transcendent information of "Photon c":
http://docs.google.com/Doc?docid=0AYmaSrBPNEmGZGM4ejY3d3pfMzBmcjR0eG1neg

More supporting evidence for the transcendent nature of information, and how it interacts with energy, is found in the following studies:

Single photons to soak up data
Excerpt: the orbital angular momentum of a photon can take on an infinite number of values. Since a photon can also exist in a superposition of these states, it could - in principle - be encoded with an infinite amount of information.
http://physicsworld.com/cws/article/news/7201

Ultra-Dense Optical Storage - on One Photon
Excerpt: Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image's worth of data into a photon, slow the image down for storage, and then retrieve the image intact.
http://www.physorg.com/news88439430.html

The following experiment clearly shows information is not an 'emergent property' of any solid material basis, as is dogmatically asserted by some materialists:

Converting Quantum Bits: Physicists Transfer Information Between Matter and Light
Excerpt: A team of physicists at the Georgia Institute of Technology has taken a significant step toward the development of quantum communications systems by successfully transferring quantum information from two different groups of atoms onto a single photon.
http://gtresearchnews.gatech.edu/newsrelease/quantumtrans.htm

The following articles show that even atoms (ions) are subject to teleportation:
bornagain77
---Markf: "I think everyone has aired their views and no one has changed their mind about anything. So I will leave it there." Does this mean that you are not going to tell why, in your words, it is "nonsensical" to say that Jupiter can both exist and not exist at the same time? StephenB
#411 Gpuccio Sorry about your personal problems. I am a bit tired of this particular debate. I think everyone has aired their views and no one has changed their mind about anything. So I will leave it there. Mark markf
KF: that equation is really amazing. I believe most mathematicians are neo-platonists regarding their discipline. I became aware of the neo-platonist model of mathematics through the books of Penrose, who has certainly influenced me very much, especially with his argument about Godel's theorem and the nature of human cognition. The beauty of mathematical constructions is in itself surprising, but their usefulness in understanding physical reality is really beyond any "natural" explanation: it definitely points to a parallel structure in physical reality and in our personal minds. gpuccio
EZ: The complex frequency domain [without and with damping as an issue] is where signal processing lives. And as to the reality or not of numbers, I note just one little equation: 0 = 1 + e^(i*pi) -- Euler Believe it or not, the five most important numbers in mathematics are tied together, through the complex domain. But more to the point, numbers are a real aspect of experienced reality, but have no physical being as such. You cannot show 0, 1, or 2, etc, save as a property that we symbolise using numerals. But that mysterious, shadowy set of properties and their relationships turns out to be central to reality, as physics and related disciplines show. And that points, ever so subtly but strongly, to a world of reality beyond the immediately physical. You may resist it, but there the compass needle points. (And every time you tune to an FM station, you are looking at the reality of how intelligible signals are constructed from combinations of waves at different frequencies, as the Fourier transform and integral point to. Every time you see something undergo damped harmonic vibrations, it is pointing in the same direction, with a damping factor added. Indeed, the relevant transfer functions strongly depend on poles in the complex frequency domain of INFINITE amplitude. That's part of the rubber sheet model of frequency response, and it is the technique I taught my students to visualise the otherwise utterly abstract reality. I will not get into Z-transforms here, save to say that with digital filters, they are unit delay elements, and can essentially replicate anything that can be done in the analogue world, and more. That's before we look at how such models apply to, say, economics.) GEM of TKI kairosfocus
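The identity quoted above, 0 = 1 + e^(i*pi), can be checked numerically with Python's standard cmath module; floating-point arithmetic leaves only a negligible residue:

```python
import cmath

# Euler's identity: e^(i*pi) + 1 = 0, tying together e, i, pi, 1 and 0.
z = cmath.exp(1j * cmath.pi) + 1
print(abs(z))  # on the order of 1e-16: zero, up to floating-point rounding
```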
Mark: I must thank you for your answers at 262 and 263. And apologize that only now have I been able to read them (some personal problem, now solved :) ). I still would like to stay out of all the discussion, but I will try to give you some simple description of what I think. I think that we, as humans, have cognitive powers which are based on some universally shared faculty, however we can call it. That faculty is mainly intuitive, and guides us in making and comparing statements. That intuitional cognition is therefore largely innate, and not experience derived, and is the basis for all mathematical and logical thinking, and indeed for all thinking. So, I am in the group of neoplatonists for what regards mathematics. The really astonishing fact is that such an intuitive and innate faculty, and its products (logic, mathematics, empirical inference, and so on), are so powerful in understanding reality. Empirical sciences are continuous evidence that our mental models are definitely useful in explaining the reality outside us. Indeed, there is no a priori reason for that. It's definitely an amazing mystery. And a mystery which works even at very counter-intuitive levels: QM is good evidence that very abstract elaborations of our minds and of pure mathematics can guide us to conclusions that we would never have reached by other ways, and which work: outer reality seems strangely to correspond to our mathematical constructs about it. That said, I can agree with you that these problems are very subtle, and that we can define and build our mental products in very different ways. But, if there were no shared intuitions in our minds, that would only amount to a lot of changing local conventions, devoid of any universality and usefulness. gpuccio
BarryR,
Really? That would mean I was certain that I should be using a continuum. And I’m not certain about that — I have no problem admitting this approach may be wrong. It just happens to be the best one I’ve come up with so far.
In a few words, you seem certain that you should remain either uncertain or certain, depending on whatever you deem worthy in your criteria. You seem certain that there is such a thing as certainty and uncertainty. You seem certain that there really is such a thing as criteria. You seem certain that there is such a thing as you. You seem certain that the phrase "I am not certain" actually means what you want it to mean, and that I shouldn't change it to mean blue rabbits love chocolate. You seem certain that every word of the phrase "I am not certain" has individual and particular meaning, and can be used together to form a coherent sentence. You seem certain that you're typing these words on a screen. I could go on and on. I am surprised I have this much patience; normally I don't feed condescending trolls, especially ones that are self-triumphant when there is no victory (a tactic that reminds me of the Iraqi information minister claiming that there was no American invasion while we were pulling up outside Baghdad as he talked to reporters). You seem to be certain that something like a continuum is "better" than a binary approach to truth. You seem certain that this is not tantamount to saying that there is no black and white truth, that it is all a shade of gray (as if this statement were not asserted as a black-or-white statement itself). The endgame is that if you're near certain about anything, it is only by a fixed proximity to real certainty that you can get "near" to it. If the train station were just as mobile as the train, you could never get nearer to your destination. If certainty were not itself an actuality, you couldn't get nearer to it. Clive Hayden
So glad I didn't have to resort to physical models like that!! :-) I don't see how complex numbers point beyond materialism since they don't 'exist' (even though they can be measured and manipulated). Mathematics is not philosophy, thank goodness. And if your mathematical models don't accurately reflect the real, material world then the model is wrong and needs to be revised. So, for me, at least as far as applied mathematics is concerned (and control theory is a particularly useful area), the mathematics is a slave to the material. And though a cubic equation has three solutions, in most real-world situations only the real ones count. It's worth looking at the constructs that don't seem to have an observable parallel, though: topology was just an abstract field until relativity suggested a way it could be used. ellazimm
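The point about a cubic's three solutions can be seen with the simplest example, x^3 = 1, whose solutions are the cube roots of unity: one real, two complex. A minimal sketch using only the standard library:

```python
import cmath

# The three solutions of x^3 = 1 are exp(2*pi*i*k/3) for k = 0, 1, 2.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
for r in roots:
    print(r, abs(r**3 - 1))  # each root cubed returns (almost exactly) to 1

# Only one of the three is real; the other two are a complex-conjugate pair.
real_roots = [r for r in roots if abs(r.imag) < 1e-12]
print(len(real_roots))  # 1
```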
EZ:
Stuff that isn’t real [PHYSICALLY MANIFEST] can still be useful and have meaning in context
That is, I am pointing out the gap between reality and physical materiality. How ironic it is that right at the heart of science is number, and number, as NWE rightly corrected Wiki in an ever so eloquent and rich definition, is:
A number is an abstract mathematical object represented by a symbol that is used in counting and measuring. A symbol that represents a number is called a numeral.
So, right in the heart of the central citadel that the evolutionary materialists think they hold lies something that ever so plainly points beyond materialism. And, BTW, I used to teach my controls students to spot the complex domain poles for dynamical objects, based on their time domain transient responses, using Laplace transforms (based on the heavy rubber sheet with propped up poles and nailed down zeros model; love my 3 edns of Ogata). Dose "imaginary" objects does be very, very real! (And not just for flow fields and systems governed by dynamical ODEs. They also relate to something as mundane as the bandwidth of an FM signal, per Fourier.) GEM of TKI kairosfocus
KF: I wasn't trying to prove imaginary numbers exist! I was just saying that the construct is useful for modelling real life!! It's a mental construct obviously. And i (SQRT(-1)) IS the solution to an algebraic equation: x^2 + 1 = 0. When you allow that construct then all polynomial equations with real coefficients have solutions, but in the complex numbers. And the number of solutions is equal to the degree of the equation, with the exception of the trivial cases. Material objects don't have cardinality of zero, clearly. I wasn't suggesting they do! I'm not sure why you're arguing with me. The set of all subsets of a given set includes the empty set, but that's not an object! I could say that a collection of objects containing no objects has a cardinality of zero, but that's not what you're asking for. If you take a line segment, remove the middle third, take out the middle third of the remaining pieces and continue that to infinity, you will get a non-empty collection of points which has a measure (length) of zero, but again that is not a real object. You can do similar things with two and three dimensional 'objects' like Sierpinski gaskets and sponges. I concede! I was only pointing out that it is possible to construct mental . . . . NEVERMIND!! I'm not disputing that two objects plus two more objects gives you four objects!! But if you want to analyse the flow of a fluid across a surface you might find it useful to use a complex valued function which, when integrated, gives you two terms (one of which is the coefficient of the imaginary part) which indicate two aspects of the fluid flow. Stuff that isn't real can still be useful and have meaning in context!! ellazimm
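The middle-thirds construction described above is the Cantor set; the total length remaining after each step is easy to track, and it shrinks toward zero even though infinitely many points survive. A minimal sketch:

```python
# Removing the middle third leaves 2/3 of the length at each step,
# so after n steps the remaining measure of the Cantor set is (2/3)**n.
length = 1.0
for step in range(50):
    length *= 2 / 3
print(length)  # roughly 1.6e-09: the measure tends to zero in the limit
```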
Zero: the number of apples after you have eaten all of them? DiEb
EZ: Kindly show me a material object that exhibits cardinality of zero. (A pair of empty braces, one of the symbols for the empty set, will not do: {} ) GEM of TKI PS: Similarly, show me the number SQRT (-1) as a material entity. (I did not say show me its manifestations in action, I am asking you to show me a material -- physical -- object that exhibits the number in question. An Argand diagram, a 2-dimensional graph, will not do.) kairosfocus
F/N: Just to remind us from that hostile witness making an admission against interest, Wikipedia: ________________________ >> A number is a mathematical object used in counting and measuring. A notational symbol which represents a number is called a numeral, but in common usage the word number is used for both the abstract object and the symbol, as well as for the word for the number. In addition to their use in counting and measuring, numerals are often used for labels (telephone numbers), for ordering (serial numbers), and for codes (e.g., ISBNs). In mathematics, the definition of number has been extended over the years to include such numbers as zero, negative numbers, rational numbers, irrational numbers, and complex numbers. Certain procedures which take one or more numbers as input and produce a number as output are called numerical operations. Unary operations take a single input number and produce a single output number. For example, the successor operation adds one to an integer, thus the successor of 4 is 5. More common are binary operations which take two input numbers and produce a single output number. Examples of binary operations include addition, subtraction, multiplication, division, and exponentiation. The study of numerical operations is called arithmetic. >> _________________________ Notice that ever so delicately circular term "mathematical object"? Here is NWE's take:
A number is an abstract mathematical object represented by a symbol that is used in counting and measuring. A symbol that represents a number is called a numeral.
Number itself cannot be seen, touched, felt, tasted, smelled, held, pinched, or heard. It is not tangible or concrete. It is indeed abstract, a conceptual entity we infer from experience. It is a mental construct, which we manipulate using symbols and symbolised operations. Thence, it is reflective of the conscious, thinking, knowing, reasoning mind. But Wiki, ever so reliably materialistic Wiki, cannot acknowledge that. So, that little part is conveniently left out. And that is why EZ above felt it so necessary to highlight how we find that collections of material objects manifest numerical properties. Guess what, so do souls: we are unified self-aware selves, i.e. we are ONES. GEM of TKI kairosfocus
EZ:
1: None of those things changes the fact that when we have collections, we see the following property of their cardinality: 2 + 2 = 4
2: Similarly, that cardinality has real properties such that we see that not only does 2 + 2 = 4 in fact, but it must do so.
3: Going beyond that, from the operation +, we derive all other relevant arithmetic operations, and once we respect its meaning, we will see that 2 + 2 = 4 is a self-evident truth.
4: You are operating in an environment where various distractive devices have been used to try to persuade us that we can overlook, ignore or deny the basic, self-evident truth: 2 + 2 = 4.
GEM of TKI
PS: Kindly show me how number is a material entity, as opposed to how a collection of material objects will have a property, their number. (In short, two guavas plus two guavas, as I can see by walking out into my backyard, will give me four guavas. But the property that a two-set joined to another two-set yields a four-set is not itself a material property.) kairosfocus
---BarryR: "So while I see no difficulty creating a consistent formal system where Jupiter simultaneously exists and does not exist, my (tentative) conclusion is that such a system is useless." But you did not answer my question. Is it possible for Jupiter to exist and not exist at the same time? There are only three possible answers. Yes, no, or I don't know because I don't have (or acknowledge) the necessary tool of logic [the law of non-contradiction] that would justify a definitive answer. Since you will not assert a yes or a no, I have to assume that the answer is--"I don't know"--meaning that your system of thought cannot generate an answer. What good, then, is your thought system if it cannot affirm that which everyone else knows to be the case: Jupiter cannot exist and not exist at the same time. Further, inasmuch as you propose that the "principle" of non-contradiction is not a universal law and therefore is not "useful" in all contexts, and given that facts in evidence can provide no answer to the question, how do you know where to apply it or when it would be useful? Put another way, what is your standard for deciding when and where it fits? StephenB
I wasn't arguing with the notion that two material things plus two more material things gives you four things!! I was just saying that it is possible to define (and use) systems where some mathematical results look counter-intuitive. No biggie. And I didn't even get to surreal numbers, hyper-real numbers or different sizes of infinity!! Or fractional dimensions. ellazimm
PPPS: How could I forget pounds, shillings, pence, guineas; inches, feet, yards, chains, furlongs, miles, leagues; grains, drams, ounces, pounds [mass, not force!] troy and avoirdupois, slugs; cups, pints, quarts, gallons [and recall, it's not just Wine/US vs Imperial]; square feet, square chains ["tasks" of land], acres, square miles, etc., and all that wonderful menagerie of measurements and units that require ever so many odd steps to move up and down? Try CGS vs SI in science, too. None of these changes the basic self-evident fact that: || + || --> |||| kairosfocus
PPS: EZ [and RB], it might help to know that I cut my eye-teeth on a fondly remembered HP 21, and changed milk teeth on the 6800/6809 MPU family, programmed at assembly/machine code level. RPN is my preferred calculator operating system, and a copy of XCalc set to ENG notation sits on my desktop as convenience calculator. Modulo-arithmetic, boolean algebra, clock arithmetic, complex numbers, complex exponents, and vectors, phasors, the right ascension and declination, etc are all familiar old friends. None of them affects the basic self evident fact that || + || --> |||| kairosfocus
PS: Clock arithmetic [cf. angle arithmetic on the degree system] is rooted in the sexagesimal system, with a duodecimal one added on for larger units, probably because of how many factors 12 has. I think astronomers did calculations in base 60 until just about the days of Newton; think of how many factors evenly fit into 60. [When I used to mention the base-60 system in a first lecture on digital electronics, my students used to get a kick out of the name.] kairosfocus
EZ: 1: Irrelevant -- that you can create systems in which you redefine + to be modulo etc does not change:
|| + || --> |||| 2 + 2 = 4
2: Distractive -- you are dragging that old red herring away to a strawman. In slightly more detail:

a --> What has been discussed is the self-evident truth that 2 + 2 = 4.

b --> In this context, it is understood on our experience of the world [use match sticks], and the symbols 2, +, = and 4 take their common garden-variety meanings we learned at age 5 from Ms Smith with the knuckle-smacking ruler.

c --> The operation of addition is putting together, and once we do that, two 2-sets conjoined yield a 4-set.

d --> And, that MUST be so, given the way we recognise cardinality; in effect counting/matching to the standard ordered set to exhaustion, or the equivalent.

e --> Also, we are not defining anything here, we are observing an operation and its result, summarising the consequence for cardinalities. [Cf 187 - 191 above.]

f --> Thus, 2 + 2 = 4 is an example of a self-evident truth. As is "Error exists." And as are many more cases.

_________________ I suggest you should think again, very carefully, on why you find such a "need" to manufacture an objection to such an elementary and easily demonstrated truth. [Cf. 362 above on where that problem points.] GEM of TKI kairosfocus
Angle measure is in mod 360 degrees, or 2 pi radians, or 400 gradians. So that: 285 + 200 = 125 degrees, not taking into account full rotations, just the ending position. Also, rotational symmetries in shapes can be described in modular arithmetic. I'm not even going to get into imaginary numbers (which include the square root of -1), which end up having applications in the real world. That is just really weird. But it works!! ellazimm
Clocks work on mod 12 (or 24 here in the UK or the Navy). So . . . 8:00 + 9:00 = 17:00 (in the UK) but 5:00 in the US. and 3:00 pm (or 15:00) + 12:00 = 3:00 in both. 9:00 + 48:00 = 9:00 Crazy arithmetic. But it works ellazimm
Might be worth mentioning modular arithmetic too.

1 + 1 = 2 (mod 4)
1 + 2 = 3 (mod 4)
2 + 2 = 0 (mod 4)
2 + 3 = 1 (mod 4)
3 + 3 = 2 (mod 4)

You should see some of the 'systems' you can create in group theory. It's the non-abelian stuff that gets me. Bizarre. ellazimm
PS: What happens when you don't understand that 2 + 2 = 4 is self-evident, and instead think that the explanatory construct from axioms (useful as axiomatisation is . . . ) is the "real" foundation of 2 + 2 = 4? kairosfocus
O/T^2 F/N: For those interested in reading up on floating point stuff, this may be a useful read. Also, this. (Observe the real world cases.) kairosfocus
Onlookers (& BR): Yet more side tracks and evasions. Sad. Ever so sad. Scientific notation and its extension into floating point arithmetic are of course BASED on ordinary arithmetic. [I suspect, originally, the underlying notation was devised to permit use of logarithms in calculation, e.g. the bar-notation technique for negative powers of ten.] Including, the self-evident truths of that basic arithmetic. Which include: || + || --> |||| 2 + 2 = 4 In short, such extensions build on the basic principles of arithmetic, including the first facts of addition. They do not contradict them. They are thus consistent with the fact that, for instance, 2 + 2 = 4 is not only true, but MUST be true [and is not merely true by definition or saying the same thing twice over in different words: 2 + 2 is a binary operation, and 4 is its RESULT, not a mere definition, never mind the equivalence of cardinalities involved], once on our experience of the world [here acquired by about age 5], we understand the symbols and operations involved. That is, 2 + 2 = 4 is indeed self-evident. (Cf 187 - 191 above.) Again, for the reason why self-evident truths are being so stoutly resisted, at the price of evasions, irrelevancies [the history of citations????!!!!], side-tracks, attempts to blind with science, contradictions and adherence to other absurdities triggered by rejection of self-evident truth, cf 362 above. As well, CH, in 361 above, amply shows how radical relativism becomes self-referentially absurd by asserting universal claims to try to reject such claims. The issue on the merits is plainly long since over. And, maybe we need to highlight what is in the end at stake on evolutionary materialism, including its commitment to radical relativism and where it leads through its impact on the ideology and agendas of the bright but ill-advised young minds that it corrupts. So, again, Plato, c. 
360 BC (yes, evo mat is over 2,300 years old, and is [ill-founded] philosophy at root, not science); in his The Laws, Bk X: ____________________ >> Ath. . . . [[The avant garde philosophers and poets, c. 360 BC] say that fire and water, and earth and air [[i.e the classical "material" elements of the cosmos], all exist by nature and chance, and none of them by art, and that as to the bodies which come next in order-earth, and sun, and moon, and stars-they have been created by means of these absolutely inanimate existences. The elements are severally moved by chance and some inherent force according to certain affinities among them-of hot with cold, or of dry with moist, or of soft with hard, and according to all the other accidental admixtures of opposites which have been formed by necessity. After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only. [[In short, evolutionary materialism premised on chance plus necessity acting without intelligent guidance on primordial matter is hardly a new or a primarily "scientific" view!] . . . . [[Thus, they hold that t]he Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new.] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. 
They are told by them that the highest right is might, and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions, these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others, and not in legal subjection to them. >> _____________________ More is at stake, at a higher potential cost to our common civilisation and world, than we might think at first. GEM of TKI kairosfocus
I will be offline for the next few days, as there are many experiments to run and a conference deadline is looming. I'm happy to rejoin later if the discussion will involve something more than repeating myself. Otherwise, it's been quite educational, and I thank you for your close attention. Best, Barry BarryR
CY@386
But then you haven’t concluded that it is indeed useless, and you have done exactly what I expected you would – redefined the terms
As you didn't define your terms, I hardly think it's fair to accuse me of redefining them. If you had put the question in a context of cosmology, the answer would have been an obvious "no". If you had put the question in the context of quantum physics, the answer would have been "maybe, but probably not". If you decline to specify a context, I'm happy to do so for you, and that's what I did.
You have the knowledge, yet you deny from whence it came, you seem to deny that it is knowledge (because you can’t be certain), and you also appear to deny that you acquired the knowledge because of your previous denial that you could be certain.
There have been many studies done on how students' thinking matures as they go through college. There's a definite phase in the teenage years where everything is seen as black and white, and trying to teach concepts that aren't black and white tends to be pretty futile. That's my best explanation as to what's happening here. Or maybe it's something simpler: once you've done science for a living, you realize that "model" is an incredibly powerful metaphor that allows you to do some really amazing things. (Quoting Box:) "all models are wrong, but some are useful." Because all models are wrong, we're free to make models that are still wrong but progressively more useful. If a model were right, there would be no reason to try to improve it. But once you get outside of academia and the sciences, the continuum disappears and you're either certain or you don't know anything. I suppose that's a comforting thought, but you can do better.
You'd make a terrible judge. You couldn't convict anyone. You'd always find a way out of a situation for the accused by changing the context, and making all the arguments irrelevant. You'd say things like: "Prosecutor, the 'evidence' which you presented may not be evidence because of such and such a context in which it might not be so."
And yet, a juror is never asked to return a verdict of absolutely guilty or absolutely innocent. It's always "beyond reasonable doubt" or "a preponderance of the evidence". So the juror is presented with the same set of facts and two contexts, and then the jury does exactly what I do as a scientist: evaluate the models, not to determine which one is True, but simply to determine which one is better. There is no certainty expected anywhere in the process. (This is even more evident when you move beyond juries and on up the appeals process. I've had a long habit of listening to US Supreme Court oral arguments on long car drives. They're fascinating. If a case has gotten all the way to the Supreme Court, then you can be guaranteed both that it's not obvious or easy and that counsel will be making a very good argument. Definitely worth a listen.) BarryR
VB@384 I'm taking "universal" to mean "applies to every context". Demonstrating a single context that allows contradictory statements is sufficient to show that non-contradiction does not apply to all systems, and thus I do not consider it to be universal. BarryR
Do you concede then that this is contradictory, because you’re tacitly claiming that one logical truth, that is, the context dependency of truth, is supposed to be true regardless of context,
As I'm not claiming that, the question is moot. There's no difficulty at all in creating a context where all contexts are ruled by overarching universal truths. Simply state it and it comes into being. I can examine that context critically like any other context. I can measure its utility. And I'm not impressed. You're trying for something much grander --- not only do there exists contexts defined by universal rules, but there cannot exist any contexts not defined by these universal rules, and any imagined context not defined by these universal rules must somehow be in error. I can step out of that system, evaluate the utility with regard to other systems and decide to discard it like any other non-useful system. I'm not arguing for any universal rule, especially the absence of universal rules. There may be contexts that it's impossible for me to step out of. There may be contexts where I cannot measure the utility. And, of course, the whole approach may be wrong. But it's worked better than anything else I've examined so far, so until something better comes along I'm going to stick with it.
“no one can tell anyone else what they can or cannot do”
As a command, that's contradictory. As an observation, it's fine. I'm making observations on how different contexts interrelate. If I wanted to say something (with certainty) about all contexts, then yes, I'd be in danger of contradicting myself. Again, this is a very scientific way of thinking. By looking at a handful of hydrogen atoms, I have a good idea about how hydrogen atoms at the other end of the universe behaved near the beginning of time. Am I certain about this? No, and indeed those models have been corrected over the years. But I can be near-certain (modulo Descartes' malevolent demon) that this is the best explanation I currently have.
You presuppose it even in determining whether something is logical or not for goodness sakes.
What's "it"? What's logical for a deterministic finite state machine isn't logical for a non-deterministic finite state machine. What's logical for Euclidean geometry isn't logical for non-Euclidean geometry. What's logical for systems based on Peano axioms isn't logical for lambda calculus. Why would I think there was some overarching logic common to all of them?
If we were in a blank slate, tabula rasa, in a logical void, we could never compare any system to logic in order to even call it logical, all determinations as such would be like the arbitrary wishes of a partisan.
Evolution guarantees that we don't start out on a blank slate, and we model our initial systems of logic from abstractions of the world around us. Evolution has also given us a brain that allows us to make abstractions of those abstractions and, to some limited extent, reason about them. But yes, I agree with you: a tabula rasa state would not give rise to logic (at least not until there had been significant interaction with the environment --- depends on how rasa you allow your tabula to be.)
I once read a story about a guy who claimed that there was no truth (or at least, that we couldn’t know it if there were), and didn’t see the contradiction in his own claim, but I have to admit this is the first time I’ve actually ever met one.
Who was it? I don't know of a reliable way to ascertain universal truth. I've never met anyone with a reliable way of ascertaining universal truth. I don't see how it can be done. Stating that it cannot be done in principle is an argument from ignorance and a fallacy. Stating that it's so extraordinarily unlikely that we can dismiss it as a possibility is good scientific thinking. In still other words: I can't be certain that we cannot access universal laws. But I can be near-certain, and I am.
It’s called thinking.
And thinking is notoriously unreliable. (That's one of the reasons we have peer-reviewed journals to hash these things out.)
You’re going to remain certain about your uncertainty, anything else reduces itself, step by step, how ever far you trace it back, to nonsense
That's not clear to me. Please demonstrate. For example: In the third chapter of my dissertation I proposed a model that reduced prediction error by an order of magnitude. I am near certain that the experiments were run correctly, I'm near-certain that the calculations were done correctly, and I'm near-certain that the reduction in error is due to my model, not a malevolent demon. My Ph.D. adviser and my dissertation committee share my near-certainty. Two other groups, one in the Netherlands and one in Greece, independently discovered the same model (which has made publication tricky). They're also near-certain. How does this reduce to nonsense?
Even your continuum of certainty presupposes that you should certainly use a continuum
Really? That would mean I was certain that I should be using a continuum. And I'm not certain about that --- I have no problem admitting this approach may be wrong. It just happens to be the best one I've come up with so far. [I'm surprised I have this much patience, but it's very much like me speaking to myself at a much younger age. I have no difficulty remembering believing in universal laws and certainty. Reading Thomas Aquinas was probably what dislodged me from that position, although the philosophy classes gave me the framework to understand what I was reading.] BarryR
BarryR, But then you haven't concluded that it is indeed useless, and you have done exactly what I expected you would - redefined the terms, which I asked you not to do, and furthermore, repositioned the context in order to escape the illustration of a point. This is where I'm confused (read baffled): with all your knowledge of these issues, yet you can't seem to (or won't) grasp the very simple basis for truth. It's like a man who builds a house. He has all the knowledge of how to build it, including the knowledge of architecture, engineering, carpentry, masonry, electrical and plumbing. Once he finishes building the house he moves into it, and it is very real to him, yet he denies that he built it, or that he lives in it, or even that it is in fact a house. That man appears to be you. You have the knowledge, yet you deny from whence it came, you seem to deny that it is knowledge (because you can't be certain), and you also appear to deny that you acquired the knowledge because of your previous denial that you could be certain. In other words, you live in a house you built, but can't grasp the simple fact that it is a house. You'd make a terrible judge. You couldn't convict anyone. You'd always find a way out of a situation for the accused by changing the context, and making all the arguments irrelevant. You'd say things like: "Prosecutor, the 'evidence' which you presented may not be evidence because of such and such a context in which it might not be so." BTW, Barry, if every truth is context dependent, then every word in your above post should have a separate set of quotes around it, not just the word "useful." Let me ask you this: Do you teach? If so, I feel sorry for your students. I fully expect further equivocation on all these points. You are nothing if not persistent. CannuckianYankee
RE 374 “It’s the fact that not all systems need be NC that prevents it from being universal.” I forgot to ask if you are certain of the fact that not all systems need be NC? Vivid vividbleau
RE374 "It's the fact that not all systems need be NC that prevents it from being universal." So what? Why does that fact prevent NC from being a universal law? Vivid vividbleau
CY@380
Could you hold that such a system is useful and useless at the same time and still remain coherent?
Ah, so you want utility *and* coherence.... ;-) In my "useful" definition of utility, I take a single measurement against a single context. As such, no examples of simultaneous 2-valued utility are coming to mind. But that's not the only definition of utility, and realistic ones that measure a single context across several dimensions might be closer to what you're thinking of (perhaps the predictions are accurate --- high utility, but the model doesn't explain why --- low utility). Or if you wanted to broaden the definition of context, then a theorem that would have high utility for a graduate student might simultaneously have low utility for an undergraduate. So yes, I think you can construct contexts with simultaneous, distinct values of utility, and those may have a more practical application for how we think in day-to-day life, but here I think it's simpler to define the context such that this doesn't occur. BarryR
A further note from Burkhard at talk.origins on the origins of citations:
Since you are, like me, a collector of cultural trifles: It is not totally relevant to your question regarding referencing, but it is the oldest (563 AD), if somewhat apocryphal, case of a copyright dispute. St. Colm cille of Iona had copied St Finian of Movilla's vulgata translation of St Jerome's Psalter (without acknowledging Finian as translator, of course). Finian was not amused and claimed the copy as his. The High King Diarmuid Mac Cearbhaill, sitting in judgement, famously ruled: "To every cow her calf, and to every book its copy" and decided for Finian. Enraged, Colm cille called his clan, the northern Uí Néill, to war with Diarmuid's followers, and in the subsequent battle at Cúl Dreimhne, 3000 of the King's men died. Colm cille was later forced into exile by the King. So tell your contact that inappropriate citing can lead to clan warfare and the death of many; that should give him better reasons than some peer reviewed paper.
BarryR
BarryR,
That’s correct. “Logic” and “illogical” apply within contexts, not outside them.
Do you concede then that this is contradictory, because you're tacitly claiming that one logical truth, that is, the context dependency of truth, is supposed to be true regardless of context, and is supposed to apply to all contexts. If it is not true, or not applicable to all contexts, then your argument fails. But so does your argument if it is true, for that means you're making an exception for this one truth applying to all contexts. It's akin to saying, "no one can tell anyone else what they can or cannot do"----which is actually a way of telling everyone what they can or cannot do. Claiming that all truth and logic is context dependent, makes an exception for the truth of the claim that it is context dependent, because at least that truth is not context dependent, but rather is supposed to hold true to all contexts. I can do this all day if you would like. How many more contradictory twists can we get you out of?
Where? I’ve looked at the Peano axioms. I’ve looked at the axioms used to create lambda calculus. I’ve studied perhaps another half-dozen systems in depth and can probably come up with another dozen more I’ve heard of. What you’re saying simply isn’t true for these systems. But for sake of argument, let’s say there’s a conspiracy of textbooks writers and that they do actually borrow from a greater logic. I’d be willing to consider that if you could describe the logic, but for whatever reason you only suggest concepts that are obviously context-specific (like modus ponens). So yes, I’m a little skeptical.
You presuppose it even in determining whether something is logical or not, for goodness sakes. If the category didn't already exist at large, no system could be compared to it to even call the systems logical or illogical. If we were in a blank slate, tabula rasa, in a logical void, we could never compare any system to logic in order to even call it logical; all determinations as such would be like the arbitrary wishes of a partisan. We would be playing with counters. I guess I'm going more foundational and fundamental than you're used to. It appears to me that you assume that any one thing, such as modus tollens, must be used in every instance or else it is not universal, whereas I see it as a tool to be used within a universal truth, and that it has its place is not an argument against its universality. The argument you're making seems to me to be like saying that since every house is not painted blue, then colors are not real. A sense of discernment will tell you which logical tool to use when, and it will even tell you which things it cannot do, such as discern the inner synthesis of nature as Chesterton so eloquently described. I once read a story about a guy who claimed that there was no truth (or at least, that we couldn't know it if there were), and didn't see the contradiction in his own claim, but I have to admit this is the first time I've actually ever met one.
Am I certain that I’m uncertain? How would I go about demonstrating that? I don’t know that I can.
It's called thinking.
So I think I’m going to remain uncertain about my uncertainty. (Sure, I’m nearly-certain that I’m uncertain, but scientifically I’m not able to say anything more. Yes, I can construct a logical system that will give me certainty, but I can also construct a logical system that preserves uncertainty. As I don’t have access to any universal logic, I think (but am not certain) that I’m at an impasse.)
You're going to remain certain about your uncertainty; anything else reduces itself, step by step, however far you trace it back, to nonsense, until not only have you done away with your argument, you've done away with yourself. Reductio ad absurdum. The interesting thing is that you think this is the more enlightened path. As if walking backwards, which will eventually do away with even walking itself, is supposed to get you somewhere. This thinking leaves you frozen, not even able to think. Even your continuum of certainty presupposes that you should certainly use a continuum. Some things are non-negotiable, even with you, on pain of idiocy if you remove them. Clive Hayden
BarryR "So while I see no difficulty creating a consistent formal system where Jupiter simultaneously exists and does not exist, my (tentative) conclusion is that such a system is useless." Let's assume for arguments sake that you've abandoned the tentative nature of your conclusion. You now hold that such a system is indeed useless; It serves no purpose. Could you hold that such a system is useful and useless at the same time and still remain coherent? Please answer without redefining the terms "useful" and "useless." CannuckianYankee
StephanB@370
I appreciate your thoughts on this matter. My question, though, persists. Can the planet Jupiter exist and not exist at the same time and under that same formal circumstances.
There's nothing preventing you from constructing that hypothesis. Let's call it H. What does this hypothesis allow you to do? Not much that I can see. So there's not a lot of utility here. So while I see no difficulty creating a consistent formal system where Jupiter simultaneously exists and does not exist, my (tentative) conclusion is that such a system is useless. BarryR
kairosfocus@375 I was hoping you say something like:
such computations is based on ordinary arithmetic, and the way the significant digits are manipulated is also dependent on ordinary arithmetic.
My initial impression of you was that you enjoyed going on at length about things you know nothing about, but it's nice to get that impression validated with a particular example. This program:

#include <stdio.h>

int main(){
    double x = 2.0e-5;
    double y = 2.0e5;
    double z = x + y;
    fprintf(stdout, "x=%012.12lf\n", x);
    fprintf(stdout, "y=%012.12lf\n", y);
    fprintf(stdout, "z=%012.12lf\n", z);
    return 0;
}

prints out

x=0.000020000000
y=200000.000000000000
z=200000.000020000007

Hmmm... where did that 7 come from? With these changes

    double x = 2.0e-7;
    double y = 2.0e7;
    double z = x + y;

I see

x=0.000000200000
y=20000000.000000000000
z=20000000.000000201166

And with these changes

    double x = 2.0e-9;
    double y = 2.0e9;
    double z = x + y;

I see

x=0.000000002000
y=2000000000.000000000000
z=2000000000.000000000000

(code compiled using gcc 4.4.3 on a 64-bit core2 duo). I find nothing remarkable in all of this, but then I have a tolerable understanding of computer arithmetic, how it's implemented, and why these results are necessary for good performance. But it is not ordinary arithmetic. Nor is it invalid or illogical within its particular context. BarryR
CH@372
then nothing is anything in particular, in mathematics or logic, and there would exist no basis for calling anything logical or illogical.
That's correct. "Logic" and "illogical" apply within contexts, not outside them.
but unless there is a greater logic that these systems borrow from
Where? I've looked at the Peano axioms. I've looked at the axioms used to create lambda calculus. I've studied perhaps another half-dozen systems in depth and can probably come up with another dozen more I've heard of. What you're saying simply isn't true for these systems. But for sake of argument, let's say there's a conspiracy of textbooks writers and that they do actually borrow from a greater logic. I'd be willing to consider that if you could describe the logic, but for whatever reason you only suggest concepts that are obviously context-specific (like modus ponens). So yes, I'm a little skeptical.
you can do no comparison between the two, to say that they are themselves logical or illogical
Correct --- comparisons of logic and illogic aren't made between systems for just this reason. That's why you don't hear of Euclidian geometry described as more or less logical than non-Euclidean geometry. They each follow their internal logic, not an external logic.
I could’ve just as easily asked how much dignity or shame does yellow have
And I can just as easily come up with a context where such a question is sensible.
The yellow badge (or yellow patch), also referred to as a Jewish badge, was a cloth patch that Jews were ordered to sew on their outer garments in order to mark them as Jews in public. It is intended to be a badge of shame associated with antisemitism.
Fide wikipedia Why yellow? Perhaps because that color is associated with cowardice in that culture. So yes, it's a valid question to ask if yellow is associated more with dignity or shame, and if so, how much. I think I've demonstrated pretty conclusively here that nonsense and illogic are context dependent. But I'm a little more imaginative than you are, so let's try something a little more difficult: How about: :%s/\(0x\)A/\1B/g Nonsense or not? To me, the answer is obvious because I treat nonsense as context-dependent. You, however, have to rely on universals. So, is this nonsensical? If not, how do you make sense of this? I think it's clear that your universals aren't going to be very helpful here. But I can't think of any context where they would be.
It’s nonsense, but it’s only nonsense when you have an idea of sense.
I certainly have an idea of "sense". In fact, I have several.
I wasn’t asking about characters built in to the words themselves, but a physical distance between the two.
On my monitor, they're a bit less than an inch apart. How many more times do we have to do this? You're thinking of contexts where that question doesn't make sense. I can think of contexts where it does make sense. Sense is context-dependent.
You’re not giving an answer to my questions, you’re changing the question and giving an answer to something I didn’t ask.
YES! YOU'VE GOT IT! I'm not changing the words of the question, I'm just changing the context in which those words appear. Depending on the context used, you can expect a sensical answer, a nonsensical answer, both or neither. The words themselves don't have some absolute universal meaning --- the meaning derives from the surrounding context.
But you cannot make a total argument against certainty, you are essentially making an argument that you are certain that certainty doesn’t exist
I thought I had gone out of my way to state that I'm making an argument that I'm nearly-certain absolute certainty doesn't exist. Science doesn't deal in certainty, but it does handle probabilities well. So when I take a measurement I have some sense of how likely it is that the measurement conforms to reality --- I'm not often wrong, but it's happened before. And when I assemble several thousand of those measurements into a hypothesis, I can calculate how certain I am that my hypothesis is correct. But as Descartes observed, there might be some malevolent demon deceiving me, so I can't say (unless I'm speaking informally) that I am absolutely certain. When it comes to things like the earth going around the sun and, yes, evolution, we have such a high degree of certainty that you'd need to invoke an impressive stream of miracles from a deceitful god in order to explain how it could be false, but since we can't rule it out, we can't be absolutely certain.
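The step from individually fallible measurements to near-certainty can be made concrete with a toy Bayes update. The numbers below (even prior odds, a Bayes factor of 10 per measurement, six measurements) are illustrative assumptions, not anything from the exchange:

```python
# Toy model: start at even prior odds, and suppose each independent
# measurement is 10x more likely if the hypothesis is true than if it
# is false (an assumed, illustrative Bayes factor).
prior_odds = 1.0
bayes_factor = 10.0
n_measurements = 6

# Independent evidence multiplies the odds.
posterior_odds = prior_odds * bayes_factor ** n_measurements
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(posterior_prob)  # ~0.999999 -- near-certain, but never exactly 1
```

The probability approaches 1 as measurements accumulate but never reaches it, which is the "nearly-certain, not absolutely certain" point being made.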
If you want to say that you’re not certain that you’re not certain, you’re still certain of that much.
Am I certain that I'm uncertain? How would I go about demonstrating that? I don't know that I can. So I think I'm going to remain uncertain about my uncertainty. (Sure, I'm nearly-certain that I'm uncertain, but scientifically I'm not able to say anything more. Yes, I can construct a logical system that will give me certainty, but I can also construct a logical system that preserves uncertainty. As I don't have access to any universal logic, I think (but am not certain) that I'm at an impasse.)
(Reading Lewis might help. I found De Futilitate to be perfectly straightforward. You might want to give it a go.)
I did, remember? And I pointed out that Lewis's argument fails if you're willing to abandon binary truth/falsity in place of a continuum of certainty.
What I’m also saying is that we have no reliable way of knowing any.
You seem to at least know that much.
Philosophical arguments are much more pleasant if goodwill can be assumed. Please read the above as: I am nearly-certain that I have no knowledge of how we can come to know universal truths. I'm also nearly-certain that you have no such knowledge either. BarryR
CH@372 I'd like to stop these exchanges where yet another universal law is proposed and I have to come up with a trivial example that shows not all systems use the proposed law. I'll give you one more shot.
It’s called logical deduction, used in philosophy, as is modus tollens, modus ponens, maybe you’ve heard of them.
Do you want this to be your final answer? Modus tollens and modus ponens are universal across all systems of reasoning? And if I pull out a textbook (say, white with red borders) that presents a standard system of reasoning that uses neither, will you concede that you're not able to name any such law? Let's not bother with the marginal cases. I want your strongest candidate, and when the strongest candidate is shown to be valid only in particular contexts, I want you to acknowledge this. So, what's your answer? BarryR
Onlookers: It is now clear that this thread is over on the merits. When the reality of floating point arithmetic [basically base-2 scientific notation arithmetic . . . ] is put up as a contradiction to the principles of arithmetic and the self-evidence of things like 2 + 2 = 4, that tells us all we need to know about the red herring distractions that have been used above. Let us just say to that, that the way indices are used in such computations is based on ordinary arithmetic, and the way the significant digits are manipulated is also dependent on ordinary arithmetic. After hundreds of posts, it remains plain that self-evident truths are real, and that those who object to them end up in distractive irrelevancies or outright absurdities. As to why such objections are being made [regardless of cost], 362 above may help us understand. As to the original issue of the thread, that evolutionary materialism has no coherent basis for explaining the origin of consciousness, that is even more plainly unanswered. So, let us understand the bankruptcy of the agendas that have led people to try to justify such irrelevancies and absurdities for hundreds of posts. To sign off, let us remind ourselves: || + || --> |||| 2 + 2 = 4 Good day GEM of TKI kairosfocus
vividbleau@371
Barry, I don't see that the fact that NC is a property of systems means it can't be that NC is a universal law.
You are correct; I misspoke. If NC was a property of every system then it would indeed be a candidate for becoming a universal law. It's the fact that not all systems need be NC that prevents it from being universal. BarryR
SAR, I think what might help you in comprehending the position is this: Your space aliens understand that 2+2=4. For them it is also a self-evident truth. However, the space aliens operate on an entirely different (and alien to us) symbolic system to quantify it. As I mentioned to markf, it's not the symbols we use which make up the truth. The symbols are a representation or quantification of the truth. The truth lies outside of them. In other words, if we humans in our limited intellect had never discovered that 2+2=4, and had never developed a symbolic language to express it, it would be no less true. You're operating on an assumption that truth is not so until we quantify it; which is entirely and demonstrably false. The reason we can even talk about "true" is because it exists whether we recognize it or we don't. We are not the inventors of it. If you don't believe that, then you can't be coherent in any attempt to deny it. If you learned anything in school (and I'm not denying that you did), then you operated on the assumption that there are certain things that are self-evidently true. And if you didn't operate on that assumption, then you learned nothing. CannuckianYankee
BarryR,
…but you can’t seem to name any.
It's called logical deduction, used in philosophy, as is modus tollens, modus ponens, maybe you've heard of them.
2+2=4, not a necessity in all contexts. “Whole is the sum of its parts”, not necessary in all contexts. “Law of causation”, definitely not necessary in all contexts (do integers have a cause?). “Law of non-contradiction”, your computer certainly doesn’t think it is particularly necessary. Yes, you *can* understand ideas as being logical necessities, just as I can construct an axiomatic system that results in 2+2=147. I can’t think of a reason why reasoning and rational people would prefer either approach.
Yes, you can understand ideas as logical necessities, I'm glad you at least agree with that much, I feel like we're making slow progress, but progress nevertheless. However, as I've said over and over, if any axiom can be made to make 2+2=147, and you're applying this to logic, then nothing is anything in particular, in mathematics or logic, and there would exist no basis for calling anything logical or illogical. You seem to pick and choose from "different" "logical" systems, as you say, depending on how you choose which system to use (first order, second order, etc.), and choosing "axioms"; but unless there is a greater logic that these systems borrow from, you can do no comparison between the two, to say that they are themselves logical or illogical; you would only be playing with counters. Indeed, the whole endeavor presupposes a logical system not reducible to them, but to which they are subject. I wasn't asking about anything physical, such as a light beam, in asking how large yellow is, I was asking about the idea; but you cannot get yourself to see nature as ideas, which is part of your problem in refuting Chesterton. I could've just as easily asked how much dignity or shame yellow has, and how free or tired it is. It's nonsense, but it's only nonsense when you have an idea of sense. But if there were no such beginning, there would be no such end, no answer either way, and I would be perfectly within my own logical system to invent logic to suit my purposes according to your fun grab-bag of logic and mathematics.
How fortuitous that you chose two phrases with exactly the same number of letters. The genetic distance is 13 point mutations (cf “methinks it is like a weasel”).
This is a good example, because what I said with regard to asking how far it is from London Bridge to Christmas Day is just as much nonsense as "methinks it is like a weasel" was. I wasn't asking about characters built into the words themselves, but a physical distance between the two. You're not giving an answer to my questions, you're changing the question and giving an answer to something I didn't ask. This is, of course, the only way to proceed in any attempt at making what I asked logical: you have to change what I asked; as if sleight of hand will help you in an argument, which tells me that you're only interested in rhetorical points, which tells me that you're not really interested in real dialogue. I won't participate in this, for future reference.
You and StephenB are the ones claiming that these universal laws exist. I’m happy to believe in them — it would certainly make my life a lot simpler —, but I’d like to know what they are first. The ones that have been suggested so far (except the one I suggested) are entirely context dependent. Nor am I arguing that there is no “right”. I’m making an argument against certainty (which, as a scientist, comes very naturally).
But you cannot make a total argument against certainty, you are essentially making an argument that you are certain that certainty doesn't exist, or at the very least, that you are certain that you aren't certain. If you want to say that you're not certain that you're not certain, you're still certain of that much. This is the same "one-step-removed" process I keep having to rein you in from.
What I’m also saying is that we have no reliable way of knowing any.
You seem to at least know that much.
That’s wordy, so I will from time to time say “No universal laws exist” with the hope that the context will make it clear that this is an epistemic claim consistent with the above, not an ontological claim.
Yes, it is an epistemic claim, one that is a contradiction. I've pointed this out several times now. At this point, I think I just need to accept that you're either not willing or not able to understand this. (Reading Lewis might help. I found De Futilitate to be perfectly straightforward. You might want to give it a go.) Clive Hayden
RE369 Barry, I don't see that the fact that NC is a property of systems means it can't be that NC is a universal law. I am not disputing your contention that NC is a property of systems; I am disputing your conclusion. You have not demonstrated why NC as a universal law is false if NC is a property of systems. Vivid vividbleau
---BarryR: "Non-contradiction is a property of systems, not a universal law." I appreciate your thoughts on this matter. My question, though, persists. Can the planet Jupiter exist and not exist at the same time and under the same formal circumstances? StephenB
vividbleau@366
Barry because NC is a property of systems does this mean NC cannot be a universal law?
Correct, and there are useful systems designed around the idea that contradiction (in controlled doses) is a downright handy thing to have. BarryR
We, as reasoning and rational people, can understand ideas as logical necessities
...but you can't seem to name any. 2+2=4, not a necessity in all contexts. "Whole is the sum of its parts", not necessary in all contexts. "Law of causation", definitely not necessary in all contexts (do integers have a cause?). "Law of non-contradiction", your computer certainly doesn't think it is particularly necessary. Yes, you *can* understand ideas as being logical necessities, just as I can construct an axiomatic system that results in 2+2=147. I can't think of a reason why reasoning and rational people would prefer either approach.
I guess you’ve never seen a logical deduction, you know, major premise, minor premise, conclusion, modus tollens, modus ponens, etc., and I suppose you’ve never encountered the experience of seeing when one thing logically leads to another.
Nope, I've never seen those used in a universal context. I have, though, seen them applied in the context of several different varieties of mathematics, philosophy and science. That's part of the reason that I conclude they're context-dependent.
Can I determine how large yellow is?
Yes. Why wouldn't you be able to construct a system that does that? It's built into both python and R:
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> len("yellow")
6
Wow, how about that, here's a context where yellow has dimensionality! Here's another:
An infrared laser diode at 808 nm is used to pump a crystal of neodymium-doped yttrium vanadium oxide (Nd:YVO4) or neodymium-doped yttrium aluminium garnet (Nd:YAG) and induces it to emit at two frequencies (wavelengths of 1064 nm and 1342 nm) simultaneously. This deeper infrared light is then passed through another crystal containing potassium, titanium and phosphorus (KTP), whose non-linear properties generate light at a frequency that is the sum of the two incident beams; in this case corresponding to the wavelength of 593.5 nm ("yellow").
Fide Wikipedia. Did you notice that you're making an argument from incredulity? You don't know how to determine the size of yellow; thus, based only on your ignorance, no context exists where this could be true. When ignorance is all you have, that's the only argument you can make.
And how far it is from London Bridge to Christmas Day?
How fortuitous that you chose two phrases with exactly the same number of letters. The genetic distance is 13 point mutations (cf "methinks it is like a weasel").
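The "genetic distance" claim above can be checked directly: treating the two phrases as equal-length strings, the point-mutation count is just a Hamming distance. A quick sketch:

```python
a = "London Bridge"
b = "Christmas Day"
assert len(a) == len(b)  # both 13 characters, spaces included

# Hamming distance: the number of positions where the strings
# disagree (case-sensitive, so 'd' vs 'D' counts as a mutation).
distance = sum(x != y for x, y in zip(a, b))
print(distance)  # 13
```

Every one of the 13 positions differs here, so the two phrases are as far apart as two equal-length strings can be.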
What’s that you say, these are nonsense?
Absolutely, in the context you were thinking of them. In other contexts, they're perfectly reasonable. You might even draw the conclusion from this that "nonsense" is as dependent on context as "truth".
When you remove logic, as with atheism, it’s not that nothing will be believed, it’s that everything will be believed.
I get the impression that I'm using logic far more effectively than you are. I'd hate to remove it, especially since there are so many to choose from. Small correction to the above: everything *can* be believed, not will. When you remove universals, you have to work a bit harder to justify what you believe (as opposed to making an argument from incredulity that something is either "nonsense" or "self-evident"). As I've mentioned elsewhere, the tools I use for this are utility and elegance.
As for your contention that there are no universal truths (as if that’s not attempting to make one)
Cutting and pasting again.... [A] Mathematical laws exist and depend on context, specifically the initial selection of axioms. No context-free laws have been observed, and we know of no way to accurately perceive any that may exist. When speaking informally, I shorten the above to “mathematical laws do not exist”.
you have provided no argument except to say that Descartes got one, at least, but no others, and thence argue as if what StephenB and I say is somehow wrong, as if you have guidance to what is right, while arguing that there is no right
You and StephenB are the ones claiming that these universal laws exist. I'm happy to believe in them --- it would certainly make my life a lot simpler ---, but I'd like to know what they are first. The ones that have been suggested so far (except the one I suggested) are entirely context dependent. Nor am I arguing that there is no "right". I'm making an argument against certainty (which, as a scientist, comes very naturally). At this point, I'm near-certain that you have no access to any universal truths (as opposed to private truth that you want to believe are universal), and so I'm near-certain that you're wrong in any context that I would find useful. To save a few electrons, I'll be shortening this to: I believe you're wrong.
saying that truth is context dependent, is saying that one truth is not context dependent, which is the context dependence on all other truth, and applies to all truth in general, except itself.
The fallacy in the above was addressed by Edmonds (using formal notation and everything). I'm not saying that it's a universal rule that truth is context dependent. Let me repeat that. I'm not saying that it's a universal truth that truth is context dependent. Nor am I saying that universal truths cannot exist. Let me repeat that as well. I am not saying that universal truths cannot exist. What I am saying is that we don't know any (with a possible exception of cogito, which has its own problems). What I'm also saying is that we have no reliable way of knowing any. That's wordy, so I will from time to time say "No universal laws exist" with the hope that the context will make it clear that this is an epistemic claim consistent with the above, not an ontological claim. I've pointed out the distinction several times now. At this point, I think I just need to accept that you're either not willing or not able to understand the difference. (Reading Edmonds might help. I found the paper to be perfectly straightforward. You might want to give it a go.) BarryR
SAR, to try to act as though 2 + 2 = 4 is not a generally and objectively (even, self evidently) true claim because you can choose — unannounced — to write the NUMBER 4 with a different symbol that means the same, whether IV in Latin notation, or 11 in base-3 notation, is to highlight just how diversionary the above arguments are

It may be generally and objectively true. However, I made quite clear what system I was working under, where you didn't. So, it certainly wouldn't be self-evident to our alien friend that 2+2=4. He would find you rather nonsensical. :-P San Antonio Rose
RE 346 "Non-contradiction is a property of systems, not a universal law." Barry because NC is a property of systems does this mean NC cannot be a universal law? Vivid vividbleau
F/N: SAR, to try to act as though 2 + 2 = 4 is not a generally and objectively (even, self evidently) true claim because you can choose -- unannounced -- to write the NUMBER 4 with a different symbol that means the same, whether IV in Latin notation, or 11 in base-3 notation, is to highlight just how diversionary the above arguments are. [And if instead you meant what Spaniards call "once" then you have redefined + to mean nonsense, contradictory nonsense.] kairosfocus
CH@360
That you can’t be gotten to see that it’s Chesterton pointing out that scientists are making an argument from false credulity, that is, an argument from knowledge where there is none, and Chesterton pointing out that there is none, is not a fallacy, astounds me.
I thought we were discussing mathematics. If we're done with that, I'm happy to move on to the errors Chesterton made on the science side of things. BarryR
KF@358
If your machines are giving results as grossly in error as that, their “arithmetic” is wrong, period.
Oh, it's certainly wrong, if your only idea of arithmetic is what you've been taught at grammar school. Seymour Cray had several great ideas, but one of his most lucrative was realizing that people would pay more money for faster machines that gave "wrong" answers. He sold a lot of computers. Similar ideas are now incorporated into every processor (including the one you're using) that makes use of non-integer arithmetic. BarryR
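The "wrong" arithmetic in question is ordinary IEEE-754 floating point, and any Python shell will show grammar-school identities failing. A minimal sketch:

```python
# Binary floating point cannot represent 0.1 exactly, and large values
# have limited precision, so familiar identities break down.
print(0.1 + 0.2 == 0.3)        # False
print((1e16 + 1.0) - 1e16)     # 0.0 -- the added 1.0 is absorbed entirely
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False: addition isn't associative
```

None of this makes the hardware buggy; it is the documented behavior of rounding to a fixed number of binary digits, traded deliberately for speed.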
Onlookers: Let us ask ourselves a question: why are the objectors above so desperate to avoid squarely facing the reality of self-evident truths? (So desperate that they are evidently willing to cling to absurdities and to use every distraction they can come up with?) Maybe, because of what such truths entail for their preferred worldviews. For instance, start with SET # 1: "Error exists": 1 --> It is of course a blatant and universally accepted fact of life. 2 --> But it is more than that. If we try to deny SET 1, WE PROVIDE AN EXAMPLE OF AN ERROR, THUS AFFIRMING IT. IT IS UNDENIABLY TRUE. 3 --> Thus, objective and even absolute truth exists, as we here have a live example of a truth. 4 --> It is also a well warranted, credible truth, so it is a case of knowable, known truth: knowledge also exists, even in the strong form of justified, true belief. 5 --> Thus, truth and knowledge rise above the level of perceptions and beliefs. 6 --> So, radical relativism, hyperskepticism and the like are all discredited as being false to the reality of knowable, known truth. 7 --> In addition SET 1 is a very humbling truth, so it pulls the sting from the slander that to claim to know truth is to be arrogant and intolerant. If one of the most easily known truths is the possibility of error, then to claim to know such a truth is not to claim to be omniscient or to be closed minded. 8 --> Also, this is an example of a self-evident truth, i.e. self-evident truths and knowledge of such truths are also real. Which rips up the neat little attempted dichotomy of analytic and synthetic truths and questions on a priori and a posteriori. 9 --> Through that breach in the radical skeptic's defense lines rush a stream of other SETs: non-contradiction at the head, with the excluded middle and identity close on the heels.
10 --> So,now the whole area of knowledge, learning, reasoning and arguing suddenly have a framework of first principles of right reason that we must account to, or descend into absurdity. ____________ And so, much pivots on that issue. GEM of TKI kairosfocus
BarryR,
Chesterton’s error lies in thinking ideas are inevitable, that it’s impossible to imagine 1+2 not equaling 3. To the extent that he tries to support this, he does so by appeal to incredulity. I’m saying that neither nature nor mathematics is inevitable, which is simply the Chestertonian idea of nature applied to mathematics.
We, as reasoning and rational people, can understand ideas as logical necessities, I guess you've never seen a logical deduction, you know, major premise, minor premise, conclusion, modus tollens, modus ponens, etc., and I suppose you've never encountered the experience of seeing when one thing logically leads to another. As I asked you before, can I determine how large yellow is? And how far it is from London Bridge to Christmas Day? Can you? What's that you say, these are nonsense? Not logical? How do you determine that? When you remove logic, as with atheism, it's not that nothing will be believed, it's that everything will be believed. As for your contention that there are no universal truths (as if that's not attempting to make one), you have provided no argument except to say that Descartes got one, at least, but no others, and thence argue as if what StephenB and I say is somehow wrong, as if you have guidance to what is right, while arguing that there is no right. Barry, what I'm trying to enlighten you on in a nutshell, is this sort of reasoning: saying that truth is context dependent, is saying that one truth is not context dependent, which is the context dependence on all other truth, and applies to all truth in general, except itself. This is the same issue you ran into with regard to your skepticism of your skepticism; if you say you cannot know about the cogency of your skepticism, you are saying that you're at least not skeptical of the cogency of the skepticism of your skepticism. Making everything one level removed is not a refutation, nor is it a real argument, for you still make a tacit exception for yourself. You cannot argue in this way without self refutation, no matter how many removals you make. This is self evident. That you ought not to contradict yourself is a universal truth. This is why you couldn't validly argue against Lewis's De Futilitate as well without using this exact tactic of a one-removed process of skepticism.
This argument might make for a comfortable life for those unwilling to concede ultimate truth, but it is shallow and doesn't really look itself in the mirror and assess itself by its own criteria of skepticism. Clive Hayden
BarryR, That you can't be gotten to see that it's Chesterton pointing out that scientists are making an argument from false credulity, that is, an argument from knowledge where there is none, and Chesterton pointing out that there is none, is not a fallacy, astounds me. Clive Hayden
simply point out that to substitute that 2 + 2 = 11, you have redefined symbols.

Which symbol did I change? 2 is still 2, in apples or matchsticks. + still means to add and = still means the result follows. The only thing I changed was the context of a different math system. San Antonio Rose
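The base-change point can be checked mechanically: the quantity four, written in base-3 notation, is the digit string "11". The sketch below uses a hypothetical helper, to_base3, invented here for illustration (Python has no built-in base-3 formatter):

```python
def to_base3(n):
    """Render a positive integer as a base-3 digit string."""
    digits = ""
    while n:
        digits = str(n % 3) + digits
        n //= 3
    return digits or "0"

# The *quantity* 2 + 2 is unchanged; only its written form varies.
print(to_base3(2 + 2))  # 11 -- the same number four, in base-3 notation
print(int("11", 3))     # 4 -- and converted back again
```

The round trip makes the argument concrete: "11" in base 3 and "4" in base 10 name the same number, so nothing about 2 + 2 itself has changed.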
BR: Re: I work with expensive machines where the arithmetic is far stranger than 1+1=3 If your machines are giving results as grossly in error as that, their "arithmetic" is wrong, period. If what you really are referring to is that you are doing work in different algebras and other similar mathematical structures, then that is very different from trying to make arithmetic say: "1 + 1 = 3." Just go out there with a calculator that makes 1 + 1 = 3 and see what happens when you sell it to customers. GEM of TKI kairosfocus
CH@353
Agreed, it is an argument from incredulity, as Chesterton admits, and it would be rational and logical if you admitted it too
Oh, it's perfectly rational and logical, and it's also wrong. Pointing out that it's a logical fallacy ends the argument (and yes, argument from incredulity is a logical fallacy). But just in case, I provided a counterexample that applies not only to what I'm able to imagine, but what every scientist who uses decimal numbers on a computer can also imagine. [Yes, there are contexts where an appeal to incredulity is not a fallacy --- usually when you're talking to an expert about their area of expertise, but even there it's a very weak argument. And yes, I could set up a contextual system where Chesterton is absolutely right. Such a system has no obvious utility or elegance, and so I'm free to ignore it.] Cutting and pasting large blocks of Chesterton is not an effective way to convince me that my argument fails, particularly after I've shown his argument relies on a logical fallacy. BarryR
Stephen: So, tell me SAR, is the law of non-contradiction self evident or not? Since my school doesn't offer courses in philosophy, that is one question I will let you work out with Mr. Oxford. San Antonio Rose
Onlookers: The onward exchanges are just a bit saddening; but are also quite revealing. In particular, it is increasingly clear from the threadbare remarks above, that MF is ignoring what he cannot and/or will not address. (Let us ask: after months, is he able to address the point that "Error exists" is a self-evident truth, one undeniable on pain of self-refutation by counter example? After many days, can he show how, without injecting an irrelevancy by way of arbitrary, contradictory redefinition of "+" -- and let us not forget the one who (unannounced) resorted to stating 4 in base-3 notation, as though that changed the meaning!!!! -- 2 + 2 = 4 is not necessarily true on the ordinary meaning of 2, +, = and 4?) Truly, truly sad. GEM of TKI kairosfocus
CH@351
I regret that you don’t know any philosophy
Amazing what you can do when you don't know any philosophy: consult textbooks from the philosophy classes you've taken, read the peer-reviewed literature to find evidence and support, read and make arguments using formal notation, define terms, provide counterexamples that aren't trivially refuted... I wish someone else here didn't know any philosophy either. It would make this conversation much more interesting.
If you read ... especially the chapter The Ethics of Elfland
That's what I'm quoting from.
everything in nature was and is perfectly inevitable as ideas
No philosopher I'm aware of holds that nature is inevitable. I certainly don't hold that position. So I'm going to ignore that part of Chesterton's argument. Chesterton's error lies in thinking ideas are inevitable, that it's impossible to imagine 1+2 not equaling 3. To the extent that he tries to support this, he does so by appeal to incredulity. I'm saying that neither nature nor mathematics is inevitable, which is simply the Chestertonian idea of nature applied to mathematics.
The first line of argument contradicts itself, because it presupposes logic and uses it for an argument
At this point I don't know which line you're talking about. I have several logics to choose from, though, and since they're strictly limited to specific contexts, contradiction is rarely an issue.
the second argument you attempted to make at first, and abandoned it in short order and deferred to your friends at talk.origins
I deferred to talk.origins on the history of citations in scientific literature and reported back on the results. I may post a summary of this exchange there on Monday, but that requires I read a few more papers on evolutionary epistemology and I don't know that I'll be able to get to that by Monday.
actual reply or argument that sequences in nature are logical necessities.
Sequences? Oh, as Chesterton used it:
It might be stated this way. There are certain sequences or developments (cases of one thing following another), which are, in the true sense of the word, reasonable. They are, in the true sense of the word, necessary. Such are mathematical and merely logical sequences.
I thought this entire conversation had to do with the fact that the sequence 2+2=4 was not necessary and 2+2=147 was possible.
If 1+1=147, or any other number that could be inserted, then nothing means anything, because nothing is anything in particular, and becomes vacuous, thus there is not even an argument to be made that 1+1=147, because none of these properties would actually be anything in particular.
You're repeating yourself. I answered this at least a couple of times and you've not given a response to those answers. So here we go again: You can try to order your worldview so that meaning must be rooted in universals. So long as you don't try to enumerate what these universals are, and you don't bring the ones you do enumerate out for critical scrutiny, you can probably live a pretty happy life. However, once you start asking questions, the set of possible universals seems pretty puny. I can construct mathematical and logical systems, and looking back through history I can see a lot of other people doing the same thing. These are not universally true --- and in fact are very much influenced by the environment they were created in. (And, like religion, an ignorance of the history of mathematics will mislead you into thinking that the mathematics you were raised with is obvious, perfect and universal.) Once you decide to abandon universals, can anything really mean anything? Sure. So we need a way of figuring out which of these systems are interesting and which we should ignore. I like to use a combination of utility and elegance (neither of which are universal). Thus, I have no difficulty saying: 2+2=4 is both useful in most day-to-day situations and incredibly elegant as part of the theory of integers. 2+2=147 is useful in pointing out the contextual nature of truth, but doesn't have much utility or elegance beyond that. x, y, z > 0 s.t. x+y=z and y=z has a great deal of practical utility in specific contexts, and doesn't fail to have a certain imperious elegance..... All three statements are true within specific contexts. All three statements may be true universally, but I have no way of determining this. So. At this point you and StephenB have convinced me that you're not going to be able to provide examples of "universal truth" or "universal law" or "self-evident universals". Descartes couldn't do that either, so that's nothing to be ashamed of.
You might want to make a different argument: that universals are somehow better, even though we can't know what they are. I think that's going to be a tough point to carry, and I can't give you any advice as to how to get started, but at least it would be less futile than you continuing to repeat the same points that I've answered multiple times. BarryR
BarryR,
That's a textbook argument from incredulity. I have no difficulty imagining either (and I work with expensive machines where the arithmetic is far stranger than 1+1=3). That is Chesterton's argument. It is an argument from incredulity. As such, it relies on me having at least as much incredulity as Chesterton. I don't. Thus, the argument fails.
Agreed, it is an argument from incredulity, as Chesterton admits, and it would be rational and logical if you admitted it too, for some things we do not know, and cannot know in the same way that we know other things that we actually understand, such as logic. This, of course, includes you and everyone else.
"It is not a "law," for we do not understand its general formula. It is not a necessity, for though we can count on it happening practically, we have no right to say that it must always happen. It is no argument for unalterable law (as Huxley fancied) that we count on the ordinary course of things. We do not count on it; we bet on it. We risk the remote possibility of a miracle as we do that of a poisoned pancake or a world-destroying comet. We leave it out of account, not because it is a miracle, and therefore an impossibility, but because it is a miracle, and therefore an exception. All the terms used in the science books, "law," "necessity," "order," "tendency," and so on, are really unintellectual, because they assume an inner synthesis, which we do not possess.
The confusion and actual misunderstanding comes into play when scientists forget their incredulity, and mistake observations of nature for logical explanations of nature, that is, for mental necessities:
I deny altogether that this is fantastic or even mystical. We may have some mysticism later on; but this fairy-tale language about things is simply rational and agnostic. It is the only way I can express in words my clear and definite perception that one thing is quite distinct from another; that there is no logical connection between flying and laying eggs. It is the man who talks about "a law" that he has never seen who is the mystic. Nay, the ordinary scientific man is strictly a sentimentalist. He is a sentimentalist in this essential sense, that he is soaked and swept away by mere associations. He has so often seen birds fly and lay eggs that he feels as if there must be some dreamy, tender connection between the two ideas, whereas there is none. A forlorn lover might be unable to dissociate the moon from lost love; so the materialist is unable to dissociate the moon from the tide. In both cases there is no connection, except that one has seen them together. A sentimentalist might shed tears at the smell of apple-blossom, because, by a dark association of his own, it reminded him of his boyhood. So the materialist professor (though he conceals his tears) is yet a sentimentalist, because, by a dark association of his own, apple-blossoms remind him of apples.
Thus, your argument fails.
This elementary wonder, however, is not a mere fancy derived from the fairy tales; on the contrary, all the fire of the fairy tales is derived from this. Just as we all like love tales because there is an instinct of sex, we all like astonishing tales because they touch the nerve of the ancient instinct of astonishment. This is proved by the fact that when we are very young children we do not need fairy tales: we only need tales. Mere life is interesting enough. A child of seven is excited by being told that Tommy opened a door and saw a dragon. But a child of three is excited by being told that Tommy opened a door. Boys like romantic tales; but babies like realistic tales--because they find them romantic. In fact, a baby is about the only person, I should think, to whom a modern realistic novel could be read without boring him. This proves that even nursery tales only echo an almost pre-natal leap of interest and amazement. These tales say that apples were golden only to refresh the forgotten moment when we found that they were green. They make rivers run with wine only to make us remember, for one wild moment, that they run with water. I have said that this is wholly reasonable and even agnostic. And, indeed, on this point I am all for the higher agnosticism; its better name is Ignorance.
All that we call common sense and rationality and practicality and positivism only means that for certain dead levels of our life we forget that we have forgotten. All that we call spirit and art and ecstasy only means that for one awful instant we remember that we forget.
Clive Hayden
CH@350
I still don’t see an argument
At this point I think it's best if I just cut and paste. Here’s the nub:
You cannot IMAGINE two and one not making three. But you can easily imagine trees not growing fruit;
That's a textbook argument from incredulity. I have no difficulty imagining either (and I work with expensive machines where the arithmetic is far stranger than 1+1=3). That is Chesterton's argument. It is an argument from incredulity. As such, it relies on me having at least as much incredulity as Chesterton. I don't. Thus, the argument fails. BarryR
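[Editor's illustration.] The "far stranger arithmetic" BarryR alludes to is presumably fixed-width machine arithmetic, where addition wraps around. A minimal Python sketch, assuming signed 8-bit integers (the helper name is mine, not from the thread):

```python
# Hypothetical illustration: in fixed-width machine arithmetic,
# addition wraps around, so two "ordinary" numbers can sum to
# something that grammar-school arithmetic would call absurd.
import ctypes

def add_int8(a, b):
    """Add two numbers as signed 8-bit (two's complement) integers."""
    return ctypes.c_int8(a + b).value

print(add_int8(100, 100))   # -56: 200 wraps past 127 to 200 - 256
```

Within the 8-bit context the result is perfectly consistent; it only looks strange when judged by the conventions of unbounded integer arithmetic.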
BarryR,
The more I think about it, the more I think that I’m more Chestertonian than Chesterton.
That’s a tolerably good definition of mathematics, and I regret that Chesterton didn’t know any or he might have written a much more interesting essay. I’m glad that 1+1=2 because I understand that 1+1 could have been 147, and the decision is not one that can be appealed to the real world, only to the storyteller (the mathematician). If the internal logic of the story (theorem) is sound, the story rings true, and it doesn’t matter if the (real or metaphorical) snow is black or white or plaid. Thank you for reminding me to revisit that.
I regret that you don't know any philosophy, and get Chesterton exactly backwards. If you read the book Orthodoxy, and especially the chapter The Ethics of Elfland, you'll see that the argument he is making is that natural things, physical occurrences, are opaque to our philosophy as to seeing the inner synthesis, unlike something that we can perceive with our intellect, such as mathematics. I still feel like I'm having to drag you out of the mire of materialism here, by repeating the same thing five or six times.

You have yet to make an argument against him, except to say that logical sequences don't really exist even in philosophy (apparently there is no such thing as a deductive argument, at all, ever, that was true) and that mathematics doesn't exist, but that everything in nature was and is perfectly inevitable as ideas. The first line of argument contradicts itself, because it presupposes logic and uses it for an argument, albeit a bad one. The second argument you attempted at first, but abandoned in short order and deferred to your friends at talk.origins, and as yet I've not seen an actual reply or argument that sequences in nature are logical necessities.

The confusion being made here is in thinking that the laws of nature are the laws of thought, and that to imagine something in nature as being different than it is, is the same as imagining a logical impossibility. It, of course, isn't, as any philosopher worth his salt knows. If 1+1=147, or any other number that could be inserted, then nothing means anything, because nothing is anything in particular, and becomes vacuous; thus there is not even an argument to be made that 1+1=147, because none of these properties would actually be anything in particular. You might as well say that flim + flam = flum. You're talking confusedly about things that don't exist, even as placeholders, assuming your argument. Clive Hayden
BarryR,
Backing up a bit and reading Chesterton in context, I now realize that he's writing polemic, not philosophy, and the style he has chosen is a perfectly reasonable one for polemic. Here's the nub:
You cannot IMAGINE two and one not making three. But you can easily imagine trees not growing fruit;
That’s a textbook argument from incredulity. I have no difficulty imagining either (and I work with expensive machines where the arithmetic is far stranger than 1+1=3). This passage may be more telling:
It is a dreadful thing to say that Mr. W.B.Yeats does not understand fairyland. But I do say it. He is an ironical Irishman, full of intellectual reactions. He is not stupid enough to understand fairyland. Fairies prefer people of the yokel type like myself; people who gape and grin and do as they are told.
I don’t believe he’s intending either passage to be read literally, or, perhaps I should say that I don’t think either passage reflects what he actually believes. This is emphatically written for an audience, an audience who wants to be reassured. But no, I don’t think there’s an English professor or philosopher who would mistake this for philosophy.
I do think so. So, where are we now? I still don't see an argument, unless your argument is "Chesterton didn't really mean anything he said, because he was writing to people." Even a surface reading would conclude that he's deadly serious, but it's his way to be humorous at the same time; he can't help but laugh at folks enraptured by scientism, and he pokes fun at himself too, but make no mistake, his jovial attitude is not a dismissal of his intellect. He was known as "The Happy Man," and that happiness permeated every inch of his philosophy. When are you going to address the argument itself? Clive Hayden
#340 (cont) I forgot the "little in common" aspect. I sharply disagree with Gpuccio and many others. But at least I understand what they are saying. I really struggle to understand what KF is saying much of the time. I am sort of borderline with you - which is why I keep on asking what you mean. With KF it would be continuous. Must go - I really have run out of time! markf
StephenB@342 Well, now you have me curious: how could you have taken both graduate and undergraduate classes in philosophy that resulted in an appreciation of syllogisms and Adler? I'm sorry your education hasn't served you better than it has. On the one hand, I'm curious about what classes you took and where, but on the other hand there's not much that can be done about it now, so perhaps it's best to let it go. If you'd like to continue discussing this particular topic offline, I can be reached at cartographical at gmail dot com. BarryR
#338 a) KF's posts are usually very time consuming to read - by an order of magnitude compared to most others. They are long, full of references which need following up and often written in dense abstract language. b) I think of responding to a comment as a commitment to engage in some kind of dialogue for a bit (with some exceptions for a light-hearted or trivial comment). Actually commitment is the wrong word - it is psychologically hard to give up an ongoing argument. You get drawn into it. So I have started responding to your posts after a period of not doing so. c) Right at the moment I have a lot of time. I work freelance and I hope this is the exception not the rule! KF is not the only one I avoid. They may all, including KF, have great points to make. I just think this thing would dominate your life if you got into a debate with everyone. Surely you have people you avoid being drawn into discussion with? markf
StephenB Non-contradiction is a property of systems, not a universal law. In some contexts it's useful. In other contexts, it's not. It's not useful for your computer, which implements (non-modular) arithmetic as follows: ∃x ∃y ∃z | 0<x<y<z s.t. x+y=z and y=z. Spelled out, there exist positive values for x, y and z such that y=z and x+y=z. On your computer, that's not a contradiction, that's a feature (it allows your computer to work with numbers other than integers). In your grammar school arithmetic, that's a contradiction. BarryR
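[Editor's illustration.] The formula BarryR gives describes floating-point absorption: a small positive x added to a much larger y leaves y unchanged, so x+y=z and y=z hold at once for a nonzero x. A quick check in standard 64-bit floating point:

```python
# Floating-point "absorption": for small enough positive x, x + y == y,
# because x falls below the precision available at y's magnitude.
x = 1e-20   # positive, but far below 1.0's unit of least precision
y = 1.0
z = x + y

print(x > 0)     # True: x is strictly positive
print(z == y)    # True: so x + y = z and y = z, with x nonzero
```

In the floating-point context this is specified behavior, not a bug; it is only a "contradiction" when read against the axioms of exact arithmetic.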
---Markf: "Rather it is a nonsensical thing to say [Jupiter can exist and not exist at the same time] and I do not know what it would be like to believe it." Why is it nonsensical? StephenB
---Markf: "I can’t read everyone’s response to everything. It is nothing personal against KF – I just think we have so little in common that there is no basis for an interesting discussion . . ." Mark, in all honesty, it really is difficult to make the case that you are ignoring KF's correctives, arguments, and refutations of arguments on the grounds that you are short on time and have "little in common" with him. On this thread alone, you have probably invested fifty-fold the amount of time necessary to read one of KFs posts. On the other hand, the purpose of debate is to pair contrasting views from advocates who, by definition, are likely to have little "in common." StephenB
#333 Stephenb I keep on meaning to come back to the law of non-contradiction. I don't know what others have been saying, but I do not hold that Jupiter can exist and not exist at the same time. But this is not because of a law. That suggests that if the law were changed then I could believe it. Rather it is a nonsensical thing to say and I do not know what it would be like to believe it. markf
---BarryR: "I’m not quite sure what you meant to communicate, but what came across was: 1. No undergraduate philosophy classes, 2. No graduate philosophy classes, and 3. No graduate degree." Yes, I have a graduate degree. It is a bit difficult to graduate with high honors without actually graduating. There is that law of non-contradiction rearing its ugly head again. Yes, I have taken numerous philosophy courses both at the undergraduate and graduate level. On the other hand, credential checking is your gig, not mine. I would prefer to address substantive points on the table, such as all those unattended hanging chads of yours that I listed @323. StephenB
PS: I will ignore the personalities, and simply point out that to substitute 2 + 2 = 11, you have redefined the symbols. kairosfocus
SAR: Not at all. The point of a self evident truth is that we are intelligent, experienced, minded creatures who routinely UNDERSTAND meanings and contexts. Which is a basic fact of life you imply every time you post a message here: you expect to be understood without giving the full context of your remarks -- from the definition of alphabetic glyphs and English words and grammar on up -- without text in the first instance. In short, the objection is based on an exercise in self-referentially inconsistent selective hyperskepticism. And, in the case in view, 2 + 2 = 4, that meaning and context is the property of just about every 5 year old who has had the privilege of schooling. So, the objection is specious and self-defeating. Indeed, it aptly illustrates how those who choose to object to a self-evident truth are forced to resort to absurdity. GEM of TKI kairosfocus
---San Antonio Rose: "If you fail to completely state the context, then the truth is hardly self-evident." So, tell me SAR, is the law of non-contradiction self evident or not? Your friends on this thread say that it is not self evident. They believe that, in principle, the planet Jupiter could exist and not exist at the same time and under the same formal circumstances. For them, there is no logical law to prevent us from saying otherwise. Do you agree with them? StephenB
once we use the relevant symbols in their usual ways [and as not one of those reading misunderstands!], the claim 2 + 2 = 4 is both true and necessarily true, on pain of absurdity. Nope. I have not changed the meaning of 2 or + or =. And I provided other cases where on different interpretations of the glyphs we do get different results, but that is about a different context. Which is strictly irrelevant to the self-evident nature of the truth that 2 + 2 = 4. If you fail to completely state the context, then the truth is hardly self-evident. Just think of the alien who only understands base 3. If the first thing he heard when he got off his spaceship was you saying "2 + 2 = 4 is self-evidently true," he would think you were the one laboring under the pain of absurdity. So, until you fully explain your context, you aren't being nearly as self evident as you seem to think. The willful injection of distractive red herrings and strawmen to try to undermine a simple and well-warranted point, is therefore not a sign of good intellectual health Is that your way of saying you think I am not smart enough to comment? Pardon, finally, but I must object to your taking unwarranted offense at my pointing out a simple matter. You came across as haughty in your comment to me at 327. If that wasn't your intent, then I accept your apology. San Antonio Rose
SAR: Again, one last time for now: once we use the relevant symbols in their usual ways [and as not one of those reading misunderstands!], the claim 2 + 2 = 4 is both true and necessarily true, on pain of absurdity. (And I provided other cases where on different interpretations of the glyphs we do get different results, but that is about a different context. Which is strictly irrelevant to the self-evident nature of the truth that 2 + 2 = 4.) The willful injection of distractive red herrings and strawmen to try to undermine a simple and well-warranted point, is therefore not a sign of good intellectual health on the part of evolutionary materialists, radical relativists and fellow travelers, as may be seen above. Pardon, finally, but I must object to your taking unwarranted offense at my pointing out a simple matter. At no point have I said or implied anything denigratory. To correct is not to denigrate. G'day GEM of TKI kairosfocus
Whoopsie! That should have read "self evidently true when talking to an alien (who is only familiar with geometry) about right angles." San Antonio Rose
Pardon, but the points you are making were discussed long since above, Is that your way of saying "run along little girl, the adults are talking?". I've read those comments and you are wrong. I can make 2 + 2 = 11 without changing the meaning of 2 or + or =. The point is that once we do properly understand the relevant symbols, 2 + 2 = 4 is SELF EVIDENTLY TRUE. It is not self evidently true to an alien that is unfamiliar with your unstated assumption of a base 10 system. If an alien used a base 3 system, it would be self-evidently true to him that 2 + 2 = 11. Furthermore, 2x + 2x = 0 (for a non-zero value of x) is self evidently true when talking to an alien only familiar with geometry about right angles. The issue is not that we must appreciate context. That, is a distractive red herring led away to a strawman. Yes, we must appreciate context, Mr. Smartypants. LOL. San Antonio Rose
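[Editor's illustration.] SAR's base-3 point is easy to verify mechanically: the quantity four is written "11" in base 3, so the same sum looks different under a different numeral convention. A small sketch (the `to_base` helper is written for this illustration, not from the thread):

```python
def to_base(n, base):
    """Render a non-negative integer n as a numeral string in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

print(to_base(2 + 2, 3))    # "11": the quantity four, written in base 3
print(to_base(2 + 2, 10))   # "4":  the same quantity, written in base 10
```

Note that the underlying quantity is identical in both cases; only the notation for reporting it changes, which is the crux of the dispute between SAR and kairosfocus.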
PS: Interpreting the Inclusive Or function: 1 --> true, and + means AND/OR. So, a compound statement p + q means "P is true and/or q is true." The truth value of the composite statement will be true if at least one of p and q is so. For instance, "A guava is a fruit AND/OR the moon is made of green cheese" is true, as the first of these is true. It does not matter that the second is false, even ridiculously false. kairosfocus
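[Editor's illustration.] The inclusive-OR reading of + that kairosfocus describes can be tabulated directly; in Python the bitwise `|` operator on 0/1 values behaves exactly this way:

```python
# Truth table for inclusive OR, reading + as OR and 1/0 as true/false.
# This reproduces the table quoted in the thread, including 1 + 1 = 1.
for p in (0, 1):
    for q in (0, 1):
        print(f"{p} + {q} = {p | q}")
```

Run as written, this prints the four lines 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 1, matching the boolean table under discussion.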
SAR: Pardon, but the points you are making were discussed long since above, e.g. cf 187 - 191 for about the fourth round of such explanation: 65, 74, 111, etc. Again, the precise point is that we are to understand the situation we deal with. As the simple illustration below suffices to show [try it with some matchsticks], 2 + 2 = 4 is not a mystery, nor is it to be dismissed by arbitrarily and contradictorily redefining + to make a point:

{||} + {||} --> {||||}

And thus, with the usual symbols and meanings, we can see that 2 + 2 --> 4. Thus, we can then go on to see that in that context, 2 + 2 = 4 is not only true in fact but must be so, given what 2, +, = and 4 normally mean. To come up with claims that 2 + 2 = 11 or that 2 + 2 = 147 etc, and assert that they are "just as true" requires unannounced injection of a contradiction into the system. Of course, I know of a very common situation where we use very similar symbols thusly:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 1

But of course, how that comes to be underscores the point: + here symbolises the boolean operation of INCLUSIVE OR, and 1 is here a boolean variable that takes the values 1 or 0. [And BTW, the boolean algebra relationship here is self evident, once its symbols, meanings and relationships are understood. Self-evident is not the same as simplistic.] The issue is not that we must appreciate context. That is a distractive red herring led away to a strawman. The point is that once we do properly understand the relevant symbols, 2 + 2 = 4 is SELF EVIDENTLY TRUE. That is:

1 --> we understand based on our experience of the world as minded creatures, i.e. this is not a definition we are making up out of the air.
2 --> on understanding, we see that the claim is true, and MUST be true, on pain of absurdity.

And so it illustrates how self-evident truths are real. As does: "Error exists." As does: "a finite whole is greater than any of its parts." As does: "a given thing cannot both be and not be."
The problem is not contexts of discussion -- any use of symbols to express meaning is inherently about contexts, conventions, and the like -- but about a reality that ever so many are loath to acknowledge: there are truths which on understanding what is being said we can see are so and must be so on pain of absurdity. GEM of TKI kairosfocus
Kairosfocus: And, we see why 2 + 2 = 4 is self-evident: once we understand the symbols, the operation of addition and the relationship of equality [typically, age 5 or so] we see that 2 + 2 = 4 [however we may label or symbolise] as a matter of brute fact, AND that it could not have been otherwise, on pain of confusion and gross error. I may just be a dumb high schooler, but I can see a situation where 2 + 2 = 11 and a situation where 2x + 2x = 0 for a non-zero value of x. Which I think is Mark's point about context. San Antonio Rose
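[Editor's illustration.] SAR's geometric case works because angles compose modulo 360 degrees: with x a right angle, 2x + 2x comes back around to zero, so the sum is 0 for a nonzero x. Illustratively:

```python
# Angles add modulo 360 degrees: four right angles make a full turn.
x = 90                       # a right angle, in degrees; nonzero
total = (2 * x + 2 * x) % 360

print(total)                 # 0: 2x + 2x = 0 in the arithmetic of angles
```

As with the base-3 case, nothing mysterious is happening; the statement is true within the modular context and false within ordinary integer arithmetic.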
CY: Thanks, your input above was both helpful (cf how MF responded, however reluctantly) and encouraging. You are right that he seems to conflate perception [on evolutionary materialistic premises] and reality. Thus, he needs to appreciate that there are diverse worldviews held by intelligent, educated, thoughtful people, and that they are worthy of sitting down to the same serious table of comparative difficulties. Those who adhere to evolutionary materialism also need to realise that their own view is not privileged.

For instance, observe MF's closing remark in his excerpt above: ". . . end up making up some abstract world of which it is true." These words reflect an extreme view, whereby the only "real" things are/"must be" physical/natural (NB: to try to fully and exactly define those will land you in the most horrendous morass of complexities that make the objections on parts and wholes pale into insignificance . . .), and the meaning of abstractions lies in operations and observations on those things.

But in fact, that begs the question of the central experience of the world we all have: our self-aware, conscious, thinking, enconscienced, unified selves. It is through that central fact that we interact with the external world and reflect in turn on it. In short, we are minded, indeed ensouled creatures. And to such creatures, love, good and evil, number, truth and the like are as much experiences as are the redness of a ball, or the velocity of the car in which we may be travelling. The issue is to coherently account for these, and that brings us to the reality of self-evident truths, starting with first principles of right reason. We may indeed understand and perceive such things as true, but their truth is not the product of our perception. Self evident first principles of right reason are true in the context that once we understand them, to try to reject them at once lands us in the shipwreck on the reefs of absurdities.
(For instance, so soon as you utter a claim that something is the case, you entail that it cannot be not the case as well, in the same sense and at the same time and place. When someone tries that, as was done with the meaning of + above, the anchors start and we end on the reefs of absurdity in a moment.)

Looking at remarks above on parts and wholes, one is left with the very strong impression of an utterly strained objection, one made in the teeth of commonplace concrete experience and concepts formed on such experience. Of course, it is probably felt that there must be some exceptions out there that allow us to doubt the specific and broader claims, so freighted with implications that seem to be unwelcome to materialists.

But, we can unpack the claim that "a finite whole is greater than any of its parts." In context, we are talking of composite unities, things that have a unified identity, but which are made up from components. In such a case, it is -- or should be -- obvious that the part/component is not the same as the unified whole, and that as there must be at least two parts, no one part will be equivalent to, the same as, or as great as the whole. For components have to be joined together in organised ways and/or relationships to get to the whole. Letters are joined to make words, parts are joined to make engines, varieties of animals are organised in hierarchies to compose the Animal Kingdom, and lines of code are organised to yield a functioning program.

But, if one is sufficiently determined not to acknowledge that, one can doubtless find and make objections that would never have come to mind if one were open to the idea. Only to land in absurdities. GEM of TKI kairosfocus
Onlookers (and MarkF): I must shake my head as I see the likes of:
[MF, 319:] I can’t read everyone’s response to everything. It is nothing personal against KF – I just think we have so little in common that there is no basis for an interesting discussion . . .
This is getting even more and more sadly threadbare. Evidently, MF -- a trained philosopher -- is unwilling to engage in comparative difficulties analysis on the issues that are at stake; which automatically locks him into worldview-level question-begging. (And the gobbet he snipped out of context shows why: he refused to address the point that the SET on whole-part relationships is an in-common claim, which is instantiated in particular cases in diverse ways, simply citing it as though that were enough to set it aside. But, any intelligent 6 year old will know that once we have a composite whole, the part is a component, not the whole. As we may see/understand in light of abundant experience from early childhood on.) Now, above, MF has had three cases of self-evident truths to deal with explicitly, and a fourth mentioned more or less in passing, of which he picks what he wants to deal with (and ducks the most directly evident one, of course):
1 --> "Error exists" (This has been brought up as an undeniably true case, for months; it is WCT no 1 in the list of 7 foundational principles of right reason here. Why it is undeniably true is that once we try to deny it, we directly entail that an error exists, i.e. we immediately exemplify what we try to deny.) 2 --> || + || --> ||||, i.e. 2 + 2 = 4 3 --> The whole is greater/more than the part. (The engine is more than the crankcase, the poem than the letter, the house than the brick, etc.) 4* --> The law of non-contradiction: A thing cannot at once be and not-be. (It is by trying to make + simultaneously mean two opposed things that the error on 2 + 2 = 4 has come about.) ___________________ * This has been mentioned but was not a focal point above.
Now, MF cites a rebuttal to GP on case 2, which I will remark on with arrow-points: ___________________ >>A really important insight comes from Wittgenstein. Often a statement will be true for a combination of different types of criteria and because they always or often coincide we never have to resolve which apply.
a --> "often a statement will be true" i.e. W is acknowledging that things may well be true, and b --> He also admits that it may be warranted as such. c --> Warrant, of course is not proof, and self-evident statements will be warranted by observing that to reject them [once understood], lands you in immediate and patent absurdity.
For example, an arithmetic statement such as 2+2=4 is mathematically true because it follows from the Peano axioms (I think those are the right ones).
d --> Actually, 2 + 2 = 4 is true because of what 2, +, = and 4 mean, as may be grasped immediately on inspection of a few concrete cases, by many persons who have never heard of Peano's Axioms nor how they lead to the conclusion that || + || --> ||||.
e --> For good reason relating to joining sub-groups to form a whole, and by direct inspection of the cardinality of the subsets and the joint whole, we were sure of the conclusion as a fact, thousands of years before Dedekind et al came up with what we know as Peano's axioms.
f --> Further, we were long since convinced for good reason that this MUST be so, in light of what 2, 4, + and = mean, on pain of absurdity.
g --> Peano's axioms came along much later, and serve as a unifying principle that allows us to analyse the underlying structure and relationships of the facts of arithmetic.
h --> In fact, if they did not lead to the conclusion that 2 + 2 = 4, that would have been regarded as reason to reject the axioms. Facts come before theories, and before unifying ideas.
Now the Peano axioms apply very accurately to a vast range of situations so it is also true that 2+2=4 for a vast range of situations.
i --> Putting the cart before the horse.
Unless we are pure mathematicians we are never called upon to decide whether 2+2=4 is true in the mathematical or the descriptive sense.
j --> Just the opposite of the truth. Every child studying mathematics is led to see for him or herself that || + || --> ||||
k --> As already noted, if Peano's axioms got this wrong, they would have been rejected.
l --> That is, we are not here dealing with proof on premises [no sane person doubts that 2 + 2 = 4, but axiom sets are open to challenge and change to see what happens as a result], but an explanatory construct that is accepted as (i) it unifies important facts, and (ii) it exposes an inner structure that allows us to then infer to other interesting phenomena, and to see the relevant general structural patterns and contrasts [e.g. of groups, rings, fields, and algebras].
m --> So, what we have here is a categorical confusion between proof and explanation. (Of course, axiomatisation can be fruitful, leading to other results that are deduced and which may well have been unexpected.)
n --> But that should not fool us into thinking that we accept self-evident facts like 2 + 2 = 4 BECAUSE that is a consequence of Peano's axioms. That is precisely back ways around.
o --> And, we see why 2 + 2 = 4 is self-evident: once we understand the symbols, the operation of addition and the relationship of equality [typically, age 5 or so] we see that 2 + 2 = 4 [however we may label or symbolise] as a matter of brute fact, AND that it could not have been otherwise, on pain of confusion and gross error.
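[Editor's illustration.] For what it's worth, the Peano-style construction being debated can be sketched in a few lines. This is an illustrative encoding (names mine, not from the thread): zero is a primitive, every other number is a successor, addition is defined recursively, and 2 + 2 = 4 then falls out of the definitions:

```python
# A minimal sketch of Peano-style natural numbers.
ZERO = ()                     # zero is a primitive

def succ(n):
    """The successor of n."""
    return (n,)

def add(m, n):
    """Peano addition: m + 0 = m; m + succ(n) = succ(m + n)."""
    return m if n == ZERO else succ(add(m, n[0]))

def to_int(n):
    """Convert a Peano numeral back to a Python int, for display."""
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
print(to_int(add(two, two)))   # 4
```

Whether this derivation *grounds* the truth of 2 + 2 = 4 or merely *systematises* a fact already known is exactly the point in dispute between kairosfocus and MF; the code only shows that the axiomatic route does reach the same result.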
And when challenged as to what makes it true we get confused
p --> An attempted turnabout accusation. q --> The evidence above is that those who are confused are those who injected a contradictory definition of "+" and so deduced a contradiction whereby they wished to assert that 2 + 2 = 4 and 2 + 2 = 147. r --> That is, we see the pernicious effect of rejecting the principle of non-contradiction.
and end up making up some abstract world of which it is true!
s --> Dismissal by ridicule, rather than addressing of substance.
t --> Symbols, such as 2, 4, + and = are inherently non-concrete. Two-ness is a property that allows sets of discrete objects to be put in one-to-one correspondence with a standard 2-set.
u --> To see this, we use counting and the standard ordered set of symbols {1, 2, 3, . . . } so that when we match members in succession, we exhaust the set being counted at the appropriate symbol: {||} --> |:1, |:2, exhaustion --> {||} has cardinality 2.
v --> So, abstractions here have a reality that is implied every time we use symbols, including verbal ones, to represent the real world.
w --> And truth, as has repeatedly been noted, is that situation where the meaning of certain claims [which will be expressed in symbols] matches accurately to reality.
x --> Nor is that particularly mysterious, as every child who has had to be swatted for fibbing over what happened to the cookies in the jar full well understands. (As in, the crumbs on the lips tell the truth more than the words coming out of the mouth.)
>> ______________________ So, it becomes ever more plain how rejection of self-evident first principles of right reason leads us ever deeper into a morass of error. And, we then see the turnabout, where plunging into the morass of ever deepening errors is held to be the epitome of intelligence, knowledge and wisdom. [The inadvertent echo of Rom 1 is ever so richly ironic.] It is time to cut the Gordian knot with one clean stroke. And Adler shows us just how: identify and correct the little error at the beginning. GEM of TKI kairosfocus
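The counting procedure described in point u above (matching tally marks against the standard ordered set {1, 2, 3, . . . } until the counted set is exhausted) can be sketched in a few lines of Python. The function name is illustrative only, not something from the thread:

```python
# Cardinality by one-to-one correspondence: match each tally mark
# against the next counting number until the set is exhausted.
def cardinality_by_counting(tallies: str) -> int:
    count = 0
    for _mark in tallies:
        count += 1  # |:1, |:2, ... exhaustion at the final mark
    return count

print(cardinality_by_counting("||"))    # the standard 2-set: 2
print(cardinality_by_counting("||||"))  # || combined with ||: 4
```

The point being illustrated is that "cardinality 2" is read off the position in the standard counting sequence at which the set is exhausted, exactly as the tally example in the comment describes.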
CH The more I think about it, the more I think that I'm more Chestertonian than Chesterton.
First, I found the whole modern world talking scientific fatalism; saying that everything is as it must always have been, being unfolded without fault from the beginning. The leaf on the tree is green because it could never have been anything else. Now, the fairy-tale philosopher is glad that the leaf is green precisely because it might have been scarlet. He feels as if it had turned green an instant before he looked at it. He is pleased that snow is white on the strictly reasonable ground that it might have been black. Every colour has in it a bold quality as of choice; the red of garden roses is not only decisive but dramatic, like suddenly spilt blood. He feels that something has been DONE.
That's a tolerably good definition of mathematics, and I regret that Chesterton didn't know any or he might have written a much more interesting essay. I'm glad that 1+1=2 because I understand that 1+1 could have been 147, and the decision is not one that can be appealed to the real world, only to the storyteller (the mathematician). If the internal logic of the story (theorem) is sound, the story rings true, and it doesn't matter if the (real or metaphorical) snow is black or white or plaid. Thank you for reminding me to revisit that. BarryR
CH Backing up a bit and reading Chesterton in context, I now realize that he's writing polemic, not philosophy, and the style he has chosen is a perfectly reasonable one for polemic. Here's the nub:
You cannot IMAGINE two and one not making three. But you can easily imagine trees not growing fruit;
That's a textbook argument from incredulity. I have no difficulty imagining either (and I work with expensive machines where the arithmetic is far stranger than 1+1=3). This passage may be more telling:
It is a dreadful thing to say that Mr. W.B.Yeats does not understand fairyland. But I do say it. He is an ironical Irishman, full of intellectual reactions. He is not stupid enough to understand fairyland. Fairies prefer people of the yokel type like myself; people who gape and grin and do as they are told.
I don't believe he's intending either passage to be read literally, or, perhaps I should say that I don't think either passage reflects what he actually believes. This is emphatically written for an audience, an audience who wants to be reassured. But no, I don't think there's an English professor or philosopher who would mistake this for philosophy. BarryR
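BarryR's remark above about working with "machines where the arithmetic is far stranger than 1+1=3" does not say which machines he means; IEEE-754 floating point and fixed-width integer arithmetic are two commonplace examples (my choice of illustration, not his):

```python
# IEEE-754 binary floating point: 0.1 and 0.2 have no exact binary
# representation, so their sum is not exactly 0.3.
a = 0.1 + 0.2
print(a == 0.3)  # False
print(a)         # 0.30000000000000004

# Fixed-width (8-bit) integer arithmetic wraps around: 255 + 1 "=" 0.
print((255 + 1) % 256)  # 0
```

Both behaviors follow rigorously from the axioms those number systems actually use, which is the broader point being argued: the rules of an arithmetic are fixed by the system one adopts, not by intuition alone.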
CH@309 As to absolute skepticism --- this simply isn't a problem. [At this point my posting here has been reduced to trying to find ways of restating the same five or six points over and over again. It becomes tedious.] A naive formulation of absolute skepticism (like that used by Lewis) might lead to a logical impossibility. This is naive because Lewis only admits true and false. This makes for easy but not very convincing philosophizing. My personal skepticism is of the more useful scientific variety. I don't have any certainty about truth or falsity, and this easily extends to "the idea that I don't have any certainty about truth and falsity". However, I do have a continuum of certainty, and it's no great loss that the extremes aren't available to me. I can have near-certainty about the truth of some proposition and near-certainty about the falsity of others, but I spend most of my time somewhere closer to the middle where I'm constantly revising what I believe as I continue to experience new things. Could this approach be wrong? Sure. And with that admission I remove the possibility of contradiction. All I've given up is certainty, and that's made me a much nicer person (as well as a better scientist). BarryR
CH@310
Just because things are self-evident doesn't mean that everyone realizes them just yet; counting numbers up to twenty aren't always realized by tribespeople, but that doesn't mean they are nothing.
I think you're starting to see the morass that "self-evident" becomes. Is the integer 20 self-evident? I think you'd say yes, even though many native societies never ran across the concept. Is zero also self-evident? Here we have many European societies not figuring this out until relatively recently (9th Century India). Is infinity self-evident? The mathematical world's reaction to Cantor's formulation of it ranged from skeptical to hostile. Are quaternions self-evident? Tensors? Lie groups? They don't appear to be. But mathematically, there's nothing that distinguishes some of these concepts as being self-evident and others not. Given the right selection of (non-obvious) axioms, I can derive any of these ideas. As a term of art, I just don't see where "self-evident" adds anything. BarryR
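Quaternions, listed above as a derivable but hardly self-evident concept, make the point concrete. A minimal sketch (the class and names are mine, for illustration) shows that once Hamilton's defining relations i^2 = j^2 = k^2 = ijk = -1 are adopted as axioms, a perfectly consistent but non-commutative arithmetic follows:

```python
# A minimal quaternion, with multiplication given by Hamilton's product.
class Quaternion:
    def __init__(self, w, x, y, z):
        self.w, self.x, self.y, self.z = w, x, y, z

    def __mul__(self, o):
        # Hamilton's product, derived from i^2 = j^2 = k^2 = ijk = -1.
        return Quaternion(
            self.w*o.w - self.x*o.x - self.y*o.y - self.z*o.z,
            self.w*o.x + self.x*o.w + self.y*o.z - self.z*o.y,
            self.w*o.y - self.x*o.z + self.y*o.w + self.z*o.x,
            self.w*o.z + self.x*o.y - self.y*o.x + self.z*o.w,
        )

    def __eq__(self, o):
        return (self.w, self.x, self.y, self.z) == (o.w, o.x, o.y, o.z)

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)

print(i * j == k)                        # True
print(j * i == Quaternion(0, 0, 0, -1))  # True: j*i = -k, so i*j != j*i
```

Multiplication here is not commutative, which is exactly the kind of non-obvious consequence that follows from a chosen axiom set rather than from anything "self-evident."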
StephanB@323 Thank you for settling the issue of your qualifications. The textbook I used is _Symbolic Logic and Automated Theorem Proving_, although any standard textbook should cover Skolem normal form. Thank you for providing the list of topics you're still unsure about. [A] Mathematical laws exist and depend on context, specifically the initial selection of axioms. No context-free laws have been observed, and we know of no way to accurately perceive any that may exist. When speaking informally, I shorten the above to "mathematical laws do not exist". [B] Peer-reviewed literature relies on a shared body of knowledge between the author and the reader. Edmonds' idea of truth is clear enough to his intended audience: those who are [actually] trained in philosophy. I can understand your frustration --- I go through the same thing when I start reading up in a new field --- but it's usually safe to assume the fault is your lack of knowledge, not the author's lack of clarity. [C] Syllogisms haven't been relevant in philosophy for, oh, 150 years? No, that's an overestimate:
The syllogism was superseded by first-order predicate logic following the work of Gottlob Frege, in particular his Begriffsschrift (1879). Syllogism dominated Western philosophical thought until The Age of Enlightenment in the 17th Century. At that time, Sir Francis Bacon rejected the idea of syllogism and deductive reasoning by asserting that it was fallible and illogical.
fide wikipedia. That's probably why I kept running into first-order predicate logic in philosophy classes. Syllogisms were mentioned as an historical artifact, if that. There's just no way to make them rigorous. [D] As I said before, Adler errs in identifying legitimate differences as errors. That makes for a very marketable book that's easy to read. It also makes for lousy philosophy. [E] Repeating my earlier comments, Adler defines a whole in terms of parts and then calls his definition self-evident. To get around this problem, you've had to introduce qualifiers like "finite" wholes and disambiguating dictionary definitions --- that wouldn't be necessary for something universal and self-evident. [F] You did not argue that truth must be unified to be truth. You stated it, without defining truth, no less. I can just as easily say that truth must be contextual. I have the advantage of a citation from someone who does philosophy for a living. [G] You did not argue that it is impossible to do science without the law of causation. You stated it (without stating the law). I get paid to do science. I somehow manage to do it pretty well without any such universal law. Your opinion as to how science is done carries even less weight with me than your philosophical opinions. You can improve this defect by providing a citation to the peer-reviewed literature for someone else who thinks this way. Then, I'm not only disagreeing with you, I'm also disagreeing with someone who has a presumption of competence. But around here, asking for citations is seen as an evasive tactic. It doesn't have to be that way --- assuming you knew how to do a literature search and that your opinion was sane enough to be held by someone who published. I don't have a great deal of confidence that either of those conditions hold. So I think that's where things will remain. 
My expertise in science and philosophy exceeds yours; I can buttress my opinions from the professional literature and you either cannot or will not. BarryR
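The claim above that syllogisms were subsumed by first-order predicate logic can be illustrated concretely. The classic Barbara syllogism ("all M are P; all S are M; therefore all S are P") becomes forall x. M(x) -> P(x), forall x. S(x) -> M(x), therefore forall x. S(x) -> P(x). The sketch below (all names mine) checks that rendering semantically by brute force over every interpretation on a small finite domain; this is a model check, not a proof procedure for infinite domains:

```python
from itertools import combinations

def barbara_holds(domain, M, P, S):
    """Premises and conclusion of Barbara, as FOL over a finite domain."""
    premise1 = all((x not in M) or (x in P) for x in domain)    # M(x) -> P(x)
    premise2 = all((x not in S) or (x in M) for x in domain)    # S(x) -> M(x)
    conclusion = all((x not in S) or (x in P) for x in domain)  # S(x) -> P(x)
    return (not (premise1 and premise2)) or conclusion

def all_subsets(domain):
    for r in range(len(domain) + 1):
        for c in combinations(domain, r):
            yield set(c)

# Exhaustively check every interpretation of M, P, S over a 3-element domain.
domain = [0, 1, 2]
valid = all(
    barbara_holds(domain, M, P, S)
    for M in all_subsets(domain)
    for P in all_subsets(domain)
    for S in all_subsets(domain)
)
print(valid)  # True: the inference holds in every interpretation checked
```

The syllogistic form survives inside FOL as a valid inference pattern; what FOL adds is the ability to express relations and quantifier nesting that syllogisms cannot.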
---BarryR: "With that bit of administrivia out of the way, I’d like you to propose a solution to Edmonds’s proof given in section 4 that does not rely on assuming the existence of universal truth. Please use formal notation." I appreciate the creativity involved in your latest attempt at a distraction, I really do. However, this is the umpteenth time that you have ignored a refutation, changed the subject, and asked me to solve an irrelevant riddle. As I pointed out earlier, you will be more persuasive if you advance cogent arguments and ask/answer relevant questions. Here are a few items of unfinished business: [A] First, you claimed that there are no mathematical laws, followed by an acknowledgement that there are, indeed, mathematical laws. I asked you to take one position or the other. You remained silent. [B] Next, Edmonds presented no definition of truth. That point still stands and you have not addressed it. [C] Further, Edmonds used an inappropriate example in order to argue, wrongly, that syllogisms are unreliable and may produce conclusions at variance with their major and minor premises. You did not respond. [D] Further, you have yet to tell me which philosophical errors Adler alluded to are not really errors and why you think so. [E] Further, you have not addressed the current theme about the self evident truth concerning the relationship between the whole and its parts. [F] Further, I argued that truth must be unified in order to be truth, and I explained why. You have not explained how a contextual truth, which could conflict with other contextual truths, could be true in any case. How, for example, can a mathematical "truth" be true if it contradicts a scientific "truth?" [G] Further, I argued that it is impossible to do science without acknowledging the law of causation. Given that all evidence must be interpreted through the first principles of right reason, how does one track down causes if some effects can occur without them? StephenB
markf, I perceive that your difficulty is that you equate our perception and explanation of truth with the truth itself. The problem is that we have to perceive and explain in order to understand truth, but the truth exists apart from our perception or explanation of it. Therefore 2+2=4 is merely our way of quantifying a truth. The truth expressed by the equation lies outside the equation itself. This is why the argument that if you change the value and meaning of the "+" you can change the truth represented by the equation is bizarre. It's like saying that if I multiply the figure representing the balance of my bank account by two, this increases the amount of money I have in my bank account (I think this example has been used before). The self evident truth that the whole is greater than the parts lies outside the physical examples we use - such as crank shafts and automobiles; however it applies to them, in the same way that two apples added to two apples gives us 4 apples. It's true that our perception and quantifying representations of truth are limited, but our inadequacies in this regard do not affect the truth itself. 2+2=4 represents an abstract that is true. That abstract is applied to physicality, demonstrating that it is true. However, we don't require the physicality for it to be true. This is why mathematics works. Allow me to repeat then what has in my view been sufficiently explained by KF and StephenB: We know of wholes only in reference to parts, and we know of parts only in reference to wholes. A whole is not equal to, but is greater than its parts, and a part is not equal to but is less than the whole. To say that you have a crank shaft does not mean that you have an automobile. However, to say that you have an automobile IS to say that you have a crank shaft and all the other component parts that make up an automobile - given that a crank shaft is a part of an automobile.
While this perception of the truth is limited, it does not limit the truth itself. That I can draw a complete automobile as part of a crank shaft does not change the truth; it only changes the perception to a false representation of the reality of automobiles. CannuckianYankee
#312 Stephenb Thanks for the Adler passage. I had already read it but you weren't to know that. I think you (and Adler) are saying that the sense in which a whole is greater than its parts is indefinable. This is surprising - because in any other context I can think of the phrase "greater than" is definable. There is the rather obvious sense that a whole object like a car is comprised of one or more parts. But this follows from the definition of "part" that you give ("a portion, division, piece, or segment of a whole") and is just an analytic statement. I think we have to give up on the whole and parts bit. I promise you I am not being deliberately obtuse. I really think that if you break down the loose phrase "a whole is greater than its parts" into all the things it might mean they will either be analytic or not necessarily true. markf
StephenB, One of my favorite movies is "The Adventures of Baron Munchausen," in which all kinds of bizarre explanations end up being true. I also like Mel Gibson's "Conspiracy Theory," in which a paranoid delusion ends up being true. I was also a big fan of "The X-Files." Chris Carter's scepticism of conspiracy theories was what inspired the show. Of course, these are all works of fiction, intended to poke fun at bizarre beliefs. None of them are serious. But the Darwinists making arguments here are DEAD serious, which is scary. CannuckianYankee
#313 It appears you completely ignored KF's responses at 300 and 301. I can't read everyone's response to everything. It is nothing personal against KF - I just think we have so little in common that there is no basis for an interesting discussion. However, you asked me to look at them so I have. I have to say paragraphs such as: And so the part-whole, less than- greater than distinctions make sense once one recognises that they are going to be generic because of that generality, and will have to be fleshed out in various particular ways in particular cases, once one accepts that one is dealing with composite wholes. (Such experience is abundantly accessible for anyone with significant experience of a world that is full of composite wholes, starting with the letters, words, sentences and paragraphs used to make posts here.) confirm to me that my decision is the best for both of us. KF does offer 2+2=4 as another example of a self-evident truth. I thought we dealt with that but this is what I said above in a response to Gpuccio - I stick by it. A really important insight comes from Wittgenstein. Often a statement will be true for a combination of different types of criteria and because they always or often coincide we never have to resolve which apply. For example, an arithmetic statement such as 2+2=4 is mathematically true because it follows from the Peano axioms (I think those are the right ones). Now the Peano axioms apply very accurately to a vast range of situations so it is also true that 2+2=4 for a vast range of situations. Unless we are pure mathematicians we are never called upon to decide whether 2+2=4 is true in the mathematical or the descriptive sense. And when challenged as to what makes it true we get confused and end up making up some abstract world of which it is true! markf
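The claim in the comment above that 2+2=4 "follows from the Peano axioms" can be made concrete. With numbers encoded as iterated successors of zero and addition defined by the two recursion equations a + 0 = a and a + S(b) = S(a + b), the equation 2 + 2 = 4 unfolds mechanically. The tuple encoding below is my own illustration, not something from the thread:

```python
# Peano-style arithmetic: a number is an iterated successor of zero,
# here encoded as nested tuples.
ZERO = ()

def S(n):
    """Successor: wrap one more layer around n."""
    return (n,)

def add(a, b):
    # The two defining equations:  a + 0 = a ;  a + S(b) = S(a + b)
    if b == ZERO:
        return a
    return S(add(a, b[0]))

two = S(S(ZERO))
four = S(S(S(S(ZERO))))

print(add(two, two) == four)  # True: 2 + 2 = 4 by unfolding the definitions
```

Whether such a derivation is the *reason* 2+2=4 is true, or merely a systematisation of a fact grasped independently, is of course exactly what the two sides of this thread are disputing; the code only shows what "follows from the axioms" means operationally.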
KF, others Another observation here is that Darwinists are fond of the argument that modern medicine owes its development to methodological naturalism in science, and that it would not have advanced apart from it. What they fail to understand is that modern medicine is founded upon the value of human life - that it developed because people understood that they "ought" to care for people. Methodological naturalism had nothing to do with it. If you study the history of the care and treatment of various disorders, and the institution of hospitals, nursing homes, clinics, etc., there exists a trend towards more humane treatment based on the philosophy that humans are valuable. Darwinism does not offer this kind of insight. If we had founded our values on a faulty understanding of truth; i.e., if relative truth were the only operating paradigm in this world among humans, we would not have developed an understanding that morality and values are intrinsic to certain absolutes. In other words, it matters that there are self-evident truths in establishing that there are absolutes. Without absolutes there can be no morality, because we could not distinguish what is good from what is evil. These concepts would be meaningless. Consciousness must therefore be something that is separate from the "is" of our physical makeup; for it is through consciousness that we distinguish what is an "ought." Of course there are other arguments for why consciousness is of this nature. CannuckianYankee
KF, I always look forward to reading your posts, because you could not be more clear. I admit that I had trouble with your unique use of the English language when I first encountered your writing several years ago, but it's something I've become accustomed to now. Your use of the best examples, references and input is quite compelling. I don't know why anyone would not want to gain something from your insight. I look forward to getting to the end of this debate, however, as it's getting tiresome. We know of parts only in reference to wholes, and we know of wholes only in reference to parts. Wholes are not equal to but are greater than their parts. Objections to this self-evident truth are simply an exercise in evasion in order to maintain a relativistic outlook on truth. What's interesting here is that if you hold to truth as relative, you are then not committed to being truthful. You can evade, change the subject, equivocate, supplant and use all kinds of other trickery in order to avoid what is inconvenient, because your morals and values are relative to your need to please the self, rather than to commitments to love and to serve others. The Darwinist does not believe that truth has anything to do with morality. This is a huge mistake. CannuckianYankee
CY @13, you would have been entertained by a recent blogger who insisted that a crankcase could, indeed, be greater than an automobile, asking us to conceive of an artist's rendering in which an automobile was situated inside a giant crankcase. I am not joking. Obviously, he missed the point about the finite whole being greater than any one of ITS parts, but it was a wild ride which featured a number of Darwinists taking up for the irrational side of the argument. StephenB
---BarryR: "If you need a dictionary to explain a concept, I think that’s a pretty strong hint that it’s not a universal self-evident truth." As it turns out, the terms whole, part, and greater cannot be defined as elements of a self-evident truth--which is the whole point under discussion. However, as a tribute to those who pretend not to know what I mean when I say that the "whole" of an automobile is "greater" than its crankcase, the dictionary is a handy tool for such low-level responses. StephenB
CY: Thanks. Maybe MF will wake up now that his gambit of persistently ignoring what I have said -- on the pretence that what I say is garbled, or incomprehensible or takes too long to process, etc -- is plainly getting threadbare. G kairosfocus
markf "But why not choose a different, clearer, example of a self-evident truth?" It appears you completely ignored KF's responses at 300 and 301. Why are you trying to make this more complicated than it is? I think KF and StephenB have been quite clear on the issue of "a whole is greater than any of its parts." Your feigned incomprehension of this (causing KF to shake his head) is nothing short of absurd. However, your evasive maneuvers are quite entertaining at times. Of course I am often amused by bizarre things, like this: "There are plenty of physical objects where one part is the outer casing and thus the size of that part is the same as the size of the object." In that you have created a miracle. Let's see, the outer body of a car is pretty much the same size as the car itself, so it must be equal to the car. Right? The miracle is that we can now drive around in the bodies of our cars without the engine and a few other important parts like wheels and such. Right? The body is not equal to the whole car. The body is a part of the whole. The whole is greater than the parts. The outer casing of a computer is not equal to the computer itself. The whole is greater than the parts. A banana peel is not equal to a banana. The whole is greater than the parts. A facade is not equal to the whole building. The whole is greater than the parts. Do you require any more examples? CannuckianYankee
---Markf: "You don’t clarify what is meant by “greater than” Although I provided dictionary definitions of the words "part" and "whole," the terms [insofar as they refer to the self-evident nature of the fact] cannot be defined. From Adler: "One example will suffice to make this clear -- the axiom or self-evident truth that a finite whole is greater than any of its parts. This proposition states our understanding of the relation between a finite whole and its parts. It is not a statement about the word "whole" or the word "part" but rather about our understanding of wholes and parts and their relation. All of the operative terms in the proposition are indefinable. We cannot express our understanding of a whole without reference to our understanding of its parts and our understanding that it is greater than any of its parts. We cannot express our understanding of parts without reference to our understanding of wholes and our understanding that a part is less than the whole of which it is a part. When our understanding of an object that is indefinable (e.g., a whole) involves our understanding of another object that is indefinable (e.g., a part), and of the relation between them, that understanding is expressed in a self-evident proposition which is not trifling, uninstructive, or analytic, in Locke's sense or Kant's, for no definitions are involved. Nor is it a synthetic a priori judgment in Kant's sense, even though it has incorrigible certitude; and it is certainly not synthetic a posteriori since, being intrinsically indemonstrable, it cannot be supported by statements offering empirical evidence or reasons.
The contemporary denial that there are any indisputable statements which are not merely verbal or tautological, together with the contemporary assertion that all non-tautological statements require extrinsic support or certification and that none has incorrigible certitude, is therefore falsified by the existence of a third type of statement, exemplified by the axiom or self-evident truth that a finite whole is greater than any of its parts, or that a part is less than the finite whole to which it belongs. It could as readily be exemplified by the self-evident truth that the good is the desirable, or that the desirable is the good -- a statement that is known to be true entirely from an understanding of its terms, both of which are indefinables. One cannot say what the good is except by reference to desire, or what desire is except by reference to the good. The understanding of either involves the understanding of the other, and the understanding of both, each in relation to the other, is expressed in a proposition per se nota, i.e., self-evident or known to be true as soon as its terms are understood." Thus, you understand as a self evident truth, that any finite whole is greater than any one of its parts--just as you understood that an automobile is greater than its crankcase. In this case, greater means more parts, greater volume, weight etc. In other cases, it could mean more of something else. Thus, the word "greater" cannot be defined the way you are asking except to say that there is "more to" the whole than any one of its parts. To the question more "what," the answer depends on which "whole" is in question. A paragraph may not weigh more than one of its sentences, but it contains more words. StephenB
Onlookers: Observe again, how MF, with his attention already drawn to an example he has been corrected on above [2 + 2 = 4], and one he has ignored for months ["error exists"], proceeds -- predictably -- to ignore inconvenient evidence yet again. In addition, he now wishes to belabour the concepts whole-part and greater-lesser, as though he has no experience of whole-part relationships (which BTW play a very important role in information systems, e.g. the structure of a program). The precise ways in which parts and wholes interact or relate differ from one case to the other, but there is a common conceptual core that we may legitimately extract and address: the part-whole relationship, and the part is always a component, making it less than the complex whole. In turn, that complex whole is made up from the parts, their specific organisation/ relationships, and the interactions that are associated with that organisation. One feature of such is of course that we come to understand that a part is a component of a whole. Consequently, the whole not only is, but must be, more than the part. On pain of absurdity; which may of course be disguised by the claim that one does not understand. (But if you have worked with software systems as a career, as MF acknowledges, then such a plea looks very suspiciously evasive indeed.) In the case of a star undergoing supernova [cf section a esp point 17 on here], the Iron core that forms and stops fusion from releasing further energy (as Fe is at the peak of the binding energy per nucleon curve), and the imploding and bouncing envelope (for want of radiation pressure etc to keep it from imploding) are both parts, and the star is a whole. In the case of a sentence or a poem, the letter is a part and the composition is a whole. In the case of say a 4G63 Mitsubishi automobile engine, the pistons, the engine blocks, and the crank case are parts. And so on. Reductio ad absurdum, right before our eyes. 
If this were not so saddening, it would be funny. GEM of TKI kairosfocus
CH@308 The direct answer to the question: "Are there no universal truths? Is that statement itself universal and true?" For the fourth time, I think. "We don't know any universal truths" is simply an observation. "We can't know any universal truths" is an epistemic argument that can be made from observations. I see nothing controversial with either statement. "There are no universal truths" is an ontological statement that, as Edmonds discusses, requires assuming universal truths exist in order to prove. (He makes this argument using formal notation; if you disagree, please point out which step you disagree with.) I don't see where that statement can be justified. Likewise, "There are universal truths" also requires assuming universal truths exist in order to prove that it's the case. I don't find this justified either. (The universal and existential quantifiers in logic are usually limited to a well-defined context, and as such are perfectly fine. The trouble comes in when trying to map that system onto a poorly-defined concept like "everything".) BarryR
BarryR,
If you need a dictionary to explain a concept, I think that’s a pretty strong hint that it’s not a universal self-evident truth.
Why? No truth can be written down? Just because things are self-evident doesn't mean that everyone realizes them just yet; counting numbers up to twenty aren't always realized by tribespeople, but that doesn't mean they are nothing. Clive Hayden
BarryR, My point of view is in my last comment to you, articulated by pointing out your confusion pertaining to skepticism and how you think it can be absolute and yet not. I noticed that you haven't responded to Chesterton, and your response to Lewis was short (but thank you for responding nevertheless), but answered by me, and I've yet to see any more response. Clive Hayden
BarryR,
If you need a dictionary to explain a concept, I think that’s a pretty strong hint that it’s not a universal self-evident truth.
Are there no universal truths? Is that statement itself universal and true? Please answer me directly, with a yes or a no, and explain how it isn't (if you claim it's not), and explain how your affirmation or negation is not itself a universal truth if there are no universal truths. Clive Hayden
#302 Stephenb Of course I understand the meaning of the English words "whole" and "part". I am trying to understand what they mean in this context. They are very broad words that can be used all sorts of ways and this is a most unusual statement. Does the statement apply to anything which can have components of any kind? e.g. musical chords, poems, business processes, supernovae? You don't clarify what is meant by "greater than" - although I think the sentence: If you don’t understand that an automobile is more than its crankcase, or its axles, or its frame, or its wheels, or its engine etc, I don’t think I can help you. is meant to throw some light on this. I do understand that an automobile comprises many components in addition to a crankshaft. If by "greater than" you simply mean "comprises more than" then "X is greater than one of its parts" simply follows from the definition of "part". Normally if you say X is greater than Y there is some implicit or explicit dimension in which it is greater - mass, volume, artistic merit, decibels whatever. But the dimension is not clear to me in this case. In the example of the system unit I was treating the system unit as the whole and trying to point out that it was no larger than its case which is one of its parts (others being things such as motherboard). I should have made myself clearer. But why not choose a different, clearer, example of a self-evident truth? markf
StephenB@305 If you need a dictionary to explain a concept, I think that's a pretty strong hint that it's not a universal self-evident truth. BarryR
---markf: "I am confused as to what is meant by “whole” , “part” and “greater than”. From the dictionary: Whole 1. comprising the full quantity, amount, extent, number, etc., without diminution or exception; entire, full, or total: He ate the whole pie. They ran the whole distance. 2. containing all the elements properly belonging; complete: We have a whole set of antique china. Part 1. A portion, division, piece, or segment of a whole. 2. Any of several equal portions or fractions that can constitute a whole or into which a whole can be divided: a mixture of two parts flour to one part sugar. ---"All this is to establish the existence of self-evident truths that are not true by definition. Perhaps you could propose a clearer example?" If you don't understand that an automobile is more than its crankcase, or its axles, or its frame, or its wheels, or its engine etc, I don't think I can help you. ---"There are plenty of physical objects where one part is the outer casing and thus the size of that part is the same as the size of the object. The system unit on my PC for example." If you don't understand that there is more to your computer than its system unit, I don't think I can help you. StephenB
Third, on looking at your remarks just above, especially the crank-case example: pardon, but I must shake my head. A crank case is a proper part of an engine, and it is indeed less than the whole: a crank case by itself is nowhere near an engine. The case for an item of lab equipment will again be less than the whole, i.e. the "guts" have to be added to get the whole. That is, in both cases you have ignored the part-whole context and have in fact descended into precisely the sort of obvious absurdity that rejecting a self-evident truth will immediately precipitate. And so, the part-whole, less-than/greater-than distinctions make sense once one recognises that they are generic and will have to be fleshed out in various particular ways in particular cases, once one accepts that one is dealing with composite wholes. (Such experience is abundantly accessible for anyone with significant experience of a world that is full of composite wholes, starting with the letters, words, sentences and paragraphs used to make posts here.) So, sadly, the above objections come across as utterly strained and driven by what you wish to reject rather than by any truly substantial point. By selective hyperskepticism, in short. And, they land you in precisely the sort of absurdities that such an incoherent rhetorical pattern leads to. I must therefore ask: have you seriously tried to understand instead of to object on any and every convenient excuse? [Do you not realise that Mortimer Adler was not exactly a philosophical novice who would make elementary mistakes?] I know your usual claim is that you refuse to look at what I write (which comes across more and more as a convenient evasion), but this one went utterly beyond the pale.
You need to revise your "policy" -- which rather sounds like Wilson's cynical advice to evade inconvenient points, in his Arte of Rhetorique -- and you need to stop and re-examine what you have said many times above in this thread [starting with 2 + 2 = 4], and what it is pointing to. GEM of TKI kairosfocus
MarkF (and onlookers): RE: . . . All this is to establish the existence of self-evident truths that are not true by definition. Perhaps you could propose a clearer example? First, there has been a very simple case all along in this thread, on matchsticks and symbolisations of what is going on with cardinalities:
{ || } { || } --> { |||| } i.e. 2 + 2 = 4
Second, for many months, you have known or should have known of the simple but potent -- indeed, pivotal -- case of an undeniably true and self-evident claim from Josiah Royce, that I have repeatedly linked or cited [cf. 6 - 7 here, and the following discussion here]:
"Error exists"
This last is obviously not a tautology, nor a definition, and you cannot define the underlying concepts in isolation. It is undeniably true, since any attempt to deny it exemplifies its truth. And it immediately implies that truth exists, and that warranted, credible truth -- i.e. knowledge -- exists, and that we would be wise to have a humble epistemic stance. [ . . . ] kairosfocus
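The matchstick symbolisation of 2 + 2 = 4 above can be checked mechanically as a fact about disjoint collections; a minimal Python sketch (the set names and labels are illustrative, not from the thread):

```python
# The matchstick picture { || } { || } --> { |||| } as set cardinality:
# two disjoint two-element sets combine into a four-element set.
a = {"a1", "a2"}           # first pair of matchsticks
b = {"b1", "b2"}           # second pair, labelled so the piles stay distinct
combined = a | b           # pushing the two piles together

assert a.isdisjoint(b)     # cardinalities add only for disjoint collections
assert len(combined) == len(a) + len(b) == 4
print(len(combined))       # prints 4
```

The disjointness check matters: cardinalities only add when the two piles share no elements, which is exactly what the physical separation of the matchstick groups encodes.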
#298 “So, it isn’t clear to you that an automobile is greater than its crankcase, and that the automobile’s crankcase cannot be greater than the automobile?” By "greater than" do you mean "larger than"? The purpose of automobiles is such that I couldn't see how to make a working automobile that did not have some parts extraneous to the crankcase - but I thought this was a necessary truth that applied to all "wholes". There are plenty of physical objects where one part is the outer casing and thus the size of that part is the same as the size of the object. The system unit on my PC for example. If you allow imaginary objects (after all, we are talking necessary truths, which should be true in all conceivable universes) consider the Tardis in Doctor Who - open the door and inside it is full of components much larger than the police box which contains them. ---"I believe a nuclear reaction could mean that the whole had a mass less than any of the contributing components." You are confusing a process with an entity or thing. ---"Or perhaps the whole is not a physical object but say a sound? A combination of sounds may be less pleasant to the ear and under certain conditions less loud than any of the single elements." First you describe the sound as a whole and then try to apply it as parts of a larger whole. I am confused as to what is meant by "whole", "part" and "greater than". They are such vague generic words. So I am throwing out some examples to check my understanding. I can't say I am much the wiser yet. All this is to establish the existence of self-evident truths that are not true by definition. Perhaps you could propose a clearer example? markf
CH I posed a direct question, and if you would answer it I think it would help me to understand your point of view. Please search the above for: "Why is the above impossible? Please be specific." Thanks. BarryR
StephenB
I don’t normally wield my credentials around here because I think it is bad form. You will notice, for example, that I exposed Edmond’s ignorance of syllogisms and I explained the texture of his error without alluding to my own training in logic. (So tacky would that be) If you want to discuss his logical errors in greater detail, let me know.
I'd much prefer to formalize this conversation; no doubt I'm going to have some difficulty keeping up, but I promise to do my best. Just so I know we're on the same page, could I trouble you to solve this problem? Give the following in standard form: (Ex)(Ay)(Az)(Eu)(Av)(Ew)P(x,y,z,u,v,w) This is given as an example in the textbook I used for my first logic class. It's perfectly straightforward with the answer given after a short paragraph of discussion. Given your training in philosophy, I expect this will be trivial for you. But it will assure me you didn't google the answer. With that bit of administrivia out of the way, I'd like you to propose a solution to Edmonds's proof given in section 4 that does not rely on assuming the existence of universal truth. Please use formal notation. I look forward to your reply. BarryR
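For onlookers unfamiliar with the notation: (Ex) and (Ay) are ASCII renderings of the existential and universal quantifiers, and "standard form" in resolution-style logic textbooks usually means Skolemizing away the existential quantifiers. A sketch of the usual answer, on the assumption that this is the convention BarryR's textbook follows (the names a, f, g are the conventional Skolem symbols, not from the thread):

```latex
% (\exists x)(\forall y)(\forall z)(\exists u)(\forall v)(\exists w)\,P(x,y,z,u,v,w)
% Skolemization: each existentially quantified variable is replaced by a
% function of the universally quantified variables to its left.
%   x  (no universals to its left)  -> Skolem constant a
%   u  (after y, z)                 -> f(y, z)
%   w  (after y, z, v)              -> g(y, z, v)
(\forall y)(\forall z)(\forall v)\; P\bigl(a,\; y,\; z,\; f(y,z),\; g(y,z,v),\; v\bigr)
```

The resulting formula is universally quantified throughout, which is what makes it usable by resolution-based proof procedures.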
StephenB@282
I am formally trained in philosophy at the graduate level, and I am also formally trained in applied communication at the graduate level, and yes, I did graduate with the highest honors.
My, but that was worded carefully. I'm not quite sure what you meant to communicate, but what came across was: 1. No undergraduate philosophy classes, 2. No graduate philosophy classes, and 3. No graduate degree. In the interests of unambiguous communication, by classes I mean seat-of-the-pants-applied-to-the-seat-of-the-chair in an accredited university within a philosophy department. If you meant to communicate something else, please clarify if you think it's important. One of the many nice things about expertise is that it allows you to judge competence using criteria other than credentials. You're not making a competent argument, regardless of your credentials (and regardless of whether or not I agree with you). You don't qualify your statements, you don't handle counterexamples, you don't cite the literature, and you're not learning from your mistakes as they're pointed out. These are the kinds of things that students learn in the give-and-take of university philosophy classes. Perhaps you had a run of bad luck and only experienced classes where this give-and-take wasn't encouraged. If so, you've been very ill-served by your "training". MarkF in 297 has the good sense to ask what you (and Adler) mean by a whole and a part. The fact that he has to ask is an excellent indication that the definitions are not self-evident. Your rejoinders are frankly childish.
So, it isn’t clear to you that an automobile is greater than its crankcase, and that the automobile’s crankcase cannot be greater than the automobile?
If you needed to show a counterexample, that would suffice. However, you've painted yourself into a corner where you need to show not only that the rule is self-evident, but that it is universal. This is the difference between "there exists" and "for all", (something I became very familiar with in a couple of graduate logic classes). Even if I thought it was doable, you don't have the technical skill to build that proof (and Adler can't be bothered).
Notice how you completely ignore the main point of the argument.
It's really endearing how you put this right after the counterexamples I gave to invalidate the hypothesis. Here's the main point of my argument: Adler gave an example that is self-evident in certain contexts, self-evidently false in other contexts, and nonsensical everywhere else. Because he was writing for a popular audience, he didn't feel the need to point this out (which is the most charitable explanation I can give).
It is not a mistake on his part. He has already pointed out that the concepts themselves cannot be defined.
!?! Undefinable concepts can now be self-evident? Ok.... And what's so difficult about defining these? I can define a whole, in certain contexts, as that which is created by summing a predefined set of parts. In another context, I can define it as a concept greater than the sum of its parts. And for some job applications, I can define the whole applicant as much less than the sum of his parts on his resume. Universal definition might be impossible, but that's a strange way to get to self-evident.
BarryR
---Markf: "The same objections arise. So, it isn't clear to you that an automobile is greater than its crankcase, and that the automobile's crankcase cannot be greater than the automobile?" ---"I believe a nuclear reaction could mean that the whole had a mass less than any of the contributing components." You are confusing a process with an entity or thing. ---"Or perhaps the whole is not a physical object but say a sound? A combination of sounds may be less pleasant to the ear and under certain conditions less loud than any of the single elements." First you describe the sound as a whole and then try to apply it as parts of a larger whole. StephenB
#288 I apologise for misreading the quote, but I think my argument applies equally to the statement "the finite whole is greater than any one of its parts" The same objections arise. What kind of whole? What kind of parts? Greater in what respect? Are we talking about physical objects and mass or volume? I believe a nuclear reaction could mean that the whole had a mass less than any of the contributing components. Even if that is not possible it is not obviously absurd. Similarly for volume. Or perhaps the whole is not a physical object but say a sound? A combination of sounds may be less pleasant to the ear and under certain conditions less loud than any of the single elements. markf
Steve: Strangely, in my own studies on organisations as sociotechnical systems, teamwork-based synergies that give rise to unique competitive advantages that cannot be duplicated play a key role. (BTW, when the environment undergoes a catastrophic, butterfly-effect type change, that can come back to haunt. Hence issues on robustness and agility as opposed to optimisation for a given environment. And this is also one reason why I do not take seriously the notion that sub-optimal designs are inferior. If you are over-specialised and the world changes, you are dead, Fred.) Synergy is a powerful concept, and this is one way that we can get the magic of more out than was put in; thence one ground of profit-making as a morally defensible activity. But the misreadings that you describe fall under the head of strawman fallacies; which may be inadvertent, but lead to profound misunderstanding. Which is precisely one of the things Adler implicitly warned against; by stressing understanding rooted in our reflective experience of the world. G kairosfocus
Frosty: One last point, pardon: a self-evident truth is NOT a tautology. That is, it is not merely analytically true, simply restating the same thing in different terms. Adler rightly points out that there are statements whose truth is contingent on observation, e.g. that I am a Caribbean, male person. Others are true by the sort of reiteration just described. But others are true on our understanding based on experience, where we also see that on pain of immediate reductio ad absurdum, they MUST be true. As you will recall, my favourite example of this is "Error exists," following Josiah Royce. [Onlookers, try the experiment of an attempted denial and you will immediately see why it is undeniably true.] G kairosfocus
[Once] self evident truths are denied, all things are possible--or rather, deemed to be possible. StephenB
---kairosfocus: "I think many people are thinking of a common saying about systems, a reflection on synergies whereby the whole as a complex functional entity depends on the particular organisation of the parts." Precisely. The same phenomenon is often referred to in communication as the "assembly effect bonus" or "synergy," especially with respect to the creative faculty inherent in "brainstorming." Some appear to be reading that common understanding into Adler's words rather than extracting the intended meaning. This practice, by the way, is not unusual for postmodernist deconstructionists, who believe that readers can and should change [deconstruct] an author's intended meaning into something more congenial with their inclinations. One self-evident truths are denied, all things are possible--or rather, deemed to be possible. StephenB
Steve: You are right to spot that point. I think many people are thinking of a common saying about systems, a reflection on synergies whereby the whole as a complex functional entity depends on the particular organisation of the parts. So the systemic whole is indeed greater than the MERE sum of the parts. (A pile of bricks, timbers, glass and cement a house do not make, nor will one get a house by stirring all with a tornado!) G kairosfocus
PS: You will see how I modified the sort of remark Adler made: a FINITE whole is greater than any of its PROPER parts. kairosfocus
---Frost: "Firstly to say that a whole must be greater than the sum of its parts is to make a statement which is not tautologically certain." I don't understand why this continues to be misread. The self-evident truth is that the finite whole is greater than ANY ONE OF ITS PARTS. StephenB
Frosty: Hi mon. I am citing, actually, Adler. But he is right. He is talking about -- note the context -- complex entities, and saying that once something is made of or has parts in it, the whole is more than any one of the individual parts; and that parts and wholes can only be understood on our experience of such complex entities and their complementarity. Of course, if you take him out of the context of complex unities, you may well THINK you have a counter-example, but you don't. As to the others, we are dealing with people who tried to argue that 2 + 2 = 4 is something you can deny without tying yourself up in absurdities. To do so, they argued above that by changing the definition of + in midstream, we could also argue that 2 + 2 = 147. EEP! The absurdity here is injecting an equivocation that embeds a contradiction. G kairosfocus
---Markf: "The whole is greater than the sum of its parts" ---"This seems to me far from self-evident and very likely false, depending on what exactly it means." Of course it is far from self-evident and, of course, it may well be false. The self-evident truth is NOT that the whole is greater than the sum of its parts. [That's BarryR's gig. Didn't you read the comment that I was responding to?] The self-evident truth is that the finite whole is greater than any one of its parts. Please read for context. StephenB
--DOUBLE correction above.. please remove post 286 if possible. What I meant is: "The value of a desire is DEPENDENT on its object." That is, a desire is ONLY good if its object is warranted. Pardon the double mistake above. Frost122585
^Pardon me- a correction above- "The value of a desire is independent of ITS object" Frost122585
KF, I must admit that I have to disagree with both examples you listed above of instructive yet tautological statements. While I agree with your assertion that such statements which can be both tautological and instructive can and probably do exist, I see both of your examples as being totally flawed. Firstly, to say that a whole must be greater than the sum of its parts is to make a statement which is not tautologically certain. If there were a substance which was composed of only one part which was objectively indivisible (such as God- or what the Atom was believed to be before its interior was elucidated) then in this case the sum of its parts would be equal to the whole- which negates the notion of the original statement- because it would still be true to say that the whole has at least some (one) parts. If however you admit that your statement does in fact presume that at least 2 parts must exist for it to be true- there are in fact examples in cosmology and non-euclidean geometry where the parts of an object could be greater than the whole which they comprise- though I admit my understanding is that these presupposed phenomena are purely speculative. However, the second example is totally false on its face, as many people desire things which are not good. I surmise you are using a definition of "desire" which is loaded as meaning all but the same thing as "good"- or a definition of good which somehow implies that which is desirable- but in that case the value of the example is really not that instructive at all. A value of a desire is independent of object. Certainly in theology (which many people, myself included, see as the highest form of understanding) there exist desires which are not good but in fact its opposite- evil. However, we probably agree on the principle which your examples are directed at. Frost122585
#282 "The whole is greater than the sum of its parts" This seems to me far from self-evident and very likely false, depending on what exactly it means. Greater in what respect? If we add matter to other matter the result may actually weigh less (that's one thing that happens in a nuclear reaction). I believe that under extreme conditions it is also possible that the result may take less volume - the additional matter causes the mass to collapse under the additional gravitational pull. In any case it is certainly imaginable that the result would take less volume. markf
SB: Excellent. I only note that I tend to make it a little more specific: a finite whole is greater than any of its proper parts. Maybe there is need to cite the key section in Adler in-thread: ____________________ >> The little error in the beginning, made by Locke and Leibniz, perpetuated by Kant, and leading to the repudiation of any non-verbal or non-tautological truth having incorrigible certitude, consists in starting with a dichotomy instead of a trichotomy — a twofold instead of a threefold distinction of types of truth. In addition to merely verbal statements which, as tautologies, are uninstructive and need no support beyond the rules of language, and in addition to instructive statements which need support and certification, either from experience or by reasoning, there is a third class of statements which are non-tautological or instructive, on the one hand, and are also indemonstrable or self-evidently true, on the other. These are the statements that Euclid called “common notions,” that Aristotle called “axioms” or “first principles,” and that mediaeval thinkers called “propositions per se nota.” One example will suffice to make this clear — the axiom or self-evident truth that a finite whole is greater than any of its parts. This proposition states our understanding of the relation between a finite whole and its parts. It is not a statement about the word “whole” or the word “part” but rather about our understanding of wholes and parts and their relation. All of the operative terms in the proposition are indefinable. We cannot express our understanding of a whole without reference to our understanding of its parts and our understanding that it is greater than any of its parts. We cannot express our understanding of parts without reference to our understanding of wholes and our understanding that a part is less than the whole of which it is a part.
When our understanding of an object that is indefinable (e.g., a whole) involves our understanding of another object that is indefinable (e.g., a part), and of the relation between them, that understanding is expressed in a self-evident proposition which is not trifling, uninstructive, or analytic, in Locke’s sense or Kant’s, for no definitions are involved. Nor is it a synthetic a priori judgment in Kant’s sense, even though it has incorrigible certitude; and it is certainly not synthetic a posteriori since, being intrinsically indemonstrable, it cannot be supported by statements offering empirical evidence or reasons. The contemporary denial that there are any indisputable statements which are not merely verbal or tautological, together with the contemporary assertion that all non-tautological statements require extrinsic support or certification and that none has incorrigible certitude, is therefore falsified by the existence of a third type of statement, exemplified by the axiom or self-evident truth that a finite whole is greater than any of its parts, or that a part is less than the finite whole to which it belongs. It could as readily be exemplified by the self-evident truth that the good is the desirable, or that the desirable is the good — a statement that is known to be true entirely from an understanding of its terms, both of which are indefinables. One cannot say what the good is except by reference to desire, or what desire is except by reference to the good. The understanding of either involves the understanding of the other, and the understanding of both, each in relation to the other, is expressed in a proposition per se nota, i.e., self-evident or known to be true as soon as its terms are understood. Such propositions are neither analytic nor synthetic in the modern sense of that dichotomy; for the predicate is neither contained in the definition of the subject, nor does it lie entirely outside the meaning of the subject. 
Axioms or self-evident truths are, furthermore, truths about objects understood, objects that can have instantiation in reality, and so they are not merely verbal. They are not a priori because they are based on experience, as all our knowledge and understanding is; yet they are not empirical or a posteriori in the sense that they can be falsified by experience or require empirical investigation for their confirmation. The little error in the beginning, which consists in a non-exhaustive dichotomy mistakenly regarded as exhaustive, is corrected when we substitute for it a trichotomy that distinguishes (i) merely verbal tautologies, (ii) statements of fact that require empirical support and can be empirically falsified, (iii) axiomatic statements, expressing indemonstrable truths of understanding which, while based upon experience, do not require empirical support and cannot be empirically falsified.[6] >> ____________________ G kairosfocus
---BarryR: "As I said, if you don’t have any great interest in philosophy, the book might be better than nothing." I am formally trained in philosophy at the graduate level, and I am also formally trained in applied communication at the graduate level, and yes, I did graduate with the highest honors. So, your implied ad hominem [that I have no interest in philosophy] simply falls flat. I don't normally wield my credentials around here because I think it is bad form. You will notice, for example, that I exposed Edmond's ignorance of syllogisms and I explained the texture of his error without alluding to my own training in logic. (So tacky would that be.) If you want to discuss his logical errors in greater detail, let me know. The problem with your earlier readings of Adler, and thank you for sharing your experiences, is that you were not given very good guidance from your philosophy professors, which is quite common. Philosophy has been bitten by the postmodernist bug along with most other disciplines. Where, though, is the substance in your comments? What mistake is it that Adler alludes to that you claim is not a mistake? On this, the really important matter, you remain silent. ---"To make the error more clear: [Adler's essay, "Little Errors in the Beginning"] I describe an object. Is it a whole? How do you know? Well, if its parts were summed and did not result in the whole, then we probably wouldn’t call it a whole in the first place. Do the set of integers form a whole? Does the empty set? The computer I’m using is certainly a whole, and certainly has a number of parts, yet I’d argue that the whole is certainly far greater than the sum of those parts." There is no error here at all. [A] Adler is providing an example of a self-evident truth and explaining that self-evident truths comprise an important element in our pursuit of knowledge, arguing, rightly, that analytic reasoning and synthetic reasoning are not our only mental tools.
Notice how you completely ignore the main point of the argument. [B] If the whole is greater than any one of the parts, and if no one part can be greater than the whole, this fact ALONE does not preclude the possibility that the whole could be greater than the sum of all the parts. It only rules out the possibility that one part could be greater than the whole. Thus, your final comment is irrelevant and reflects a lack of understanding of the obvious self-evident truth. A part cannot be greater than the whole of which it is a part. Everyone understands this as a self-evident truth, and that includes you, in spite of your protests. ---"By the time he gets to the end, I’ve been talked out of the idea that this concept is self-evident and instead believe that it follows from the definition of the word “whole”. Is that a mistake?" It is not a mistake on his part. He has already pointed out that the concepts themselves cannot be defined. It is the relationship between the two that must be understood as a self-evident truth independent of any definitions at all. ---"Perhaps, or perhaps it’s just sloppy writing." No, just inattentive reading. Ask yourself the only real question that matters. Can a part of a whole [of which it is a part] be greater than the whole? You already know, as a self-evident truth, that it cannot. On the other hand, you refuse to admit it because your entire materialist/secularist/hyperskeptical/irrational framework crumbles with the admission. As Adler puts it, "We cannot express our understanding of parts without reference to our understanding of wholes and our understanding that a part is less than the whole of which it is a part." Again, I stress the significance of this point.
Once it is understood that self-evident truths exist, Kant's anti-metaphysical skepticism (which was corrected by Reid in his own time) and Hegel's attempt to correct a problem that didn't exist, followed by each additional anti-intellectual domino toppling over for the next two hundred years, all hearken back to the "Little Error in the Beginning." StephenB
F/N: The live donkey kicking a dead lion ad hominem game above is just a little distasteful. Why not summarise CSL's arguments, then actually show why -- on the merits -- you think they fail? (And for instance, CH has brought out the infinite regress implicit in any absolute assertion of skepticism. Practical skepticism is necessarily selective, and when that goes over into an inconsistent standard of warrant for what one wishes to accept vs reject, it then becomes self-refuting. The declaration of universal skepticism is inevitably self-referentially absurd.) kairosfocus
kairosfocus,
“Self-evidence” is not a mere arbitrary and subjective declaration rooted in a perception. (That mistaken notion, itself already shows what C S Lewis so aptly called the poison of subjectivism at work.)
Great point, and that is a great essay. Clive Hayden
Onlookers (and MF . . . ): "Self-evidence" is not a mere arbitrary and subjective declaration rooted in a perception. (That mistaken notion itself already shows what C S Lewis so aptly called the poison of subjectivism at work.) Instead, it is a claim that, once a particular statement is truly understood, it is at once plain that its denial will directly lead to reductio ad absurdum. In the case of 2 + 2 = 4, it is plain by simple observation that this is indeed a self-evident claim. And, duly, the attempt to substitute, unannounced, a new "definition" of + that makes 2 + 2 = 147 or something else immediately lands in hopeless contradictions. Lesson: one may not blithely disregard the principle of non-contradiction in reasoning, including elementary mathematical reasoning. And those who do, and then try to stoutly "dance wrong but strong", simply show ever more and more how absurd the position of rejecting LNC is. Sad . . . GEM of TKI kairosfocus
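The contradiction that follows from a mid-stream redefinition of + can be exhibited concretely. A toy sketch of my own construction (not from the thread): patch addition so that 2 + 2 = 147 while leaving every other case alone, and associativity collapses at once.

```python
# A "+" redefined so that 2 + 2 = 147, but left ordinary everywhere else.
def plus(a, b):
    if (a, b) == (2, 2):
        return 147          # the smuggled-in redefinition
    return a + b            # ordinary addition otherwise

left = plus(plus(2, 1), 1)   # (2 + 1) + 1 = plus(3, 1) = 4
right = plus(2, plus(1, 1))  # 2 + (1 + 1) = plus(2, 2) = 147
assert left == 4 and right == 147
assert left != right         # associativity fails: the "definition" is incoherent
```

The same quantity computed two equivalent ways yields 4 and 147, so the redefined operator cannot satisfy the ordinary laws of addition; the equivocation embeds a contradiction.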
BarryR,
and to me it means that he either never read Descartes or (more likely) didn’t read deeply.
The years intervening were devoted to what Sayer calls the “immense amount of reading” that Lewis did because (unlike many reviewers) he “refused to give an opinion on a book he had not read.”[6] Gene Edward Veith reports that when Charles Huttar was working in the Magdalen College library he saw the register of books Lewis had checked out during the late 1940’s and early 1950’s. It appears that Lewis had “essentially checked out the entire sixteenth-century collection.”[7] What was too obscure for either Magdalen or his own personal library to have, he read in the Bodleian’s magnificent Duke Humphrey library—basically what an American library would call its rare book room. Some of it must have been dull going, but he plowed ahead until he had mastered the entire preserved literary output of the century. At the end of some of the books from his own library he marked the date on which he had finished them, and in a few, added the annotation “Never again.” This is an excerpt of an essay that is a draft of an article for Lion and Logos: The Life and Legacy of C. S. Lewis, ed. Bruce L. Edwards, Jr., 4 vols. (Greenwood/Praeger, 2007). Clive Hayden
BarryR,
This is trivially false, and to me it means that he either never read Descartes or (more likely) didn’t read deeply. The counterexample that refutes Lewis is “I am skeptical of everything, including the correctness of my own skepticism.” Descartes’ formulation was much stronger: “Cogito ergo sum”.
I don't see how this refutes Lewis's claim that “We are always prevented from accepting total skepticism because it can be formulated only by making a tacit exception in favor of the thought we are thinking…”. I see your statement “I am skeptical of everything, including the correctness of my own skepticism” as one more removed: you're not skeptical of the skepticism of the correctness of your skepticism. You cannot be assured in your skepticism of skepticism without making a contradiction. No matter how many times you may remove the assuredness behind layers of skepticism, in the end you're not skeptical of at least one thing. This doesn't, of course, refute Lewis. Your argument by substituting evolution for inference never even leaves the ground, because we already know evolution is false :). We cannot say that inference is false in every sense and in every case without contradicting ourselves. It doesn't matter how trivial it is to point out that inference is true; it is true, and to say otherwise is, as I said, a contradiction. And yes, Lewis mentions Descartes; I'll have to look up where, but I do remember coming across it. I don't think you've read very deeply, Barry. As I said before, Lewis read in 7 different languages, so everything from Plato to Ptolemy, Dante to Dryden, Aristotle to Aquinas, he read in the original language. Lewis wrote the textbook on 16th Century English literature for Oxford's English literature series; he had a chair invented for him at Cambridge. You should read his book Studies in Words, and The Discarded Image (which deals with the origination of the scientific method and the mental atmosphere of the medieval and renaissance time period), and Studies in Medieval and Renaissance Literature, and The Allegory of Love, before you would presume to know his reading background. Clive Hayden
F/N: The self-evidently true mathematical fact that 2 + 2 = 4 is conceptually prior to Peano's axioms, which serve more as an explanatory construct than as a foundation for arithmetic. We may then indeed go on to infer interesting theories about arithmetic on said axioms, but had the axioms entailed that 2 + 2 does not equal 4, then they would have been rejected as fatally flawed. kairosfocus
PS: Newbie onlookers, Cf above for some of what MF -- as usual -- refuses to address on the merits. kairosfocus
Onlookers: if you want a 101 level look at how first principles of right reason are foundational to reasoning, look at the WCTs here. Remember, we have already seen how rejecting the law of non-contradiction allowed some above to try to reject the self-evident truth that 2 + 2 = 4. This was done by arbitrarily injecting a contradiction into the definition of the + operator, so that the same symbol did not mean the same thing in every context. That error of descent into self-contradictory absurdity should serve us as a warning of the penalty to be paid for dismissing first principles of right reason that are self-evidently true. GEM of TKI kairosfocus
StephenB@261
You provide no evidence of having read a word of Adler
Why would any be required beyond the statement that I've done so? The essay was later enlarged into the book "Ten Philosophical Mistakes", which I read (the 1985 edition) before leaving high school. The memory I have of it, though, was discussing the text with a faculty member of the philosophy department a couple of years later. Its reception among people who did philosophy for a living ranged from disappointed to cringeworthy. As I continued to take philosophy classes I started understanding why. As I said, if you don't have any great interest in philosophy, the book might be better than nothing. To the good, it does go over several issues of concern to philosophers. To the bad, philosophers Adler doesn't agree with come across as making dimwitted mistakes that Adler can explain in just a few paragraphs. These aren't mistakes, and Adler does a disservice to the discipline by representing them as such. As you did read the paper I suggested, I glanced at the essay you suggested. Here's his example of a self-evident truth:
One example will suffice to make this clear -- the axiom or self-evident truth that a finite whole is greater than any of its parts. This proposition states our understanding of the relation between a finite whole and its parts. It is not a statement about the word "whole" or the word "part" but rather about our understanding of wholes and parts and their relation. All of the operative terms in the proposition are indefinable. We cannot express our understanding of a whole without reference to our understanding of its parts and our understanding that it is greater than any of its parts. We cannot express our understanding of parts without reference to our understanding of wholes and our understanding that a part is less than the whole of which it is a part.
By the time he gets to the end, I've been talked out of the idea that this concept is self-evident and instead believe that it follows from the definition of the word "whole". Is that a mistake? Perhaps, or perhaps it's just sloppy writing. To make the error more clear: I describe an object. Is it a whole? How do you know? Well, if its parts were summed and did not result in the whole, then we probably wouldn't call it a whole in the first place. Does the set of integers form a whole? Does the empty set? The computer I'm using is certainly a whole, and certainly has a number of parts, yet I'd argue that the whole is certainly far greater than the sum of those parts. So that's Adler. (I'm now wondering why you haven't read him. The "sum of parts" example was quite early on, and far stronger than what you've come up with on your own. I'm not encouraging you to read him, mind --- I just find it curious that there's no, as you say, "evidence" that you've read him, and some evidence against.) BarryR
CH@269 Continuing...
All reasoning does rely on self evident truths, fundamentals, what we call first principles. You get rid of these, and you won’t even recognize whether anything else can or cannot “reason.” It would be like taking out your eyes to look at them—it’s impossible.
Well, I'm certainly willing to be convinced, but all the examples that have been given to me so far (with the exception of the cogito) don't appear to be both self-evident and universal. But you're making an even stronger claim than "self-evident truths exist". You're claiming that every reasoning system, no matter how obscure, must use these truths. Yet when I look at lambda calculus, I see nothing self-evident there. So let's say I have an unfortunate brain injury that causes me to be unable to understand the cogito. In fact, this brain injury prevents me from understanding any self-evident truth (although I can perceive them if I can derive them otherwise). Someone gives me a copy of Stansifer (or I'm pretty sure they did --- I could be dreaming) and as I'm working through chapter 7, I use only the material in that chapter, what I've derived, and a few "highly-likelies" (not self-evident truths) to reason that lambda calculus can be used to derive the integers. I don't have absolute certainty that my derivation is correct, but I've checked it twice and have a high degree of confidence. Why is the above impossible? Please be specific. BarryR
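The derivation alluded to above is the standard Church-numeral encoding of the non-negative integers in pure lambda calculus. A minimal sketch, rendered here as Python lambdas rather than Stansifer's notation (the helper to_int is added only for inspection and is not part of the encoding):

```python
# Church numerals: the number n is encoded as a higher-order function
# that applies its first argument n times to its second argument.
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m then n applications

def to_int(church):
    """Decode a Church numeral by counting applications of an increment."""
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```

Nothing beyond function abstraction and application is used, which is the point at issue: the integers and their arithmetic fall out of the bare calculus, with no appeal to anything presented as self-evident.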
CH@269 In reverse order:
So yes, if you’re enumerating self-evident truths, this is probably the best place to start. It’s also probably the best place to stop.
Why?
Because beyond that point Descartes' argument fell apart, and while a lot of ink has been spilled on the question (including by the redoubtable Dr. Adler), nobody has been able to improve on it. This is exactly the problem Lewis is trying to solve in De Futilitate. He telegraphs it with the phrase "We are always prevented from accepting total skepticism because it can be formulated only by making a tacit exception in favor of the thought we are thinking...". This is trivially false, and to me it means that he either never read Descartes or (more likely) didn't read deeply. The counterexample that refutes Lewis is "I am skeptical of everything, including the correctness of my own skepticism." Descartes' formulation was much stronger: "Cogito ergo sum". To move from this point to some certain knowledge of the world, Descartes is forced to postulate a very specific God. While not compelling, it does have the virtue of clarity. Lewis is far less clear and even less convincing: 1. All knowledge depends on inference. 2. If inference is false, we can know nothing. 3. We know something (his version of the cogito). 4. Thus inference is true. To see how this falls apart, let's substitute "evolution" for "inference": 1. All knowledge depends on evolution. 2. If evolution is false, we can know nothing. 3. We know something. 4. Thus evolution is true. You might be motivated to point out that starting with "all knowledge depends on evolution" assumes evolution is true. Likewise, starting with "all knowledge depends on inference" is a statement that can only be reached if inference is true. Once you've assumed inference is true, it's pretty trivial to prove inference is true. There's also the small problem that inference has been proved for this single case, not as a general proposition. I conclude this argument was never presented to an undergraduate philosophy class in a philosophy department. They would have shredded it. (Anyone know if Lewis ever read Descartes?)
You can use evolutionary epistemology to do an end-run around this problem if you're willing to give up binary certainty in exchange for a continuum of certainty, but I'll save that for another time. The Stanford Encyclopedia of Philosophy has a dense article on Descartes' epistemology as well as a much more readable article on evolutionary epistemology. BarryR
A few notes on citations, courtesy of Burkhard at talk.origins:
If you can stomach Foucauldian language, you could try Paul Veyne, Did the Greeks Believe in Their Myths?: An Essay on the Constitutive Imagination (1988), not so much on its own but as a source for further references. He has, amongst other things, a quote from Etienne Pasquier (16th century historian) who sent a manuscript to friends for comments, and was rebuked for having all these irritating references in it. Veyne traces the tradition of referencing to the legal profession, though not, as one might think, to the emergence of copyright, but to that of a salaried academic community which started to criticise each other - to avoid litigation for defamation, they put references in "controversial works" to cover their backsides. You might also find something in Robert Merton's book "On the Shoulders of Giants"; he links it more to the emergence of copyright and the idea of the moral rights of the author. ***** Your correspondent seems to be more interested in the present practice than its historical origins. In that case, it is the Berne Convention you are looking for, and especially the section on "moral rights", which also includes the right to be acknowledged as an author. Common law jurisdictions were historically averse to this notion, which comes from continental European (especially German) Romanticism and the idea of the "Genius artist" who invests his heart and soul in his work (compare and contrast Basil and Dorian in the first chapter of The Picture of Dorian Gray as representatives of the two camps). By now, though, everyone is signed up to it. I vaguely remember a paper given by Stina Teilmann on this (It's A Wise Text That Knows Its Own Father); it's on the net somewhere but probably never published in a peer-reviewed journal. For that, you could try Jane C. Ginsburg, The Concept of Authorship in Comparative Copyright Law, 52 DEPAUL L. REV. 1063, 1091 (2003) ***** > Berne Convention? That sounds more like copyright law.
> I always thought of citations as a way of providing a service to the reader rather than to provide copyright protection to the prior art. > -loki Well, historically that may well have been the case - see my other two references elsethread. But Garamond's correspondent wanted a reference why everything "has to be" cited - and the Berne Convention does indeed establish a right of an author to be cited (as you say, as an aspect of copyright). Since these days promotions etc. often depend on your citation index, authors are more likely than previously to invoke convention rights.
These and other comments may be found in the thread "Citation on citation". BarryR
BarryR, I don't know why I bother responding to your posts, I find them awful, in the new sense of the word. You change the subject often or intentionally evade the real crux of the subject. All reasoning does rely on self evident truths, fundamentals, what we call first principles. You get rid of these, and you won't even recognize whether anything else can or cannot "reason." It would be like taking out your eyes to look at them---it's impossible. Your thinking is very familiar and bizarre to me, one that needs the treatment of Lewis and Chesterton, which is why you had to defer to your talk.origins buddies to find an argument that wasn't an ad hominem against the men themselves.
So yes, if you’re enumerating self-evident truths, this is probably the best place to start. It’s also probably the best place to stop.
Why? Clive Hayden
#265 Stephenb The misunderstandings seem only to get deeper. I wasn't trying to recount any history (I did get a comment number wrong, which may be the cause of the confusion). All I intended to say was - look at comments #260 and #261 - you may find them more substantial than #254 (and more substantial than simply declaring something to be self-evident or rational) markf
---markf: "I think there is a misunderstanding I was referring to my comments in response to Gpuccio i.e. #261 and #262. The only names mentioned in those comments are Wittgenstein and yours!" I was responding to your comments @254, at which time you listed five authors [later two more] and implied, wrongly, that I didn't know much about their philosophy, ending with this snippy comment: "you only have to go as far as Wikipedia to find that out." To that remark, I responded with a call for less name dropping and more substance. It was @262 that you made this comment: "I hope you will find my comments in response to Gpuccio more substantial." So, your recounting of the history is a bit off the mark. ---"Yours – ever trying to be friendly – Mark" I am always ready to establish and maintain a friendly relationship. I need the practice. [Insert smiley face]. StephenB
#263 I think there is a misunderstanding I was referring to my comments in response to Gpuccio i.e. #261 and #262. The only names mentioned in those comments are Wittgenstein and yours! Yours - ever trying to be friendly - Mark markf
---mark: "(They do at least go beyond asserting that my theory is correct because it is self-evident or because all rational men believe it.)" No, they don't even go that far. They simply appeal to big names and ask us to believe their doctrines on the grounds that they are famous and, one gathers, smart. Since each of these smart men posited contradictory notions of truth, and given the fact that you identify with all of them, contradictions and all ["I am happy to be in this company"] I can't imagine why you would think that your comments were substantive. StephenB
#257 Stephenb I am sorry you found my comment lacking in substance. I hope you will find my comments in response to Gpuccio more substantial. (They do at least go beyond asserting that my theory is correct because it is self-evident or because all rational men believe it.) markf
Gpuccio #225 cont. Broadly I agree with what you wrote on truth. I prefer to talk about statements rather than judgements - but it comes to much the same thing. Some statements simply describe some state of affairs in the world and are true if that state of affairs obtains. As you point out, other types of statement are true or false for different reasons. Mathematical statements are true if they follow from a set of explicit or implicit premisses. Logical statements if they correctly describe a type of valid inference. As I said in #254 I guess I could be described as a pluralist - although I hate "...isms" and "...ists". A really important insight comes from Wittgenstein. Often a statement will be true for a combination of different types of criteria, and because they always or often coincide we never have to resolve which apply. For example, an arithmetic statement such as 2+2=4 is mathematically true because it follows from the Peano axioms (I think those are the right ones). Now the Peano axioms apply very accurately to a vast range of situations, so it is also true that 2+2=4 for a vast range of situations. Unless we are pure mathematicians we are never called upon to decide whether 2+2=4 is true in the mathematical or the descriptive sense. And when challenged as to what makes it true we get confused and end up making up some abstract world of which it is true! A really powerful example of this is ethical statements, which are true in virtue of specific facts but also of our attitude to those facts - but that is a whole new can of worms. I am sorry, I seem to have practically written a paper here - but these issues are so much subtler than people realise. markf
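The arithmetic example above - that 2+2=4 follows from the Peano axioms - can be made concrete. The following is an illustrative sketch (my own construction, not from the thread) of Peano-style naturals: numbers are built from a zero object by repeated application of a successor, and addition is defined by the two recursion equations a + 0 = a and a + S(b) = S(a + b), from which 2 + 2 = 4 follows mechanically.

```python
# Peano-style naturals: ZERO is a bare object; succ wraps one more layer,
# so a number is just its count of successor applications.
ZERO = ()

def succ(n):
    return (n,)  # S(n)

def add(a, b):
    if b == ZERO:
        return a                  # a + 0 = a
    return succ(add(a, b[0]))     # a + S(b') = S(a + b')

def from_int(k):
    """Build the Peano numeral for a native integer k (for convenience)."""
    n = ZERO
    for _ in range(k):
        n = succ(n)
    return n

def to_int(n):
    """Count successor layers to recover a native integer (for inspection)."""
    return 0 if n == ZERO else 1 + to_int(n[0])

print(to_int(add(from_int(2), from_int(2))))  # prints 4
```

Had the recursion equations been chosen so that 2 + 2 came out as anything other than 4, the mismatch with counting matchsticks would show up immediately, which is the "descriptive sense" of the statement's truth discussed above.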
#225 Gpuccio I think you were wise to stay out of this debate. However, you are as ever polite and interested in different views so let me do my best to respond. These are quite difficult to explain so please expect false starts and hopefully minor inaccuracies in what I say. I will split this into two comments to try and prevent them becoming too long. 1) Can logic be reduced to some overriding principle and if not how do we tell what is logical? I think not. Or rather it depends on what you mean by "reduced to". I see logical principles as descriptive not prescriptive. I don't think we have to justify valid reasoning by referring to some higher generality. This approach leads to an infinite regress, because for any rule we can always ask the proposer of that rule to justify it. You have to stop somewhere - so why not stop at the first level - with the actual statement you are making. Let me try to be concrete with an example. I think we all recognise that it is valid to argue something on the lines of: (A) My father was a pilot. All pilots had an eye test. Therefore, he had an eye test. But if someone were to challenge this line of reasoning it would add little to say it is an example of the general rule: (B) All As are Bs. x is an A. Therefore x is a B. It is not the fact that statement A matches rule B that makes it valid. Rather it is the other way round. I would test the validity of the general rule by seeing if it works for specific examples. We might then later on use the rule to remind us that another situation is similar to the first case. But essentially we don't need criteria or proof of the validity of a statement like A. How do we tell it is valid? By imagining the situation, by drawing Venn diagrams, by failing to produce counterexamples. markf
---BarryR: "Oh, I have read Adler. If you’re not very curious about philosophy, he may be better than nothing." You provide no evidence of having read a word of Adler, much less his article that I sent you to a few days ago. If you had read that small portion of Adler and comprehended it, you would likely have approached the subject matter the same way that I approached the article that you asked me to read from Edmonds. You would have, as I did, presented the argument you believe to be false, explain why it is false, and offer a better alternative. As it is, you are just going through another round of name dropping [this time with Stanifer]. StephenB
F/N: NWE -- as opposed to Wikipedia -- has a very helpful 101 on truth, for those trying to follow basic ideas and catch a hint of nuances. G kairosfocus
---mark: "The irrational folks include Spinoza, Leibniz, G.W.F. Hegel, William James and Dewey. (You only have to go as far as Wikipedia to find that out)" Yes, so what? With respect to truth, Hegel was totally irrational; Dewey was eminently irrational; James was monumentally irrational. Do you have anything of substance to say? ---"I am happy to be in this company. To be clear – I would say that a statement is sometimes true because it corresponds to some facts about the world – but other statements are not like this – a pluralist theory." Yes, I am sure that you are happy to be in their company, and I agree that you belong there, holding each of their contradictory notions of truth at the same time. I suspect, however, that you have read little or nothing of these men that you identify with. If so, perhaps you would like to argue on behalf of one of these irrational positions. Here is a clue: popularity among the elitist academicians is not a measure of rationality. StephenB
GP, and Onlookers: A very long time ago, Aristotle, in Metaphysics 1011B, observed:
since the contradiction of a statement cannot be true at the same time of the same thing, it is obvious that contraries cannot apply at the same time to the same thing . . . . Nor indeed can there be any intermediate between contrary statements, but of one thing we must either assert or deny one thing, whatever it may be. This will be plain if we first define truth and falsehood. To say that what is is not, or that what is not is, is false; but to say that what is is, and what is not is not, is true . . .
Or, in a more morally tinged vein, we may cite the Sermon on the Mount:
Matt 5:37Simply let your 'Yes' be 'Yes,' and your 'No,' 'No'; anything beyond this comes from the evil one.
And, the prince of the prophets had something to add to this:
Isa 5:18 Woe to those who draw sin along with cords of deceit, and wickedness as with cart ropes . . . . 20 Woe to those who call evil good and good evil, who put darkness for light and light for darkness, who put bitter for sweet and sweet for bitter. 21 Woe to those who are wise in their own eyes and clever in their own sight.
In short, there is something both profound and morally significant in seeing that the truth will say that what is, is, and it will say that what is not, is not; refusing to put light for darkness and bitter for sweet. For, one's yes ought to be yes, and one's no ought to be no. That is why we recognise the crimes of perjury, slander and libel. Now, there are of course any number of philosophers and theologians who have proposed alternative statements [most often emphasising clarity and coherence . . . which bespeaks the unity of truth]. But what is interesting is that in so asserting their own statements and arguing for them, they consistently either imply or are dependent on this underlying very simple principle: true claims say of what is that it is, and of what is not, that it is not. And, that holds even when the philosophers in question dispute the claim that truth is about correspondence of thought or statement or proposition with reality. Merely listing that certain philosophers have said something diverse from the above is not enough to overturn it. For instance, if we were to take MF's claim above and read it as saying that he agreed with Aristotle, he would for good reason think that we have uttered falsehood: we have failed to say of what is that it is, and have instead said that what is not, is. In short, if we were to ignore and dismiss the principle that truth says of what is that it is, and of what is not that it is not, and that were to become the general rule, verbal communication itself would collapse in a morass of confusion and deception, destroying society. In short, such fails the test of the Categorical Imperative. Indeed, what goes beyond letting the yes be yes and the no be no, cometh of evil.
And so, to the extent that certain philosophers in recent centuries have indeed denied that truth says of what is, that it is, and of what is not, that it is not, we must therefore not flinch from saying something that is sadly true: to that extent, such have failed at the bar of the duty of truth and reason. GEM of TKI kairosfocus
mark: I have rather backed out of this debate about consciousness, because frankly I am not much interested in philosophical subtleties. But just to understand, would you agree on the statement that all logical reasoning is based on some innate rules of the mind? And that, even if you reduce those rules to the minimum (maybe it is the principle of non contradiction, or something even more basic, I leave that to logicians), still some fundamental rule of reasoning, universally shared, must be the basis for all logical reasoning? If it were not so, how could we ever distinguish (in a shared way) logical reasoning from illogical reasoning? Another point: would you agree that "true" and "false", at least in the most common sense, are properties of judgements? So, even if we don't want to state that "truth is the correspondence of the mind to reality", which could be a bit metaphysical, would you agree that if a mental judgement (such as "this table is round") corresponds to reality (the table is really round, and not square), then we can say that such a judgement is true? That would be an empirical meaning of truth, while there is also a logical meaning: in a logical system, I suppose that some proposition is true if it can be logically derived from the original axioms and assumed procedures. Or not? Then there is always the metaphysical meaning of truth, as in big questions such as "what is truth?". But to that question, even very important persons have not given an explicit answer, and remained in silence. I would appreciate your thoughts on those points. gpuccio
markf@255 Nice cite! I had no idea that was there. BarryR
Stephenb As all rational people know, truth is the correspondence of the mind to reality It’s true if it corresponds to reality. It’s the only rational definition that has ever been offered. The irrational folks include Spinoza, Leibniz, G.W.F. Hegel, William James and Dewey. (You only have to go as far as Wikipedia to find that out) I would also add Kant and Wittgenstein - but that is more debatable. I am happy to be in this company. To be clear - I would say that a statement is sometimes true because it corresponds to some facts about the world - but other statements are not like this - a pluralist theory. markf
MF: As soon as you make a distinction of meaning by asserting a proposition to describe a true state of affairs, and then proceed to infer from it a conclusion, you are -- usually implicitly [that's where the trick is onlookers] -- using the self-evident first principles of right reason, e.g. the law of non-contradiction. For instance, above, you claim:
the Stanifer example is Barry’s refutation of the statement: “All reasoning begins with the acknowledgement of self-evident truths” The Stanifer books includes a lot of reasoning. It does not begin with self-evident truths. QED.
In so asserting claimed facts, you are asserting that certain things are so, as opposed to not being so. That a certain book "does not begin with self-evident truths" is an asserted state of affairs about the world, i.e. that certain books exist, that they are by a certain author, and that they argue in a particular way that does not begin with self-evident truths. Each of these is as opposed to the denial of same, and to preserve meaning you implicitly assume that if a book exists it does not in the same sense not exist. If it is written by a given author, it is not simultaneously true that it is not written by the said author. If it does not begin with SETs, it is not also just as true that it does begin with SETs. All of this is a manifestation of the implicit appeal to the law of non-contradiction, and leads directly to the conclusion that on the contrary the books DO begin with SETs, starting with the LNC. Just, it is easy to overlook an implicit appeal. That's the trick and trap. GEM of TKI kairosfocus
CH@250
Wow. Just wow. Do you mean Stanifer, the person who wrote on programming languages, is your answer to StephenB that there are not self evident truths?
Not unless I really screwed up the formatting. Let's check: Nope. StephenB said this:
All reasoning begins with the acknowledgement of self-evident truths.
That's trivially false. I explained why to him. I guess you didn't read that far. Premise: All foo are bar. Observation: There exists one foo that is not bar. Conclusion: The premise is false. So far so good? Premise: All reasoning begins with the acknowledgement of self-evident truths. Observation: There exist at least three reasoning systems that do not rely on self-evident truth. Conclusion: The premise is false. In junior high school, when you said something like "All the Red Sox suck", there should have been a geeky guy with braces and glasses pouncing on you, pointing out that one particular player on the Red Sox was having a career year and thus could not properly be said to suck, and despite the near-universal suckage of the rest of the team, the statement "All the Red Sox suck" was, strictly speaking, incorrect. This teaches you the value of words like "some", "most", "several", and "the systems of reasoning of which I am aware".
Are you a human being?
Depends on who and when you ask. Being a caucasian landowning male in this culture, I haven't had to worry about the effects of my ancestors being considered three-fifths of a human being. However, in Western China I'm treated very much as a monkey who talks (just not very well). So no, being human is not self-evident.
Are you alive?
If I was undead, how would I know? (Is that why the scary organ music keeps following me around?)
Do you live in the universe?
You just don't put much thought into these exchanges, do you? Which universe are we speaking of? The geocentric universe of the ancient Hebrews and early Christians? The heliocentric universe of the Greeks and late Christians? The early scientific cosmologies that did not extend past the Milky Way? The universe pre- or post-Hubble? Pre- or post-Hubble telescope? I'd say the universe isn't self-evident at all.
Do you think?
Descartes figured out "Cogito ergo sum", and despite several standard criticisms (you can look them up yourself, they're in wikipedia) he's justly famous for this. He's much less famous for what came after, but I'll save that for the Lewis essay. So yes, if you're enumerating self-evident truths, this is probably the best place to start. It's also probably the best place to stop. Had I argued that no self-evident truths exist, this would be sufficient to prove me wrong. Since I've read Descartes, I knew enough not to make that argument (although evidently I still don't know how to communicate my actual argument to you).
Your thinking is so bizarre to me
Your thinking is quite familiar to me. I used to think exactly the same way. Education is a terrible, awful thing (in the archaic senses of those words).
you apparently think the world and all of its glorious multiplicity is reducible to programming computers.
You use "apparently" in a sense I'm not familiar with. You're having so much fun sneering at points I'm not making that I hesitate to ask you to engage with what I've actually written. Eh, you're having fun. I won't insist. BarryR
Onlookers: In Bk I Ch 2 of his famous The Rhetoric {4th C BC, the first major work on the subject), Aristotle aptly summed up how arguments work:
Of the modes of persuasion furnished by the spoken word there are three kinds. The first kind depends on the personal character of the speaker [ethos]; the second on putting the audience into a certain frame of mind [pathos]; the third on the proof, or apparent proof, provided by the words of the speech itself [logos]. Persuasion is achieved by the speaker's personal character when the speech is so spoken as to make us think him credible . . . Secondly, persuasion may come through the hearers, when the speech stirs their emotions. Our judgements when we are pleased and friendly are not the same as when we are pained and hostile . . . Thirdly, persuasion is effected through the speech itself when we have proved a truth or an apparent truth by means of the persuasive arguments suitable to the case in question . . . .
The point is that emotions (the most persuasive appeal) may rest on an accurate judgement, but their mere intensity says nothing about the degree of warrant. Secondly, no authority -- and appeal to peer review is appeal to the collective authority of the guild of scholars at a given place and time -- is better than his or her facts, assumptions and reasoning. So, in the end the only appeal that actually successfully warrants claims is the presentation of the material, credible facts, in light of reasonable assumptions [including first principles of right reason], and reasoning. This last may be deductive, inductive or abductive, and may appeal to different degrees of certainty: undeniable and/or self evident, demonstrative, probabilistic, moral certainty. In the above, what we have been seeing is a persistent evasive refusal to engage the actual issues on the merits [e.g. the original issue on consciousness, the arguments by CSL and GKC, even to some extent the implications of 2 + 2 = 4], by playing at the game of "cite me some recent authorities." (This, in the teeth of pointing to some serious sources, both classical and modern.) We are now, sadly, plainly looking at a case of selective hyperskepticism. Just remember, the same folks raising these arguments have taken 2 + 2 = 4, and by injecting an arbitrary and contradictory redefinition of the + operator, have tried to justify the notion that axiomatic foundations and associated definitions and concepts are arbitrary, and so 2 + 2 = 4 is not a strictly true claim. You CAN set up axiomatic systems as you please, but as my 6th form math teacher pointed out long ago, if you inject contradictions, the math spits back at you: things fall apart in incoherence. Something that Wikipedia fails to underscore the significance of in the intro-summary for its article on Axioms:
In traditional logic, an axiom or postulate is a proposition that is not proved or demonstrated but considered to be either self-evident, or subject to necessary decision. Therefore, its truth is taken for granted, and serves as a starting point for deducing and inferring other (theory dependent) truths. In mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". In both senses, an axiom is any mathematical statement that serves as a starting point from which other statements are logically derived. Unlike theorems, axioms (unless redundant) cannot be derived by principles of deduction, nor are they demonstrable by mathematical proofs, simply because they are starting points; there is nothing else from which they logically follow (otherwise they would be classified as theorems). Logical axioms are usually statements that are taken to be universally true (e.g., A and B implies A), while non-logical axioms (e.g., a + b = b + a) are actually defining properties for the domain of a specific mathematical theory (such as arithmetic). When used in the latter sense, "axiom," "postulate", and "assumption" may be used interchangeably. In general, a non-logical axiom is not a self-evident truth, but rather a formal logical expression used in deduction to build a mathematical theory. To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms). There are typically multiple ways to axiomatize a given mathematical domain . . .
Immediately, we see that this 101 view on conventional, materialism-dominated views, is forced to accept the point that logic is a universal basis for reasoning. In going on to try to make axioms out to be in effect arbitrary definitions, it fails to reckon with the point that mere coherence is a powerful constraint on sets of possible axioms, and that fields of work are therefore constrained by the need for coherence and a related acknowledgement of observable mathematical facts, which may indeed be self-evident [e.g. the matchbook case on 2 + 2 = 4]. If axioms do not point to facts we observe in mathematics, we get suspicious. For good reason: proof by contradiction via reductio ad absurdum, is about reducing a suspect assertion to falsity by showing up an internal contradiction and/or a contradiction to a known mathematical fact. But, in an era that discounts and dismisses the significance of self-evident principles of right reason, we see ever more and more unacknowledged reduction to absurdity. Wiki gets it right this time:
Reductio ad absurdum (Latin: "reduction to the absurd") is a form of argument in which a proposition is disproven by following its implications logically to an absurd consequence.[1] A common species of reductio ad absurdum is proof by contradiction (also called indirect proof) where a proposition is proven true by proving that it is impossible for it to be false. For example, if A is false, then B is also false; but B is true, therefore A cannot be false and therefore A is true. This of course only works if it is not a false dichotomy. Meaning that, if there are other choices than true or false (like undefined or something entirely different), then this form of proof is a logical fallacy.
(And, onlookers, observe how these issues are being studiously ignored. That is not a coincidence.) GEM of TKI kairosfocus
#250 Clive I think you will find that the Stansifer example is Barry's refutation of the statement: "All reasoning begins with the acknowledgement of self-evident truths" The Stansifer book includes a lot of reasoning. It does not begin with self-evident truths. QED. This is different from the statement "There are self-evident truths". I am also confused by the statements that you listed as being self-evident: Are you a human being? Are you alive? Do you live in the universe? Do you think? These are all obviously true - but self-evidently true? The examples that Stephenb has offered of self-evident truths have been general statements such as "every event has a cause". Your questions, by contrast, are statements about a particular individual who may or may not be alive. I would have thought that they are far from self-evident. It requires observation or experience to confirm their truth. markf
BarryR,
I have next to me Stansifer’s _The Study of Programming Languages_. I’m familiar with the chapters on Lambda calculus, denotational semantics and Hoare’s axiomatic approaches. All of these are methods of reasoning, none of these begin with an enumeration of self-evident truths. Thus, your statement is false.
Wow. Just wow. Do you mean Stansifer, the person who wrote on programming languages, is your answer to StephenB that there are not self evident truths? Are you a human being? Are you alive? Do you live in the universe? Do you think? All of these are self evident truths. All the way down to you using a keyboard and typing a sentence. Your thinking is so bizarre to me, you apparently think the world and all of its glorious multiplicity is reducible to programming computers. I would surely hope that a book on programming won't mention that there is no use in living any longer; if it did, I'm afraid you'd take that to heart too. Clive Hayden
BarryR, That quote reminds me of the Keats line from Ode on a Grecian Urn: "Beauty is truth, truth beauty, that is all ye know on earth and all ye need to know" zeroseven
CH@246 Bother, I've dropped another closing blockquote above.... sorry.
My point is that you pick and choose which philosophical aspects of StephenB that you’d like to see cited in peer review,
Certainly. Some of his mistakes are worth me correcting personally, some are worth me digging up a citation, and some should best be corrected by StephenB realizing he can't possibly find any support for that claim. Yes, I've run into people who start off conversations by demanding multiple citations. The best way to handle that is to provide them. When they see that tactic isn't working, they stop asking. In my case, I actually go off and read the citation I'm given, so just a handful is enough to slow me down.
It’s your opinion as to what qualifies as needing a citation in a discussion like this.
Certainly. But it's an opinion shared by many educated people. If StephenB is here to express an opinion, he's free to disregard mine. If he's here to change my mind (and I'm here to change his, or at least show him how to argue his point more effectively), then he can take my opinion and learn from it how I can be convinced. Preview seems to think this is ok, here goes.... BarryR
StephenB@235
All reasoning begins with the acknowledgement of self-evident truths.
Refutation of universals is trivial. Or, in more words, you have a bad habit of using words like "all" when staking a claim. This tells me you're not thinking about what you're saying. I have next to me Stansifer's _The Study of Programming Languages_. I'm familiar with the chapters on Lambda calculus, denotational semantics and Hoare's axiomatic approaches. All of these are methods of reasoning, none of these begin with an enumeration of self-evident truths. Thus, your statement is false. (You'll notice this bit of reasoning also failed to begin with self-evident truths, but there's no need to pile on.)
If you don’t believe that, try beginning with something else.
Hmmm.... like this?
In the lambda calculus are a collection of terms built out of variables, function symbols, and other lambda expressions. We suppose we have a countably infinite set of variables ... and an arbitrary set of function symbols... The only additional symbols used in the syntax of lambda expressions are the period, the Greek letter lambda, and the parentheses.
It works for me...
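As a concrete aside, the definitions quoted above are complete enough to sketch in a few lines of Python. This is an illustrative toy of my own, not anything from Stansifer's book: all the names (`Var`, `Lam`, `App`, `substitute`, `reduce_once`) are invented for the example, and the substitution deliberately ignores variable capture, which is safe for the closed example shown.

```python
# A minimal sketch of untyped lambda-calculus terms.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    """A variable, e.g. x."""
    name: str

@dataclass(frozen=True)
class Lam:
    """An abstraction, e.g. (lambda x. body)."""
    param: str
    body: object

@dataclass(frozen=True)
class App:
    """An application, e.g. (func arg)."""
    func: object
    arg: object

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` with `value`.
    (Naive: ignores variable capture, fine for this closed example.)"""
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:  # `name` is shadowed; stop here
            return term
        return Lam(term.param, substitute(term.body, name, value))
    return App(substitute(term.func, name, value),
               substitute(term.arg, name, value))

def reduce_once(term):
    """Perform one beta-reduction step, if the term is a redex."""
    if isinstance(term, App) and isinstance(term.func, Lam):
        return substitute(term.func.body, term.func.param, term.arg)
    return term

# (lambda x. x) y  beta-reduces to  y
identity = Lam("x", Var("x"))
print(reduce_once(App(identity, Var("y"))))  # Var(name='y')
```

The point of the sketch is the one BarryR makes: everything here starts from stipulated definitions of terms and a reduction rule, not from an enumeration of self-evident truths.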
With respect to the problem that some say, “it’s not self-evident to me,” there is a simple answer to that objection. Such individuals are simply afraid or unwilling to acknowledge that which they do, indeed, know.
There's an even simpler answer: you're wrong. Suggestions on how we can go about disambiguating those two solutions?
It is possible to improve one’s understanding of basic logic without abandoning its principles.
That's true and actually profound; I'm also worried that it's a typo. In hopes that it isn't, yes, I agree that in order to understand logic we must be able to step outside of it. Logic isn't terribly useful in intellectual work --- it's far too clumsy. The ideas and most of the refutations come from intuition. Logic is useful for writing the results up afterwards. This is especially true in mathematics. Back when I was getting my Master's, artificial intelligence was all the rage and I took several grad classes in it. There was some work being done on automated theorem proving: give the computer the initial axioms and a theorem, and see if the computer could swizzle around the axioms in such a fashion as to either prove or disprove the theorem. What computer scientists learned from this exercise (and mathematicians already knew) was that this kind of brute-force approach only works with the most trivial problems. The program didn't know how to step outside of the purely logical approach, and so was ultimately ineffective. (The proofs that they did generate were unreadable by anything not a computer, and I think that particular field has been abandoned.)
Once one “discards” the law of non-contradiction, he/she loses his capacity to reason in the abstract.
Hmmm... no, I don't think so. Quantum mechanics would be the standard counterexample, but I should be able to come up with something less esoteric if you can give me a formal definition of what you mean by non-contradiction.
Modern mathematics is founded on laws, as I have pointed out several times.
You have indeed pointed out several context-dependent laws, and I keep asking you things like "what does it mean to raise a graph to an exponent?" and "what does it mean to take the sine of a graph?". So no, you still haven't postulated any universal (context-free) laws. If you did (and postulating is the easy part) you'd still have to demonstrate their universality without recourse to any universal axioms. That's a really tough bootstrapping problem, and the fact that you still can't tell context-sensitive laws from universal laws tells me that solving that problem is well out of your reach. (If you reread the above carefully, you'll notice that I'm reasoning about universal laws without using universal laws. Might be a hint in there....)
All of Euclid’s elements are known to be self evident
What's self-evident about a set of coordinates describing a volumeless space? I don't find this self-evident at all. If you like, you can call me names, but I'm still not going to see this as self-evident, nor will you be able to prove it's self-evident. So as a personal belief, you're welcome to it. You're even free to believe that your personal beliefs are universal. You're not free to believe that you can construct an argument that would make them so.
Wikipedia alternates between rationality and irrationality depending on the subject matter.
I can recommend a textbook if you like.
You are assuming that since disputes about truth exist, that truth itself cannot exist.
Your inability to accurately summarize what I'm saying does get tiresome. If I made a claim that universal truth didn't exist, I have exactly the same problem that you do in claiming it does exist. Instead, I'm claiming that we have no way of accessing universal truth, and even if we did, there's no way it can be shared. Note that this is not claiming it's impossible --- I think that's true, but that's a personal belief. For the tools we have available to us now, though, we can't do it, and that's no great tragedy.
I like your sense of humor. Socrates, Plato, Aristotle, Aquinas, Adler, Maritain, Gilson, Copleston, Kreeft, ….I could fill the entire page.
One will do. Socrates and Plato aren't going to be helpful. Strike Aristotle as well. Aquinas is famous for concluding that logic could not be used to show the existence of God, thus leaving room for faith. So pick one of the others and give me chapter and verse (which is to say that I'll be looking it up, so please provide enough information to do so). As a reminder, you're looking for support of this statement:
As all rational people know, truth is the correspondence of the mind to reality
The reason you're not going to find anything is because philosophers and theologians learn in their freshman year (if not before) when not to use the word "all", and to avoid the implied ad hom in "rational people". But you're welcome to try....
—“It’s a really terrible definition: how do you go about ascertaining if something is true?
It’s true if it corresponds to reality.
Congratulations! All your prejudices are now true. One of the first lessons learned in higher education is that humans suck at perceiving reality. We're tolerably good at basic questions of eating and reproduction. Beyond that, we have common sense, famously defined by Einstein as the collection of prejudices we possess at age 18. Education teaches you to be very, very wary of what you think reality is (especially the self-evident bits). It's almost always wrong (as you're demonstrating at length here).
Unfortunately, he is steeped in Kantian subjectivism and doesn’t understand logic. Why would you want to follow him when there are so many better thinkers around? Begin with Mortimer Adler.
Oh, I have read Adler. If you're not very curious about philosophy, he may be better than nothing. If you are curious, you find you outgrow him in pretty short order.
BarryR
BarryR,
Randolph Smith dedicates an entire chapter to “Documenting your scholarship” in his _Guide to Publishing in Psychology Journals_. He cites Bruner(1942) as follows: a sin one more degree heinous than an incomplete reference is an inaccurate reference; the former will be caught by the editor or printer, whereas the latter will stand in print as an annoyance to future investigators and a monument to the writer’s carelessness.
I mean anything at all being discussed, not research attempting to be published. My point is that you pick and choose which philosophical aspects of StephenB that you'd like to see cited in peer review, but this selection of yours is arbitrary, and you don't have a peer-reviewed citation yourself that guides you in how you make these arbitrary requirements of him needing to provide a citation. It's really just another evasion, a red herring, something to pull out of the ol' box of tricks as an argument maneuver when you do not or cannot take the argument head-on.
I’ll get a citation and learn something without losing face, or everyone else reading the exchange will note the lack of citation and conclude that the author was expressing a personal opinion.
So what if he was? It's your opinion as to what qualifies as needing a citation in a discussion like this. I don't see a citation that speaks for your opinion either. It's an evasion tactic.
I’ve forwarded your question along to the folks at talk.origins. I’m interested to see if they can unearth if science got the practice from the legal profession of if it work the other way ’round.
I can't wait. :) Clive Hayden
CH@231
If you can make whatever I arbitrarily decide to invent to be true depending on how you choose your axioms, then nothing chosen as an example will have any real meaning. If anything is everything else, then it is nothing in particular, and the whole endeavor becomes meaningless because it is vacuous.
Yes, it is a bit disorienting when you realize that there are an infinite number of true statements out there (and an infinite subset of them can't be proven). That's perhaps some of the reason you don't hear philosophers, mathematicians and scientists describing what they do as "looking for truth". If truth is universal, we have no common way of gaining access to it. If truth is local, then everybody can have their own truth. The solution is straightforward. We don't judge our work based on how many truths we find, we judge it on how many *interesting* truths we find (where "interesting" is on a continuum, not a binary property). So what's interesting? Well, starting with G. H. Hardy:
Beauty is the first test: there is no permanent place in the world for ugly mathematics.
Utility is another test (although not one Hardy would have agreed with): there has been a huge body of work on how to do arithmetic on a few dozen binary digits, all because we happen to live in a time where that can be done extraordinarily quickly. Years ago, I summed up my work as "The creation and appreciation of elegance". Elegance is much harder to come by than truth, and I've found it to be far more satisfying. BarryR
CH@228
Can you provide a cite that tells you that everything of any consequence whatsoever has to be cited? Peer-reviewed please.
What an interesting question! This is one of those things that are usually taken for granted --- like spell-checking your manuscript before sending it in for review. But sloppy spelling never wrecked a career, and sloppy citing has, so it's far more important than that. Let's see what I can find. Randolph Smith dedicates an entire chapter to "Documenting your scholarship" in his _Guide to Publishing in Psychology Journals_. He cites Bruner(1942) as follows:
a sin one more degree heinous than an incomplete reference is an inaccurate reference; the former will be caught by the editor or printer, whereas the latter will stand in print as an annoyance to future investigators and a monument to the writer's carelessness.
There are two important ideas encapsulated in the above: "monument" and "future investigators". Taking them in turn: I'm establishing a reputation here each time I post (as are you). By adding citations, I'm communicating to readers that I know enough about the topic to be able to cite the literature, and if they're familiar with the topic themselves they can get a very finely calibrated understanding of the limits of my understanding from my cites. So when you cite Lewis, this tells me volumes about what you've been reading (and haven't been reading). Folks who don't cite anything aren't likely to have read widely or deeply and are probably just expressing an uninformed (though sincerely held) personal opinion. So that's the reputation aspect. Second, it's unlikely that I'll ever have anything original to say about foundational mathematics, but what I can do is point readers to people who have said interesting things. I enjoy it when people do this favor for me, and I like returning the favor. There's a third reason that only really applies to this kind of forum. When I read something as silly as:
As all rational people know, truth is the correspondence of the mind to reality.
I need a polite way of expressing my skepticism. Saying "you're wrong" or "that's silly" isn't helpful. But if I ask for a citation, one of two things will happen: I'll get a citation and learn something without losing face, or everyone else reading the exchange will note the lack of citation and conclude that the author was expressing a personal opinion. I've forwarded your question along to the folks at talk.origins. I'm interested to see if they can unearth if science got the practice from the legal profession or if it worked the other way 'round. If you like, I'll post a summary of the responses here.
I’m still waiting for your response to Chesterton’s argument in Orthodoxy and Lewis’s argument in De Futilitate. You can start any time.
As I said yesterday (although it may have been obscured by the missing closing blockquote) look for that to be posted on talk.origins on Monday. BarryR
F/N: Meet the mind of CSL, here. That old lion yet roars. kairosfocus
F/N: Since there is an attempt to dismiss the connexions, here are the lyrics: ___________________ >>

Redemption Song

Old pirates, yes, they rob I
Sold I to the merchant ships
Minutes after they took I
From the bottomless pit
But my hand was made strong
By the hand of the almighty
We forward in this generation
Triumphantly

Won't you help to sing
These songs of freedom?
'Cause all I ever have
Redemption songs
Redemption songs

Emancipate yourselves from mental slavery
None but ourselves can free our minds
Have no fear for atomic energy
'Cause none of them can stop the time
How long shall they kill our prophets
While we stand aside and look? Ooh
Some say it's just a part of it
We've got to fulfil the book

Won't you help to sing
These songs of freedom?
'Cause all I ever have
Redemption songs
Redemption songs
Redemption songs

Emancipate yourselves from mental slavery
None but ourselves can free our mind
Woh, have no fear for atomic energy
'Cause none of them-a can-a stop-a the time
How long shall they kill our prophets
While we stand aside and look?
Yes, some say it's just a part of it
We've got to fulfil the book

Won't you help to sing
These songs of freedom?
'Cause all I ever had
Redemption songs
All I ever had
Redemption songs
These songs of freedom
Songs of freedom

>> ______________________ Now, let us look at CSL and GKC kairosfocus
PS: The Marley Song is an object lesson to those who would build a middle wall of partition between philosophy, art and history, even theology. All of these are deeply connected in the context of worldviews. "Minutes after they took I from the bottomless pit." PS: tribute, here is Arrow live, maybe a few years ago. kairosfocus
Just for fun: Arrow's 1982 Hot hot hot. Best way to remember a great -- and verrrry nice -- guy. (Funny, Bob Marley used to live across from my street where I grew up. Redemption Song. Telling contrast on national ethos!) "None but ourselves can free our mind" -- a quote from Marcus Garvey. G kairosfocus
PPS: GKC's companion to Orthodoxy, Heretics. I can't wait till we get reasonable cost kiosk printer-binder machines, to read books that make SENSE, thanks to Project Gutenberg. kairosfocus
PS: Onlookers, in a lot of modern mathematics the key proof technique is to deny the proposition one wishes to prove, i. e. switch from P to NOT-P. Then, show that the latter entails a contradiction. By reversing the denial, one is back at P as true. kairosfocus
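The technique described here (assume NOT-P, derive a contradiction, conclude P) has a textbook instance: the classical argument that the square root of 2 is irrational. A standard rendering, offered purely as an illustration and not drawn from any post in this thread:

```latex
\textbf{Claim.} $\sqrt{2}$ is irrational.

\textbf{Proof (by contradiction).} Suppose NOT-P: $\sqrt{2} = p/q$ for
integers $p, q$ with no common factor. Squaring gives $p^2 = 2q^2$, so
$p^2$ is even, hence $p$ is even; write $p = 2k$. Substituting,
$4k^2 = 2q^2$, i.e. $q^2 = 2k^2$, so $q$ is even as well. But then $p$
and $q$ share the factor $2$, contradicting the choice of $p/q$ in
lowest terms. Denying NOT-P, we recover P: $\sqrt{2}$ is irrational.
$\blacksquare$
```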
Stephen Apparently BR does not recognise that a self-evident argument is one where the truth is evident on understanding what is being claimed, on pain of reductio ad absurdum. And the case here of injecting an arbitrary contradiction by equivocating the meaning of + and ending up denying that 2 + 2 = 4 is a capital example. Here is Wiki -- as a hostile witness making an acknowledgement against interests -- on self-evident truth:
In epistemology (theory of knowledge), a self-evident proposition is one that is known to be true by understanding its meaning without proof. Some epistemologists deny that any proposition can be self-evident. For most others, the belief that oneself is conscious is offered as an example of self-evidence. However, one’s belief that someone else is conscious is not epistemically self-evident . . . . A self-evident proposition cannot be denied without knowing that one contradicts oneself (provided one actually understands the proposition). An analytic proposition cannot be denied without a contradiction, but one may fail to know that there is a contradiction because it may be a contradiction that can be found only by a long and abstruse line of logical or mathematical reasoning. Most analytic propositions are very far from self-evident. Similarly, a self-evident proposition need not be analytic: my knowledge that I am conscious is self-evident but not analytic . . . . For those who admit the existence [i.e. reality] of abstract concepts, the class of non-analytic self-evident truths can be regarded as truths of the understanding–truths revealing connections between the meanings of ideas.
Do you think, above, that BR does not know that to say 2 + 2 = 4 AND 2 + 2 = 147 is a contradiction? Or, that it is introduced by willfully equivocating the meaning of +? Hardly likely . . . G kairosfocus
Clive Welcome. I hope that the actual substance of the argument is addressed. Including the issue CSL makes on the astonishing correspondence between reason and the world, which is reflected in the remarks on the "unreasonable" power of mathematics, which is symbolised logic. G kairosfocus
---BarryR: “The argument from self-evidence has the twin virtues of being the second-oldest philosophical argument (the first being “Because I said so”) as well as the weakest. To dismiss this argument, simply state “It’s not self-evident to *me*”, and the line of reasoning need not be considered any further.” All reasoning begins with the acknowledgement of self-evident truths. If you don’t believe that, try beginning with something else. With respect to the problem that some say, “it’s not self-evident to me,” there is a simple answer to that objection. Such individuals are simply afraid or unwilling to acknowledge that which they do, indeed, know. All sane people know that a thing cannot both exist and not exist at the same time. All sane people know that universes do not just pop into existence. ---“There’s another, deeper problem with this approach. Once you declare something to be self-evident, you’ve removed both the incentive and the possibility of going back and improving your understanding of it. In the marketplace,” It is possible to improve one’s understanding of basic logic without abandoning its principles. ---“A different approach allows axioms to be introduced, debated and discarded based on the interests and abilities of the mathematician. The approach not only works, but works spectacularly well.” Once one “discards” the law of non-contradiction, he/she loses his capacity to reason in the abstract. Recall Edmonds’s logical errors with respect to the syllogism. ---“So, what do we lose if we abandon (most) self-evident truths? Other than a fragile, unearned sense of certainty, I can’t think of anything.” I have already explained the cost of abandoning causality. ---“What do we gain? Modern mathematics and everything that makes use of it.” Modern mathematics is founded on laws, as I have pointed out several times. ---“Axioms used to be defined as self-evident, but it wasn’t true then (think Euclid’s parallel postulate) and is a dead issue now.”
All of Euclid’s elements are known to be self evident with the exception of his parallel postulate, which is not self evident and has never been proven or disproven. The elements [axioms] represent the basic foundation for all mathematics and have never been invalidated. Contrary to your assumption, no mathematician may add or delete them at will if he wants his mathematics to be rational. ---“The wikipedia article on axioms isn’t too bad, you might want to take a look.” Wikipedia alternates between rationality and irrationality depending on the subject matter. ---“Theological and metaphysical truths are at variance with *each* *other*, You may think you’ve caught hold of the Actual Truth (and you may even think it is self-evident), but there are a couple billion people who are just as certain that they’re self-evidently right and you’re self-evidently wrong.” You are assuming that since disputes about truth exist, that truth itself cannot exist. On the contrary, the principle should be clear. If one religion is in conflict with another, then either one or both of those religions are in error. If science is at war with logic, then either science or logic is misleading us. Only if there is one unified truth with many aspects can there be any rationality. [If, for example, there is no law of causation, [a philosophical, not a scientific, truth] then there may be some effects that occur without causes. That would mean that science, which is a search for causes, is dead.] ----“This kind of thinking usually comes out of a very immature faith: God is perfect, thus X; if not-X, then God does not exist. [Law of causality] No, it comes from an understanding that science is a search for causes, and if we cannot know for sure that everything is caused, then we cannot know that anything is caused. I have already made that clear.
---“As you said, the idea of causation is a philosophical idea, not a scientific one, and as such it will make no nevermind to science however it ends up being decided.” The metaphysical principles of right reason underlie all science. Science cannot move one step without them. If you doubt this, try evaluating evidence without them. [Edmonds doesn’t even define the truth that he attempts to discuss.] ---“I think it was “truth in general is context-dependent”. That is not a definition. [As all rational people know, truth is the correspondence of the mind to reality.] ---“Can you give a cite to a rational person who has published something to that effect? Peer-reviewed, please.” I like your sense of humor. Socrates, Plato, Aristotle, Aquinas, Adler, Maritain, Gilson, Copleston, Kreeft, ….I could fill the entire page. ---“It’s a really terrible definition: how do you go about ascertaining if something is true?” It’s true if it corresponds to reality. It’s the only rational definition that has ever been offered. Do you know of a better one? ---“Or real?” Reason and evidence ---“Since this is just your mind, how do you communicate this to others?” Truth doesn’t just involve the mind; it involves the mind’s relationship to reality. If there was no reality to know, there would be no truth to obtain. How do I communicate truth to others? I tell them. ----“As to Edmonds, I think he can be fairly summarized as “True statements are statements that are correct in a particular context.” Yes, that is not a bad summary of his position. However, there is no definition of truth there. As I have already made clear, Edmonds does not reason very well. Unfortunately, he is steeped in Kantian subjectivism and doesn’t understand logic. Why would you want to follow him when there are so many better thinkers around? Begin with Mortimer Adler. StephenB
Clive: You have of course identified why it is that we must purge contradictions from our thought life. If A and NOT-A can be both true in the same time and sense of A, then meaning itself collapses into confusion and chaos. For meaning rests on distinctions. And when, for instance some would cite quantum behaviour as a claimed denial of non-contradiction, they find themselves having to implicitly assume what they wish to dismiss. GEM of TKI kairosfocus
kairosfocus, Thanks for those links. Clive Hayden
PPS: Orthodoxy and De Futilitate (excerpt on logic here). kairosfocus
BarryR, I used 2+2=147 just because it came to mind. I could've just as easily used 2+2=9,235, or 2+2=0, or whatever else comes to mind. If you can make whatever I arbitrarily decide to invent to be true depending on how you choose your axioms, then nothing chosen as an example will have any real meaning. If anything is everything else, then it is nothing in particular, and the whole endeavor becomes meaningless because it is vacuous. Clive Hayden
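The equivocation at issue can be made concrete with a short, purely illustrative Python sketch (the class name `WeirdPlus` is invented for the example): redefining what the `+` symbol does produces a different operator, not a new fact about ordinary addition.

```python
# Sketch: "making" 2 + 2 equal 147 requires silently swapping in a
# different operator behind the familiar + symbol.
class WeirdPlus(int):
    def __add__(self, other):
        # an arbitrary redefinition of the + symbol
        return 147

# With the redefined symbol, "2 + 2" yields 147...
assert WeirdPlus(2) + WeirdPlus(2) == 147

# ...but ordinary integer addition is untouched: only the symbol's
# meaning was changed, which is the equivocation.
assert int(WeirdPlus(2)) + int(WeirdPlus(2)) == 4
```

Both assertions hold at once precisely because `+` is being used in two different senses, which is the point being argued: no contradiction in ordinary arithmetic has been exhibited.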
Onlookers: Clive is of course dead right to remind us that the real issue is the balance of a matter on its merits of fact and reasoning [ultimately traceable to first principles], not whether or not today's neo-magisterium has deigned to award its imprimatur of approval. So, when we see above, a claim like . . .
[MF, 209:] I see the following “laws” which Barry has shown can be false given a change of axioms:
#103: 2 + 2 = 4; a + b + c = c + b + a; it is not possible to divide by zero.
#126: a + b gives a unique result.
. . . that tells me a lot about just how far wrong we have gone. As has been repeatedly pointed out above [e.g. cf 187 - 191], MF, the notion that 2 + 2 = 4 can be shown "wrong" by shifting "axioms" in midstream is a gross confusion rooted in the blatant error of substituting a novel redefinition of the "+" operator; creating an unnecessary and blatant contradiction where + is now held to at once mean two different things that deny one another. One more time, onlookers, let us get the old box of matches, and group sticks in pairs:
A: { | | } { | | }
B: Regroup by combining, pushing them together:
C: { | | | | }
Step B operationally exemplifies -- and we can state a formal definition -- the addition operator, in its fundamental form. Using standard symbols we have known since our early days at school: 2 + 2 = 4, QED.

This result, once we understand the meaning of the numerals and symbols used based on our living in our common world as conscious, reasoning and understanding creatures, states something that is true, and which must be true, on pain of absurdity. Something that most of us knew full well by the time we were 5 years old.

The + operator is indeed extended to other cases, but -- for excellent reason -- that is done in a way that preserves its core meaning instead of contradicting it without warning; worse yet is insisting on sticking to an embedded contradiction in our reasoning, in the teeth of correction. Indeed, this side-track debate above shows what happens when we deal with the blatant rejection of first principles of right reason, starting with things like non-contradiction. (I still point out, as at 208 above, that the side tracks serve to allow non-discussion of a key issue that some desperately do not wish to discuss.)

And when it comes to the proverbial live donkeys kicking the dead lions Lewis and Chesterton, let it suffice to note that they have failed to actually engage the matter on the merits, again; just as they would kick rhetorical sand in our eyes to confuse us about the self-evident truth that 2 + 2 = 4. Remember that, onlookers: "2 + 2 = ?" has now been reduced to a relativist morass of confusion. Let that be the final demonstration of the absurdity of today's self-referentially absurd evolutionary materialism and its radically relativist ultra/post-modernism, and let that show the fatal error being made by those who willingly go along with the pronouncements of the new materialist magisterium.

And sorry: when correction on the basic merits is refused again and again, on a point this blatantly plain, that is a sign that something is deeply wrong.
In unfortunately now necessarily plain words: Reductio ad absurdum, backed up by evident willful obtuseness, disguised with distractive rhetoric. A sadly, utterly telling outcome. If one is trapped in a hole, the first step to escape is to stop digging in deeper. GEM of TKI PS: O/T F/N: Alphonsus, "Arrow" Cassell -- Montserrat's leading Soca entertainer, and a world figure [the song "Hot Hot Hot" is his trademark] -- passed away from cancer today. Condolences to all fans, friends and family. I am going to miss our little exchanges in his Manshop. kairosfocus
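[Editor's aside, not part of the exchange: the matchstick regrouping in steps A-C of the comment above can be mirrored in a few lines of code. This is purely an illustrative sketch; the variable names are mine.]

```python
# Hypothetical sketch: modelling the matchstick regrouping (steps A-C above).
# Two groups of two sticks are pushed together, and the combined pile is counted.
group_a = ['|', '|']           # A: first pair of sticks
group_b = ['|', '|']           # A: second pair of sticks
combined = group_a + group_b   # B: regroup by combining, pushing them together
print(len(combined))           # C: counting the pile prints 4
```

The point of the demonstration is that the operational meaning of "+" here is fixed by the act of combining and counting, not by a choice made after the fact.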
markf,
I think you also took part in this discussion did you not Clive?
Of course, and I was amused as well. Clive Hayden
BarryR,
Can you give a cite to a rational person who has published something to that effect? Peer-reviewed, please.
Can you provide a cite that tells you that everything of any consequence whatsoever has to be cited? Peer-reviewed please. And please cite something peer reviewed and published that claims that anything published should be peer-reviewed please. And please cite something peer reviewed and published which says that only things peer reviewed and published should be discussed. And please cite something peer reviewed and published that says you should have a discussion at all. And please cite something peer reviewed and published which tells you that you should look to them for guidance in the first place in all endeavors of life. Presumably this would include a peer-reviewed and published work which tells you to trust them in all things to begin with, before you can make sense of how to proceed in any discussion, that is, what you should look to for peer review and what you shouldn't in a discussion (i.e. X should be peer-reviewed but Y doesn't necessarily have to be). And please provide the citation that tells you when it's appropriate to provide a citation. Peer-reviewed please. I'm still waiting for your response to Chesterton's argument in Orthodoxy and Lewis's argument in De Futilitate. You can start any time. And actually, if you would please, make an argument in your own words, that would be great, for that would tell me whether you actually understand A) What you're arguing against, and B) What your argument itself really is. I don't want vague referrals to other people's works. I want to know how BarryR responds. Clive Hayden
markf @221: If you read the comments carefully, you will discover that it was not the participation but rather the lack of intellectual curiosity of the participants that was being questioned. StephenB
#221 I am also amused by this side discussion about the credentials of C.S. Lewis from pedants I think you also took part in this discussion did you not Clive? markf
@222 Whoops, dropped a closing blockquote. Sorry. BarryR
markf@209
(A) “the addition and multiplication functions on real numbers are commutative and associative” Assuming that is what you meant, then these laws are dependent on the axioms of the arithmetic of the natural numbers. I am not a mathematician so I may be wrong about the detail, but I believe the appropriate axioms are the Peano axioms with the addition of the definitions of the addition and multiplication functions. It needs a better mathematician than me to prove exactly how (A) follows from these axioms – perhaps BarryR can oblige.
Lambda calculus will allow you to derive both addition and its commutative and associative properties using a tiny set of simple axioms, but you will be insanely bored by the time you get there. If you're curious and a masochist, the best presentation I've seen is chapter 7 of Ryan Stansifer's _The Study of Programming Languages_. Before you master what lambda calculus does, you have to learn both the formal and informal notation, and for mere mortals it's incredibly opaque. Stansifer is very good at providing BNF grammars so that, if you don't have a life, you can unambiguously pick apart how each of the theorems is derived. Just to give you a taste, let lambda be "&"...
Let f be the lambda expression &z.6. Then Y f reduces as follows:

(&h.(&x.h(xx))(&x.h(xx)))f
(&x.h(xx))(&x.h(xx))[h:=f]
(&x.f(xx))(&x.f(xx))
f(xx)[x:=&x.f(xx)]
f((&x.f(xx))(&x.f(xx)))
(&z.6)((&x.f(xx))(&x.f(xx)))
6[z:=(&x.f(xx))(&x.f(xx))]
6

Notice that we chose to reduce the outermost redex when there was a choice. The result is 6: exactly what we would expect the fixed point of the constant function f to be.
Foundational mathematics is much more fun to think about than to work through. BarryR
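[Editor's aside, not part of the exchange: the reduction above can be checked mechanically. In a strictly-evaluated language the plain Y combinator loops forever, so this sketch uses the eta-expanded variant usually called Z; the code and names are mine, not Stansifer's.]

```python
# Hypothetical sketch: the Z combinator, a strict-evaluation variant of Y.
# Z = &h.(&x.h(&v.xxv))(&x.h(&v.xxv)) in the thread's "&" notation.
Z = lambda h: (lambda x: h(lambda v: x(x)(v)))(lambda x: h(lambda v: x(x)(v)))

# The thread's f = &z.6 is a constant function; its fixed point is 6.
const6 = lambda z: 6
print(Z(const6))  # 6, matching the hand reduction above

# A less trivial fixed point: factorial defined without explicit recursion.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

The eta-expansion (wrapping `x(x)` as `lambda v: x(x)(v)`) delays evaluation just enough to avoid the infinite regress that the unguarded Y combinator produces under Python's eager evaluation.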
StephenB@219 Thanks for reading the paper. I appreciate it.
In effect, Edmonds doesn’t acknowledge the reality of self-evident truths.
The argument from self-evidence has the twin virtues of being the second-oldest philosophical argument (the first being "Because I said so") as well as the weakest. To dismiss this argument, simply state "It's not self-evident to *me*", and the line of reasoning need not be considered any further. There's another, deeper problem with this approach. Once you declare something to be self-evident, you've removed both the incentive and the possibility of going back and improving your understanding of it. In the marketplace of ideas, a different approach allows axioms to be introduced, debated and discarded based on the interests and abilities of the mathematician. The approach not only works, but works spectacularly well. So, what do we lose if we abandon (most) self-evident truths? Other than a fragile, unearned sense of certainty, I can't think of anything. What do we gain? Modern mathematics and everything that makes use of it.
Thus, he doesn’t even consider the role that self-evident truths play in the field of mathematics, reducing all starting points to “axioms.”
Axioms used to be defined as self-evident, but it wasn't true then (think Euclid's parallel postulate) and is a dead issue now. The wikipedia article on axioms isn't too bad, you might want to take a look.
If theological and metaphysical truths are at variance with scientific truths and mathematical truths, we would be living in a cosmic madhouse, and there would be no warrant for trying to reason with anyone about anything.
Ummm.... Theological and metaphysical truths are at variance with *each* *other*. You may think you've caught hold of the Actual Truth (and you may even think it is self-evident), but there are a couple billion people who are just as certain that they're self-evidently right and you're self-evidently wrong. In contrast, science and math are remarkably catholic (in the original sense). It comes from having a neutral moderator pass judgment on our claims (that moderator being reality).
If, for example, there is no law of causation, [a philosophical not a scientific truth] then there may be some effects that occur without causes. That would mean that science, which is a search for causes, is dead.
This kind of thinking usually comes out of a very immature faith: God is perfect, thus X; if not-X, then God does not exist. It amuses God to generate a consistent stream of not-X's. As you said, the idea of causation is a philosophical idea, not a scientific one, and as such it will make no nevermind to science however it ends up being decided. For scientific ideas, exceptions are simply incorporated into the theory until we come up with a more general theory that accounts for them.
Indeed, Edmonds doesn’t even define the truth that he attempts to discuss.
I think it was "truth in general is context-dependent".
As all rational people know, truth is the correspondence of the mind to reality.
Can you give a cite to a rational person who has published something to that effect? Peer-reviewed, please. It's a really terrible definition: how do you go about ascertaining if something is true? Or real? Since this is just your mind, how do you communicate this to others? As to Edmonds, I think he can be fairly summarized as "True statements are statements that are correct in a particular context."
G.K. Chesterton, who may well have been the greatest writer of the twentieth century
No. BarryR
Don—A professor, a lecturer or a Fellow. I guess I take this for granted, and am baffled I have to explain this.
I don't understand your bafflement. The title of professor means something different in England from what it means in the U.S. It took a number of people to dig up the fact that in England, professor means department head. Petrushka
StephenB,
I am also amused by this side discussion about the credentials of C.S. Lewis from pedants who have never read him and probably wouldn’t do so even on a bet. In order to have something to talk about, his critics should continue to attack him personally because they clearly have no answers to his arguments, of which they are almost always ignorant.
Isn't that the truth. Clive Hayden
Petrushka,
His field seems to have been English rather than Philosophy, except for six months when he was a substitute.
You mean for a year, when another philosophy professor (tutor) went on leave and asked that Lewis take his place.
Don---A professor, a lecturer or a Fellow.
I guess I take this for granted, and am baffled I have to explain this. Clive Hayden
---BarryR: "What did you think of section 3.3 of Edmonds’ paper?" Because I respond to so many people on so many threads, I tend to go straight to central issues and abbreviate my answers. Occasionally, I will stretch out with a long post, but most people can get the point if I distill it to its simplest essence, as will be the case here. Edmonds' approach is just one of the many postmodernist/subjectivist approaches to philosophy informed by, dare I say, contaminated by, the unfounded skepticism of Immanuel Kant. Among other things, he assumes, like Kant, that analytic and synthetic reasoning are our only tools for obtaining truth. To better understand Edmonds’ confusion, send your search engine in the direction of “Little Errors in the Beginning,” by Mortimer Adler. In effect, Edmonds doesn’t acknowledge the reality of self-evident truths. Because Edmonds does not acknowledge the starting point for all thinking, he immediately sinks into intellectual quicksand. Thus, he doesn’t even consider the role that self-evident truths play in the field of mathematics, reducing all starting points to “axioms.” On the matter of logic and the role of the syllogism, he appears not even to understand the basic distinction between a valid argument and a sound argument, indicating surprise that a false major premise would produce an unsound conclusion. Even though he articulates a reasonable objection to his philosophy [that truth must be unified], he appears not to understand WHY it is the case that only a unified truth can be true. If theological and metaphysical truths are at variance with scientific truths and mathematical truths, we would be living in a cosmic madhouse, and there would be no warrant for trying to reason with anyone about anything. If, for example, there is no law of causation [a philosophical, not a scientific, truth], then there may be some effects that occur without causes. That would mean that science, which is a search for causes, is dead.
If some effects can occur without causes, there would be no way of knowing which effects were caused and which ones were not. Metaphysics illuminates science; science does not illuminate metaphysics. Indeed, Edmonds doesn’t even define the truth that he attempts to discuss. As all rational people know, truth is the correspondence of the mind to reality. If that correspondence doesn’t take place, truth has not been apprehended. What is Edmonds' definition of truth? If he had one, this would certainly have been the place to disclose it. Many, not all, of Edmonds' assertions qualify as old errors with new labels, which is why I so strongly recommend the work of G.K. Chesterton, who may well have been the greatest writer of the twentieth century. I feel no hesitancy in saying that you will not be refuting him, especially in the presence of those of us who have really read and understood him. The only way a secularist/naturalist/skeptic can deal with Chesterton is to ignore him. I am also amused by this side discussion about the credentials of C.S. Lewis from pedants who have never read him and probably wouldn't do so even on a bet. In order to have something to talk about, his critics continue to attack him personally because they clearly have no answers to his arguments, of which they are almost always ignorant. StephenB
#213 CY - thanks for a balanced and helpful comment. It may come down to a difference between UK and US interpretation of the word "professor". I didn't realise that in the USA a professor is anyone who does research and teaching. In the UK you have to be quite a distinguished academic to get that title. In the unlikely event that anyone is interested I did some more research. The Wikipedia article is misleading about the role of tutor and fellow at Oxford. This link is rather wordy but pretty much settles the issue: http://www.exeter.ox.ac.uk/documents/alumni/publications/rectors-and-fellows.pdf To summarise what is in it: at the time there were two types of fellows - tutorial fellows and ordinary fellows. Tutorial fellows had some tutorial and lecturing responsibilities. Ordinary fellows were usually researchers but included, for example, the rector. So some fellows were tutors and some were not. However, clearly you could also be a tutor and not a fellow because Lewis was not a fellow of University College when he was a tutor there. I think it is high time to draw a line under this! markf
markf, It also appears quite remarkable that a lowly tutor from Oxford should all of a sudden have a professorship created for him at Cambridge in 1954. Clearly Lewis' academic credentials had been well established by that time. CannuckianYankee
Clive, markf, others It's difficult to get reliable information online regarding teaching positions at Oxford in the 1920s. What I was able to find is that Oxford combines "fellow" with "tutor" as one who oversees the teaching of a particular subject (in Lewis' case English) to a group or individual, through lectures or individual study. http://en.wikipedia.org/wiki/Tutor "In the University of Oxford, the colleges fuse pastoral and academic care into the single office of Fellow and Tutor, also known as a CUF Lecturer." So apparently at Oxford "tutor" and "fellow" are the same thing. Also, CUF lecturers are members of the faculty; not merely teaching assistants. Lewis was elected as a Fellow in English in 1925 at Magdalen College, Oxford. I wasn't able to find information regarding professorships specifically at Oxford; however, the title "Professor" was usually given only to the chair of a department, and not to all fellows within a particular department. If you simply take a look at the lists of faculty at Oxford (on their website), you will see that they maintain this structure even today. The department chairs are listed as "Professor," while the fellows are listed as "Dr.," "Mr." or "Mrs." Thus, for 29 years, Lewis was a Fellow; teaching, doing research and publishing, similar to what is expected of a professor at any college or university here in the U.S. Then in 1954 he became a Professor of Medieval and Renaissance English at Magdalene College, Cambridge; that is, he was distinguished as a chair of a department, and not merely as a fellow. In fact, that position was created for him that year. It's important not to misconstrue terms here. What is meant by "Professor" in 21st Century American academia is far different from what was meant in early 20th Century English academia. Clearly Lewis was not simply a private tutor of English all those 29 years at Oxford.
In fact, Lewis was quite influential as a fellow; publishing, establishing the "Inklings," broadcasting lectures on radio, etc. The issue of whether Lewis was a philosopher is a quite different matter. Clearly Lewis' published works were largely concerned with philosophical issues, yet Lewis did not consider himself a philosopher. CannuckianYankee
The following is taken from an email exchange with Stephen Redburn (Cantab/St. Edmund's). I wrote him asking for clarification on a few Oxbridge differences that continue to baffle me:

Lecturer: Employed by a Faculty rather than a college in Oxbridge. Usually needs a PhD but some of the old types when I was up 20 years ago still regarded it as a beastly German invention and just had MAs. I don't think you'd get a job without a PhD now though. Usually a member of a College, but doesn't have to be, in Cambridge. (They say it is rather lonely to be in Cambridge and not be in a college.) Expected to lecture and publish papers.

Fellow: On the governing board of a College (or Professional Institute like the Institute of Civil Engineers etc. outside academic life). Divided into levels. The top ones are permanent and have full voting rights and attend the management meetings, but I am sure that at St. Ed's they all attended the monthly meeting.

Professor: In the UK it is the head of the department or equivalent. The old UK system had only one professor but commonly now they have several in a large department. If the job was funded by the Royal Family who put money into a pot to pay their salary it is called a Regius professor. Regius Professorships are "Royal" Professorships at the universities of Cambridge, Oxford, Dublin, Glasgow, Aberdeen and Edinburgh and each appointment save those at Dublin is approved by the Crown.
http://gmmalliet.weebly.com/oxbridge.html Petrushka
I find it a bit odd, but the official Oxford Glossary doesn't spend any time distinguishing among Don, Tutor, Reader, Professor, Fellow, Lecturer. His field seems to have been English rather than Philosophy, except for six months when he was a substitute. Petrushka
http://www.ox.ac.uk/about_the_university/introducing_oxford/oxford_glossary/index.html Petrushka
#211 Clive This is a rather trivial debate but I am not busy at the moment. I am going to come straight out and say I don't believe you. I see you have changed from "tutor" to "fellow" (the two are different, Lewis was a tutor in philosophy at University College for a year, but not a fellow, then became both fellow and tutor in English at Magdalen) but let's go with fellow. Nowadays a fellowship at Oxford certainly doesn't require you to be a professor. I very much doubt the rules were different in the 1920s. In addition I cannot find any biography of CS Lewis that mentions him being a professor until 1955. Many of them use terms such as "important teaching post". If he had been a professor surely they would have said so? If you can provide a reference to prove your point please do and I will retire shamefaced from this trivial debate. markf
markf,
Sorry but that is not true (I speak as a Cambridge graduate which uses the same tutorial system). A tutor is not a formal position. It just means someone who gives tutorials. It can be of any academic rank. Mine were mostly graduates taking their PhD. CS Lewis did not get a professorship until he was 57 and that was not in Philosophy.
Sorry, but that is wrong. Being a Fellow of Magdalen College, Oxford, which Lewis was for 29 years, was a professorship when Lewis had it. I'm not interested how it's done today at Cambridge. Clive Hayden
Barry,
Yep. It lasted a year. He never went back to it. Never took a graduate degree in the field, never published any original research in the field, never held a permanent position in the field and never was elected to a professorship in the field. Based on the above, you want to label him a professor of philosophy. You’re free to do so, but you’re not free to have other people take you seriously.
You said, initially, that he didn't have training in philosophy; now it's that he wasn't a professor of philosophy long enough to suit you. Odd that you move the goal posts.... considering Peter Kreeft, Ph.D., who is a professor of philosophy at Boston College, Victor Reppert, Ph.D., an American philosopher, Gary Habermas, Ph.D., professor of philosophy at Liberty, Dr. William Dembski, who has two Ph.D.s, and Anthony Matteo, Ph.D., professor of philosophy at Elizabethtown College, just off the top of my head, consider his works important in philosophy. That's peer review. Indeed, all of these men have endorsed certain portions of Lewis's philosophy. But BarryR, who almost got a minor in philosophy, doesn't think Lewis was a philosopher. You're right about one thing: Lewis didn't want to be a professional philosopher, but that doesn't mean he wasn't trained in it. Now we've spent too many comments on me just addressing your ad hominem instead of why Lewis's argument in De Futilitate was wrong. I've yet to see an actual argument.
If you feel I’m being uncivil, then tell me. If I’ve offended you, then tell me. You just might get an apology.
Yes you have been, very jeering and condescending. So.............I'm still waiting as to why Chesterton was wrong and why Lewis was wrong. You can start any time. Clive Hayden
Stephenb #194 (cont) Final comment for this morning .... (I must be mad) Barry has only made the claim, he has not provided any support for that claim. Glancing through the comments above I see the following "laws" which Barry has shown can be false given a change of axioms: #103 2+2=4 a+b+c = c+b+a It is not possible to divide by zero #126 a+b gives a unique result #157 Law of exponents #166 The law of limits with respect to calculus. No doubt you will disagree with his proof in each case but you can hardly say he has not provided any support. In some cases he has written extensively and provided references. On what axioms do the laws of association and accumulation depend? Please be specific, name the axiom or axioms on which they depend and explain the relationship between the two. .... That should read [commutative] and associative laws. This is a bit confusing - as far as I know "commutative" and "associative" are properties which functions may or may not have. A law would have to be something like - (A) "the addition and multiplication functions on real numbers are commutative and associative" Assuming that is what you meant, then these laws are dependent on the axioms of the arithmetic of the natural numbers. I am not a mathematician so I may be wrong about the detail, but I believe the appropriate axioms are the Peano axioms with the addition of the definitions of the addition and multiplication functions. It needs a better mathematician than me to prove exactly how (A) follows from these axioms - perhaps BarryR can oblige. markf
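[Editor's aside, not part of the exchange: markf's appeal to the Peano axioms can be made concrete. Below is a hypothetical sketch encoding zero, the successor function, and the standard recursive definition of addition; 2 + 2 = 4 then falls out of the definitions. The encoding and names are mine.]

```python
# Hypothetical sketch: Peano-style addition, with numbers built from
# zero ('Z') and the successor constructor ('S', n).
ZERO = 'Z'

def succ(n):
    return ('S', n)

def add(a, b):
    # Peano recursion:  a + 0 = a ;  a + S(b) = S(a + b)
    if b == ZERO:
        return a
    return succ(add(a, b[1]))

TWO = succ(succ(ZERO))
FOUR = succ(succ(succ(succ(ZERO))))
print(add(TWO, TWO) == FOUR)  # True: 2 + 2 = 4 follows from the definitions
```

Proving that this `add` is commutative and associative for all numerals takes induction rather than code, but spot-checks are easy: `add(TWO, succ(ZERO))` and `add(succ(ZERO), TWO)` evaluate to the same term.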
Onlookers: It is often quite revealing to look back at the original post, when a thread goes ever so determinedly off the rails. For, that is telling us just what it is that some of those who come here to comment are ever so desperate not to discuss; nor are they willing to allow others to discuss in peace. Here is Prof Sewell, as excerpted from this thread's original post: ___________________ >> For the layman, it is the last step in evolution that is the most difficult to explain. You may be able to convince him that natural selection can explain the appearance of complicated robots, who walk the Earth and write books and build computers, but you will have a harder time convincing him that a mechanical process such as natural selection could cause those robots to become conscious. Human consciousness is in fact the biggest problem of all for Darwinism, but it is hard to say anything “scientific” about consciousness, since we don’t really know what it is, so it is also perhaps the least discussed. Nevertheless, one way to appreciate the problem it poses for Darwinism or any other mechanical theory of evolution is to ask the question: is it possible that computers will someday experience consciousness? If you believe that a mechanical process such as natural selection could have produced consciousness once, it seems you can’t say it could never happen again, and it might happen faster now, with intelligent designers helping this time. In fact, most Darwinists probably do believe it could and will happen—not because they have a higher opinion of computers than I do: everyone knows that in their most impressive displays of “intelligence,” computers are just doing exactly what they are told to do, nothing more or less.
They believe it will happen because they have a lower opinion of humans: they simply dumb down the definition of consciousness, and say that if a computer can pass a “Turing test,” and fool a human at the keyboard in the next room into thinking he is chatting with another human, then the computer has to be considered to be intelligent, or conscious . . . >> ____________________ A very good issue and challenge: our most direct and central experience is of being unified selves, aware, thinking, perceiving, having a sense of conscience, etc. Where does that come from, and can we have a credible evolutionary materialistic account of its origin and character? It should come as no surprise to learn that the evolutionary materialist paradigm has no good answer for this. In wiki's -- testimony against known interest -- summary:
The term [hard problem of consciousness] . . . refers to the difficult problem of explaining why we have qualitative phenomenal experiences. [I.e. we experience consciously aware life from inside; e.g. we experience the rich redness of a nice dress on a lovely lady] It is contrasted with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomen[a]. Hard problems are distinct from this set because they "persist even when the performance of all the relevant functions is explained."
Translating: we cannot simply reduce it on evo mat assumptions, and so it is "hard." (The idea that just perhaps, the real answer is that -- echoing Plato -- Soul is older than what is hard, heavy, hot or cold, dry or wet, is simply put out of mind.) Ilion, in no 5, is apt:
A huge part of the problem (perhaps the entirety of it?) is that trying to “say exactly what consciousness is” generally involves mechanistic/materialistic eliminative reductionism … which is to say, “explaining” consciousness by explaining it away. Consciousness is itself; it’s not made out of something else.
In other words, we are again meeting the a priori materialism that ever so often presents itself as THE "scientific" view, never mind that the very materialists themselves directly experience what it is to be ensouled creatures, and never mind that when they try to explain mind on such premises, they end up resorting to chance and mechanical necessity as the only relevant causal factors, thus running into any number of absurdities: reducing ground-consequent to physical causal chains and derived accidents of culture and personal life history ever so soon becomes plainly self-referentially absurd. In this context, distractive tangents are unsurprising. (And to see the attempt to dismiss Lewis' argument by undermining the man instead of addressing the issues on the merits, or to dismiss the self-evident truth that 2 + 2 = 4 by injecting a self-contradictory equivocation into the meaning of "+", is all too aptly illustrative of the problem.) So far, the evidence is on the side of mind being its own self, and the cause of the physical cosmos we, as minded creatures, observe. GEM of TKI kairosfocus
BarryR #198 Likewise pleased to meet you. I am not sure how I could link to you on usenet or Google Groups given the only thing I know about you is your Wordpress ID. My e-mail is: mark(dot)t(dot)frank(at)gmail.com Cheers markf
#203 cont To be fair I would say that CS Lewis was trained in philosophy. I understand his degree was in Greats which includes the philosophy of the Greek philosophers and he got a triple first. He was also taken seriously enough to at least have the famous debate with GE Anscombe. However, as far as I know, he never had a philosophical paper published in an academic journal and his name never came up in 3 years of studying philosophy. So, given that there is only time to read a limited number of philosophers in a lifetime - I am unlikely to get beyond Narnia as far as he is concerned. markf
markf@203
Lewis did not get a professorship until he was 57 and that was not in Philosophy.
Ah, that was at Cambridge, right?
In June, Lewis accepted the Chair of Medieval and Renaissance Literature at Cambridge.
I had missed that. BarryR
CF@202
Do you not know that a tutor is a professor at Oxford?
No, I didn't know that. The Girlfriend spent a year over there while she was getting her physics degree. She reports the correct title is "Don". These days they tend to have Ph.Ds., but standards were a bit more lax when Lewis was there. Do you have a cite to the contrary?
Lewis was hired in place of another professor, E.F. Carritt, while he was gone for a year, and guess what, E.F. Carritt was also a “tutor”, which means professor.
I taught several semesters for professors while I was getting my Master's degree. It's a pretty common setup. That didn't make me a professor.
Did you have a chance to look up what is entailed in the Greats (Literae Humaniores) as a degree at Oxford?
I cited it back at 155. Glad to see you're getting caught up. You'll notice no dissertation is required, nor did he do sufficient (any?) original research afterwards to be considered for an English professorship, much less a philosophy position.
“His first job at Oxford, then was teaching philosophy”
Yep. It lasted a year. He never went back to it. Never took a graduate degree in the field, never published any original research in the field, never held a permanent position in the field and never was elected to a professorship in the field. Based on the above, you want to label him a professor of philosophy. You're free to do so, but you're not free to have other people take you seriously. (Now you've got me wondering --- there were formal programs in philosophy at the time, right? If Lewis really did want to be a philosopher, why wasn't he enrolled in one of those instead of one of the Western Classic programs? The "Greats" are fine for everything up to, what?, Augustine? Did he ever take any classes covering philosophy after 1700? I honestly don't know.)
I see these reasons as cop-outs.
I see my posts getting delayed for moderation with no explanation. I see that a post mentioning this and apologizing for the delay got killed. I don't know if this is you or someone else, but this isn't how adults behave. If you feel I'm being uncivil, then tell me. If I've offended you, then tell me. You just might get an apology. BarryR
#202 Do you not know that a tutor is a professor at Oxford? Sorry but that is not true (I speak as a Cambridge graduate; Cambridge uses the same tutorial system). A tutor is not a formal position. It just means someone who gives tutorials. It can be of any academic rank. Mine were mostly graduates taking their PhD. CS Lewis did not get a professorship until he was 57, and that was not in Philosophy. markf
BarryR,
Lewis was hired as a tutor of English in Magdalen college. Later…
Do you not know that a tutor is a professor at Oxford? Lewis was hired in place of another professor, E.F. Carritt, while he was gone for a year, and guess what, E.F. Carritt was also a "tutor", which means professor. Please read page 15 of C. S. Lewis as Philosopher. Did you have a chance to look up what is entailed in the Greats (Literae Humaniores) as a degree at Oxford?
There’s a bit of a literary trope here that I don’t think you’re familiar with. Using “X as Y” in the title indicates that the author understands that X is not considered to be a Y, and is making the proposition that it might be educational to see what happens if we made that assumption.
It would've been so painless to have just read the introduction to the book that I linked for you, in which case you would see that yours is not an argument. The authors are dead serious in taking Lewis as a philosopher. Please, save me the correcting, I really mean this, and read, at the very least, the intro to the book. It will save us a lot of time. "His first job at Oxford, then was teaching philosophy" and "It was only when he received his position as Fellow of Magdalen College in 1925, his first permanent job [at age 27], that his career direction as a literary scholar was set. But even then, part of the reason he may have gotten the job was Magdalen was looking for someone who could teach both philosophy and English" page 15. I know you had almost enough credits to get a minor in philosophy (a minor is usually about 26 credits), but the Greats at Oxford is a four year degree in itself. Essentially Lewis got three degrees at once and got Firsts (the highest grades possible) in all three. I know, I know, you almost got a minor in philosophy, and I'm sure you could translate the Iliad at 14 like Lewis did. Let's put this into perspective: Lewis could actually speak Attic Greek, not just read it and write it. He also knew Medieval Italian, Latin, French, German, Anglo-Saxon and Old Icelandic, as well as being an English scholar. You might be interested in a book of his called Studies in Words, in which he traces the different usages of words throughout time and throughout different languages and explains their usage in that manner, which gives insight into the ancients' worldview and their philosophy.
Putting that to one side, Lewis isn’t wrong because he’s not a philosopher, he’s wrong because he’s making several mistakes that actual philosophical training would have prevented.
How so?
I’m afraid my answer may contravene the moderation policy, so I’ll have to forbear.
The policy is just being civil, you don't have to be uncivilized to explain to me how Lewis made mistakes a trained philosopher wouldn't have made.
I’m happy to give you an answer via private email or in a non-moderated public forum. I’m a regular at talk.origins, so if you like I can start a thread over there. I can give you the Chesterton critique there as well.
I see these reasons as cop-outs. You're welcome to write to me at my email, but it better not be uncivilized there either. And if it isn't, I see no reason why we cannot continue here. Clive Hayden
#194 You are putting the cart before the horse. We know the universe in part, because the laws of mathematics allow us to provide a quantitative analysis. If we could know the universe and its laws in advance of applying mathematical laws, then the application would be redundant. You are assuming as given the very knowledge that can only be obtained through the laws of mathematics It is true that the use of mathematics allows us to provide a quantitative analysis because some axiomatic systems work well, and in that sense it helps us get to understand the universe even better. I meant know in the sense of "se connaitre" rather than "savoir". I should rephrase my paragraph as: "It is hardly surprising that the mathematical systems we know best describe aspects of the universe we deal with regularly. That is because we were motivated/inspired to devise mathematical systems that describe the universe we deal with." The point is that the fact that some mathematics describes some aspects of the universe very closely does not prove that mathematical laws are in some sense necessarily true except in the trivial sense that they follow from their axioms. markf
StephanB@196 What did you think of section 3.3 of Edmond's paper? BarryR
markf@193
That just isn’t true. Euclidean geometry is only an approximation to any real world we have met – see above.
Most mathematicians believe the real world is an approximation of Euclidean geometry. ;-)
The consequences of axioms never change. The consequences of the rules of chess never change. The choice of axioms is continually changing to meet the demands of the real world or just to satisfy the curiosity of mathematicians. The rules of chess sometimes change to give us a different game.
I can't improve on that. BarryR
markf@186 As to interesting questions:
1) Does Euclidean geometry have any special status relative to other geometries? 2) Are the results discoveries or inventions? 3) Do the laws of mathematics refer to some abstract objects – numbers, circles, triangles etc – which in some sense exist to be discovered? Here I suspect I differ from BarryR. I think he proposes that there are infinitely many systems of such abstract objects including one for each possible geometry. He just doesn’t give any such system any special status. I believe they are just rules for manipulating symbols much as the rules of chess are (unless of course you believe in an abstract world of pawns, kings and bishops etc).
It's a pleasure to make your acquaintance. You might want to drop by talk.origins (via google groups if you don't have a usenet feed) and say hello. My introduction to these questions came via Davis and Hersh's _The Mathematical Experience_. It's not a book about math, it's a book about how math is created / performed / discovered. To say it's the best introduction to serious mathematics for a general audience implies that there exist other such books, and outside of the A Very Short Introduction series I don't know of any. For a quick sampling, check out their definition of the ideal mathematician. I'm just going to address one of your points here:
I think he proposes that there are infinitely many systems of such abstract objects including one for each possible geometry. He just doesn’t give any such system any special status.
That's an excellent summary of the Platonic approach, and you're arguing instead for a more formalist approach. I know intelligent people who hold each view (the professional mathematicians are uniformly Platonists and the Computer Scientists are uniformly Formalists, I tend to swing back and forth). With either approach, though, there's a problem of how to give special status to one over another --- the axiomatic systems that result in 2+2=4 get far more use than those where they don't. I find utility to be a simple enough explanation: we focus on math that maps well onto the world around us and (usually) ignore the math that doesn't. So while I consider all axiomatic systems to be epistemically and ontologically equal, when it comes to relevance, we pick our favorites. G. H. Hardy argued that beauty also plays a role:
There are masses of chess-players in every civilized country—in Russia, almost the whole educated population; and every chess-player can recognize and appreciate a ‘beautiful’ game or problem. Yet a chess problem is simply an exercise in pure mathematics (a game not entirely, since psychology also plays a part), and everyone who calls a problem ‘beautiful’ is applauding mathematical beauty, even if it is a beauty of a comparatively lowly kind. Chess problems are the hymn-tunes of mathematics.
From A Mathematician's Apology Figuring out that there are no universal axioms isn't that difficult --- I managed to pick this up in 8th grade, but then I had an exceptionally good geometry teacher. Deciding whether or not math reflects a different reality or is just symbol manipulation is ultimately unanswerable, and most of the time I don't think it matters. Figuring out where beauty lies in mathematics, though --- that's a hard problem worth solving. BarryR
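The axiom-dependence point can be made concrete with a minimal Python sketch (an illustration only; the function names are mine): the expression "2 + 2" evaluates to 4 in ordinary integer arithmetic but to 1 in arithmetic mod 3, because the two systems start from different rules.

```python
def add_integers(a, b):
    """Addition in the ordinary integers."""
    return a + b

def add_mod(a, b, n):
    """Addition in Z/nZ, the integers modulo n."""
    return (a + b) % n

print(add_integers(2, 2))  # 4 in the integers
print(add_mod(2, 2, 3))    # 1 in arithmetic mod 3
```

Both systems are internally consistent; which one is relevant depends, per the utility argument above, on what you are trying to model.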
CH@170
C. S. Lewis as Philosopher by David Baggett, Gary R. Habermas, and Jerry L. Walls
Interesting. Have you read Baggett's other book, Harry Potter and Philosophy: If Aristotle Ran Hogwarts? There's a bit of a literary trope here that I don't think you're familiar with. Using "X as Y" in the title indicates that the author understands that X is not considered to be a Y, and is making the proposition that it might be educational to see what happens if we made that assumption. Other examples: Paul Radin, Primitive Man as Philosopher Robert J. Yanal, Hitchcock as philosopher Anton Pannekoek, Lenin as philosopher and let's not forget Caleb Sprague Henry, Satan as moral philosopher I'd say Baggett is well aware that Lewis is not considered a philosopher.
As I said before, by 26 he was a philosophy professor at Oxford.
I took to heart your suggestion of consulting a biography of Lewis. You are wrong on both counts. Lewis was hired as a tutor of English in Magdalen college. Later...
The number of students reading English at Oxford vastly increased after the war. Whereas during the war Jack [as C. S. Lewis was known to his friends] had easily been able to tutor all Magdalen English students as well as some from other colleges, spending one hour alone with each student every week, he now found that he needed help. It soon became clear to his friends that his work load was excessive. They suggested that a professorship would be less demanding than tutoring. When a Merton professorship of modern English literature became available, Tolkien, who was a Merton Professor of English and, therefore, an elector, rallied to Jack's support, hoping to achieve what had been a prewar ambition, of having Jack and himself installed as the two Merton Professors. He suggested this to his fellow electors, H.W. Garrod, C.H. Wilkinson, and Helen Darbishire and found to his surprise that all three were against Jack. They felt he had not produced a sufficient amount of scholarly work, pointing out that his most successful books were three novels and some popular religious or theological books. They thought his election would lower the status of the professorship and even discredit the English School. Jack's former tutor, F.P. Wilson, was eventually elected to the professorship.
From George Sayer and Lyle W. Dorsett's _Jack: A Life of C. S. Lewis_, pp. 328-9. He wasn't a professor of philosophy. He wasn't even a professor of English. Occasionally you'll see him referred to as "Professor" by someone outside the college, but this was only an honorific. You could have looked that up yourself, you know.
How about reading it in the original Greek and taking in all of the metaphysics and philosophy at once?
Thank you for reminding me. The first tutorial I took in the Honors College was to have been a survey of Theater Criticism, but I started off with some unkind words about Aristotle's Poetics and Dr. Bill had the good sense to throw out the syllabus and spend an entire semester on that tiny, tiny book. There were only three of us in a tiny, windowless room in the basement of the TComm building. We'd read a sentence, I'd immediately jump in trying to understand it, Bill would try to rein me in, and then the other student, Toni, would reliably interject something profound after we'd gone back and forth for a while. Sometimes we'd make it through a chapter, sometimes we'd spend the entire class period on a single word (yes, in the original Greek). After 17 years of college education across Theater, English, Political Science, Philosophy, Math, CompSci and Biology, that remains my favorite class. It taught me how to read, and I made two lifelong friends as well (Toni even flew across the country to attend my Ph.D. defense.) (It's funny, though, that I considered it to be a theater class instead of a philosophy class when I was thinking about the philosophy classes I had taken.) That being said, translations are fine if you're reading outside of a university and useful when you're reading inside them.
You’re not trained in philosophy or literature.
My undergraduate focus was on dramaturgy, so I'll cop to the literature end. I think I had enough philosophy classes to have earned a minor, but that would have required a modicum of paperwork and I've never been good at paperwork. I suppose I could ask if you have any training in either, but I don't think there's any need.
Putting that to one side, Lewis isn’t wrong because he’s not a philosopher, he’s wrong because he’s making several mistakes that actual philosophical training would have prevented.
How so?
I'm afraid my answer may contravene the moderation policy, so I'll have to forbear. I'm happy to give you an answer via private email or in a non-moderated public forum. I'm a regular at talk.origins, so if you like I can start a thread over there. I can give you the Chesterton critique there as well. BarryR
---BarryR @179 "Before you waste your time, I certainly believe there are mathematical laws; they're just context-dependent." [Which is the argument I have been making all along. Plane geometry has its laws; spherical geometry has its laws; arithmetic has its laws etc. That is why I objected to your use of the word universal] BarryR @86: Math and logic. There are no "laws of math and logic". There are axioms and ways of relating axioms to each other, but you're free to choose your axioms and your relations to suit whatever it is you're trying to accomplish [Which is the argument you made at the beginning and which you morphed into the previous paragraph.] BarryR pre-interaction: "THERE ARE NO MATHEMATICAL LAWS." BarryR post-interaction: "I CERTAINLY BELIEVE THERE ARE MATHEMATICAL LAWS." Obviously, I disagree with your first position and agree with your second position. Now, let's begin to discuss the laws of logic that you claim do not exist. StephenB
@194: Excuse me. That should read [commutative] and associative laws. StephenB
---markf: "It is hardly surprising that the mathematical systems we know best describe aspects of the universe we know very well. That is because we were motivated/inspired to devise mathematical systems that describe the universe we know well." You are putting the cart before the horse. We know the universe in part, because the laws of mathematics allow us to provide a quantitative analysis. If we could know the universe and its laws in advance of applying mathematical laws, then the application would be redundant. You are assuming as given the very knowledge that can only be obtained through the laws of mathematics. Similarly, if we could evaluate evidence in the absence of logical laws, we wouldn't need logic. Reason's rules inform evidence, and mathematical rules inform science. ---"BarryR has shown for several examples of mathematical law that you have suggested that they are dependent on certain axioms." Barry has only made the claim, he has not provided any support for that claim. On what axioms do the laws of association and accumulation depend? Please be specific, name the axiom or axioms on which they depend and explain the relationship between the two. StephenB
#190 In some cases the same law was discovered by two people, neither of whom was aware of the other's work. This is only possible if both were discovering a truth that was independent of their capacity to simply conceive a set of axioms for the sake of argument. Two people might independently discover an inevitable consequence of the rules of chess. However, the rules of chess were clearly conceived by men. It is no coincidence that the ordered universe, which is law-like, is synchronized with the laws of mathematics. It is hardly surprising that the mathematical systems we know best describe aspects of the universe we know very well. That is because we were motivated/inspired to devise mathematical systems that describe the universe we know well. Nevertheless we keep on stumbling across aspects of the universe which are not modelled well by current maths. In some cases we devise new maths - non-Euclidean geometry. In other cases we are currently stumped - fluid dynamics. If one establishes a mere rule, as in chess, one has simply set down a certain set of conditions that does not necessarily have any relationship with the real world. There is nothing in the laws of chess that can quantify what happens in ordered nature. On the other hand, the laws of mathematics faithfully reflect what goes on in the world that they measure. That just isn't true. Euclidean geometry is only an approximation to any real world we have met - see above. They never fail and they never change. A rule could change overnight and, for that reason, it would be unreliable as a means of measurement. The consequences of axioms never change. The consequences of the rules of chess never change. The choice of axioms is continually changing to meet the demands of the real world or just to satisfy the curiosity of mathematicians. The rules of chess sometimes change to give us a different game.
In keeping with that point, Chess is simply a game like football or basketball, though a bit more cerebral. There is no such thing as a "law" of baseball. — Not sure about baseball but welcome to the laws of soccer. Also, don't forget that mathematicians themselves call these laws by their proper name. Snell's law, for example, is not called Snell's axiom or Snell's assumption or Snell's proposition. These laws do not depend on some broader axiomatic formulation. If they did, they would not be laws. They would simply be derivatives of the mathematician's imagination. Given the link to the laws of football I don't think we should place too much emphasis on whether they are called laws, axioms, lemmas, results or whatever. I am not quite sure why you introduced Snell's law, which is a law of physics not maths and the result of experimental observation rather than deduction. BarryR has shown for several examples of mathematical law that you have suggested that they are dependent on certain axioms. markf
BarryR, C. S. Lewis as Philosopher by David Baggett, Gary R. Habermas, and Jerry L. Walls should be added to your embarrassing amount of books. I don't think you're really appreciating what it means to get a first in Greats at Oxford. As I said before, by 26 he was a philosophy professor at Oxford.
Interesting. I appear to have had more training in philosophy than C. S. Lewis. (Enough classes to qualify for a minor during my undergrad as well as several graduate classes in logic when I was getting my Masters.)
How so?
Putting that to one side, Lewis isn’t wrong because he’s not a philosopher, he’s wrong because he’s making several mistakes that actual philosophical training would have prevented.
How so?
Part of my training was in learning how to spot those kinds of errors. That’s part of the difference between reading Plato as Literature and reading Plato as Philosophy.
How about reading it in the original Greek and taking in all of the metaphysics and philosophy at once? You're not trained in philosophy or literature. I find it frankly irrational to consider that you can know what was in Lewis's mind while he read Plato and every other philosopher that he studied.
Tell you what: remove the moderation block and I will go through and give you a point by point critique of Chesterton. If I’m going to put in the time to do this, I want some assurance that the post will actually show up here.
It'll show up as long as it's not against the moderation policy. Clive Hayden
PPS: Wiki actually goes into some of the chaos, by raising the scene in 1984 in which the party is imagined as saying 2 + 2 = 5: ________________________ >> The phrase "two plus two equals five" ("2 + 2 = 5") is a slogan used in George Orwell's Nineteen Eighty-Four[1] as an example of an obviously false dogma one must believe, similar to other obviously false slogans by the Party in the novel. It is contrasted with the phrase "two plus two makes four", the obvious – but politically inexpedient – truth. Orwell's protagonist, Winston Smith, uses the phrase to wonder if the State might declare "two plus two equals five" as a fact; he ponders whether, if everybody believes in it, does that make it true? Smith writes, "Freedom is the freedom to say that two plus two make four. If that is granted, all else follows." Later in the novel, Smith attempts to use doublethink to teach himself that the statement "2 + 2 = 5" is true, or at least as true as any other answer one could come up with. Eventually, while undergoing electroshock torture, Winston declared that he saw five fingers when in fact he only saw four ("Four, five, six - in all honesty I don't know"). The Inner Party interrogator of thought-criminals, O'Brien, says of the mathematically false statement that control over physical reality is unimportant; so long as one controls their own perceptions to what the Party wills, then any corporeal act is possible, in accordance with the principles of doublethink ("Sometimes they are five. Sometimes they are three. Sometimes they are all of them at once"). >> __________________________ Are we SURE it isn't "really" 1984? Just like, we could simply decide that the new millennium began in 2000 not 2001. (In Jamaica, that famous iconoclast, Mutty Perkins, was heard: one coco, two cocos [= taro etc], . . . highlighting that since there was no year of zero, the 2,000th year marked the close of the second Christian Millennium as counted on the conventional timeline.) 
How vital is the understanding that Aristotle (and Plato before him) bequeathed to us:
The truth says of what is, that it is; and of what is not, that it is not.
And, sirs, truth is the real target of the mind-benders. Inconvenient but self-evident or objective truth. In that sad cause, they would have the year forever be 1984 . . . but there cometh a day in the which we shall all have to reckon with Him who is Truth and Reason Himself. kairosfocus
---markf: "Are the results discoveries or inventions? I don't care too much what we call them." Don't you think that if two things are radically different they should go by different names? If someone discovers a law, it means that he/she has uncovered an unchanging truth. In some cases the same law was discovered by two people, neither of whom was aware of the other's work. This is only possible if both were discovering a truth that was independent of their capacity to simply conceive a set of axioms for the sake of argument. It is no coincidence that the ordered universe, which is law-like, is synchronized with the laws of mathematics. If one establishes a mere rule, as in chess, one has simply set down a certain set of conditions that does not necessarily have any relationship with the real world. There is nothing in the laws of chess that can quantify what happens in ordered nature. On the other hand, the laws of mathematics faithfully reflect what goes on in the world that they measure. They never fail and they never change. A rule could change overnight and, for that reason, it would be unreliable as a means of measurement. ---"Consider the consequences of the rules of chess – for example it is true, given the rules, that you cannot mate your opponent if all you have left is a bishop and your king. Is this a discovery or an invention – given that Chess is an invention and you could always alter the rules to make it possible to mate with a bishop?" The rules of chess are not laws. They are not necessarily unchangeable, nor are they synchronized with a lawfully ordered universe in such a way that each makes sense in light of the other. By contrast, the laws of math confirm the laws of science and the laws of science confirm the laws of math. That is one of the main reasons that the universe is a rational place. Lawful mathematics tells us something about the law-like ordering of the universe.
If mathematics had no laws, it would be undependable as a means to measure nature. Mathematics may develop, but its laws do not change. In keeping with that point, Chess is simply a game like football or basketball, though a bit more cerebral. There is no such thing as a "law" of baseball. ---"Who cares?" Evidently, both you and I care a great deal about the subject matter. There is a truth which needs to be acknowledged: We have rational minds, we live in a rational universe, and there is a correspondence between the two. Also, don't forget that mathematicians themselves call these laws by their proper name. Snell's law, for example, is not called Snell's axiom or Snell's assumption or Snell's proposition. These laws do not depend on some broader axiomatic formulation. If they did, they would not be laws. They would simply be derivatives of the mathematician's imagination. StephenB
PS: Maybe we can cite Wiki, as that ever so reliably materialistic reference will at least set a baseline: ___________________ >>Addition is a mathematical operation that represents combining collections of objects together into a larger collection. It is signified by the plus sign (+). For example, in the picture on the right, there are 3 + 2 apples—meaning three apples and two other apples—which is the same as five apples. Therefore, 3 + 2 = 5. Besides counts of fruit, addition can also represent combining other physical and abstract quantities using different kinds of numbers: negative numbers, fractions, irrational numbers, vectors, decimals and more. Addition follows several important patterns. It is commutative, meaning that order does not matter, and it is associative, meaning that when one adds more than two numbers, order in which addition is performed does not matter (see Summation). Repeated addition of 1 is the same as counting; addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. All of these rules can be proven, starting with the addition of natural numbers and generalizing up through the real numbers and beyond. General binary operations that continue these patterns are studied in abstract algebra. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some animals. In primary education, children learn to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day . . . . Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. 
Then a + b is defined as N (A U B) . . . >> ___________________ To then impose an extraneous and contradictory "definition" on what is plain and simple, to make a rhetorical point and thus to derail a serious discussion, is plainly inexcusable. But then, these days ever so much of that is going on, as our civilisation descends -- at the hands of the materialists and their fellow travellers -- into the chaos Plato warned of in the Laws, Bk X; 2300 years ago. kairosfocus
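The set-union definition quoted above can be checked directly in a few lines of Python (a sketch under the quoted definition; the example sets are my own):

```python
# For disjoint sets A and B with N(A) = a and N(B) = b,
# a + b is defined as N(A U B), the cardinality of the union.
A = {"apple1", "apple2", "apple3"}
B = {"apple4", "apple5"}
assert A.isdisjoint(B)  # the definition requires disjoint sets
print(len(A), "+", len(B), "=", len(A | B))  # 3 + 2 = 5
```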
Onlookers: The above abundantly illustrates what happens when the law of non-contradiction is wantonly disregarded, and equivocations are injected willy-nilly into our reasoning structure. Ever-deepening confusion and absurdity. A sad picture. And ever so revealing. Reductio ad absurdum. On steroids. In fact, the addition operator was introduced precisely to express what happens with the cardinality of the resulting set when disjoint sets of identifiable items are joined in a union. Once that is done and each of the original two sets has cardinality 2, then the cardinality of the resulting unified set is and must be 4, on pain of absurdity. But, that is simply a fancy way of describing what happens when you:
1 --> Get a match box with enough matches in it. 2 --> Take out two pairs, and set them up in two clusters 3 --> Push the clusters together to form a single unified cluster.
What we keep seeing above is the attempt to inject an equivocation in the symbol for the addition binary operation, and the consequential contradictions and absurdities that follow. Now, you will see that I have confined myself to natural numbers, as that is all we need to show that there are objectively verifiable self-evident, truths in mathematics; that thus have universal character. Now, we may then EXTEND the addition operator to other cases, as we expand the entities we are addressing, but in so doing we must be careful to be consistent with the original definition. So, we speak of adding fractions [and remember a decimal often has a fractional component; the place value notation being a misleadingly "simple" way of expressing that]; reals, negatives, and complex numbers. When we start to get into vectors, matrices and further constructs, the properties change and one has to be careful. But the parallelogram law of vector addition is equivalent to reduction on base vectors and addition of coefficients of the base vectors, e.g. the famous i, j, k vectors of simple mathematics. And complex numbers are a special case of vectors. All such extensions rest on the confident acceptance of the universal validity of the already existing case of the addition of simple wholes. So, here we are as we watch the evolutionary materialistic, selectively hyperskeptical, radically relativistic view in action: intelligent, highly educated adherents are trying to argue against the legitimacy of the fact that 2 + 2 = 4 is self evidently and undeniably true once we understand the meaning of the symbols [which of course is a contextual thing, but the reality the symbols capture is beyond the particular symbols we are using], something that is instantly accessible to the conscious reasoning mind, by simply playing with a box of matches. Utterly and ever so sadly revealing! GEM of TKI kairosfocus
Cabal:
Nobody in his right mind would ever claim that two apples plus two apples mysteriously might reappear as onehundredandfortyseven (147) apples.
Precisely. That is why we define the operations, numerals and symbols to capture these facts of reality. Two match sticks each, in two groups, joined together will -- and must -- give four. 2 + 2 = 4 summarises this. What was done above was to inject an arbitrary redefinition of +, and pretend that this changes the truth of 2 + 2 = 4. But to try to do that simply results in a cluster of patent contradictions. And, the same reasoning extends to plane geometry and the like, as well as to trigonometry. When we set out axiomatic systems to integrate such facts that we discover and substantiate as necessary truths, then we see that axiomatic systems are necessarily incomplete if coherent, and incoherent if complete. Worse, there is no way to set up a system of axioms known to be consistent. (Thence the concept of undecidable propositions relative to given systems of axioms.) But such integrative schemes account for the facts of mathematics; it is not that the facts depend on them. GEM of TKI kairosfocus
#179, #180 Stephenb There is a lot of terminology floating around here which is in danger of turning this into a discussion of semantics. I suggest ditching the words "universal" and "absolute". I would not deny that there are mathematical results, some of which are sufficiently general to be called laws, which flow from agreed sets of axioms. These results always hold if the axioms hold. So pi is always going to be 3.14... given the axioms of Euclidean geometry. I hope you would also accept that any given set of axioms may match the material world to a greater or lesser extent. Euclidean geometry is a good approximation for relatively small areas of the earth's surface - pretty poor for large scale maps. The interesting questions include: 1) Does Euclidean geometry have any special status relative to other geometries? (If not, then we can only say the ratio of the circumference to the diameter is sometimes 3.14... and sometimes not, depending on conditions.) 2) Are the results discoveries or inventions? I don't care too much what we call them. Consider the consequences of the rules of chess - for example it is true, given the rules, that you cannot mate your opponent if all you have left is a bishop and your king. Is this a discovery or an invention - given that chess is an invention and you could always alter the rules to make it possible to mate with a bishop? Who cares? 3) Do the laws of mathematics refer to some abstract objects - numbers, circles, triangles etc - which in some sense exist to be discovered? Here I suspect I differ from BarryR. I think he proposes that there are infinitely many systems of such abstract objects, including one for each possible geometry. He just doesn't give any such system any special status. I believe they are just rules for manipulating symbols, much as the rules of chess are (unless of course you believe in an abstract world of pawns, kings and bishops etc). markf
[Pi is a universal constant in Euclidean space.] ---markf: "That is true – but it only holds where the axioms of Euclidean space hold." It is a law for which there are no exceptions. ---"In non-Euclidean spaces the ratio of the circumference to diameter of a circle can be almost anything." Non-Euclidean geometry is a different subject matter, which is one more reason why the word absolute applies and the word universal is problematic. To the extent that the laws of non-Euclidean geometry have been discovered, they are as absolute and unchangeable as the laws of Euclidean geometry. Why would anyone expect the laws of plane geometry to be similar to the laws of spherical geometry? That, by the way, is why BarryR injected the word "universal" into the discussion: to shift the discussion away from the absolute, unchanging laws in the various mathematical disciplines and onto the trivial and obvious point that no universal law can speak to all disciplines at the same time. Hence, my insistence on the word absolute. ---"The real question is whether there is anything special or absolute about the statements of Euclidean geometry as opposed to any other." Obviously, mathematical laws are absolute. Do you know of any that have been found to be untrue? They certainly work in the real world, where they are applied every day. Do you know of any instances in which a bridge collapsed because the laws of trigonometry failed to hold up? StephenB
Cabal@181 Pi is both irrational (it cannot be expressed as a ratio of two integers) and transcendental (it is not a root of a non-constant polynomial equation with rational coefficients). Different systems handle this differently. Constructivists hold that the only accepted proof of a numerical object is the construction of that object, and so while approximations to irrationals can be constructed, there is no such construction of the irrationals themselves. This ends up giving you an arbitrarily tight bound on pi, but you can't get to pi itself. And, of course, pi won't show up in any number system that can't handle irrationals (like the mathematics that describes how your computer operates --- that uses a very loose approximation). Also, any system that is sufficiently simple to escape Gödel's incompleteness theorem probably isn't complex enough to construct irrationals, but that's just an intuition. BarryR
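A minimal sketch of what that "arbitrarily tight bound" might look like, assuming nothing beyond the alternating Leibniz series 4·(1 - 1/3 + 1/5 - ...), whose consecutive partial sums land on opposite sides of pi:

```python
# Consecutive partial sums of the alternating Leibniz series
# 4*(1 - 1/3 + 1/5 - ...) bracket pi, so each adjacent pair
# of sums gives a rigorous interval containing pi.
def pi_bounds(terms):
    s = 0.0
    for k in range(terms):
        s += (-1) ** k * 4.0 / (2 * k + 1)
    # Subtract the final term to recover the previous partial sum.
    prev = s - (-1) ** (terms - 1) * 4.0 / (2 * terms - 1)
    return min(prev, s), max(prev, s)

lo, hi = pi_bounds(100000)
# The interval shrinks as terms grows, but no finite number of
# terms ever collapses it to pi itself.
```

Taking more terms tightens the interval indefinitely without ever reaching pi, which is the constructivist's point in a nutshell.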
---markf: "The presence of rules and deductions called “laws” is surely not the issue. I accept that there are laws of chess. I just don’t think they apply outside of chess." You are making the same mistake that BarryR makes. There are no laws of chess, only rules. Laws are discovered; rules are established. As I have already indicated with my brief reference to the law of sines/cosines, the laws of mathematics were discovered and are unchangeable. The word "absolute" covers that better than the word "universal," though the latter term would also apply. Few people, however, would think to use that word, which is why I objected to BarryR's attempt to inject it into the discussion for purposes of measuring its frequency of use over the internet--as if that means anything. He, like you, is not just making an error in interpretation. You are both wrong about the facts in evidence. StephenB
#177 I guess the odd symbol was pi? The key point here comes from vividbleau's comment: Pi is a universal constant in Euclidean space. That is true - but it only holds where the axioms of Euclidean space hold. In non-Euclidean spaces the ratio of the circumference to diameter of a circle can be almost anything. The real question is whether there is anything special or absolute about the statements of Euclidean geometry as opposed to any other. markf
Being an onlooker, I suppose I am entitled to make a comment here. It seems to me that much of kf's argument is about counting discrete objects. Nobody in his right mind would ever claim that two apples plus two apples mysteriously might reappear as onehundredandfortyseven (147) apples. That's a feat of biblical proportions. The question of π is another matter; we are dealing with a particular relationship that cannot be expressed as a whole number. I'd have to study the subject first, but I suspect nobody knows whether an exact value can be found this side of an infinite number of decimals. Hope the last sentence makes sense. Cabal
#175 Inasmuch as I said nothing about “universal” mathematical laws, I can’t imagine what you mean? I said that there are mathematical laws, and you said that there are none. That is the issue on the table. If a mathematical law is not universal then presumably it does not apply at all times and all places (or do you mean something else by "universal"?). If all you are claiming is that there are some mathematical laws that are true under some conditions, I don't think Barry or I would disagree. We already discussed that 2+2=4 when applied to apples. The presence of rules and deductions called "laws" is surely not the issue. I accept that there are laws of chess. I just don't think they apply outside of chess. markf
StephenB@176
Inasmuch as I said nothing about “universal” mathematical laws
That's my formulation, yes. I thought you disagreed. Is that correct?
I said that there are mathematical laws, and you said that there are none. That is the issue on the table.
I'm pretty sure you still don't understand my actual position, but I don't think there's anything to be gained by repeating it.
You claim that your authors agree with you, but when I ask you to show me when and where, you opt out.
I think I showed you that, for these laws being universal and all, nobody seems to consider them worth writing about. But since you insist: Bruce Edmonds, What if /all/ truth is context-dependent, draft, 2001.
Pure mathematics aspires to a world of its own. It is concerned with what can be formally proven given certain structures, assumptions, etc. For example, given Peano's axioms for arithmetic, the standard notation and some standard logical inference operations, one can prove the statement "1+1=2". Does this not mean that "1+1=2" is a universal truth, devoid of context? I would argue not.
I don't hold out any great hope that you're going to read the paper. Nor do I hold out any great hope that you're going to be able to supply any citations for your side, much less supply any universal / objective mathematical laws.
On the other hand, I can certainly cite a number of authors and mathematicians who agree that there are, indeed, a number of mathematical laws.
Before you waste your time, I certainly believe there are mathematical laws; they're just context-dependent. I'm pretty sure you don't understand the distinction, and frankly I don't know how to explain it to you.
general LAW of sines
How does the law of sines apply to graph theory? I don't think it does. If you're working with a geometric system, sure, then within that system you can describe a general law. But only within that system. I'm sorry you've put so much time into this and still don't understand what's at issue.
Do you dispute any of this?
I don't dispute that you've found yet another context-dependent law. They're rather thick on the ground, you know. I will dispute that these are context-independent. Tell you what: After you've read Edmonds let me know and we can discuss his formulation, and I'll refrain from any other replies to you until you do so. BarryR
kairosfocus@174 You've demonstrated how addition holds when mapped to discrete objects that remain discrete under addition. As that doesn't even describe all physical systems, I'm not clear why you think this is universal. BarryR
vividbleau@172 Well done. That's the most mathematically sophisticated reply I've gotten in this thread. (No, that's not sarcasm.) But as you say:
Pi is a universal constant in Euclidean space.
I'd go so far as to say pi will be found in any system that supports transcendental numbers. But as not all systems support transcendental numbers, I wouldn't consider pi to be a universal constant. Still, as candidates go, it was much better than "exponentiation". BarryR
---BarryR: "I’d like to think you didn’t make up the idea of universal mathematical laws. Can you give me the citation where you first heard of it?" Inasmuch as I said nothing about "universal" mathematical laws, I can't imagine what you mean? I said that there are mathematical laws, and you said that there are none. That is the issue on the table. You claim that your authors agree with you, but when I ask you to show me when and where, you opt out. Why cite them if you cannot show how they support your claim? On the other hand, I can certainly cite a number of authors and mathematicians who agree that there are, indeed, a number of mathematical laws. Just to make sure that I understand you, are you saying, for example, that the law of sines/cosines is not a law in spite of the fact that its discoverers called it a law and that all mathematicians that I know of would call it a law? According to Wikipedia, "The spherical LAW of sines was discovered in the 10th century. It is variously attributed to al-Khujandi, Abul Wafa Bozjani, Nasir al-Din al-Tusi and Abu Nasr Mansur. Al-Jayyani's The book of unknown arcs of a sphere in the 11th century introduced the general LAW of sines." "The plane LAW of sines was later described in the 13th century by Nasir al-Din al-Tusi. In his On the Sector Figure, he stated the LAW of sines for plane and spherical triangles, and provided proofs for this LAW." Do you dispute any of this? Frankly, I don't understand how you can continue on with this easily refuted proposition of yours. StephenB
PS: Re Vivid, Pi is the ratio of the circumference of a circle to its diameter. That this is a fixed number has not changed, nor will it change, not so long as circles remain circles, similar to the traces we make by whirling a compass. This is a brute fact of reality, and it is certainly an unchanging mathematical truth. We may not be able to specify exactly pi in any system of fractions, but that does not change the reality that a circle has certain key properties. kairosfocus
Onlookers: You will observe how RB has again, sadly, evaded the issue. We deal with a real world, where we can collect objects and group them -- go get a full box of matches for illustration. Cluster matches like so: { | | } { | | } Recluster, by pushing together: { |||| } You have just physically instantiated the arithmetic binary operation + for two two-sets [symbolised by 2], yielding a four-set, symbol 4. 2 + 2 = 4 Based on the meaning of 2, +, = and 4 as conventional symbols [which can vary, just make sure you are using well thought out symbols], this is not just inductively true, nor is it simply an arbitrary game of symbols in a system that rests on ultimately circular tautologies. No, instead we are here again up against the reality of self-evident truth. That is, given our experience of the world as conscious and reasoning creatures, providing we have a good understanding of 2, +, = and 4 [all of which inextricably interact in their meaning], it does not just happen to be so that 2 + 2 = 4, but it MUST be so. On pain -- pardon, but this is part of what a self-evident truth is -- of immediate descent into obvious absurdities. In the case where someone attempted to redefine that 2 + 2 = 147, there are a great many absurdities involved. Starting with arbitrary redefinition of symbols used in an evasive equivocation. And, going on to the situation where we now see + being used in at least two different ways without any means to distinguish them, i.e. communication and reason are beginning to break down. Hardly less significant, 147 is a notation: 1 x 100 + 4 x 10 + 7 x 1. See how the original meaning of 4, which includes properties like 4 = 2 + 2, or 1 + 3, or 1 + 1 + 1 + 1, has resurfaced? [If, in attempting to deny or dismiss a point you find you are implicitly assuming and using it, then that is a strong sign that you are in error.]
Axiomatic systems in mathematics may be constructed, but the facts that they address are real, and in the case of the sort of fairly simple cases like basic arithmetic, are often self-evident. GEM of TKI kairosfocus
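The match-stick clustering and the place-value reading of 147 above can both be checked mechanically; a small sketch (the set element names are made up for illustration):

```python
# Two disjoint two-element sets, "pushed together" as a union,
# physically instantiate 2 + 2 = 4.
group_a = {"match1", "match2"}
group_b = {"match3", "match4"}
joined = group_a | group_b
assert len(joined) == 4          # 2 + 2 = 4

# "147" is positional notation, which already presupposes the
# ordinary meanings of addition and multiplication:
assert 1 * 100 + 4 * 10 + 7 * 1 == 147
```

Note that the second assertion uses the ordinary "+" to unpack "147", which is the resurfacing-of-the-original-meaning point made above.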
typo I know vividbleau
RE 168 "But nothing listing what these universal mathematical laws might be." Pi is a universal constant in Euclidean space. Vivid vividbleau
CH@170
Of course your cats and my dog have consciousness, and know things about the universe, isn’t that obvious?
I sure thought it was. Granville Sewell (who authored the article to which we're responding) doesn't find it obvious at all. For him, human consciousness is special, not just further along on a continuum of consciousness. I'm glad we've found something to agree on.
You can choose axioms depending on what you’re trying to accomplish, as you can choose material for building a house, neither one contradicts or suspends the laws that govern them, such as basic arithmetic and physics.
We may finally be converging. I'm free to build a house out of diet coke cans --- it won't be useful, but it can certainly be done. I'm also free to choose axioms such that 2+2=147 --- also not very useful, but neither is it invalid. Are we in agreement?
I was referring to your false dilemma of “different logics”, as if they were conventional, like choosing to drive on the right side of the road instead of the left.
Yes, which logic you use (and there are several, several of which are mutually exclusive) is your choice. This follows from the fact that logic, as in any branch of mathematics, is derived from axioms, and we can chose our axioms to suit the problem we're trying to solve. In fewer words, just as there is no universal rule stating that 2+2=4, there is also no universal rule defining the logical operators "and", "or", and "not".
If I had to compare a computer programmer and an Oxford don who received three firsts in his education and for whom Cambridge created a chair, I think I’d choose the latter, if you’re going to play the credentials game. Yours is nil compared to Lewis’s.
Interesting. I appear to have had more training in philosophy than C. S. Lewis. (Enough classes to qualify for a minor during my undergrad as well as several graduate classes in logic when I was getting my Masters.) I hadn't considered taking a few philosophy classes sufficient to be considered "trained as a philosopher". I certainly don't consider myself a philosopher and Lewis didn't consider himself a philosopher either. Putting that to one side, Lewis isn't wrong because he's not a philosopher, he's wrong because he's making several mistakes that actual philosophical training would have prevented. Part of my training was in learning how to spot those kinds of errors. That's part of the difference between reading Plato as Literature and reading Plato as Philosophy. (You'd probably know this: did he ever submit any of his philosophical work for peer review? I can't find anything, but I haven't looked too hard.)
We’re far adrift from your explanation as to why Chesterton was wrong in his critique of scientism and his explanation of the problem of induction, which you never addressed.
Tell you what: remove the moderation block and I will go through and give you a point by point critique of Chesterton. If I'm going to put in the time to do this, I want some assurance that the post will actually show up here. BarryR
BarryR,
It does? Interesting. My cats know many things about the universe, but I didn’t think you’d be comfortable ascribing “thoughts” to them (as the capacity for thought would certainly imply consciousness, yes?) So I can go either way. Either my cats manage to know the universe without thoughts, and the above is correct; or my cats possess some level of consciousness and use thoughts to know the universe.
Of course your cats and my dog have consciousness, and know things about the universe, isn't that obvious?
I’m not finding anyone who agrees with you, and several people who disagree, so for the moment I’m going to provisionally assume that choice exists. For example: Of course, “the activity of mathematics is not just randomly writing down formal proofs for random theorems”, because “the choices of axioms, of problems, of research directions, are influenced by a variety of considerations — practical, artistic, mystical”…
That's interesting, because I was responding to your first order and second order logic argument, not to whether axioms can be chosen with regard to mathematics for certain purposes (which doesn't, by the way, negate that 2+2=4). You can choose axioms depending on what you're trying to accomplish, as you can choose material for building a house; neither one contradicts or suspends the laws that govern them, such as basic arithmetic and physics. But this really just changes the subject, as I was referring to your false dilemma of "different logics", as if they were conventional, like choosing to drive on the right side of the road instead of the left. As if making sense of how big is yellow could really be, depending on my own invention of logic. And you're simply wrong about Lewis not being trained in philosophy: Following the end of the war in 1918, Lewis returned to Oxford, where he took up his studies again with great enthusiasm, graduating with first-class honors in Greek and Latin Literature, Philosophy and Ancient History, and English Literature. From January 1919 until June 1924, he pursued his studies at University College, Oxford, where he received a First in Honour Moderations (Greek and Latin Literature) in 1920, a First in Greats (Philosophy and Ancient History) in 1922, and a First in English in 1923. From October 1924 until May 1925, Lewis served as philosophy tutor at University College during E.F. Carritt's absence on study leave for the year in America. Update wikipedia if you'd like. There is no shortage of actual and true biographies of the man readily available online. So that should clear up your ad hominem against Lewis not being trained in philosophy. And secondly, you're not trained in philosophy, so why should I regard anything you have to say by the same criteria?
If I had to compare a computer programmer and an Oxford don who received three firsts in his education and for whom Cambridge created a chair, I think I'd choose the latter, if you're going to play the credentials game. Yours is nil compared to Lewis's.
When we observe this behavior in others, we usually classify it as instinct.
Instinct is innate; it is not knowledge about the world but an impulse, like an appetite. An appetite is not knowledge about food. We're far adrift from your explanation as to why Chesterton was wrong in his critique of scientism and his explanation of the problem of induction, which you never addressed. Real laws of logic and reason and mathematics, necessary relations between things, where we can see why they are necessary, not just that they exist together, are fundamentally unlike anything we ever observe in nature by virtue of seeing two things together. I'm still waiting for your answer and argument that proves this not to be the case..... Clive Hayden
kairosfocus@167
But in all cases, one is very careful to distinguish contexts so one avoids errors of equivocation leading to gross contradiction.
You've hit on the issue precisely. The universal laws that several posters here wish existed are not susceptible to context. 2+2=4 is not a universal truth, it is a truth within a specific mathematical context formed by a specific choice of axioms.
(BTW, does anyone say “Topology” much anymore?)
I find topology fascinating but, once I get past the introductory examples, completely counterintuitive. This is odd, as there's a pretty good mapping to graph theory and that's pretty easy. So yes, that's the reason for my reluctance to use topological examples. BarryR
StephenB@165
Apparently, you have been so steeped in postmodern subjectivism, that you cannot distinguish between socially constructed rules and objective laws.
It's unfortunate that 19th C. mathematicians who invented abstract algebra couldn't make that distinction either. It's also a distinction that geometers in the 19th C. were unable to make when they came up with non-Euclidean geometry. Carl Friedrich Gauss failed to make the distinction back in 1801 when he published the first work on modular arithmetic. I'm afraid the entire history of mathematics is nothing but a sorry parade of ignoring universal truth in favor of inventing cool stuff that works.
Irrelevant. Mathematicians are not free to pick and choose which mathematical laws they will honor and which ones they will not.
I've provided citations to the peer-reviewed literature showing exactly that in addition to the specific examples of modular arithmetic and non-standard analysis. Why should I believe you over professional mathematicians?
Do you have an example of an unchangeable law from graph theory? I’m not aware of any, but then you may know more about the topic than I do.
Irrelevant.
Your argument isn't helped by the fact that you're not able to come up with any universal laws of mathematics. Exponentiation certainly wasn't one, and you don't know any from graph theory. I'd find it much easier to believe in a universal mathematical law if you were able to write one down for me.
Perhaps you can provide a quote from one of your authors indicating that mathematics has no laws.
Ah, yes, *I* have to provide citations, but you don't. I think you'd agree that asking me for a citation showing there were no universal piglets of mathematics would be a little difficult to fulfill. You're asking me to provide evidence for a negative statement, and that's usually seen as poor debating technique. So while I cannot give you any evidence that no universal piglets of mathematics exist, I can give you evidence that universal piglets are something that mathematicians don't spend a lot of time thinking about. scholar.google.com returns the following hit counts: 881,000 "calculus"; 7,490 "modular arithmetic"; 4,820 "nonstandard analysis"; 16 "universal laws of mathematics"; 10 "universal mathematical laws"; 0 "universal piglets of mathematics". Of these sixteen, we have a biography of Descartes, a paper on free will, a review of a Descartes biography, a short piece at answers.com, a book called "The philosophy of left and right"... But nothing listing what these universal mathematical laws might be. As far as mathematicians publishing mathematics goes, "universal mathematical laws" has received exactly the same attention as "universal mathematical piglets". I'd like to think you didn't make up the idea of universal mathematical laws. Can you give me the citation where you first heard of it? BarryR
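The modular arithmetic invoked on both sides of this exchange is the simplest worked example of "+" being legitimately redefined; a sketch (the choice of modulus 3 is arbitrary, for illustration only):

```python
# In ordinary integer arithmetic, 2 + 2 = 4. Under the axioms of
# arithmetic modulo n (Gauss, 1801), the same symbols denote a
# different, equally self-consistent operation.
def add_mod(a, b, modulus):
    return (a + b) % modulus

assert 2 + 2 == 4              # the familiar context
assert add_mod(2, 2, 3) == 1   # in Z/3, 2 + 2 = 1
assert add_mod(2, 2, 4) == 0   # in Z/4, 2 + 2 = 0
```

Neither result contradicts the others; each holds within its own axiomatic context, which is precisely what the two sides here are disputing the significance of.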
markf@163
The interesting question is do the axioms of integer arithmetic which we learn intuitively at primary school reflect some kind of metaphysical reality which alternative systems do not? Or do they just happen to be extremely useful?
Let me add a short clarification to that. I'm perfectly happy conceding a metaphysical reality to axiomatic systems that results in 2+2=4. However, I see no reason to think of this as the only metaphysical system out there (especially since a fair bit of my graduate career was spent learning about these other systems). BarryR
StephenB@153
In keeping with that point, I gather that you have never heard of the law of limits with respect to calculus.
Do you remember why we use limits in calculus? It's because there's no smallest positive real number. The proof is pretty simple. By contradiction: let x be the smallest positive real number; then 0 < x/2 < x, and there's the contradiction. But math is all about choosing the axioms you want to work with, and so I can state as an axiom: there exists a smallest positive real number, and I'll call it an "infinitesimal". I can then construct calculus without all of those messy limits. My solutions will have a lot of infinitesimals left lying around at the end, but since they're arbitrarily small, I can disregard them. This is how Leibniz came up with calculus --- later formalizations preferred limits to infinitesimals, and that's probably what you learned. But the infinitesimal approach has been formalized as well, and is called non-standard analysis. It works, and all in all is probably a better way to teach calculus. As I understand your way of thinking, a smallest possible positive real number either exists or it doesn't. That's not how mathematicians think. They ask that question with regard to specific choices of axioms, and base their choice of axioms in part on how they can solve the particular problem they're working on. BarryR
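The "disregard the infinitesimals" move above can be imitated with dual numbers, where an element eps satisfies eps² = 0 (a related formalization in the Leibniz spirit, not non-standard analysis proper; the class below is a made-up sketch):

```python
# Dual numbers a + b*eps with eps*eps == 0: differentiate x**2 by
# computing f(x + eps) and discarding the vanished infinitesimals.
class Dual:
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # The eps*eps cross-term vanishes: this is the formal version
        # of "disregarding the leftover infinitesimals".
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

x = Dual(3.0, 1.0)   # the point 3, plus one infinitesimal
y = x * x            # (x + eps)**2 = x**2 + 2*x*eps
assert y.real == 9.0
assert y.eps == 6.0  # the derivative of x**2 at 3 is 6
```

The eps coefficient that survives the multiplication is exactly the derivative, with no limit taken anywhere.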
F/N 2: It is a little disquieting to see the same distractive fallacy of equivocation continuing. BR: I am very aware of how the glyph "+" may be given alternative meanings, both from mathematics and from the common use as the inclusive OR in digital logic. But the basic issue is not whether one may construct a set of axioms, hope they are coherent [a la Godel] and that they imply enough to be interesting and useful. Ordinary arithmetic and its extension to school algebra are very useful, and reflective of commonly encountered reality. The reinterpretation of 1, 0, + and = in Boolean algebra has significance for digital electronics and even reasoning in logic. But in all cases, one is very careful to distinguish contexts so one avoids errors of equivocation leading to gross contradiction. And, BTW, the act of joining the sets (*, *) and (*, *) to yield the set (*,*,*,*) is a very natural one, and to symbolise it as 2 + 2 = 4 is reasonable, meaningful and reflective of reality. To reason in a digital context that TRUE AND/OR TRUE is TRUE is a different and equally useful context, where 1 + 1 = 1, using 1 for TRUE and + for AND/OR (vel, not aut). To see the point that those who object to your equivocation are making, consider a bottle bearing the string of symbols: GIFT In English, that's fun, but if that bottle came from Germany, watch out! In short, life and death may hinge on the context of symbols and on being very careful indeed not to be equivocal. And that has nothing to do with whether another set of axioms, for Graph Theory, may have utility or even map well to reality as we experience it, in some specialised circumstances; especially when we have to deal with networks of nodes and arcs connecting them in many modern circumstances. (BTW, does anyone say "Topology" much anymore?) The importance of contextual consistency and precision in terminology is underscored. GEM of TKI PS: For those who are wondering, GIFT in German, notoriously, means poison.
Talk about a "False Friend" word! kairosfocus
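The Boolean reinterpretation of "+" described above is easy to exhibit in code (a sketch; Python's `or` and bitwise `|` stand in for the inclusive OR):

```python
# Same glyph "+", two contexts: integer arithmetic vs. Boolean
# algebra, where "+" is read as inclusive OR and 1 as TRUE.
assert 1 + 1 == 2                  # ordinary arithmetic
TRUE, FALSE = True, False
assert (TRUE or TRUE) is TRUE      # Boolean: 1 + 1 = 1
assert int(TRUE | TRUE) == 1       # bitwise form of the same identity
assert (TRUE or FALSE) is TRUE     # 1 + 0 = 1
```

Both lines are true at once with no contradiction, because the symbol "+" is bound to a different operation in each context; the contradiction only appears if the contexts are equivocated.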
---vividbleau: "I think one could maintain that Lewis could have been a professor of philosophy and not a professor of philosophy or both simultaneously." Very good, vivid. According to our postmodernist friends, there are no laws that would force us to rule out that possibility. StephenB
---BarryR: "When I played baseball in the backyard as a young child, each game was preceded by a ritual construction of the rules. The dogwood tree might be first base, the corner of the sidewalk third base, invisible men advance one base ahead of the runner, etc." Apparently, you have been so steeped in postmodern subjectivism that you cannot distinguish between socially constructed rules and objective laws. That is unfortunate. ---"Mathematicians are likewise free to choose their axioms to suit their fancy. Once the axioms are chosen, then they have to play by the rules (until the next paper)." Irrelevant. Mathematicians are not free to pick and choose which mathematical laws they will honor and which ones they will not. I have already listed six laws [there are many more] completely refuting your misguided claim that there are no laws. You simply ignore the refutation and continue on as slick as ever. ---"See citations upthread for heuristics mathematicians use to evaluate their choice of axioms." Irrelevant, as indicated in the preceding paragraph. ---"Do you have an example of an unchangeable law from graph theory? I’m not aware of any, but then you may know more about the topic than I do." Irrelevant. You could make your question relevant by showing how graph theory supersedes mathematical laws or reduces them to the status of not being laws. Perhaps you can provide a quote from one of your authors indicating that mathematics has no laws. [Good luck with that one]. StephenB
"Because I was free to disregard this universal rule, I was able to solve problems several orders of magnitude larger than what had been done using the discrete approach..." Why Barry, you must be the superman. Perhaps that explains your enthusiasm for Shaw. I remember our fifth grade teacher informing us one day, with a twinkle in his eye, that 2+2 could just as easily mean 3 or 5, by the rules of mathematics. Tell me, do you have a twinkle too? And Mark, you are also my hero. It's quite amazing to find one of our contemporaries inventing nominalism. And we thought time travel was impossible! allanius
Looking over the 2+2=147 discussion I think it is worth restating what the issue is. I don't suppose that BarryR denies that given the axioms of integer arithmetic - including the definition of the "+" function - then 2+2=4. And these axioms are really useful for dealing with apples and countless other things. It is also possible to change the axioms (call it redefining the meaning of "+" if you like) so that 2+2=147 - which is practically useless in the real world. The interesting question is: do the axioms of integer arithmetic which we learn intuitively at primary school reflect some kind of metaphysical reality which alternative systems do not? Or do they just happen to be extremely useful? No amount of sarcastic comments about bank cashiers is going to throw light on this question. Here is one approach. How do we learn that 2+2=4? Think back to primary school. Teachers do not reveal a metaphysical world of numbers to 5 year olds. They show them apples and such like, and give them techniques for counting them etc. Imagine trying to teach even the most intelligent child that there is this thing called a number, one of the numbers is 2, and there is a thing called addition, and when you add 2 and 2 you get another number called 4. And by the way these things don't exist in space or time - but the law that 2+2=4 is necessarily true for all space and time. markf
#152 and #156 re C S Lewis I don't understand why you both repeated what I wrote? CS Lewis was a philosophy tutor for just under a year. That is a very long way from being a professor of philosophy as claimed in #146 - it doesn't require any original writing at all. He did indeed teach at Oxford for all those years but not philosophy and not as a professor. markf
StephenB, Just for fun, I looked around a bit to see if anyone had defined exponents for graphs. I'm not seeing anything, so let's walk through what this might involve. You can define an unweighted, undirected graph as an adjacency matrix A s.t. the value at i,j is the number of edges from i to j. So we've got a matrix and that has a well-defined exponential function. But what if we have a weighted graph? If it's simple, then i,j can be set to the weight of the edge. (Can you transform a graph with weighted edges and vertices into an equivalent graph with just weighted edges? Yeah, I think so.) So raising this to a power is straightforward as well. But what if it's a directed graph? Now we're going to have to change our adjacency matrix to represent weight as well as direction. Do you order the vertices and use two matrices (one for edges going from high to low and the other for edges going from low to high)? Do you use a single matrix and give each vertex an alias? Do you create a matrix of vectors? You can do any of these things and then operate on the resulting matrix or matrices as usual, or you might get clever and come up with something else. And that will be your "Law of Exponents" for graphs. If this allows you to solve an interesting problem (or even raise an interesting problem), then your convention might be picked up and eventually make its way into textbooks. (Learning to understand what is "interesting" will occupy the first few years of your grad school career.) BarryR
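The adjacency-matrix approach sketched above can be checked concretely. One standard convention (not a universal law, which is the comment's point) is that the k-th power of the adjacency matrix counts walks of length k. A minimal pure-Python sketch, using the triangle graph K3 as the example:

```python
def mat_mul(X, Y):
    # Multiply two square matrices given as lists of lists.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    # (A^k)[i][j] counts the walks of length k from vertex i to vertex j.
    result = A
    for _ in range(k - 1):
        result = mat_mul(result, A)
    return result

# Adjacency matrix of the triangle graph K3 (unweighted, undirected):
# entry (i, j) is the number of edges between vertices i and j.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

A2 = mat_pow(A, 2)
print(A2[0][0])  # 2: two closed walks of length 2 from vertex 0
print(A2[0][1])  # 1: one walk of length 2 from vertex 0 to vertex 1
```

This is one possible "Law of Exponents for graphs" in exactly the sense the comment describes: a convention that earns its keep by being useful, not a rule anyone was compelled to adopt.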
markf, From October 1924 until May 1925, Lewis served as philosophy tutor at University College during E.F. Carritt's absence on study leave for the year in America. Clive Hayden
StephenB@153
Having studied higher math myself in a formal setting where bluffing doesn’t work, I am not buying your argument from authority.
Well, you certainly shouldn't take it on my authority.
I remember well enough the law of exponents and many other laws which admit of no exceptions.
You're familiar with exponents then, yes? Good. n^{2} is n-squared, right? n^{1/2} is the square root of n, right? n^{-1/2} is 1 divided by the square root of n, right? And you're familiar with i (or j, if you're a physics major) standing for the square root of -1, yes? And that's called an imaginary number. So what is i^{1/i}? If you wrote down what you considered to be the law of exponents, I don't think you'd be able to solve this problem (and it is solvable). But if we're allowed to modify and extend the "Law of Exponents", then it's no problem. My reading group just finished this problem in Knuth this past week:
1.2.4.19: (Law of Inverses.) If n is relatively prime to m, there is an integer n' such that nn' ≡ 1 (modulo m). Prove this....
n'=n^{-1}, and now we're well on our way for defining exponents over modulo arithmetic. If you want to pick the "Law of Exponents for simple arithmetic over integers" as an axiom, that's fine --- just like Calvin and Hobbes above can decide to use "twelfth base" in baseball. But I'm not required to use that law --- even when I'm doing simple arithmetic --- and to my mind that doesn't make it much of a law.
So why on earth would I want to discuss your irrelevant foray into graph theory
Because it's your best shot at showing me I'm wrong. And because I thought we might have it as a shared mathematical language and we could elevate this discussion to axioms and theorems instead of discussing what you would prefer math to be like. BarryR
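Both of BarryR's examples above can be verified directly. For i^{1/i}: taking the principal branch, i = e^{iπ/2} and 1/i = -i, so i^{1/i} = e^{π/2}, a real number of about 4.81. For the Knuth exercise, Python's built-in three-argument `pow` computes the modular inverse n'. A short sketch checking both (the specific numbers n=3, m=7 are my choice for illustration):

```python
import math

# i**(1/i) with the principal complex logarithm:
# i = e**(i*pi/2) and 1/i = -i, so i**(1/i) = e**(pi/2), a real number.
val = (1j) ** (1 / 1j)
print(round(val.real, 4))               # ~4.8105
print(round(math.exp(math.pi / 2), 4))  # the same value

# Knuth 1.2.4.19 (Law of Inverses): for n relatively prime to m there is
# an n' with n*n' ≡ 1 (mod m). pow(n, -1, m) computes it directly.
n, m = 3, 7
n_inv = pow(n, -1, m)   # 5, since 3*5 = 15 ≡ 1 (mod 7)
assert (n * n_inv) % m == 1
```

So "extending the Law of Exponents" to complex and modular settings is not hand-waving; both extensions are computable and consistent.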
StephenB@147
"Of course it's undergirded by logical laws! When I use graph theory, I pick and choose which logical laws form that undergirding!"
Now you are saying that you want to choose from among the laws. Earlier, you argued that there ARE NO LAWS from which to choose.
When I played baseball in the backyard as a young child, each game was preceded by a ritual construction of the rules. The dogwood tree might be first base, the corner of the sidewalk third base, invisible men advance one base ahead the runner, etc. Once the game began, we were constrained to adhere to those rules. However, there were no rules constraining how we initially selected our rules. We were free to make the dogwood tree home plate, create additional bases that needed to be visited in arbitrary order, remove the strike count limit, etc. Mathematicians are likewise free to choose their axioms to suit their fancy. Once the axioms are chosen, then they have to play by the rules (until the next paper). See citations upthread for heuristics mathematicians use to evaluate their choice of axioms.
Just so you will know, definitions and assumptions are changeable; laws are not.
Do you have an example of an unchangeable law from graph theory? I'm not aware of any, but then you may know more about the topic than I do. BarryR
There are several errors packed in here. First, we’re perfectly capable of knowing the universe without using thought.
Ummm, Barry, “knowing” requires thought.
It does? Interesting. My cats know many things about the universe, but I didn't think you'd be comfortable ascribing "thoughts" to them (as the capacity for thought would certainly imply consciousness, yes?). So I can go either way. Either my cats manage to know the universe without thoughts, and the above is correct; or my cats possess some level of consciousness and use thoughts to know the universe. Your choice.
There is no “choice” as I explained in my last comment to you.
I'm not finding anyone who agrees with you, and several people who disagree, so for the moment I'm going to provisionally assume that choice exists. For example:
Of course, "the activity of mathematics is not just randomly writing down formal proofs for random theorems", because "the choices of axioms, of problems, of research directions, are influenced by a variety of considerations --- practical, artistic, mystical"...
Carlo Cellucci, "Introduction to Filosofia e matematica", in Reuben Hersh's _18 Unconventional Essays on the Nature of Mathematics_. (emphasis added). (I think you'd particularly enjoy the first essay in that collection. Might give you some ammunition.)
The views often expressed on this subject by methodologists may be termed the aestheticism or utilitarianism in constructing axiom systems. .... According to the mentioned views the choice of axioms is guided by a personal predilection or by the objective to make the derivation of new theorems easy. What set of axioms is chosen and what is their intuitive content is of no importance, all that matters is the set of all theorems derivable on their basis. .... [However], it is obvious from the history of mathematics that in many branches axioms are chosen on account of their intuitive content, e.g., on account of their generality.
A. Grzegorczyk, "On the validation of the sets of axioms in mathematical theories", Studia Logica Volume 13, Number 1, 202, 1969. (emphasis added)
Abstract: I discuss criteria for the choice of axioms to be added to ZFC, introducing the criterion of stability. Then I examine a number of popular axioms in light of this criterion and propose some new axioms.
Sy-David Friedman, "Stable Axioms of Set Theory", Trends in Mathematics, 2006, Part 2, 275-283. (emphasis added). You may, of course, propose a rule for your own work that the only valid axioms are those that are Universal. As I've mentioned a few times now, there's no rule preventing you from doing so. If you want to apply this rule to everyone else, though, I think you're going to have to make a mathematical argument before you'll be taken seriously.
He was a professor of philosophy at Oxford by the age of 26.
Really? I may have been led astray by wikipedia then. His entry there has:
Lewis then taught as a fellow of Magdalen College, Oxford, for nearly thirty years, from 1925 to 1954, and later was the first Professor of Medieval and Renaissance English at the University of Cambridge and a fellow of Magdalene College, Cambridge.
A quick google search turns up:
C.S. Lewis received his First Class degree in Classical Moderations from University College, Oxford University, in 1920. In 1922, he received another First Class degree in Literae Humaniores, and yet another in English Language and Literature in 1923.
I believe that's considered a humanities degree. If you have a cite to the contrary let me know and I'll update wikipedia.
I’d like you to explain without using explanation (which presupposes logic) how we are capable of understanding the universe without using thought.
When we observe this behavior in others, we usually classify it as instinct. BarryR
153 is to BarryR not to BarryA. StephenB
---BarryA: "You can claim that no other arithmetic except that which you learned in grammar school exists, but since I have firsthand experience with other arithmetics I don't think I'm going to believe you." Having studied higher math myself in a formal setting where bluffing doesn't work, I am not buying your argument from authority. I remember well enough the law of exponents and many other laws which admit of no exceptions. Indeed, I once studied my "identities" over a Christmas vacation. I know well enough the relationship between a philosophical syllogism and the laws of geometry because I have studied philosophy in a formal setting as well. I also remember the pre-calculus laws of sines and cosines. In keeping with that point, I gather that you have never heard of the law of limits with respect to calculus. Indeed, given your misguided notion that the laws of higher math contradict the laws of arithmetic (except for those times that you are claiming there are no laws at all), I am beginning to suspect that you are not even aware of the lower-level laws of association and commutation that rule the arithmetic functions that you so disdain. So why on earth would I want to discuss your irrelevant foray into graph theory, which is little more than a distraction and just one more attempt to avoid your horror of the obvious--math has laws, as does logic. StephenB
markf, CS Lewis taught for nearly 30 years 1925-1954 at Magdalen College Oxford before he was a professor of English at Magdalene College at Cambridge. CannuckianYankee
RE 150 I think one could maintain that Lewis could have been a professor of philosophy and not a professor of philosophy or both simultaneously. Why not? Vivid vividbleau
Sorry that should read: "temporary post in 1924 as tutor covering for E. F. Carrit" markf
#146 He was a professor of philosophy at Oxford by the age of 26. I don't think so. I believe the only philosophical position he held was a temporary post as tutor covering for 1924 E. F. Carrit. That lasted less than a year. In fact I don't think he become professor of anything until he was made professor of medieval and renaissance literature at Magdalene College, Cambridge, in 1955 (i.e. aged 57) markf
StephenB and Clive Shame on the both of you!! Here we have BarryR, a real intellectual who has come down from the mount to lecture us the mere mortals and the both of you have the audacity to question him. Don't you realize the sacrifice he is making by dirtying his intellectual hands by even gracing us with his presence? Sniff sniff. I mean really how rude and disrespectful. I can tell how he writes that he is very very smart and he has published papers to boot. Besides all the publishing, to really really smart audiences, he has told us how smart he is and how dumb you are. Shoot what papers have you published, did you guys even get out of grammar school? Chesterton? hmmph not even worth refuting. Lewis? minor leaguer sniff sniff. Heck you guys are so dumb you don't even know that 2 apples plus 2 apples don't equal four apples!! Come on now and show some respect will ya!! Vivid vividbleau
---BarryR: "Of course it's undergirded by logical laws! When I use graph theory, I pick and choose which logical laws form that undergirding!" Now you are saying that you want to choose from among the laws. Earlier, you argued that there ARE NO LAWS from which to choose. ---"Yes, it contradicts your grammar school arithmetic. I'm not using that arithmetic." So, now you are arguing that the laws of higher math contradict the laws of arithmetic? ---"[The graphs] Is this allowed? Is one of them (or both) wrong?" Do you really want me to go into another discussion about the difference between a working definition and a law? First you say that there are no laws. Then you say that there are laws, but you get to choose from among them. Then you say that the graph is undergirded by laws. Then you say that they are undergirded by definitions. Just so you will know, definitions and assumptions are changeable; laws are not. StephenB
Barry,
There are several errors packed in here. First, we’re perfectly capable of knowing the universe without using thought.
Ummm, Barry, "knowing" requires thought.
Second, while logical thought may not be subjective, the choice as to which logic to use and what facts to operate over remains subjective.
There is no "choice" as I explained in my last comment to you.
Third, logical thought is often irrelevant to the universe
Not sure what you mean here, but the external world is an inferred world, you remove logic and you remove understanding and making sense of the external world as a whole.
Now some of this is due to Lewis speaking to a general audience, but some of it also comes from his speaking about science and philosophy without being trained in either.
He was a professor of philosophy at Oxford by the age of 26.
So by the time he reaches the laws of thought are also the laws of things I have several counterexamples at hand. I work with some of the largest computers in the world precisely because the laws of thought do not follow the laws of things; things are far more complicated than our thoughts, which is why we build supercomputers to help us think about them.
Ummm, Barry, we program the computers, without our logic there would be no computers to begin with, nor any programming. Did you not gather anything else from the essay? I mean, there's a lot to digest, I understand that, and surely you're adept at understanding it, given that it was for the "general audience" of which you consider yourself above, so surely you understand that the quote,
I asked whether /in general/ human thought could be set aside as irrelevant to the real universe and merely subjective. The answer is that at least one kind of thought — logical thought — cannot be subjective and irrelevant to the real universe: for unless thought is valid we have no reason to believe in the real universe.
is valid, and that "First, we’re perfectly capable of knowing the universe without using thought." is not. I'd like you to explain without using explanation (which presupposes logic) how we are capable of understanding the universe without using thought. How are we able to know without thinking? And how would you explain this without logic or thought? Clive Hayden
StephenB@141
I said I know enough about [graph theory] to know that it doesn’t invalidate the principle that mathematics is undergirded by logical laws,
Of course it's undergirded by logical laws! When I use graph theory, I pick and choose which logical laws form that undergirding! From Bondy and Murty's _Graph Theory_:
A graph G is an ordered pair (V(G), E(G)) consisting of a set V(G) of vertices and a set E(G), disjoint from V(G), of edges, together with an incidence function \phi(G) that associates with each edge of G an unordered pair of (not necessarily distinct) vertices of G.
We now have (part of) the undergirding necessary to start reasoning about graphs. This is not the One True Universal definition of graphs. I can modify it to require that the unordered pairs of vertices that define the edges be distinct. I can further restrict this to state that the unordered pair must contain two distinct vertices. My question to you is: Is this permissible? May I arbitrarily modify the undergirding of a branch of mathematics? Bondy and Murty don't have a problem with this: on the following page they define simple graphs as having no loops or parallel edges, which is the consequence of my changes to the definition. Diestel gives a slightly different definition in his _Graph Theory_ (I'd quote it, but it's set theoretic and I think spelling out the symbols will only lead to more confusion). So now we have two different scary yellow Springer books (with identical titles) giving slightly different undergirding to the idea of graph theory. Is this allowed? Is one of them (or both) wrong? And most important, how do you know? BarryR
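The two "undergirdings" described above, a multigraph in the Bondy & Murty style versus the restricted simple-graph definition, can be sketched in a few lines. This is a hypothetical illustration (the function names and representations are mine, not from either textbook): same idea of a graph, different rules about loops and parallel edges.

```python
# Multigraph style: an edge list, where loops and repeats are permitted.
def add_edge_multigraph(edges, u, v):
    edges.append((u, v))  # loops (u == v) and parallel edges are fine

# Simple-graph style: an adjacency dict, with the restricted definition.
def add_edge_simple(adj, u, v):
    if u == v:
        raise ValueError("simple graphs have no loops")
    if v in adj.setdefault(u, set()):
        raise ValueError("simple graphs have no parallel edges")
    adj[u].add(v)
    adj.setdefault(v, set()).add(u)

multi = []
add_edge_multigraph(multi, 1, 1)   # a loop: allowed here
add_edge_multigraph(multi, 1, 2)
add_edge_multigraph(multi, 1, 2)   # a parallel edge: also allowed

simple = {}
add_edge_simple(simple, 1, 2)      # fine
# add_edge_simple(simple, 1, 1) would raise: this undergirding forbids loops
```

Neither version is "wrong"; each enforces the definition its author chose, which is precisely the question the comment poses.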
CH@133
I suppose “There are no rules about how you pick (and modify) your rules.” is not supposed to be a rule either, then.
I think of it as an observation. A rule would be formulated as "There *cannot* be any rules about how you pick (and modify) your rules." I'm happy to argue for that single rule if you like. I don't see where it makes much difference, though. BarryR
StephenB@141
If they map back to the proposition that 1 + 1 = 147, you are following the wrong map.
How do you know this? Yes, it contradicts your grammar school arithmetic. I'm not using that arithmetic. You can show that it's inconsistent with the arithmetic that I am using --- except I constructed my arithmetic so that wouldn't be the case. You can claim that no other arithmetic except that which you learned in grammar school exists, but since I have firsthand experience with other arithmetics I don't think I'm going to believe you. Or you can show me the One True Universal Correct Arithmetic that you believe exists. But you can't. You've set yourself up a formidable task: demonstrate nearly all mathematics from the 17th C. forward as fundamentally flawed, and do so without any appreciable knowledge of what these mathematics are or how they work. Your best shot so far is observing that these mathematics don't map well to apples. I'll grant you that point. What else do you have? BarryR
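BarryR's claim of "firsthand experience with other arithmetics" can be illustrated with the most familiar one: modular arithmetic. A minimal sketch (my choice of modulus; not a claim about which arithmetic BarryR actually constructed) in which 2 + 2 is not 4, yet the system remains internally consistent:

```python
# A small, internally consistent arithmetic in which 2 + 2 is not 4:
# the integers modulo 3.
def add_mod3(a, b):
    return (a + b) % 3

print(add_mod3(2, 2))  # 1, not 4
print(add_mod3(1, 1))  # 2, agreeing with schoolroom addition here

# The familiar structural laws still hold inside the system:
# associativity, for instance.
assert add_mod3(add_mod3(1, 2), 2) == add_mod3(1, add_mod3(2, 2))
```

Note what the example does and does not show: the answers differ from grammar-school arithmetic, but the system is not lawless; it obeys its own axioms rigorously, which is the distinction the whole thread keeps circling.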
CH@139
Please read De Futilitate from Lewis
He writes better than I remember (and far better than Chesterton). And nice work in coming up with a cite that's so directly on point. I think this falls apart here:
I asked whether /in general/ human thought could be set aside as irrelevant to the real universe and merely subjective. The answer is that at least one kind of thought --- logical thought --- cannot be subjective and irrelevant to the real universe: for unless thought is valid we have no reason to believe in the real universe.
There are several errors packed in here. First, we're perfectly capable of knowing the universe without using thought. Second, while logical thought may not be subjective, the choice as to which logic to use and what facts to operate over remains subjective. Third, logical thought is often irrelevant to the universe --- G. H. Hardy was quite proud of the uselessness of his mathematics. Now some of this is due to Lewis speaking to a general audience, but some of it also comes from his speaking about science and philosophy without being trained in either. So by the time he reaches
the laws of thought are also the laws of things
I have several counterexamples at hand. I work with some of the largest computers in the world precisely because the laws of thought do not follow the laws of things; things are far more complicated than our thoughts, which is why we build supercomputers to help us think about them. To leap from that flawed conclusion to ruling out materialistic explanations of thinking simply isn't justified even if I believed the conclusion. It's easy to convince yourself you're making sense when you're making arguments like this. It's harder to convince three anonymous reviewers. Unfortunately, I don't think this work was ever subjected to peer review. BarryR
---BarryR: "This is how grammar school arithmetic is taught, and to the best of my knowledge, that's all the math you understand. And that's fine — we can't all be mathematicians." You are the one who said that 1+1 does not necessarily = 2, so I thought it would be wise to bring you back down to earth with an obvious example that refutes your misguided notion. ---"But I had hoped your education would have gotten you to the point where you could grok that mathematics can be divorced from apples and studied in its own right. Once you've made that leap, then you have an abstract structure that can be used to derive other abstract structures, some of which are even useful." All this is fine, but irrelevant. The best way to convince people that you are educated is to make relevant and cogent arguments. Granted, simplicity that has not passed through complexity is worthless. On the other hand, reveling in complexity in order to evade basic issues smacks of timidity. The trick is to find simplicity on the other side of complexity. If there is one thing that I have learned in dealing with academic elitists, it is this: Anyone who truly understands his subject can, when called upon, explain it so that a twelve year old could understand it. Everyone else is bluffing. ---"But there's no guarantee that they'll map back onto apples." If they map back to the proposition that 1 + 1 = 147, you are following the wrong map. ---"You say you know graph theory. Great. What are the universal laws of graph theory and how do you know they're universal? I'll bring home my Scary Yellow Springer Book o' Graph Theory tonight and we can compare notes." I said I know enough about it to know that it doesn't invalidate the principle that mathematics is undergirded by logical laws, just as I know enough about other approaches to mathematics to know the same thing. We don't need a discussion about Springer's Graph theory, nor do we need a discussion about your notions about symbols. 
None of these points have anything to do with the basic issue, which is this: The flexibility that is permitted for granting working assumptions does not translate into a flexibility for ignoring logical laws or first principles. StephenB
CH@139 Well, there's a simple way to settle this: show me your universal mathematics. We know it guarantees that 2+2 always equals 4. What else does it do? I'm quite the mercenary when it comes to mathematics. Show me a more powerful system and I'll drop what I'm currently using and adopt it. So, I'm ready and willing to convert. All you have to do is tell me what I'm converting to. BarryR
BarryR,
As best I can understand what you’ve been telling me, there is a Correct, Universal logic. Which one is it, and how did you arrive at that conclusion? (If you say “Chesterton”, you will be asked some very pointed questions that you don’t have the technical skill to answer.)
This is known as a false dilemma, if it is logical, then there is no "different ones" to be compared, for no logic could be used to compare them. If there were no, as you said, Correct, Universal logic, then there is no comparison between anything that might be gathered together and called First Order, Second Order and so on. You couldn't even say that they themselves were logical or not, only conventional, to be changed at will, and the entire force of your argument, indeed of all arguments, would be broken into nothing. You would just be talking confusedly. You would just be playing with counters. I think you desperately need Chesterton and C.S. Lewis. Please read De Futilitate from Lewis, you can google it. But you seem to be trying to actually make a point, a logical point, I'm assuming, using reason, I'm assuming, which cannot itself deny logic, or else you have no point to make. All argument presupposes it, even yours. Clive Hayden
CH@134
Are you implying that you can make new rules of logic by the analogy of axioms? Surely I can do that too, then.
You may be starting to get the hang of this. Yes, you can make new rules of logic, or entirely new logics. Among the ones currently being studied are: first-order logic (aka first-order predicate calculus), second-order logic, and infinitary logic. It's not usually used to study logic, but I'd add the lambda calculus as well. Each of these broad categories is divided up into several specializations, each with their own "rules of logic". Each of these logics is perfectly consistent. Some are more expressive than others, some are more powerful than others. As best I can understand what you've been telling me, there is a Correct, Universal logic. Which one is it, and how did you arrive at that conclusion? (If you say "Chesterton", you will be asked some very pointed questions that you don't have the technical skill to answer.) BarryR
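The lambda calculus BarryR mentions can be glimpsed in a few lines. In the Church encoding (sketched here in Python syntax as an illustration, not as a formal treatment), numbers are functions and "2 + 2 = 4" is a fact about function composition rather than about apples:

```python
# Church numerals: a number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Decode a Church numeral back to an ordinary int for inspection.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
four = add(two)(two)
print(to_int(four))  # 4
```

This is a different formal footing for arithmetic, with its own rules, yet it reproduces the familiar sums, which is the sense in which one can have "entirely new logics" without lapsing into nonsense.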
StephenB@135
If I give you one apple followed by another apple, you will have two apples. You will not have 147 apples.
If we all decided that valid mathematical operations shall be limited to those that can be carried out on apples, you might have a point. This is how grammar school arithmetic is taught, and to the best of my knowledge, that's all the math you understand. And that's fine --- we can't all be mathematicians. But I had hoped your education would have gotten you to the point where you could grok that mathematics can be divorced from apples and studied in its own right. Once you've made that leap, then you have an abstract structure that can be used to derive other abstract structures, some of which are even useful. But there's no guarantee that they'll map back onto apples. You say you know graph theory. Great. What are the universal laws of graph theory and how do you know they're universal? I'll bring home my Scary Yellow Springer Book o' Graph Theory tonight and we can compare notes. BarryR
---BarryR: "In grammar school this is called "rounding", and if you can think back that far, you probably had to memorize that 0.5 was to be rounded up, not down, and that this was a completely arbitrary convention." So you really think that flexibility in mathematical expressions is a good example of how mathematics has no inflexible laws. I gather you also believe that if I round off Pi from 3.14159 to 3.14, the relationship between a circle's circumference and its diameter is also up for grabs. You should not be sneering at Chesterton. On the contrary, you desperately need to absorb his message. StephenB
---BarryR: "In a paper I published last year, I had words to the effect of "Define graph G as a directed, acyclic, weighted graph…"." ---"(Oh, wait — you don't know graph theory. Don't worry, it's pretty easy. Have a look here. It won't hurt. I promise.)" I know enough about graph theory to know that appealing to it will not help your case. ---"There are no universal laws or rules that required I solve my problem using this particular kind of graph." You are confusing operating assumptions and methods, which are flexible, with the laws of logic and the related laws of mathematics, which are not. Mathematical principles are related to logical principles, which explains why you disavow the rules not for one, but for both. Thus, you reject mathematical laws, the law of non-contradiction, and the law of causality all in one sweep. That is no coincidence. On the matter of relative rules, researchers can posit anything they like for the sake of argument, or they can even do it formally in the form of a hypothesis. They are also free to choose their methods, unless, of course, they are ID scientists who are forbidden by your secularist comrades to do so under the tyrannical rule of methodological naturalism*. *(No doubt you corrected your comrades and re-educated them to the point where they now understand that since the scientist is the only one who knows what problem he is trying to solve, only he/she can choose the appropriate method. [Just a little humor there. I have no doubt that, in spite of your protests on behalf of the INAPPROPRIATE freedom to abandon causality, you would refuse ID the APPROPRIATE freedom to choose its own methods.]) We are not free, however, to abandon the law of causality or the law of non-contradiction which informs all intelligently conceived arguments, whether they be philosophical, mathematical, or scientific. ---"There are no rules about how you pick (and modify) your rules. 
You're free to believe otherwise, but I don't think you're going to be able to do so while having a successful research career." We are not discussing how you pick YOUR operating assumptions or axioms, we are discussing THE transcendental rules that cannot be chosen. They must, for the sake of rationality, be apprehended, understood, and honored. If I give you one apple followed by another apple, you will have two apples. You will not have 147 apples. If an apple exists, it cannot also not exist at the same time, nor can it come into existence without a cause. These are a few, not all, of the principles of right reason, and no matter how many graphs you draw or how many letters you scramble, the rational conclusions are inescapable. StephenB
BarryR,
There are no universal laws or rules that required I solve my problem using this particular kind of graph.
So what?
I can also add new rules, in this case mapping the graph onto the Cartesian plane.
A good middle-school geometry class should have been enough to clue people in that axioms are tools and we’re free to pick our tools — the constraints come with how those tools are able to interact.
Are you implying that you can make new rules of logic by the analogy of axioms? Surely I can do that too, then. I'd like to know how big yellow is. And I'd like to compare that to how far London Bridge is from Christmas Day. If logic can be so manipulated, depending on whatever axioms I choose, or choose to invent, this should be no problem, right? I define that colors are certain sizes, then compare that size to the axiom I invent for distance between a place and a date. My answer to this problem is "cell phone." Are you saying I'm wrong or right? Does it depend on what axioms I choose? No, because it's nonsense. But it's only nonsense if you have sense to begin with. You will not know that a line is crooked unless you have some idea of a straight line. But maybe I can choose my axioms which will make choosing axioms impossible and possible at the same time. This shouldn't be a problem, because there is no such thing as logic, right? By the way, I'd still love to see your response to the problem of induction as so eloquently categorized by Chesterton. This time without the ad hominem. Clive Hayden
BarryR,
There are no rules about how you pick (and modify) your rules. You’re free to believe otherwise, but I don’t think you’re going to be able to do so while having a successful research career.
I suppose "There are no rules about how you pick (and modify) your rules." is not supposed to be a rule either, then. Your entire comment amounts to the trivial conclusion that different rules apply depending on what you want to build, such as building a house made of glass instead of wood would have different methodologies. Of which, the house made of glass, you of all people shouldn't build. That would be my first rule. But choosing how you want to build anything, what sorts of materials to use, how large, how tall, etc., does not equate to suspending laws of reason and logic or mathematics. Your comment, quite frankly, changes the subject from real laws to conventions---which, the latter, can be altered for your own purposes. And I find it notable that you use the word "embarrassing" so often. Clive Hayden
BarryR, I should like to impress upon you the fact that transcendent 'logical' information, which is completely separate from matter and energy, runs the show as far as the operation of this universe is concerned, and that this fact is empirically verifiable by work that has been accomplished in quantum mechanics. Yet after reading your snobby condescending posts towards others, I think that the only thing that would be sure to impress you would be whenever you looked at yourself in the mirror. Thus I will refrain from wasting my time. bornagain77
BarryR,
I’ve been resisting a crack about how Dembski’s math only makes sense if you’re a mathematical illiterate, but I don’t know if I can hold out much longer….)
Go ahead, I'd like to ban you for your jeering. And quite frankly, discussing "rounding up" doesn't give me much confidence that you really understand his mathematics. John Lennox, you know who I mean, the Oxford mathematician, endorses Dembski's mathematics. But BarryR, who talks as though being inside or outside of school walls determines mathematical truths, and about banks rounding up, doesn't agree with Dembski's mathematics. Pardon me if I'm not impressed. Clive Hayden
"Borges, Orwell and Shaw"? Somebody please pick me up off the floor. allanius
StephanB@118
By arguing that logic and mathematics have no rules...
In a paper I published last year, I had words to the effect of "Define graph G as a directed, acyclic, weighted graph...". (Oh, wait --- you don't know graph theory. Don't worry, it's pretty easy. Have a look here. It won't hurt. I promise.) Now that you have an idea what those words mean.... There are no universal laws or rules that required I solve my problem using this particular kind of graph. Once I've identified the kind of graph I'm going to be using, my readers (including my reviewers) will have an expectation of the rules I'm going to follow. For example, acyclic graphs should really be free of cycles. There are a number of useful operations you can perform on acyclic graphs that only work in the absence of cycles (such as determining the critical path). If I want to break one of these rules, that's perfectly fine (although it's best that I do so explicitly). I might say "As the cycles in this graph represent a finite number of program iterations, we may trivially transform this graph to one free of cycles." My reviewer will read that and decide whether she believes it or not, and if she doesn't it's unlikely that the paper will be published. I can also add new rules, in this case mapping the graph onto the Cartesian plane. Once I've gone through establishing exactly what the rules are that I'll be using, I can make an argument that under these rules my conclusion becomes inevitable. I then need to convince my reviewer that this particular set of rules is a reasonable model for the real-world process I'm interested in, that it's novel, is something more than an incremental contribution, etc. If I've done all that, then I can have a reasonable expectation that the paper will be published. Another example: clock cycles in a computer are discrete --- as far as the computer can tell, there's no such thing as half a clock cycle. Let's say this is a universal rule. I published a paper in 2007 where I treated clock cycles as continuous, not discrete. 
Because I was free to disregard this universal rule, I was able to solve problems several orders of magnitude larger than what had been done using the discrete approach, and the error I introduced to do so was negligible enough not to be worth measuring. There are no rules about how you pick (and modify) your rules. You're free to believe otherwise, but I don't think you're going to be able to do so while having a successful research career. BarryR
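BarryR's directed, acyclic, weighted graph example can be made concrete. The graph below, its node names, and its weights are my own invention for illustration, not taken from his paper; the point is only that the critical-path computation he mentions relies on the graph being free of cycles, since a cycle would make "longest path" undefined.

```python
# A hypothetical directed, acyclic, weighted graph: an adjacency dict
# mapping each node to a list of (successor, weight) pairs.
graph = {
    "a": [("b", 3), ("c", 2)],
    "b": [("d", 4)],
    "c": [("d", 1)],
    "d": [],
}

def critical_path_lengths(g, start):
    """Longest weighted path from start to each reachable node.
    Only valid on acyclic graphs: a cycle would make 'longest' undefined."""
    # Topological order via postorder depth-first search.
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for succ, _ in g[n]:
            visit(succ)
        order.append(n)
    visit(start)
    # Relax edges in topological order to accumulate longest distances.
    best = {start: 0}
    for n in reversed(order):
        for succ, w in g[n]:
            if n in best:
                best[succ] = max(best.get(succ, 0), best[n] + w)
    return best

# Critical path a -> b -> d has total weight 3 + 4 = 7.
assert critical_path_lengths(graph, "a")["d"] == 7
```

If a cycle were added (say an edge from "d" back to "a"), the recursive visit would no longer terminate correctly, which is the operational sense in which acyclicity is one of the "rules" his reviewers expect him to follow or explicitly relax.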
Hold out? What an interesting position to explain yourself. Here all this time I thought you had been obvious. Rather like a ladle as opposed to a spoon. Upright BiPed
BarryR,
So yes, you are correct that, if you limit yourself to grammar-school arithmetic, 2+2 can only equal 4. You’ll have to take it on faith that beyond the walls of grammar schools this is not considered a universal law.
So it is a universal law in school, but not one outside of school? What does a building have to do with anything? Either 2+2=4 or it doesn't, no matter where it is found, inside or outside of a building.
Inference is just another tool, like logic, math, and experiment. Inferences are often wrong. Logic is often irrelevant, math is often intractable, and don’t get me started on experiments.
Do you have a total skepticism at bottom with regard to the cogency of these tools? I mean, surely, you're sophisticated enough to see that there is no hope for a total skepticism, that in the last regard, inference using the laws of logic must be absolute.
This is philosophy for people who won’t read philosophy, written by someone who didn’t read philosophy either. As rhetoric, this is good enough to be notable, but compared to contemporaries like Borges, Orwell and Shaw it’s frankly quite shallow.
When do you reckon you'll start making an argument outside of ad hominem? I'm not interested in talking about the man or the style in which he wrote, but rather the argument in The Ethics of Elfland about real laws, such as laws of reason and math, and weird repetitions in nature we call laws for shorthand.
For example:
Thus when Mr. H. G. Wells says (as he did somewhere), “All chairs are quite different”, he utters not merely a misstatement, but a contradiction in terms. If all chairs were quite different, you could not call them “all chairs.”
As wit, this works. As argument, nothing further need be said to dismiss it.
That must be why you didn't dismiss it with argument... When Chesterton says:
Other vague modern people take refuge in material metaphors; in fact, this is the chief mark of vague modern people. Not daring to define their doctrine of what is good, they use physical figures of speech without stint or shame, and, what is worst of all, seem to think these cheap analogies are exquisitely spiritual and superior to the old morality. Thus they think it intellectual to talk about things being "high." It is at least the reverse of intellectual; it is a mere phrase from a steeple or a weathercock. "Tommy was a good boy" is a pure philosophical statement, worthy of Plato or Aquinas.
He is exactly spot on. As wit this works, as argument, this works. Wit is argument. These elementary truths do not cease being truths because they are elementary. I notice a contempt for basic truths in your comments. As if being on a top floor means you don't need the foundation or the first floors. You cannot suspend yourself. If fundamentals go, all else falls. No matter to what degree of complication mathematics may reach, if the concept of numbers and the multiplication table go, all else will be in ruins. You have to have something as a basis or you will not continue in progress. Progress itself means that there is a basis unchanged. If an acorn grows into a beechwood it isn't progress, but mere change. The same is true for mathematics and moral philosophy and logic. If Logic and Inference are wrong, you can only discern this by more or better inference and logic. To scrap them won't get you any closer to any truth, because you won't be going anywhere. Indeed these are both presupposed in even determining that either of them was used in a wrong way by a faulty and tired human brain. But this can never be an indictment against inference and logic itself without being self referentially incoherent. Clive Hayden
markf@122 Exactly, and you also run into problems when trying to map a potentially infinite number of discrete objects onto a finite amount of silicon. #119 thru #121 The banking example also works. If you go to close out your account and you have $1.00 of principal and $0.002 of interest, how much money does the teller hand you? Depending on how nice the bank is, this will either be: 1.00 + 0.002 = 1.00 or 1.00 + 0.002 = 1.01 OH NOES!!!!! TEH UNIVERSIALITIFULLNESS OF ADDING STUFF HAS BEN VIOLATED!!!!11!!! In grammar school this is called "rounding", and if you can think back that far, you probably had to memorize that 0.5 was to be rounded up, not down, and that this was a completely arbitrary convention. An equivalent way of stating this is to define "+" in such a way that no rounding after the fact is necessary. For the tellers counting out change, the former is easier. For the people who are writing the bank's software, you have to deal with the latter whether you want to or not. (I'm fascinated --- in a horrified kind of way --- that you'd define mathematics as "that which is appropriate to use in a bank" and still not get it right. I've been resisting a crack about how Dembski's math only makes sense if you're a mathematical illiterate, but I don't know if I can hold out much longer....) BarryR
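The two banks in the comment above can be sketched with Python's decimal module. The rounding-mode names are real parts of the standard library; the balances are of course made up:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_UP

# $1.00 of principal plus $0.002 of interest.
total = Decimal("1.00") + Decimal("0.002")   # exactly 1.002

# A stingy bank truncates toward zero; a nice bank rounds any
# fraction of a cent up. Both are conventions applied after "+".
stingy = total.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
nice = total.quantize(Decimal("0.01"), rounding=ROUND_UP)

assert stingy == Decimal("1.00")
assert nice == Decimal("1.01")
```

Note that "+" itself is unchanged here; the quantize step is where the convention lives, which is the sense in which the choice between the two results is arbitrary rather than dictated by arithmetic.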
jstanley, it is perfectly clear that the point you make in 116/119 is completely lost on mark. mark, it's a scholarly neurosis, you should have it looked at. Upright BiPed
bornagain77: There is indeed an ongoing real-world situation in which an application of my illustration is anything but absurd. Namely, how banks worldwide are being allowed by their governments to mark their non-performing assets. Which according to the mathematical axioms of those governments, they are being allowed to mark as if they were performing. Those non-performing assets have already come down once on the financial system like a sledgehammer. If because of the flawed mathematical axioms with which they are being dealt with, they come down a second time in the same manner, the real-world results will be anything but trivial. (See Karl Denninger's A Round-Up Of Current Idiocy for more information.) jstanley01
LOL Stephen and jstanley01,, Hey, I found a place where the banks use 'flexible axioms': http://www.youtube.com/watch?v=wpJQ-HX3F8g bornagain77
#119 thru #121 But surely this is the point. Integer arithmetic as normally practiced is a really useful tool for discrete objects like money. It works less well for adding wind speeds together. An arithmetic that defines the addition operation with a weird exception is virtually no use. None of this shows that the axioms of integer arithmetic are necessarily true in any deep sense. They happen to be true of discrete objects. markf
Bank Teller follow up: "Of course I also dispute the law of non-contradiction. It is entirely possible that I gave you the wrong amount of change and that I also didn't. Therefore, your accusations are both true and false, and I am both guilty and not guilty of withholding the proper amount." StephenB
---Group Theorist: "Hold on a sec. I am not sure why, but this doesn't look right. I think I need to call my philosopher." Bank Teller: "There are no objective mathematical laws. According to your axioms, subjectively conceived, I didn't give you back enough change, but according to my axioms, subjectively conceived, I did. Now run along." StephenB
Bank Teller: "And here is your change: One, two, three, one hundred and forty-seven." Group Theorist: "Hold on a sec. I am not sure why, but this doesn't look right. I think I need to call my philosopher." ...sorry... jstanley01
---BarryR: "If you prefer to think of this as a mathematical law — one may choose one’s axioms — it’s perfectly permissible for you to do so." You are contradicting yourself again. A law, by definition, is a universally binding principle. An axiom, the way you are using it, is personal and individual. If logic and mathematics have laws, those laws can be violated and the violator can be in error. If logic and mathematics are individually conceived, there is no universal standard by which one can be said to be in error. By arguing that logic and mathematics have no rules, you are arguing that no one can, in fact, be wrong. At most, one could be internally inconsistent. Thus, by declaring that your adversaries are wrong about math and logic, you refute your own philosophy with every correspondence. StephenB
gpuccio@113
You have often referred to peer reviewed philosophical literature. What are the principles according to which a paper is peer reviewed?
You'd have to ask a philosopher. I can give you an expert opinion for computer science and an informed opinion for the sciences in general, but I don't know the liberal arts well enough to be able to summarize how they define quality.
Or are those criteria based, at least in principle, on objective rules (or, if you want, laws, or principles)
No, the process is subjective. In this regard it's no different from the sciences. Given enough time, the subjective opinions of experts will (usually) converge and good work gets published (eventually). If the process were objective, we wouldn't need peer review --- we'd be able to tell for ourselves when a manuscript was of high enough quality to warrant publication, and the review process would only involve making sure the margins and typeface were of the correct sizes. BarryR
Does group theory give any guidance at all about which axioms are the best to use when dealing with a bank teller? jstanley01
kairosfocus@111 You're almost there. Group theory gives us the tools to:
arbitrarily redefine the + operator such that x + x –> 147, is to use the symbol in a radically different and inconsistent way
although the only inconsistency here lies in expectations -- the math is perfectly consistent. The reason that I'm able to do this is because "+" is not some universal truth, but rather a well-defined mathematical relation that I'm free to replace with another well-defined mathematical relation. I'm really surprised so many people are having difficulty with this concept. A good middle-school geometry class should have been enough to clue people in that axioms are tools and we're free to pick our tools --- the constraints come in how those tools are able to interact. BarryR
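For readers who want to see the disputed redefinition as something executable, here is a minimal sketch in Python. The class name and mechanics are my own illustration of the (x + x) => 147 mapping from the thread; nothing in the sketch settles whether the resulting structure deserves to be called a group, which is exactly what the thread disputes.

```python
class G(int):
    """Integers with '+' redefined so that adding an element to
    itself maps to 147, per the (x + x) -> 147 definition above."""
    def __add__(self, other):
        if self == other:
            return G(147)
        return G(int(self) + int(other))

assert G(2) + G(2) == 147   # under the redefined operator
assert 2 + 2 == 4           # ordinary integer addition is untouched
```

The mapping is well-defined as a binary relation, but whether it preserves associativity, an identity element, and the other usual axioms is a separate question; both results above hold at once precisely because two different operators are in play, not one.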
Apologies for replying so late, after the conversation has moved on. 10 JDH the ability to disobey the Creator is the essence of consciousness. Otherwise it's just complicated programming with random choices. 19 jurassicmac your 'analogy' is also an "Argument from Incredulity." As BarryR has it, "Arguments from incredulity are almost always arguments from ignorance without the admission of ignorance". Given the background and experience JDH has, it seems presumptuous to accuse him of not knowing what he's talking about. I also believe there are certain things that simply cannot be coded. My belief is also backed up by years of writing software. how do you know that humans aren't following their 'program'? We don't really know with certainty, but I would argue that because we can sometimes tell we are following a "program", it's clear that we don't always follow a program. For example, we likely all have a program for getting ourselves from home to work. If home or work changes, it can happen that, if we're not paying attention, we find ourselves following the old program. Similarly, anyone who's ever tried to break a bad habit will know when they are following their "program" and when not. 24 gpuccio Point 3 is well made, although I would add that for the analogy of a computer to fit better we need to see that a computer has hardware, software, and a user who causes the software to execute. Sometimes the user is hardly needed, so it's easy to imagine that only hardware and software are at work. But other times what the software does changes dramatically based on a prior user action. The reason this whole issue is so tricky is that given _any_ sequence of user actions we can write software to perform those actions. IOW if a user sits down and writes an email with contents X, it would be trivial to then write software to produce X, but the software to produce X would also contain X. The results are the same but the processes are totally different.
With consciousness it seems to be that for every conscious decision I make that results in action, the same action could be taken by another man based on habit. But by making a conscious decision I've done something immeasurably different. Based on that, here is my attempt at a definition of consciousness. We have a physical nature which sends signals to, and receives signals from, the brain. The brain contains pre-defined sets of instructions which can be issued automatically in response to different kinds of input. Consciousness monitors this input and output and, most importantly, is capable of creating new sets of instructions based on the input and output. Consciousness also senses; that is, it senses good and bad. Once created, instructions can be re-used automatically by the brain. I would describe the instructions as CSI and I would describe the ability to create new CSI as the ability which we cannot pass on to machines (although I suspect that sensing good and bad is also something we cannot pass on). 6 BarryR 'If I were to participate in a Turing test, I'd be asking questions like "Of the two most recent questions you were asked, which was the more difficult to answer and why?"' That is indeed the sort of question to ask in a Turing test. What you would be attempting to do is create a question for which the creator of the machine did not think to provide an answer. That is, a machine may contain in its software the answers to a billion questions, so the way to defeat it is to create a new question. IOW a human can create CSI to which the only correct response is more CSI. A machine cannot create CSI; therefore a machine cannot answer such a question unless the answer was pre-programmed. Dunsinane
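Dunsinane's point about pre-programmed answers can be sketched as a toy lookup-table program. The table entries below are invented for illustration (one echoes the cat example from the post itself); the point is that any question the programmer did not anticipate falls through to a canned deflection.

```python
# A "chatbot" that can only answer questions its programmer anticipated.
CANNED_REPLIES = {
    "my cat died last week": "I am saddened by the death of your cat.",
    "how are you?": "I am fine, thank you.",
}

def reply(message):
    # Normalize, then look up; anything outside the table gets
    # a vague deflection rather than a genuine answer.
    key = message.strip().lower().rstrip(".")
    return CANNED_REPLIES.get(key, "That is interesting. Tell me more.")

assert reply("My cat died last week.") == "I am saddened by the death of your cat."
# A novel, self-referential question defeats the table:
assert reply("Of your last two questions, which was harder?") == "That is interesting. Tell me more."
```

However large the table grows, a question about the conversation itself, of the kind BarryR proposes, is outside it unless the programmer specifically anticipated that question too.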
BarryR: Just a simple question, to better understand your thought. You have often referred to peer reviewed philosophical literature. What are the principles according to which a paper is peer reviewed? I mean, if we agree that not just anything deserves to be published in a philosophical journal, are the criteria for peer review only conventional agreements? In that case, wouldn't such a system just enhance conformism, and censor true creativity? Or are those criteria based, at least in principle, on objective rules (or, if you want, laws, or principles), and in that case, what are they, and how are they found, or judged, or shared by the reviewers? Just to know. gpuccio
This may interest some here: Ravi Zacharias addresses an audience at Harvard regarding the law of non-contradiction: Jesus Christ and the Exclusive Claim http://www.youtube.com/watch?v=szr7hPuh81c bornagain77
F/N: Above BarryR indulges in the classic fallacy of equivocation. The + operator symbol has a normal meaning in mathematics, and this meaning is what is entailed when we observe that 2 + 2 = 4. To arbitrarily redefine the + operator such that x + x --> 147 is to use the symbol in a radically different and inconsistent way, imposing the result that [2 + 2 = 4] AND [2 + 2 = 147], i.e. we see the contradiction. Since mathematics is fundamentally about reasoning on the implications of structured defined operations on sets and properties of set members, it is accountable to both logic and the realities of definable collections of objects. In short, the exercise above is an evasive one. GEM of TKI kairosfocus
I didn’t see any 2?s there, and I specifically asked you not to change the number 2 into anything else, including symbols.
I apologize. I had made an assumption that you were comfortable with basic algebra and a bit of set theory. This is not the case, and as such there's simply no way you're going to be able to understand the point I'm making. So yes, you are correct that, if you limit yourself to grammar-school arithmetic, 2+2 can only equal 4. You'll have to take it on faith that beyond the walls of grammar schools this is not considered a universal law.
I said all knowledge, that is, things learned, including knowledge of science, requires inference. If inference goes, science goes with it.
If inference goes, your proof that science goes if inference goes would be the first (and last) thing to go. Inference is just another tool, like logic, math, and experiment. Inferences are often wrong. Logic is often irrelevant, math is often intractable, and don't get me started on experiments. If science required any of these be infallible, it would have disappeared in the 17th C. Because they are only tools, we use the best set of tools for a particular job, and despite the imperfections of our tools, the error bars keep getting smaller. As to Chesterton: I've just read the first 30 pages of _Orthodoxy_. This is philosophy for people who won't read philosophy, written by someone who didn't read philosophy either. As rhetoric, this is good enough to be notable, but compared to contemporaries like Borges, Orwell and Shaw it's frankly quite shallow. For example:
Thus when Mr. H. G. Wells says (as he did somewhere), "All chairs are quite different", he utters not merely a misstatement, but a contradiction in terms. If all chairs were quite different, you could not call them "all chairs."
As wit, this works. As argument, nothing further need be said to dismiss it. If you'd be so kind as to return the favor, please read this 8-page excerpt from Davis and Hersh's _The Mathematical Experience_. BarryR
Barry,
Define G over Z s.t. \forall x \in Z, (x+x)=>147. Or, a bit more verbosely, define a group called G over the integers such that for every element in G, we define the ‘+’ operator such that the operation of adding a number to itself maps to the number 147. Welcome to group theory.
I didn't see any 2's there, and I specifically asked you not to change the number 2 into anything else, including symbols. I noticed you ignored truncating the decimal of pi, also.
Interesting. Assuming that you aren’t asking that I take that on faith, how would you demonstrate this without using inference?
You have to take it on faith at the very bottom. And even if this conclusion were based, itself, on inference, what of it? I said all knowledge, that is, things learned, including knowledge of science, requires inference. If inference goes, science goes with it. Indeed the entire external world goes with it, for the external world is an inferred world.
A citation to the relevant literature will suffice. If Chesterton is the best you can do, then I think I'm going to remain unpersuaded.
I think, based on previous comments, you need to actually read it, not just be given a citation. And this is your response to Chesterton's argument? An ad hominem? You're not going to refute the actual argument? Typical. Pragmatism didn't work as a valid response, and now your ad hominem isn't faring any better. Clive Hayden
StephenB@107
I thought you said there were no mathematical laws.
I think "thought" is a little strong. In the English-speaking community, "permissible" can mean both positive permission and the absence of prohibition. If you prefer to think of this as a mathematical law --- one may choose one's axioms --- it's perfectly permissible for you to do so. BarryR
---BarryR; "That being said, it’s perfectly permissible to define a group over the integers that’s closed over division, simply by making an arbitrary mapping for the divide-by-zero case." Permissible? I thought you said there were no mathematical laws. Will you ever stop contradicting yourself? StephenB
I think the 'tension' has to do with much of what is going on with the 2 + 2 = 147 fiasco; For one camp math can mean anything you want it to mean and ends up proving just about anything you want to prove, much like string theory and currently Hawking's even more contrived imaginary god of M theory, with scant empirical evidence to back them up,,, and even some strong evidence to go against the conjectures; and for the other camp, they realize the truth that a stable universe is impossible unless some mathematical framework is indeed true! In fact the 'real world' is the bane of these imagined mathematical many worlds of M-theory now being touted by Hawking as his god. Notes: “The multiverse idea rests on assumptions that would be laughed out of town if they came from a religious text.” Gregg Easterbrook Another escape that materialists have postulated was a slightly constrained 'string-theoretic' multiverse. The following excerpt shows why the materialistic postulation of 'string theory' is, for all intents and purposes of empirical science, a complete waste of time and energy: Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law: Peter Woit, a PhD. in theoretical physics and a lecturer in mathematics at Columbia, points out—again and again—that string theory, despite its two decades of dominance, is just a hunch aspiring to be a theory. It hasn't predicted anything, as theories are required to do, and its practitioners have become so desperate, says Woit, that they're willing to redefine what doing science means in order to justify their labors. http://www.amazon.com/Not-Even-Wrong-Failure-Physical/dp/0465092756 Though to be fair, a subset of the math of the string hypothesis did get lucky with an interesting 'after the fact' prediction of an already known phenomenon: A first: String theory predicts an experimental result: Excerpt: Not to say that string theory has been proved.
Clifford Johnson of the University of Southern California, the string theorist on the panel, was very clear about that. http://www.symmetrymagazine.org/breaking/2009/02/16/a-first-string-theory-predicts-an-experimental-result/ Despite this seemingly successful 'after the fact' prediction/description of a physical phenomenon, string theory is suffering severe setbacks in other areas; thus string theory has yet to even establish itself as a legitimate line of inquiry within science. Testing Creation Using the Proton to Electron Mass Ratio Excerpt: The bottom line is that the electron to proton mass ratio unquestionably joins the growing list of fundamental constants in physics demonstrated to be constant over the history of the universe.,,, For the first time, limits on the possible variability of the electron to proton mass ratio are low enough to constrain dark energy models that “invoke rolling scalar fields,” that is, some kind of cosmic quintessence. They also are low enough to eliminate a set of string theory models in physics. That is, these limits are already helping astronomers to develop a more detailed picture of both the cosmic creation event and of the history of the universe. Such achievements have yielded, and will continue to yield, more evidence for the biblical model for the universe’s origin and development. http://www.reasons.org/TestingCreationUsingtheProtontoElectronMassRatio As well, even if the whole of string theory were found to be true, it does nothing to help the materialist, and in reality only adds another level of 'finely tuned complexity' for us to deal with, without ever truly explaining the origination of that logically coherent complexity (Logos) of string theory in the first place. Bruce Gordon, after thorough analysis of the entire string theory framework, states the following conclusion on page 72 of Robert J.
Spitzer's book 'New Proofs For The Existence Of God': 'it is clear that the string landscape hypothesis is a highly speculative construction built on shaky assumptions and,,, requires meta-level fine-tuning itself.' - Bruce Gordon The following article illustrates just how far string theory would miss the mark of explaining the fine-tuning we see even if it were found to be true: Baron Münchhausen and the Self-Creating Universe: Roger Penrose has calculated that the entropy of the big bang itself, in order to give rise to the life-permitting universe we observe, must be fine-tuned to one part in 10^10^123. Such complex specified conditions do not arise by chance, even in a string-theoretic multiverse with 10^500 different configurations of laws and constants, so an intelligent cause may be inferred. What is more, since it is the big bang itself that is fine-tuned to this degree, the intelligence that explains it as an effect must be logically prior to it and independent of it – in short, an immaterial intelligence that transcends matter, energy and space-time. (of note: 10^10^123 minus 10^500 is still, for all practical purposes, 10^10^123) http://www.evolutionnews.org/2007/06/baron_munchausen_and_the_selfc.html GRBs Expand Astronomers' Toolbox - Nov. 2009 Excerpt: a detailed analysis of the GRB (Gamma Ray Burst) in question demonstrated that photons of all energies arrived at essentially the same time. Consequently, these results falsify any quantum gravity models requiring the simplest form of a frothy space. http://www.reasons.org/GRBsExpandAstronomersToolbox Systematic Search for Expressions of Dimensionless Constants using the NIST database of Physical Constants Excerpt: The National Institute of Standards and Technology lists 325 constants on their website as ‘Fundamental Physical Constants’. Among the 325 physical constants listed, 79 are unitless in nature (usually by defining a ratio).
This produces a list of 246 physical constants with some unit dependence. These 246 physical constants can be further grouped into a smaller set when expressed in standard SI base units.,,, http://www.mit.edu/~mi22295/constants/constants.html “If we modify the value of one of the fundamental constants, something invariably goes wrong, leading to a universe that is inhospitable to life as we know it. When we adjust a second constant in an attempt to fix the problem(s), the result, generally, is to create three new problems for every one that we “solve.” The conditions in our universe really do seem to be uniquely suitable for life forms like ourselves, and perhaps even for any form of organic complexity." Gribbin and Rees, “Cosmic Coincidences”, p. 269 etc.. etc.. etc.. bornagain77
Define G over Z s.t. \forall x \in Z, (x+x)=>147
I am not so certain this has occurred to you; in fact, it seems quite certain that it hasn't. But your work is not "two plus two equalling one-hundred and forty-seven". Perhaps you just got in a hurry, and forgot the question:
Can you make 2+2=147?
If you clear your head, then maybe you can go back and do it again? Upright BiPed
markf@98
I first realised that maths is not a description of some abstract world but a tool when I was introduced to complex numbers. i is an invention not a discovery. Then I gradually realised that is true of all of maths.
There's an interesting tension here. Neoplatonist mathematicians (and they are the majority) believe their mathematics exists a priori and their work consists entirely of mapping out this existing terrain. However, as you point out, to the rest of us it does appear that mathematicians are creating new mathematics and then exploring the consequences of them. I've argued both sides, and I've found both models useful at different times in my own work. What gets folks like CH into trouble is thinking that because they have a firm grasp on arithmetic, they can declare 2+2=4 to be some sort of universal truth. You don't see working mathematicians making this argument --- they know that in some systems this statement is consistent, in other systems it's not, and in still other systems it's simply meaningless. When a mathematician argues for a universal truth, it's usually in the form of entire systems of systems (Turing's solution to the halting problem comes to mind). Given CH's level of mathematical sophistication, I'd rather make the argument for arbitrary axioms using group theory and geometry rather than argue for discovered axioms using nonrecursive functions. So, there are good arguments on both sides; CH just doesn't know where they are. BarryR
CH@91
Show me how it’s done please. Show me how 2+2=147.
Define G over Z s.t. \forall x \in Z, (x+x)=>147. Or, a bit more verbosely, define a group called G over the integers such that for every element in G, we define the '+' operator such that the operation of adding a number to itself maps to the number 147. Welcome to group theory. This particular group isn't very interesting, but neither is it wrong. Once you've chosen your axioms and defined your systems, *then* you can start saying whether or not arbitrary statements can be derived from them. The more common example is parallel lines. You can construct useful geometries where they exist and other useful geometries where they are forbidden. Neither type of system is wrong.
I’d like to see how you can divide by zero, while we’re at it
32-bit computers handle this as follows: define a group G over the floating-point values F defined by IEEE 754 such that for all x in F, x/0 is defined to fall within the bit-pattern range 0x7F800000 to 0xFFFFFFFF (the infinities and NaNs). That being said, it's perfectly permissible to define a group over the integers that's closed under division, simply by making an arbitrary mapping for the divide-by-zero case. You may have to be clever in order to preserve the other properties of the group that you want to make use of, and I'd consider the IEEE 754 solution to be pretty clever.
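Those IEEE 754 bit patterns can be inspected directly. (Python itself raises `ZeroDivisionError` for a literal `1.0 / 0.0`, but the encodings the comment refers to are visible through the standard `struct` module; `bits` is just a helper name introduced here.)

```python
import struct

def bits(x: float) -> int:
    """Return the IEEE 754 single-precision bit pattern of x as an int."""
    return struct.unpack('>I', struct.pack('>f', x))[0]

# +infinity occupies exactly 0x7F800000, -infinity 0xFF800000;
# NaNs fill the patterns with an all-ones exponent field and a
# nonzero fraction field.
print(hex(bits(float('inf'))))    # -> 0x7f800000
print(hex(bits(float('-inf'))))   # -> 0xff800000
print(hex(bits(float('nan'))))    # a quiet NaN, typically 0x7fc00000
```

So "x/0" in IEEE 754 hardware really is an arbitrary-but-consistent mapping into a reserved region of the representation, which is the point being made.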
I mean, if the particulars of math depend on man’s convention, this should be no problem.
Indeed.
Saying logic is “nice and useful” makes it sound as if it is not the whole show in science.
I'm glad that came across clearly.
If our ability of inference goes, science goes with it, for all knowledge whatsoever depends on our powers of inference.
Interesting. Assuming that you aren't asking that I take that on faith, how would you demonstrate this without using inference? I think the "all knowledge" bit is going to give you some trouble there. Assuming you're not planning on demonstrating this for every particular bit of knowledge, you're going to need a method to infer from the cases you do prove to the rest of the cases, yet this inferring cannot use inference (otherwise, your conclusion shows your proof is invalid). Didn't /quite/ think that one through, did you?
I really don’t want to have to explain the difference between descriptions of nature and real laws of logic and reason a third time.
A citation to the relevant literature will suffice. If Chesterton is the best you can do, then I think I'm going to remain unpersuaded. BarryR
Correction: I mistakenly said that Dr. Bradley did not imply that math determined 'reality', when in fact he did make that point in a fairly strong fashion. bornagain77
markf: I am sure Barryr can explain this much better than me – but the point is that you can throw in almost any axiom in mathematics and get consistent results because you also get to define what consistency is. I have to disagree. Non-contradiction is fundamental for all historical mathematics, and Gödel still based his fundamental theorem on the concepts of completeness and consistency, relating them in his well-known formulation. I am aware that, in the last few decades, some have tried to develop forms of "Inconsistent Mathematics", but again, what has not been done by philosophers in the last few decades? gpuccio
markf, you state in regards to math: "i is an invention not a discovery. Then I gradually realised that is true of all of maths." Actually, the square root of negative 1 (i) was developed (invented) to solve problems, I believe, in algebra, but is now known to be essential for describing certain actions in quantum mechanics:

Michael Denton - Mathematical Truths Are Transcendent And Beautiful - Square root of -1 is built into the fabric of reality - video http://www.metacafe.com/watch/4003918

In fact Dr. Sewell also points out at the end of the following audio that Schrodinger's equation is also built into the fabric of reality, as it were, dictating exactly how 'reality' will behave:

Finely Tuned Big Bang, Elvis In The Multiverse, and the Schroedinger Equation - Granville Sewell - video http://www.metacafe.com/watch/4233012

Dr. Bradley also weighs in here, but not to the point of saying that math determines reality:

The Underlying Mathematical Foundation Of The Universe - Walter Bradley - video http://www.metacafe.com/watch/4491491

The Five Foundational Equations of the Universe and Brief Descriptions of Each: http://docs.google.com/Doc?docid=0AYmaSrBPNEmGZGM4ejY3d3pfNDdnc3E4bmhkZg&hl=en

markf, myself I hold that a line cannot be consistently straight within the space-time of this universe unless the universe is consistently 'geometrically flat', which it is:

Did the Universe Hyperinflate? - Hugh Ross - April 2010 Excerpt: Perfect geometric flatness is where the space-time surface of the universe exhibits zero curvature (see figure 3). Two meaningful measurements of the universe's curvature parameter, ½k, exist.
Analysis of the 5-year database from WMAP establishes that -0.0170 < ½k < 0.0068. Weak gravitational lensing of distant quasars by intervening galaxies places -0.031 < ½k < 0.009. Both measurements confirm the universe indeed manifests zero or very close to zero geometric curvature… http://www.reasons.org/did-universe-hyperinflate

The extraordinary degree of fine-tuning that ensures the universe is consistently flat is of no small wonder. I also hold that a circle could not be consistently round unless space-time was spherical, which is exactly what it turns out to be:

The Known Universe by AMNH http://www.youtube.com/watch?v=17jymDn0W6U

Again, the stunning degree of fine-tuning that ensures the expansion of space-time is spherical is of no small wonder:

Proverbs 8:26-27 While as yet He had not made the earth or the fields, or the primeval dust of the world. When He prepared the heavens, I was there, when He drew a circle on the face of the deep,

Thus instead of holding the absurd position of mathematics 'being an invention' instead of a discovery, as you currently do, I hold that 'transcendent math', as laid out by the infinite Mind of God, actually does dictate how 'material' reality will behave.

"The reason that mathematics is so effective in capturing, expressing, and modeling what we call empirical reality is that there is an ontological correspondence between the two - I would go so far as to say that they are the same thing." Richard Sternberg - Pg. 8, How My Views On Evolution Evolved

In fact, though not yet proved rigorously, I hold that the following equation cannot be true unless it actually does dictate how reality is constructed at its most foundational level:

Euler's Number - God Created Mathematics - video http://www.metacafe.com/watch/4003905

This following website has the complete working out of the math of Pi and e in the Bible, in the Hebrew and Greek languages respectively, for Genesis 1:1 and John 1:1: http://www.biblemaths.com/pag03_pie/ bornagain77
I notice that some bloggers are, once again, trying to argue that the laws of logic do not apply to the real world. On the contrary, a sound premise [one which reflects the real world] followed by valid reasoning [the mind's process faithfully applied] will always lead to a sound conclusion [which also reflects the real world]. Thus, if it appears that the conclusion is not sound, then the premises or the conditions have been changed, morphed, or tampered with. StephenB
#96 Fair enough. I didn't realise he was just saying some things are true by definition. To say "if A is older than B then B is younger than A" is just to say something about the meaning of the words. It says nothing about the world. I thought he was trying to say that "if A is older than B then B will be younger than A necessarily at all times".

#97 gpuccio: I am sure BarryR can explain this much better than me - but the point is that you can throw in almost any axiom in mathematics and get consistent results because you also get to define what consistency is. I first realised that maths is not a description of some abstract world but a tool when I was introduced to complex numbers. i is an invention not a discovery. Then I gradually realised that is true of all of maths. markf
This is really funny: “We define addition of any two identical integers to be equivalent to 147.” Yes, and we can define consciousness as a third order model, and free will as a form of determinism. What a pity that, at least in mathematics, when you build a system on some arbitrary axioms you need to be careful not to get inconsistent results. What an exaggerated restriction to our freedom with words... gpuccio
markf,
This is a good example. According to special relativity theory if the ugly sisters went a long way at a very fast speed and returned centuries later they would find Cinderella had aged a lot more than they had. What appeared to be logically necessary turns out not to be.
Of course what he meant was in the context of the story of Cinderella, where there is no mention of traveling at fast speeds and long distances at that speed. He also knew about the Heisenberg uncertainty principle, which bodes even worse for the materialist.

"If there are two staring and outstanding facts about science and religion at this particular moment, they are these. First, that science is claiming much less than it did to show us a solid and objective reality. And second, that religion is claiming much more than it did (at least for centuries past) that its miracles and marvels of mystical experience can be proved to exist as a solid and objective reality. On the one side, the Atom has entirely lost the objective solidity it had for the nineteenth-century materialists. On the other side, the Ascension is accepted as a case of Levitation by many who would not accept it as an Ascension. On the one hand, the science of physics has almost become a science of metaphysics. For it is not merely, as is often said, that the Atom has become an abstract mathematical formula; it is almost as true to say that it has become a mere algebraic symbol. For the new physicists tell us frankly that what they describe is not the objective reality of the thing they observe; that they are not examining an object as the nineteenth century materialists thought they were examining an object. Some of them tell us that they are only observing certain disturbances or distortions, actually created by their own attempt to observe. Eddington is more agnostic about the material world than Huxley ever was about the spiritual world. A very unfortunate moment at which to say that science deals directly with reality and objective truth." G. K. Chesterton, The Well and the Shallows Clive Hayden
The example in 93 removes the conditional, so that it is no longer a logical necessity that Cinderella is younger than the Ugly Sisters. At the point in the "very fast speed experiment" that the Ugly Sisters are no longer older than Cinderella, Cinderella is necessarily not younger than the Ugly Sisters. Logical necessity still holds true. How could it not?!?!? If you remove the conditional, the conclusion no longer necessarily follows. CJYman
#93 In other words, "all that is required to prove this example wrong is that which has never happened, and perhaps never can." Nice. Upright BiPed
#91 "For instance, if the Ugly Sisters are older than Cinderella, it is (in an iron and awful sense) NECESSARY that Cinderella is younger than the Ugly Sisters." This is a good example. According to special relativity theory if the ugly sisters went a long way at a very fast speed and returned centuries later they would find Cinderella had aged a lot more than they had. What appeared to be logically necessary turns out not to be. markf
"In this scientific area, [molecular biology and genetics] we don't get to pick our own laws (either mathematical or moral). We have to deal with the evidence in front of us. For this domain, our knowledge does converge on what is actually out there." You certainly take an awful lot for granted. Evidence must be interpreted. Inasmuch as you disavow reason's rules, what standards do you use to interpret evidence? If, as you believe, something can come into existence without a cause, how do you discern which effects were caused and which ones were not? How could you do science if there are some things that appear to have been caused but were really not caused at all? StephenB
BarryR,
Mathematics is relative? Depends. Relative to what? Can you make 2+2=147? Yes. Trivially. That’s the nice thing about math — you get to pick your axioms. So, if you like, you can treat 2+2=147 as an axiom and off you go. Here’s how: “We define addition of any two identical integers to be equivalent to 147.” There’s nothing in mathematics that prevents you from doing this.
Show me how it's done please. Show me how 2+2=147. Not any other numbers, nor any other integers, nor any other symbols, but 2+2=147. I'd like to see how you can divide by zero, while we're at it, and can you truncate the decimal of pi for me please? I don't mean by just cutting it off or stopping it short, I mean by actually finishing it in just a few decimal places. I mean, if the particulars of math depend on man's convention, this should be no problem. I'd also like to see a physical representation of the square root of two, that is, I'd like to see the actual physical object of it.

The digression to pragmatism, the need to remind me of "using whatever works" as a philosophy, is usually the default position of people who either don't really grasp what I'm saying, or, if they do, don't care for it, and are trying, in a roundabout way, to claim that since these descriptions of nature are useful, they therefore trump any other difficulties that may arise in our real knowledge of these descriptions. As if being useful were supposed to be taken as being therefore true or better or worthy or some such parallel that is implied. This is a typical response. It has all of the marks of someone trying to maintain their preferred position of elevating our descriptions of the natural world above even our own real understanding of metaphysical things that we actually understand as a chain of reasoning.

Saying logic is "nice and useful" makes it sound as if it is not the whole show in science. If our ability of inference goes, science goes with it, for all knowledge whatsoever depends on our powers of inference. I really don't want to have to explain the difference between descriptions of nature and real laws of logic and reason a third time. But I think that is what it takes sometimes to pull someone out of their philosophical scientism.
It's ironic, because scientism is itself not scientific, it is a metaphysical philosophy, which is itself not a physical object, and therefore it can actually be understood, no matter how misguided it is. Which brings me to the point. "It might be stated this way. There are certain sequences or developments (cases of one thing following another), which are, in the true sense of the word, reasonable. They are, in the true sense of the word, necessary. Such are mathematical and merely logical sequences. We in fairyland (who are the most reasonable of all creatures) admit that reason and that necessity. For instance, if the Ugly Sisters are older than Cinderella, it is (in an iron and awful sense) NECESSARY that Cinderella is younger than the Ugly Sisters. There is no getting out of it. Haeckel may talk as much fatalism about that fact as he pleases: it really must be. If Jack is the son of a miller, a miller is the father of Jack. Cold reason decrees it from her awful throne: and we in fairyland submit. If the three brothers all ride horses, there are six animals and eighteen legs involved: that is true rationalism, and fairyland is full of it. But as I put my head over the hedge of the elves and began to take notice of the natural world, I observed an extraordinary thing. I observed that learned men in spectacles were talking of the actual things that happened--dawn and death and so on--as if THEY were rational and inevitable. They talked as if the fact that trees bear fruit were just as NECESSARY as the fact that two and one trees make three. But it is not. There is an enormous difference by the test of fairyland; which is the test of the imagination. You cannot IMAGINE two and one not making three. But you can easily imagine trees not growing fruit; you can imagine them growing golden candlesticks or tigers hanging on by the tail. These men in spectacles spoke much of a man named Newton, who was hit by an apple, and who discovered a law. 
But they could not be got to see the distinction between a true law, a law of reason, and the mere fact of apples falling. If the apple hit Newton's nose, Newton's nose hit the apple. That is a true necessity: because we cannot conceive the one occurring without the other. But we can quite well conceive the apple not falling on his nose; we can fancy it flying ardently through the air to hit some other nose, of which it had a more definite dislike. We have always in our fairy tales kept this sharp distinction between the science of mental relations, in which there really are laws, and the science of physical facts, in which there are no laws, but only weird repetitions. We believe in bodily miracles, but not in mental impossibilities. We believe that a Bean-stalk climbed up to Heaven; but that does not at all confuse our convictions on the philosophical question of how many beans make five. Here is the peculiar perfection of tone and truth in the nursery tales. The man of science says, "Cut the stalk, and the apple will fall"; but he says it calmly, as if the one idea really led up to the other. The witch in the fairy tale says, "Blow the horn, and the ogre's castle will fall"; but she does not say it as if it were something in which the effect obviously arose out of the cause. Doubtless she has given the advice to many champions, and has seen many castles fall, but she does not lose either her wonder or her reason. She does not muddle her head until it imagines a necessary mental connection between a horn and a falling tower. But the scientific men do muddle their heads, until they imagine a necessary mental connection between an apple leaving the tree and an apple reaching the ground. They do really talk as if they had found not only a set of marvellous facts, but a truth connecting those facts. They do talk as if the connection of two strange things physically connected them philosophically. 
They feel that because one incomprehensible thing constantly follows another incomprehensible thing the two together somehow make up a comprehensible thing. Two black riddles make a white answer. In fairyland we avoid the word "law"; but in the land of science they are singularly fond of it. Thus they will call some interesting conjecture about how forgotten folks pronounced the alphabet, Grimm's Law. But Grimm's Law is far less intellectual than Grimm's Fairy Tales. The tales are, at any rate, certainly tales; while the law is not a law. A law implies that we know the nature of the generalisation and enactment; not merely that we have noticed some of the effects. If there is a law that pick-pockets shall go to prison, it implies that there is an imaginable mental connection between the idea of prison and the idea of picking pockets. And we know what the idea is. We can say why we take liberty from a man who takes liberties. But we cannot say why an egg can turn into a chicken any more than we can say why a bear could turn into a fairy prince. As IDEAS, the egg and the chicken are further off from each other than the bear and the prince; for no egg in itself suggests a chicken, whereas some princes do suggest bears. Granted, then, that certain transformations do happen, it is essential that we should regard them in the philosophic manner of fairy tales, not in the unphilosophic manner of science and the "Laws of Nature." When we are asked why eggs turn to birds or fruits fall in autumn, we must answer exactly as the fairy godmother would answer if Cinderella asked her why mice turned to horses or her clothes fell from her at twelve o'clock. We must answer that it is MAGIC. It is not a "law," for we do not understand its general formula. It is not a necessity, for though we can count on it happening practically, we have no right to say that it must always happen. 
It is no argument for unalterable law (as Huxley fancied) that we count on the ordinary course of things. We do not count on it; we bet on it. We risk the remote possibility of a miracle as we do that of a poisoned pancake or a world-destroying comet. We leave it out of account, not because it is a miracle, and therefore an impossibility, but because it is a miracle, and therefore an exception. All the terms used in the science books, "law," "necessity," "order," "tendency," and so on, are really unintellectual, because they assume an inner synthesis, which we do not possess. The only words that ever satisfied me as describing Nature are the terms used in the fairy books, "charm," "spell," "enchantment." They express the arbitrariness of the fact and its mystery. A tree grows fruit because it is a MAGIC tree. Water runs downhill because it is bewitched. The sun shines because it is bewitched. I deny altogether that this is fantastic or even mystical. We may have some mysticism later on; but this fairy-tale language about things is simply rational and agnostic. It is the only way I can express in words my clear and definite perception that one thing is quite distinct from another; that there is no logical connection between flying and laying eggs. It is the man who talks about "a law" that he has never seen who is the mystic. Nay, the ordinary scientific man is strictly a sentimentalist. He is a sentimentalist in this essential sense, that he is soaked and swept away by mere associations. He has so often seen birds fly and lay eggs that he feels as if there must be some dreamy, tender connection between the two ideas, whereas there is none. A forlorn lover might be unable to dissociate the moon from lost love; so the materialist is unable to dissociate the moon from the tide. In both cases there is no connection, except that one has seen them together. 
A sentimentalist might shed tears at the smell of apple-blossom, because, by a dark association of his own, it reminded him of his boyhood. So the materialist professor (though he conceals his tears) is yet a sentimentalist, because, by a dark association of his own, apple-blossoms remind him of apples." G.K. Chesterton, The Ethics of Elfland, Orthodoxy. http://www.gkc.org.uk/gkc/books/ortho14.txt Clive Hayden
---BarryR: "Based on several hundred hours of sitting at the feet of the masters, I can tell you that law and justice are elastic, inexact, and approximate. I prefer them that way — it makes the practice of law a very human activity." The issue is whether civil law and civil standards of justice are based on objective moral standards or whether they are based on the whim of popular opinion or judicial tyrants. If it is the latter, then justice is whatever the tyranny of the majority or judicial fiat says it is. StephenB
---BarryR: "As to the rest, I don’t have any difficulty avoiding self-referential incoherence in my arguments." The problem is that you don't recognize the incoherence. ---"It’s not hard. When I write a journal article my reviewers and I share a common set of assumptions that don’t have to be demonstrated ab initio." What assumptions do you share with your reviewers? Do they, like you, think that something could come from nothing? Do they, like you, think that a cause can give something that it does not have to give? Do they, like you, think that there is no "law" of non-contradiction and, therefore, a thing could, in principle, exist and not exist at the same time? Do they, like you, agree that 2+2 can equal anything you like? If so, you certainly enjoy a homogeneous audience of radical hyperskeptics. ---"The reason we share those assumption is because they work well for what we’re trying to accomplish." I am reading you right now and I do not share your assumptions. Presumably, there are others. How do you know your assumptions are working well if you have no reasonable, objective, and non-negotiable standards for defining "well." Who decides whether or not your assumptions are working? I submit that they are not working well at all. How could someone reasonably arbitrate or adjudicate our disagreements if there are no non-negotiable rules of right reason to oversee the arbitration? StephenB
Mathematics is relative?
Depends. Relative to what?
Can you make 2+2=147?
Yes. Trivially. That's the nice thing about math --- you get to pick your axioms. So, if you like, you can treat 2+2=147 as an axiom and off you go. Here's how: "We define addition of any two identical integers to be equivalent to 147." There's nothing in mathematics that prevents you from doing this. It's not a very interesting example; with the freedom to choose your axioms comes the responsibility to choose interesting, productive ones.

Here's a less trivial example: Do you believe that a+b+c = c+b+a? That might be a reasonable-looking axiom, but it's not one your computer is able to handle. So your computer uses a set of mathematical axioms that allows the order of the terms in addition to affect the outcome. That's not right or wrong, it's just a different system.

As to the rest, I don't have any difficulty avoiding self-referential incoherence in my arguments. It's not hard. When I write a journal article my reviewers and I share a common set of assumptions that don't have to be demonstrated ab initio. The reason we share those assumptions is that they work well for what we're trying to accomplish. Induction is one of those assumptions. We don't need to be able to provide a proof that it works before we are able to make use of it. If you read Hume on induction (and the problem of induction is credited to him) you'll find that he was equally pragmatic. Logic is nice and often useful, but it's just one tool among many that I can bring to bear on my research. Pragmatism is another tool. Intuition is another.

All that being said, I couldn't make a whole lot of sense out of your post. Can you sum up your point in a couple of sentences? Is it anything beyond "science doesn't provide certainty"? BarryR
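The claim that a+b+c need not equal c+b+a on a computer is easy to reproduce with ordinary double-precision floats, where left-to-right evaluation order determines which low-order bits survive rounding (a minimal sketch; the values are chosen to make the effect visible):

```python
# Left-to-right evaluation: a+b+c is (a + b) + c, while
# c+b+a is (c + b) + a. With doubles, the order matters.
a, b, c = 1e16, -1e16, 1.0

left_to_right = a + b + c   # (1e16 - 1e16) + 1.0  ->  1.0
reversed_order = c + b + a  # (1.0 - 1e16) + 1e16  ->  0.0
                            # (the 1.0 is absorbed: 1.0 - 1e16
                            #  rounds back to exactly -1e16)
print(left_to_right, reversed_order)
```

The two results differ because 1e16 is so large that adding 1.0 to it falls below the spacing between adjacent representable doubles, which is exactly the kind of axiom-level difference between machine arithmetic and schoolbook arithmetic being described.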
BarryR, Mathematics is relative? Can you make 2+2=147? I don't think you're really understanding the full weight of the problem of induction in relation to all things material and the actual understanding of laws of reason and why we don't see the two the same way. I know you think that you've not only observed things physically connected, but you've been fooled that you understand the connection of material things by description in the same way that you understand the connection of metaphysical things philosophically. As if two black riddles put together makes a white answer, as if seeing things connected physically means you understand them as ideas or philosophically, you don't. Finding a set of facts connected in the material world is not the same as finding a truth connecting those facts. This is what I mean when I say it is not logically impossible that birds give live birth or that the sun be further away, or whatever you like in nature to change; this doesn't mean that 2+2 can equal 947. We do not understand material things in the same way. I think you've got it exactly backwards. And all argument implies a real truth the arguer is approximating to, used when trying to convince the other that they are "right", there is no hope in arguing in any other way or of denying this without becoming self referentially incoherent. Any argument against it has to presuppose it. But, it should be noted, which is the very crux of the difference, there is no equivalent ability to argue about the ideas behind the material connections, for we are only able to argue from observations of descriptions, not material or physical laws that we perceive as, itself, a chain of reasoning; because we do not perceive them in that way, we only observe them. We cannot get behind nature's curtain in that respect. It is not logically impossible, so far as we can tell, for anything in nature to have been otherwise, it is logically impossible for 2+2 to equal 9,076.42. 
We believe in bodily miracles, we do not believe in mental impossibilities. Clive Hayden
CH@85 I'm sorry, but I think you've got this completely backwards. I can understand why you would hold these ideas, but I think if you were to study any of them in depth you'd come to see them very differently. 1. Math and logic. There are no "laws of math and logic". There are axioms and ways of relating axioms to each other, but you're free to choose your axioms and your relations to suit whatever it is you're trying to accomplish. I agree that this is not what is taught in high school mathematics, or even undergraduate mathematics for non-math majors. But once you hit the mathematics that mathematicians do for a living, I don't believe you're going to find any "actual laws". Davis and Hersh's _The Mathematical Experience_ is a layman's introduction to the philosophy of mathematics as well as the practice and sociology of mathematics. For a bit less money, OxfordUP puts out an "A very short introduction" series, and the one on mathematics touches these issues as well. 2. Law and justice. Several years ago I discovered oyez.org, a repository of oral arguments before the US Supreme Court. I fell in love. I no longer listen to music when I program; instead, I start up an MP3 and listen to an hour's worth of learned but fragmentary debate about anything from Guantanamo Bay to the legal definition of gravel. (They're also great for driving, but after seven years my girlfriend still can't make it through half of an argument without falling asleep.) Based on several hundred hours of sitting at the feet of the masters, I can tell you that law and justice are elastic, inexact, and approximate. I prefer them that way --- it makes the practice of law a very human activity. If you'd rather not dive into the oral arguments, Justice Breyer gave two marvelous lectures at Yale where he described his views of what the court does and how it does it. Searching under his name at Youtube should bring them up in short order. 3. 
However, we have an embarrassment of riches when it comes to explaining why and how birds lay eggs. We can start at the chemical level and work up to the proteins and carbohydrates involved. We can zoom out a bit and explain what's happening at a subcellular and cellular level. We can detail the hormonal regulatory systems and nutrition pathways. We can examine the details of the skeletal, muscular and sexual organs. We can observe mating pairs and the larger populations. Finally, we have tools in molecular biology and genetics that allow us to zoom out and reason about not only the entire population, but the species as a whole and how the species relates to all other life on earth, past, present and future. In this scientific area, we don't get to pick our own laws (either mathematical or moral). We have to deal with the evidence in front of us. For this domain, our knowledge does converge on what is actually out there. BarryR
BarryR,
Still, you do have a point. We can’t perceive the world, we can only perceive models of it. But since we can improve these models to get a continuously improving approximation of the world, this doesn’t turn out to be a practical problem. (I think of this position as ontological realism — there really is a world — combined with epistemological anti-realism — we can only know the world through our models of it.)
Maybe I'm not making myself clear. In actual laws of logic and reason, and in things like justice, freedom, dignity, morality, mathematics, we can understand why one thing leads to another, and therefore why they are connected. We can understand and explain the reasonableness of the connection. We can see why it is reasonable to send a pickpocket to jail; we take liberty away from a man who takes liberties. We can see the mental connection between the pickpocket and the prison as a set of facts. Keep in mind, all of what I have just described, laws of reason and logic, justice and so on, are metaphysical and immaterial, we perceive the reasonableness of these things philosophically. However, we do not have the same insight as to why a bird that flies also lays eggs, or why any two things connected physically must be connected philosophically. We do not have the same insight into why one natural material thing, taken as an idea, must lead to another, either as a continuation of that idea, or for some other reason, because we cannot understand them in terms of reason, or their connection in those terms, in the same way as we can understand and explain the reasonable connection between metaphysical things. In laws of reason and logic and mathematics and morality, there are real laws, laws that we can perceive the reasonableness of with our reason. We have no equivalent insight into what we call the laws of nature, or why any two things must be connected that follow one another in the natural world, all we can do is describe them. But a description doesn't amount to a proscription. The material world is a mystery on the level of such explanations, all we can hope to do is describe it, whereas the vision of life (reason, purpose, morality, etc.) is the reality. It is mentally impossible that 2+2=147, but it is not mentally impossible that a bird would give live birth, or that grass be a different color. 
We can believe in physical oddities, but not in mental impossibilities. Clive Hayden
CH@80
We can have knowledge of a description of matter, we never have an actual understandable explanation of matter because of the problem of induction in studying anything material or physical.
That's true if you treat explanations as binary, but no one does. We have better explanations and worse explanations, and scientific explanations (and others) have the property that they can be improved. Still, you do have a point. We can't perceive the world, we can only perceive models of it. But since we can improve these models to get a continuously improving approximation of the world, this doesn't turn out to be a practical problem. (I think of this position as ontological realism --- there really is a world --- combined with epistemological anti-realism --- we can only know the world through our models of it.) BarryR
mullerpr@78 Do you remember this xkcd? This:
both the Zombie and “5 minute old reality” notions have no logical impact on epistemology
That ranks right up there with "Have you tried logarithms?" Yes, I am a physicalist when it comes to philosophy of mind and philosophy of science. Not so much philosophy of mathematics. I suppose you'll be launching into your potted jeremiad against physicalism now... I'll read it if you promise not to use the words "logical" or "epistemology". BarryR
CharlesJ@76 Ok, pointers into the Philosophy of Mind literature: The two best places to start are Dennett's _Consciousness Explained_ and Jaegwon Kim's _Physicalism, Or Something Near Enough_. Dennett is unusual for a couple of reasons: he knows how to write for a popular audience, and he's grounding his philosophical views in the most recent neuroscience. He's probably going to be shown to be wrong in most of his details, and this is something he'll happily admit to --- and he suggests a series of feasible experiments that could be used to disconfirm his hypotheses (not something philosophers are known for). Aside from that, he'll give you a good grounding in the current state of the several debates in the field, particularly the "Cartesian Theater" and the zombie problems. He's also quite good on the implications of the Turing Test, but that essay is in _Brainchildren_. To counterbalance Dennett, read Jaegwon Kim. His book is unusually well-written but targets the advanced undergrad or new graduate student --- you'll need to pay more attention, but it's worth it. He does more of a historical survey, and this puts the debates that Dennett identifies in a context that explains why they're considered important. I'd like to be able to recommend a good dualist, but I don't know of one. I find Wikipedia to be competent in this area, but not much more. For quick general information, the Stanford Encyclopedia of Philosophy is embarrassingly good. BarryR
---BarryR: "The idea of an "I" separate from the physical brain that can nevertheless influence the brain has fallen out of favor because we've not been able to locate any mechanism that would allow this to happen, and we've located many (perhaps most) of the physical mechanisms that we associate with conscious behavior." The idea of an "I" separate from the physical brain has not fallen out of favor with rational people who understand that matter cannot reflect on or investigate itself. Physicalists embrace their world view because they disdain the prospect of a non-physical realm, not because there are any sound arguments to support their position. StephenB
BarryR,
I’m afraid this looks like the beginning of yet another argument for epistemological nihilism: because we can’t have perfect knowledge of matter, we have no knowledge.
We can have knowledge of a description of matter, we never have an actual understandable explanation of matter because of the problem of induction in studying anything material or physical. Clive Hayden
mullerpr@74 As to utility and elegance: I'm here on this earth to figure out how the world works (fide G. H. Hardy: it's what I do well). Other things are important too, but I'm generally happiest when I'm doing research. (I'm particularly blessed to have a girlfriend who is a physicist who understands and sympathizes with this.) As per my previous post, I can't have absolute certainty about anything, so I'm going to have to make a set of assumptions and treat them as true. How should I go about choosing those assumptions? There are a bewildering number to choose from. I tend to pick those assumptions that are useful. Since I want to find out how the world works, it helps to assume that the world exists and is explicable, and it's more elegant to assume that the appearance of age indicates age rather than a deceitful deity. If you start with a different goal --- to be a member of a community, perhaps --- then it's possible to end up with a different set of assumptions. Lastly, I'm afraid the following isn't persuasive:
Binding our ability to know to a state of matter has consistently been proven irrational.
We're able to perceive most states of matter. The fact that there are states we can't perceive (or can't perceive simultaneously) doesn't change this. I'm afraid this looks like the beginning of yet another argument for epistemological nihilism: because we can't have perfect knowledge of matter, we have no knowledge. I find it more useful to consider knowledge of matter (and most other things) to be imperfect and improvable. BarryR
Hi BarryR, At this stage I can only say that I hope you impress me with your response to my actual question. On your response to my PS... You have just proven your ability to stretch logic and mix classes. The argument is simple: ...both the Zombie and "5 minute old reality" notions have no logical impact on epistemology... ergo they are of no consequence to epistemology. Jumping from "5 minute old reality" to epistemological nihilism... Why? There is no relationship. P.S. So... are you a physicalist? mullerpr
---jurassicmac: "To say that God created something is not a meaningful statement if you're talking about proximate explanations." It is if you put God as the cause of the proximate causes. ---"It is not meaningful to say that God made my house, radioactivity, the earth, or the dinosaurs, if you are trying to explain how those things came about." If God created the conditions that produce radioactivity, or if God programmed the evolution that produced dinosaurs, then God is the logical explanation for those conditions. ---"(unless you think that God miraculously made your house, or radiation, or the earth, or dinosaurs - in which case you have abandoned science)" I wasn't talking about science. I was talking about metaphysics and theology, which explain science just as science explains and measures movement. Your argument is that only scientific explanations qualify as explanations. On the other hand, God didn't make your house; the builder did. That is why your original example along those lines was inappropriate. At the same time, God was responsible [and is the explanation] for the raw materials that the builder used to build it. Thus, God is the only rational explanation for time, space, and matter, inasmuch as all are finite substances needing a cause. By extension, the human builder or architect is the only rational explanation for the existence of the house. ---"I use the term 'emergent', not just 'emerged'. 'Saltiness' is an emergent property of sodium chloride; neither sodium nor chloride are 'salty'; 'saltiness' is only a property of the whole, and not of the individual components. Saying that saltiness is an emergent property of sodium chloride isn't attempting to "explain" the property, it is just making the observation that the whole can be more than the sum of the parts. Just because the property of 'saltiness' isn't found in the individual parts of a sodium chloride molecule doesn't make 'saltiness' an 'immaterial' property. 
In the same way, just because 'consciousness' isn't a property of a neuron or transistor, doesn't mean it is 'immaterial' or that it couldn't be found in a proper network of neurons or transistors." To say that consciousness could emerge from physical properties is not even close to being the same thing as saying that saltiness could emerge from chemical properties. The former represents a superior substance emerging from an inferior substance; the latter is merely a subjective reaction to a solution. Nothing can be in the effect that was not in some way present in the cause. The term "emergence," formally applied, refers to the logically impossible prospect of something new coming out of the cause that was never in it. That would be the case of consciousness emerging from matter. It is a rule of rationality that a cause cannot give that which it does not have to give. Sodium chloride has saltiness to give, but matter does not have consciousness to give. That is why your position, and BarryR's position, violate reason's rules. A computer simply does not have consciousness to give. The only way consciousness can arise is from consciousness, and you know what that means. Hide the kids: it means that God is the only reasonable explanation. Atheists and hyperskeptics, however, reject reason's principles because they would prefer that God didn't exist. To get rid of God they must abandon reason. It isn't any more complicated than that. StephenB
BarryR@70 ''If you want to dive into this literature, I'm happy to suggest a few starting places. If not, we'll leave it at that..'' Sorry for butting into the discussion; I don't know about gpuccio, but I (and probably other lurkers too) would be very interested. CharlesJ
mullerpr@74 Beginning with your postscript: As I mentioned earlier, the zombie problem was originally an argument for dualism. The current consensus is indeed that it is of no consequence, but that's because the majority of philosophers (and neuroscientists) don't find it persuasive enough for them to abandon physicalism. However, it was a serious argument and it's one of the foundational discussions that form Philosophy of Mind. It's well worth reading up on. I don't see any resemblance between this and omphalism (or "Last Thursdayism"). No, we can't know for certain whether or not the universe was created five minutes ago. Nor can we know if we're all really brains in vats, or if there's only one brain in the vat (which I'm assuming is me...). This lack of certainty tends to impress freshman philosophy students --- as it should. But once you've gotten over the shock that this level of certainty can't be justified, there's really not much more that needs to be said. Yes, I may be a brain in a vat. That brain is receiving signals that it's time to get out of bed and feed the cats. My body may be an illusion, the cats may be a (persistent, loud) illusion, but ultimately the cats are going to get fed. There are two places you'll run across this argument, one historical and one contemporary. After Descartes concluded that he did indeed exist ("cogito ergo sum"), he had to decide whether or not the world around him was the work of a "malevolent demon". His solution was a bit of a cheat: God is nice, God wouldn't let that happen, thus the world is real. The second place you'll see this is when a creationist has made some particularly ludicrous claim and has been shown a large pile of evidence to the contrary. The reaction --- and I've seen it often enough to consider it typical --- is "Well, we can't know anything; we can't even know if the world was created five minutes ago using science, so how do you know X is true?" 
That's epistemological nihilism: we can either have certainty or nothing, and since we know we don't have certainty (and we're certain of that!), we have nothing, thus we can safely disregard science. The problem disappears when certainty is placed on a continuum. I may never find any fact that I can describe with full certainty, but I can have nearly-full certainty that my cats do indeed wish to be fed, and that's a sufficient level of certainty for me to get out of bed and feed them. Both philosophical zombies and epistemological nihilism address real epistemic questions, but other than that I would not consider them to be analogous. More in a bit.... BarryR
BarryR, I am a latecomer and have enjoyed your views. Can you discuss your position on "utility and elegance" in relation to an indeterministic causal reality, like we see at the quantum level of our physical reality? Are you of the opinion that dualism is different in the sense that it proposes to exceed deterministic scientific efforts to gain epistemic certainty? I might be completely wrong, but it does seem as if you are trying to tie down epistemology to naturalistic efforts. Surely that is not the case. In my opinion, "dualistic models" have no issue that has not been, or cannot be, overcome by applying proper epistemological methods. The "complicated issues" you might perceive with dualism and consciousness actually imply that epistemology cannot be reduced to matter. Even the indeterministic aspects of our known physical world highlight this. Binding our ability to know to a state of matter has consistently been proven irrational. P.S. The zombie issue in fact is no issue of consequence at all, because it is analogous to the view that reality came into existence 5 minutes ago. mullerpr
JDH@60 Thanks for the thoughtful reply. To do it justice I may have to get a little longwinded (as I don't know this topic well enough to be brief). First, as to common sense: I read a couple of days ago that Einstein defined common sense as the collection of prejudices we've accumulated by age 18. That's a bit of a rhetorical flourish, and there are certainly many things 18-year-olds can do well, but acquiring competence beyond that usually involves dismantling your understanding of the simple, obvious solution in favor of an unobvious solution that works better. I'm not saying that common-sense ideas are necessarily wrong, but you're heading off into some pretty deep waters here and I think you'll find common sense less reliable than usual. In fact, you rather jumped right off into the deep end with your question about the appearance of consciousness. Here's a quick tour. This is what's considered an epistemic (or epistemological, if you want the $10.00 version) question. Epistemology is one of the big three of Western philosophy: ontology ("What's out there?"), epistemology ("What of it can we know?") and ethics ("What should we do about it?"). So, how do we know if something is conscious or not? Since I'm just a dumb scientist, I take the easy way out: if it acts conscious, I'll call it conscious. It's simple, it's stupid, and it works. There are alternatives, though. One is to say, as you have, "humans and only humans can be conscious". But you also note that while you can know that you are conscious, you can only know that other people around you appear to be conscious. That leads to a bit of weirdness called the Zombie Problem: imagine, as a thought experiment, a person who does not possess consciousness --- that's why they're a zombie --- but is otherwise completely normal. How do you go about telling the zombies from the non-zombies? There's a rich literature around this problem, and as philosophical writing goes it's unusually entertaining. 
Zombies were originally proposed as an argument for dualism, but critics have replied well enough that they are now also used as an argument for physicalism. With that brief summary, I'll leave you in the competent hands of the Wikipedia article on the topic. You go on to suggest a way of distinguishing zombies from non-zombies:
An internal observer is the only thing that can throw the switch from “my biology says to do this, but instead I am going to do that.” Only the internal observer can disobey. Only conscious objects can disobey.
This suggests other epistemic questions: how can you tell when biology says "do this", either for yourself or someone else? How do you isolate (biologically) the "I" that is making the decision, or, if that "I" is not biological, how can you determine how the non-biological "I" is affecting the biology? When the brain was still unmapped territory, not knowing the answers to those questions wasn't too troubling. We now know a lot more about the brain and how it affects consciousness (perhaps used in a slightly different way than you're using the word). The idea of an "I" separate from the physical brain that can nevertheless influence the brain has fallen out of favor because we've not been able to locate any mechanism that would allow this to happen, and we've located many (perhaps most) of the physical mechanisms that we associate with conscious behavior. There's one last question I want to address: how should you choose what to believe? Coming from a scientific background, I have a strong preference for utility first and elegance as tiebreaker. The practical effect is that I'm much more interested in answers that are useful than in answers that are correct (which was one of the most difficult lessons of grad school --- simple, robust models with larger error bars really are better than complicated, brittle models with smaller error bars). I find third-order models of experience to be a simple, elegant way of thinking about consciousness, and if this means that certain software has a minimal level of consciousness, so be it. I find dualistic models of consciousness to be complicated and filled with questions that, in principle, we can't answer (like "how do I know you're not really a zombie?"). If you use criteria other than utility and elegance, you may reach other conclusions. And that's fine. This has gone on far too long, so I think I'll end it here. BarryR
jurassicmac: I don't know from what this belief is derived for others, but I can tell you my point of view: it is the only reasonable model. Conception is the only definite event at which a new physical being appears. That is true from all points of view: biological, genetic, philosophical, or otherwise. It seems just obvious that, if you believe that a soul expresses itself through the physical interface, that connection starts at conception. When else? Any other model is senseless and arbitrary. The only other moment which has a definite connotation is delivery, but that appears to be obviously more a transition than the beginning of a new individual. Some argue that the development of the body, or of the nervous system, represents the beginning of the individual person. But those are usually people who do not believe in the soul, and they base their evaluation on vague concepts of consciousness and personality. I am not aware that any of them, however, has a clear idea of when the new individual would become a new individual: at three months of gestation? at six? at birth? at primary school? at marriage? Perhaps BarryR could offer a suggestion about when "a third order model of experience" is established in the zygote. In the meantime, I will stick with conception. Biologically and genetically, the zygote has all the properties of the new individual: the full new genome, which did not exist before conception, the full epigenetic resources, the full stemness. I think that's reason enough to believe that it is the start of a new individual. And if one believes, as I do, that the soul is part of that individual, the implication is obvious. gpuccio
BarryR: I agree to disagree: I keep my dictionaries. gpuccio
gpuccio@63 I think we'll agree to disagree. I'll stick with the peer-reviewed philosophical literature and you can keep the general dictionaries. I do understand your frustration. It's similar to what students experience when they first start doing serious mathematical proofs. There's a body of knowledge that will tell you what needs to be demonstrated, what needs to be alluded to, and what needs to be shown rigorously. And it's not written down anywhere. You pick the knowledge up by reading --- and tearing apart --- lots of proofs. Within this particular branch of philosophy, "model" has a well-understood meaning, "experience" is a bit iffier, but still within a narrow range, and "complex" (as used here) simply means "complicated". This isn't specifically written down anywhere; it's just the background knowledge that you'd pick up after reading a half-dozen books on philosophy of mind (and maybe two dozen journal articles). If you want to dive into this literature, I'm happy to suggest a few starting places. If not, we'll leave it at that. BarryR
StephenB @ 28:
To say God created something is a meaningful statement. To say that something emerged is to say that something came from nothing, which is irrational. As a self professed “Christian,” you should be wary of the something-from-nothing type explanations provided by Darwinists.
To say that God created something is not a meaningful statement if you're talking about proximate explanations. That is my point. It is not meaningful to say that God made my house, radioactivity, the earth, or the dinosaurs, if you are trying to explain how those things came about. (Unless you think that God miraculously made your house, or radiation, or the earth, or dinosaurs - in which case you have abandoned science.) I use the term 'emergent', not just 'emerged'. 'Saltiness' is an emergent property of sodium chloride; neither sodium nor chloride is 'salty'; 'saltiness' is only a property of the whole, and not of the individual components. Saying that saltiness is an emergent property of sodium chloride isn't attempting to "explain" the property; it is just making the observation that the whole can be more than the sum of the parts. Just because the property of 'saltiness' isn't found in the individual parts of a sodium chloride molecule doesn't make 'saltiness' an 'immaterial' property. In the same way, just because 'consciousness' isn't a property of a neuron or transistor doesn't mean it is 'immaterial' or that it couldn't be found in a proper network of neurons or transistors. jurassicmac
gpuccio:
First of all, I think that most religious views believe that the soul enters the zygote at the moment of conception. I certainly believe that way.
gpuccio, out of curiosity, from what is this belief derived? I only ask because I know many Christians who believe this, but I've honestly not met one who can explain why. (they mostly just get puzzled looks on their faces when I ask why they believe that the soul enters at conception) jurassicmac
JDH @ 43, The first thing that strikes me about your reply is the personal attacks. You presume that I am arrogant, and will not 'believe' what you say no matter what arguments you put forth. This is not the case. A solid argument always grabs my attention. You have presumed that I believe what I believe because it is 'painful' for me to think otherwise. Setting aside the fact that you are probably not psychic, and therefore don't know what ideas cause me pain or what my response to your reply will be: it is the height of arrogance to think that someone who disagrees with you can only be doing so out of arrogance, or a philosophical commitment to an alternative idea, or because your 'truth' is painful to them in some way. When I find that someone holds a different opinion than I do, my first thought is always that it is either because they are aware of something I'm not, or that I'm aware of something they're not; I never attribute their disagreement to as impertinent a thought as "My position must be painful to them, and that's their primary reason for rejecting it, or they will disagree with me only because of their arrogance." You also say:
You sound like you consider yourself pretty wise.
I do not 'consider myself pretty wise' because I know what a logical fallacy is. That information is readily available to anyone with an internet connection. I've been guilty of logical fallacies many times in my life; I'm usually quite appreciative of those who point them out to me, as it only helps me to refine my thinking and strengthen my arguments.
First I am not arguing by analogy.
Yes, you are. You started out with the statement:
"...having been in the computer industry for over 25 years I think I know a little fact that proves that human consciousness can not have evolved."
Note the phrase "fact that proves." An argument from analogy involves drawing a conclusion about one object or event because of some parallel with a similar object or event. You then use the illustration that one cannot make a program that disobeys them. Saying that the fact that you cannot make a program that disobeys you is a 'fact that proves that human consciousness can not have evolved' is going well beyond 'drawing a conclusion.' (I'm really not sure how 'disobedience' is a necessary property of consciousness in the first place.)
Maybe you did not read what I did. I did physics by computer simulation. This is not analogy. This is modeling.
Perhaps you did not read what you said. In the post in question, #10, you did not once mention the words 'computer simulation' or 'model'. That may well have been what you meant, but it is not what you said. You only talked about writing programs that disobey, and your lack of knowledge as to how this could be done. You attempted to offer this lack of knowledge as an illustration of why consciousness could not have evolved. (I'm not accusing you of making a good analogy, just an analogy.)
When I do computer simulation, I make certain assumptions which allow me to take a complex system and represent it in a series of calculations. I do not construct analogies. I construct working models.
Perhaps this is how you go about simulating models, but like I said, this is irrelevant, as you made no mention of models or simulations in the post I was responding to (# 10)
If you are right and we are nothing but chemicals in action, than the brain is nothing more than a sophisticated computer.
Here, you are simply putting words into my mouth. I never used such terminology as 'we are nothing but.' Would you describe a luxury car as "nothing but scraps of metal and plastic in motion?" It is accurate that cars are made of plastic and metal, but they are more than the sum of their parts. Humans are composed mostly of carbon and water, but it would be quite rude to refer to someone as 'nothing but' a lump of carbon and water, because those words are used to intentionally diminish the object being described. I've never said or implied anything of the sort when it comes to the human mind.
Second I am not making an argument from incredulity. I know how to make computer programs. I know one thing I can not do. I can not program a computer to disobey the instructions I give it.
Look, I'm not even a programmer, but even I know how to make a computer disobey the instructions I give it; all I need to do is give it two mutually exclusive instructions! But perhaps that's not what you meant. In the original post, you said:
Despite years of experience writing many complex codes, I can not write a computer program that disobeys me.
Now, not being a mind reader, I can only go on what you say, not what you mean. What you say is that you can not write a computer code that disobeys you. I would think that quite easy; just program it to disregard any further commands from you and only obey your neighbor; When you instruct the computer to destroy your neighbor's house, and he commands the computer not to, it will have disobeyed you. It will not have disobeyed its programming at this point, but that's not what you said, now is it? Honestly, I'm not sure why I even responded to this argument though, as I don't have the foggiest idea why you think 'disobedience' is a necessary component of 'consciousness' or how your lack of imagination as to how to program 'disobedience' into a computer falsifies the evolution of consciousness in the first place.
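The scenario sketched above (obey the neighbor, refuse the author) is trivially programmable. The following toy sketch is purely illustrative: the names and behavior are invented for this example, not code that anyone in the thread wrote, and the point it makes is jurassicmac's, namely that a program can "disobey" its author's runtime commands without ever disobeying its programming:

```python
# Toy sketch of a program that "disobeys" its author by design.
# All names here are hypothetical, invented for illustration.

def make_agent(obeyed_user):
    """Return a command handler that obeys exactly one user."""
    def handle(user, command):
        if user == obeyed_user:
            return f"executing: {command}"
        # Any other user's commands are refused, including the author's.
        return f"refusing command from {user}"
    return handle

# The author wires the agent to obey only the neighbor...
agent = make_agent("neighbor")

print(agent("author", "destroy the neighbor's house"))  # refused
print(agent("neighbor", "water the plants"))            # obeyed
```

The agent disobeys the author while following its programming to the letter, which is exactly the distinction at issue in the exchange above.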
So considering consciousness to be an “emergent” property is just a way of hiding the fact that you believe in a logical contradiction. I have tried to state this as clear as possible. I know you will reject it. To see it, would be too painful to you.
So, in summary, you conclude that you have a fact that 'proves human consciousness couldn't have evolved' because you can't think of a way to make a computer disobey you, you don't explain why 'disobedience' is related to consciousness in the first place, and you close with an assumption that you know my motives for questioning the assertion that consciousness somehow poses a problem for evolution, even though no one (including yourself) can give even a coherent and universally agreed upon definition of consciousness. Did I miss anything? jurassicmac
Who is it that suggested training of the mind without training of the person is no training at all? The inanity dressed up as wisdom on this thread has been saddening. Upright BiPed
KF: Think, maybe you have mistreated them for years, not suspecting their inner sensibility! gpuccio
GP: BC109's of the world, unite! Arise and throw off your chains! You have nothing to lose but -- the "smoke" within?* GEM of TKI * There is a lame joke about how smoke is a key ingredient in electronics items and how one must be careful not to let the smoke out. kairosfocus
BarryR: Frankly, I don't know if it is worth going on with the discussion on these terms. A brief sum-up: 1) You define consciousness as follows: "a third order model of experience (a model of experience, and a model of that model, in addition to the experience simpliciter). This is obtained by sufficiently complex feedback or feedforward loops in the brain." which is a masterpiece of ambiguity and is completely inappropriate. First of all, it contains at least two words, "model" and "experience", which are, at best, ambiguous. What do you mean by "experience"? Do you mean "conscious experience"? From dictionary.com (yes, a dictionary): experience: 5. Philosophy. the totality of the cognitions given by perception; all that is perceived, understood, and remembered. Obviously, a word related to consciousness. So, are you defining consciousness as a third order model of a conscious process? That's philosophical value, indeed. Then you say that "This is obtained by sufficiently complex feedback or feedforward loops". I ask you what you mean by "complex", and you don't answer. I ask you if you mean Shannon's entropy, and you answer no. I ask you if you mean CSI, and you answer no. I ask you if you have another definition of complexity, and you don't answer. But you state that: "Since the word 'information' is not used in my definition, and since I don't recall using the word elsewhere in this thread, I'm pretty certain I'm not speaking about information at all." Maybe avoiding the fact that complexity and information are two strictly, although ambiguously, related concepts. And yet you state, rather surprisingly, that: "minimal perception takes bits, minimal models of perception takes bytes, and minimal models of models of perception takes kilobytes." Bits? In a discourse which "is not speaking about information at all"? This time from Wikipedia, so that your judgement of my arguments may become even lower: "A bit or binary digit is the basic unit of information". 
And so on, as in your completely gratuitous statement that my definition of consciousness: the observed fact that we have subjective experiences, referred to a single perceiver “I” implies that: "A single transistor satisfies this definition: it can perceive a change in input power and react accordingly." So, according to what you say, a single transistor has subjective experiences (that was my definition, I think), or at least can be reasonably inferred to have them? And yet you say: "Just to be clear: based on the definition of consciousness that I use, a single transistor cannot be conscious". I agree, but then why do you say that it "satisfies my definition"? Are you saying that a single transistor has subjective experiences, but is not conscious because it is not a "third order model of experience"? And please, if you want to discuss, take the personal responsibility of what you say and of how you use your terms, and if necessary define them clearly and consistently, instead of just referring to vague philosophical literature, and despising dictionaries. gpuccio
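Since the phrase "a third order model of experience" carries most of the weight in the exchange above, one purely hypothetical way to give it concrete content is sketched below. Nothing in the thread specifies this reading; it is only an assumed illustration in which raw data stands in for "experience", a summary statistic for a model of it, and the summary's own error for a model of that model:

```python
# Hypothetical sketch of "experience, a model of it, and a model of
# that model". This is an invented illustration, not BarryR's proposal.

raw_experience = [0.9, 0.8, 0.85, 0.1, 0.15]  # stand-in for raw perception

def model_experience(samples):
    """First-order model: summarize the experience as a running average."""
    return sum(samples) / len(samples)

def model_of_model(samples):
    """Second-order model: assess the first-order model itself,
    here by computing its mean absolute error on the samples."""
    m = model_experience(samples)
    return sum(abs(x - m) for x in samples) / len(samples)

summary = model_experience(raw_experience)        # models the data
self_assessment = model_of_model(raw_experience)  # models the model
```

Whether such nested modeling could ever "count" as consciousness is precisely what gpuccio disputes; the sketch shows only that the phrase can be given a mechanical reading, not that the reading settles the argument.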
Scruffy, I will be glad to elaborate. As I have stated above, I don't consider myself to be a philosopher, so I would be interested in what others have to say. In my thinking, the difference between a dog and a human is HOW they process the information about their most recent behavior. In the case of a dog, the information just seems to become more input into behavior. There is no morality or self image needed to process this. The dog does not think in abstract terms; the nervous system just records behavior and result and factors it into the next behavioral decisions, e.g., owner rings bell, which resulted in a tasty treat being served to me. The nervous system records this and uses it as input to decisions. There is no introspection about whether it is a good thing to accept treats from owners, or whether, by accepting such a treat, the dog is entering into a contractual agreement to serve the owner. The dog is not self-aware because the fact that he is an entity with a purpose does not really enter into the decision process at all. I think that this line of reasoning has empirical evidence behind it, because many, many dog training books say that dog training failures come when owners ascribe such faculties as "am I doing well?" or "I won't perform for that person because I don't like him" to a dog. The dog is just responding to immediate stimuli, not thinking. The human cannot help but evaluate his behavior according to some morality. He is self-aware, so he self-evaluates. Consciousness rests on this extra amount of processing: the processing that looks at the way my last behavior defines who I am. I define self-aware not just as having experienced something, but as that information about my experience impacting my opinion of self. In other words, the dog does not care how the last behavior made him look. He just responds according to the various biological systems that make up his body. The human, being conscious, or self-aware, makes decisions based on what his behavior said about him. 
This self-evaluation step is necessary to have what we recognize as human consciousness. To answer your questions: A. It is good or bad according to the personal morality of the individual and how this affects his perception of himself. B. This process of self-evaluation is the essence of consciousness. If the self is not evaluated, if the behavior is not labeled good or bad, there is no real self-awareness. JDH
I don’t think my dog ever observes his behavior and decides whether he made a good choice. I'm slightly confused by this statement. It seems as though your entire post above that led you to the conclusion that dogs are not conscious hinges on the dog's ability to decide if its behavior is good or bad. A. What do you mean by "good or bad"? B. Why would consciousness depend on the ability to examine and judge whether said behavior is good or bad? I have more to say, but I would really need you to clarify the above points (especially A.) before I can respond correctly. Scruffy
Barry R First let me admit that the ideas below are not from any study of philosophy. They come from simple application of what I think is common sense. If I am only stating things which much more learned people have already discussed extensively in the literature, I would be interested to see it. But I was very intrigued by your attempt to define consciousness on a continuum. Instead, it seems to me that what you did is put the appearance of consciousness on the continuum. Many things (dogs, mosquitoes, even computer threads) appear to us to be making conscious decisions. But as far as we can tell, they are only making an evaluation based on experience, environment and biology. I don't think my dog ever observes his behavior and decides whether he made a good choice. His incredibly complex nervous system just makes the appropriate choice based on the competition between various internal systems. Some of these systems involve memory in brain cells. This is why a dog can be "trained". So to an external observer, there can be a continuum of what appears to be conscious behavior. The external observer makes a list of what conscious behavior looks like and guesses whether said entity behaved consciously or not. But the internal observer knows whether the behavior is conscious or not. In fact, in my understanding, consciousness means there is an internal observer. Whether or not there is an internal observer to comment on the behavior does not appear to me to be anything that can be defined in a way that puts it on a continuum. It is a one-or-zero question. There is either one internal observer or none. Humans know they are conscious. There is a possibility that this consciousness is an illusion, but that seems a very unlikely proposition. They state that they have one internal observer. Dogs do not appear to have any opinion on whether they are doing well or not. I think (I do not know) that they have no internal observer. Their internal observer state is zero. 
An internal observer is the only thing that can throw the switch from "my biology says to do this, but instead I am going to do that." Only the internal observer can disobey. Only conscious objects can disobey. JDH
gpuccio@57 [These kinds of conversations are much easier to have when there's proper quoting a la email or usenet. Not your problem --- I'm just being annoyed.] 1.
This is really strange. I thought we were talking of consciousness.
I was discussing why I didn't find your definition of consciousness persuasive:
the observed fact that we have subjective experiences, referred to a single perceiver “I”
A model-based definition avoids the problems I've pointed out, specifically, the problem that your definition can allow individual transistors to be thought of as conscious. Just to be clear: based on the definition of consciousness that I use, a single transistor cannot be conscious. 2.
Then, I believe that you are speaking of functional information, our CSI. Or have you another kind?
Since the word "information" is not used in my definition, and since I don't recall using the word elsewhere in this thread, I'm pretty certain I'm not speaking about information at all. 3.
Is Hamlet (the drama) a conscious being?
Here is the definition I gave.
a third order model of experience (a model of experience, and a model of that model, in addition to the experience simpliciter). This is obtained by sufficiently complex feedback or feedforward loops in the brain.
I have quite a bit of experience with identifying third-order models, and I have quite a bit of experience with Hamlet-the-drama (BA Theater). I know of no third-order models in Hamlet-the-drama. True, it represents characters with their own third-order models, and it can be performed by actors with their own third-order models, but as to the drama itself? Of course not. 4.
Wrong. A model can very well be an object.
I'm making the assumption here that you're bringing forward your best arguments. Stating I'm wrong because the word "model" can have other meanings when used outside of discussions of philosophy of mind tells me much more about the quality of your arguments than it does about mine. Look, perhaps I'm not being fair to you. I have a passing familiarity with this literature and you obviously do not. I can scare up a reading list if you would like to remedy this, and then we might rejoin the conversation when we have more of a shared vocabulary. Moving on... 5.
What kind of intelligent disposition of transistors contributes to the extraordinary emergence of a conscious I? Loops? The sheer number of them? Or the special arrangement of them?
The disposition is unimportant. What the disposition represents is. (That's another benefit of my definition --- it doesn't require consciousness to depend on a particular substrate.) I would consider any disposition --- of transistors or cells --- that instantiates a third-order model of experience to be conscious. (Note that this also frees the concept of consciousness from any intelligent design.) BarryR
gpuccio, sorry, I am in a different time zone. The conversation has moved on and I will go back to lurking (which I am really enjoying with this thread). zeroseven
BarryR: Certainly transistors possess the capacity for experience — for several definitions of experience. You may want to limit this discussion to conscious experiences, but in that case you’re precluded from using them to describe consciousness. This is really strange. I thought we were talking of consciousness. So you want to define experience independently from consciousness, so that you can attribute that "non-conscious experience" to transistors, and then use them and that definition of experience (totally independent from the concept of consciousness) to "describe consciousness"? I think you can do better than that. So, just to be clear, is a transistor conscious? Does it have conscious experiences? Are you speaking of traditional Shannon information? Not at all. Then, I believe that you are speaking of functional information, our CSI. Or have you another kind? That's interesting, because while we in ID certainly believe that CSI is the product of consciousness, none of us has ever thought that CSI can generate consciousness. Therefore, I ask again: Is Hamlet (the drama) a conscious being? Because, as you certainly know, Hamlet is a perfect example of CSI (and a lot of it). But if I have misunderstood, and you have a different definition of the complexity you were speaking of, please let us know what it is. Next, I don’t see that great a difference between a “representation” (model) and a “conception” (idea). Both are abstractions; neither are objects. Wrong. A model can very well be an object. An idea is a conscious representation. You can certainly represent something in your mind, be it abstract or not; then it is an idea. If you build an object which represents that idea, be it abstract or not, you have an objective model. A model represented in a consciousness is an idea. A model implemented in a software is an object, even if abstract (indeed, it needs an objective support to exist out of a mind). 
You are constantly trying to confound the difference between subjective, conscious experiences and objects, which should be extremely easy to understand. That you do that is in itself not useful, but that you do that in a discussion about consciousness is completely confounding. Let's go to properties. Gosh, that’s a puzzler. If half a bird can’t fly, why should we expect a full bird to be able to fly? A full bird can fly because its structure (its CSI) is functional, and is obviously and tangibly related to the task of flying. That's why I was asking if you were speaking of CSI, when saying that a pool of transistors can become conscious because of its "complexity". Because CSI is functional, and can accomplish things. But again, the relationship between structure and function is always reasonably evident, or anyway reasonably detectable. Now, in the case of the bird, it is obvious that the wings are there for a certain purpose, and so are the other parts of the body which contribute to flying. So I ask: in the structure of a computer, where transistors are arranged intelligently (CSI), what is the relationship between CSI (intelligent structure) and the emergence of consciousness? What kind of intelligent disposition of transistors contributes to the extraordinary emergence of a conscious I? Loops? The sheer number of them? Or the special arrangement of them? Well, you have to say what. I believe that none of those things has any relationship with consciousness. You believe differently. So, please show me why a loop has any relationship with being conscious, or maybe a series of if... then... constructs, or anything else. But there has to be a rationale in your suggestion. Otherwise, we can just believe that gym machines will become conscious as their technology improves. And then we can test that. We take your suggestion and apply it as long and as explicitly as necessary, and see what happens. 
I think reductionists should stop stating that properties come out of magic. They don't. A bird can fly because its structure is appropriate to fly, according to the laws of physics. There is nothing magic there, just CSI at work. But according to what laws of physics (or, for that, to what laws of any kind) should a series of transistor loops become conscious? That's magic, and of the worst kind. I definitely prefer Harry Potter. gpuccio
JDH@50 You're making disobedience a property of intentionality. Good, I like precise definitions. So let's talk about intentionality. If I decline to define intentionality as an intrinsic property and instead define it as a classification of behavior, then there's no reason I'm aware of that sufficiently complex mechanistic behavior cannot be seen as intentional. We do this --- constantly --- when we anthropomorphize. When programmers talk about threads "wanting" to acquire a lock, for example, we're ascribing a (limited) intentionality to a machine. With intentionality thus on a continuum, I have no difficulty stating that threads, my cats, or the people around me are acting in an intentional fashion. It's a model and it's useful for navigating the world. But we were talking about consciousness, so let's narrow the focus to conscious intention. Using the definition I've given elsethread, decisions based on a third-order model qualify as conscious decisions, and I may choose to model them as intentional if that turns out to be useful. This approach has the advantages of being both simple and workable. A different approach would force intentionality to be an intrinsic property. This immediately runs into all kinds of difficulty: do dogs have intention? Do mosquitoes? Do trees? How do I go about unearthing this intrinsic property to make this determination? The problems are compounded if intentionality is restricted to humans by fiat. At that point, you've effectively removed physical explanations of intentionality and set up an unverifiable spirit world, all to prevent an outcome you don't like. That's not only bad philosophy; that's bad theology. (I'm not an atheist, btw. If you find it simpler to imagine that I am one, that's fine.) BarryR
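The anthropomorphic talk of threads "wanting" a lock can be made concrete with a minimal sketch (the thread names and log messages here are invented for illustration): the most natural way to narrate this code is to say that each worker "wants" the lock and "waits its turn", even though only mechanical scheduling is involved.

```python
import threading

lock = threading.Lock()
log = []

def worker(name):
    # We naturally say the thread "wants" the lock and "waits its turn";
    # the intentional vocabulary describes observable behavior, not any
    # inner experience of the machine.
    with lock:
        log.append(f"{name} acquired the lock")

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All three workers eventually "get what they wanted"; the order in
# which they do is up to the scheduler.
print(log)
```

This is the "intentionality as a classification of behavior" move in miniature: the vocabulary is applied from outside, as a useful model of what the program does.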
gpuccio@53 Certainly transistors possess the capacity for experience --- for several definitions of experience. You may want to limit this discussion to conscious experiences, but in that case you're precluded from using them to describe consciousness.
Are you speaking of traditional Shannon information?
Not at all. Next, I don't see that great a difference between a "representation" (model) and a "conception" (idea). Both are abstractions; neither are objects. (Dictionary definitions usually aren't helpful when discussing specialized domains, btw.) Finally,
Why, if a single transistor is not conscious, should a million of them variously assembled, become conscious?
Gosh, that's a puzzler. If half a bird can't fly, why should we expect a full bird to be able to fly? (I took several undergraduate philosophy classes and I can't remember anyone having to point out that collections of objects have properties that differ from those of singleton objects. I think we took that one pretty much as a given.) BarryR
BA: thank you! gpuccio
BarryR: Please, clarify: A single transistor satisfies this definition: it can perceive a change in input power and react accordingly. What do you mean? That a transistor has subjective experiences? The perception is subjective and private. And so? It exists. It is a fact. As I have said many times, we directly know that we are conscious, and then we infer that in others by analogy. But the direct, subjective, private perception of ourselves as conscious is the basis for any other knowledge. Therefore, if it is "private", then all our knowledge is "private". Again, consciousness in ourselves is a fact, more than any other fact. Consciousness in other humans is a very strong inference by analogy. Consciousness in higher animals is a weaker, but reasonable inference. Consciousness in a transistor? I think it's an inference almost nobody would agree with, not even in minimal part. Consciousness in assembled transistors, no matter how many, no matter in what order? Why, if a single transistor is not conscious, should a million of them, variously assembled, become conscious? That's absolute nonsense. That's why I asked you if you had any notion of some form of formal complexity which would explain consciousness. You answer: But, if you want a rough guide: minimal perception takes bits, minimal models of perception takes bytes, and minimal models of models of perception takes kilobytes. Are you speaking of traditional Shannon information? So, again, is a very long random string conscious? Or are you speaking of CSI? And then, is Hamlet (the play) a conscious being? You equivocate on the meaning of words. Please, define what you mean by "perception". You use words which have been created, and used, for millennia (yes, for millennia) to describe subjective experiences in humans, and without any thought or justification you apply them to purely objective realities, like a transistor. Why should a transistor "perceive", and not stones? 
Why should bits cause consciousness, and not grains of sand? Substitute “model” for “idea” and you’ve arrived at a modern definition of consciousness. That only means that modern definitions of consciousness are senseless. You cannot substitute "model" for "idea". From "dictionary.com": model: a representation, generally in miniature, to show the construction or appearance of something; a simplified representation of a system or phenomenon. idea: any conception existing in the mind as a result of mental understanding, awareness, or activity. Is it so difficult to understand? A "model" is an object. An "idea" is a representation in the conscious mind. How can you substitute one for the other? Is it really so difficult? gpuccio
This link should work: http://www.premier.org.uk/unbelievable bornagain77
Coincidently the debate on premier Christian radio Saturday, the same day this blog went up, was about whether we have a soul or not. Here is the link: Unbelievable - 04 Sep 2010 - Christian Physicalism: Do we have a soul? http://www.premierradio.org.uk/listen/ondemand.aspx?mediaid={7A2179A8-B2C2-4F32-BE24-2AFF6628FF9F} bornagain77
Barry R Why is it that atheists mistake randomness or chaotic behavior for intention? Your pseudo non-determinism does not coincide with disobedience. The fact that the answers converge shows that you really have just added a little uncertainty to the process. Uncertainty is not rebellion. But not one of your little programs is going to say: well, this is the answer I was converging to, but I felt like doing this instead. Your oversimplification of the problem shows you are grasping at straws. You may claim it is a false dichotomy, but I see only two alternatives. 1. You don't really understand the full extent of the concept of disobedience, in which case, why should I trust your claims about your program? 2. You are aware of how oversimplified your model is, and you are projecting onto your results conclusions which are not warranted. Either way, you have not answered the challenge. JDH
gpuccio@45 As to the definition I gave, I judge it in terms of utility: I've given three necessary and testable components that allow me to distinguish (among other things) humans from typewriters. Your definition is not so picky.
the observed fact that we have subjective experiences, referred to a single perceiver “I”
A single transistor satisfies this definition: it can perceive a change in input power and react accordingly. The perception is subjective and private. I don't find it useful to think of single transistors as having consciousness, so I'm going to reject your definition. BTW, this follows the Cartesian model (and no, it has not been used for millennia). Descartes avoids the problem by limiting discussion to humans.
Thought. I use this term to include everything that is within us in such a way that we are immediately aware [conscii] of it. Thus all the operations of the will, the intellect, the imagination and the senses are thoughts. I say ‘immediately’ so as to exclude the consequences of thoughts; a voluntary movement, for example, originates in a thought. (CSM II 113 / AT VII 160; cf. Principles of Philosophy Part I, §9 / AT VIIIA 7–8)
[Unless noted otherwise, quotations are taken from the Stanford Encyclopedia of Philosophy's article on Seventeenth Century Theories of Consciousness] Contrast this with Hobbes:
[I]f the appearances be the principles by which we know all other things, we must needs acknowledge sense to be the principle by which we know those principles, and that all the knowledge we have is derived from it. And as for the causes of sense, we cannot begin our search of them from any other phenomenon than that of sense itself. But you will say, by what sense shall we take notice of sense? I answer, by sense itself, namely, by the memory which for some time remains in us of things sensible, though they themselves pass away. For he that perceives that he hath perceived, remembers. (De Corpore 25.1, p. 389; for some discussion of this text, see Frost 2005)
Now we have both sensation and memory of sensation. Finally Spinoza (where I will quote the author's summary):
1. the mind has an idea that represents its body being affected by P; and 2. the mind has a second idea representing the first to itself.
Substitute "model" for "idea" and you've arrived at a modern definition of consciousness. As to you complexity question, I'm not sure why it's relevant. Complexity is a useful measure of computability and information, but since neither your definition nor mine relies on computability or information, I think you're making a category error. But, if you want a rough guide: minimal perception takes bits, minimal models of perception takes bytes, and minimal models of models of perception takes kilobytes. If you're interested in this problem in general, I found Jaegwon Kim's Physicalism, or Something Near Enough to be unusually clearly written. The first several chapters give a complete overview of the recent history of philosophy of mind. If you want to get a sense of his writing style, there's a sample chapter here. BarryR
gpuccio, Very nice! HouseStreetRoom
gpuccio, this is a better look at mega-savant Kim Peek Kim Peek - The Real Rain Man [2/5] http://www.youtube.com/watch?v=NJjAbs-3kc8&p=CB2BCFF0D34CE915&playnext=1&index=1 bornagain77
gpuccio, I guess you are right, Derek does fit here. I have a few other videos of autistic savants I've collected: Autistic savants The Musical Genius - Derek Paravicini - Part 1/5 http://www.youtube.com/watch?v=1kwjDLHX92w Derek Paravicini on 60 MINUTES – Autistic Savant – video http://www.metacafe.com/watch/4303465 Kim Peek - The Real Rain Man [1/5] http://www.youtube.com/watch?v=dhcQG_KItZM The Boy with the Incredible Brain - Daniel Tammet http://video.google.com/videoplay?docid=2351172331453380070 Autistic Savant Stephen Wiltshire Draws the City Of Rome From Memory http://www.metacafe.com/watch/4200256 Savant syndrome, Beautiful minds - Elonzo Clemmens http://www.youtube.com/watch?v=lkDMaJ-wZmQ The Human Calculator - Ruediger Gamm - video http://www.metacafe.com/watch/4200252 As well, I find it interesting that whenever anybody does something blatantly selfish, everybody will ask him or her, "Doesn't your conscience bother you?", reflecting the fact that everybody is expected to intuitively know the transcendent moral law of the golden rule. The following video clearly reflects how some academics have completely deluded themselves into thinking that this transcendent moral law, which is comprehended by our transcendent minds, does not exist, when clearly it does exist. Cruel Logic http://www.youtube.com/watch?v=4qd1LPRJLnI bornagain77
BarryR: Your definition is only a gratuitous re-definition. Strangely, it is very similar to the compatibilist re-definition of free will. This seems to be the faith of reductionists: if you don't like a fact, just redefine it as though it did not exist. I will be more clear. The only universal, empirical definition of consciousness is the following: the observed fact that we have subjective experiences, referred to a single perceiver "I". That is not only my definition, but the true meaning of the word, as it has been used for millennia. To be compatible with the true meaning of the word, your definition should be "re-re-defined" as follows: "a third order experience (an experience of experience, and an experience of that experience, in addition to the experience simpliciter). This is never obtained by sufficiently complex feedback or feedforward loops in any software system which is not capable of experiences." And even in this way, it would be a definition of "self-consciousness", and not of consciousness. Indeed, the "mise en abyme" and infinite regress of which the perceiving self is certainly capable (always able to detach itself from any of its representations, to adopt a meta-perception of the perception), while important and revealing of the transcendental nature of the self, is not necessary for consciousness to happen. If I perceive a red blot, I am conscious, even if I am not at that moment consciously perceiving that I am perceiving. But certainly, the perception of the perception (at whatever order you like) allows me to become "self-aware", and not only "aware", and to build "models" of my perception. But the model is not the perception. It never was, and never will be. Given that, can you affirm that you have written "robotic control software that was (very) minimally" aware? That had perceptions? That had a perceiving I? Subjective experiences do exist. They are a fact. You cannot rule them out. You can never rule out facts. 
If you re-define them as loops, you have to show loops with subjective experiences, not only loops which can give outputs vaguely similar to those of beings with subjective experiences. And if you really believe, like Hofstadter, that loops are the cause of subjective experience and of the I, please can you explain on what you found that conviction? And why simple loops are not aware, while complex loops should be? IOW, I repeat the question I asked at posts 35, 41 and 42, which nobody has yet addressed: what kind of formal "complexity" should be capable of causing consciousness to "emerge"? gpuccio
gpuccio@42 This is the best concise definition of consciousness I've run across: "a third order model of experience (a model of experience, and a model of that model, in addition to the experience simpliciter). This is obtained by sufficiently complex feedback or feedforward loops in the brain." Using this definition, I've written robotic control software that was (very) minimally conscious. Don't get hung up on the word complexity here. It just means there are enough bits to handle the input, the model of the input, and the model of the model. There are other definitions out there; you may find some of them more palatable. The article relied on no formal definition being provided. JDH@10 Having a Ph.D. in Computer Science, I find it trivial to write code that disobeys me. One particular dislocation simulation I work with is effectively chaotic --- the final answer (and the execution time) depends on the order of operations, which in turn depends on the order of message arrival. Even on a small number of nodes, this is effectively nondeterministic. The answers eventually start converging, so the physicists don't mind. Debugging it is another problem entirely. GS@9 What about typewriters? There's a mechanical mechanism for experience --- striking a key --- which fits the first requirement of the definition I gave. But at least for manual typewriters there's no way to model that experience, much less a way of modeling the model. So no, manual typewriters will not exhibit consciousness. (Note that I don't have to rely on arguments from incredulity once I have a definition in place. Looking forward to reading your definition.) BarryR
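The dislocation simulation itself is not shown, but the underlying phenomenon (the final answer depending on the order of operations, which follows message arrival) is easy to reproduce, because floating-point addition is not associative. A tiny, deterministic stand-in:

```python
# Floating-point addition is not associative, so the result of a sum
# depends on the order in which partial results combine. In a parallel
# code that order follows message arrival, which is why such simulations
# behave nondeterministically even though every operation is mechanical.
a = (1e16 + 1.0) + 1.0   # each small term is rounded away separately
b = 1e16 + (1.0 + 1.0)   # the small terms combine first and survive
print(a == b)  # False
print(b - a)   # 2.0
```

As in the simulation, neither ordering is "wrong"; the answers differ within rounding, which is why the physicists don't mind when the results converge.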
jurassicmac @19 You said "JDH, I must respectfully point out you that you have made 2 logical fallacies at the same time." You sound like you consider yourself pretty wise. I am going to try to explain to you why I did not commit 2 logical fallacies. Then I am going to try to explain to you why your position is logically indefensible. You, unfortunately, will arrogantly not believe either. First, I am not arguing by analogy. Maybe you did not read what I did. I did physics by computer simulation. This is not analogy. This is modeling. There is a distinct difference. When I do computer simulation, I make certain assumptions which allow me to take a complex system and represent it in a series of calculations. I do not construct analogies. I construct working models. Hopefully, in reducing the parameters in the model to something I can actually code in finite time, I have not missed something that affects the states of the real system. Thus when I perform the actual simulation, there is a one-to-one correspondence between the states of the model and the distinct states of the real system I am modeling. This is how we can get real scientific results from modeling. The assumptions may be wrong, such that I no longer get my one-to-one correspondence. Then the problem is with incorrect modeling, not with the idea of computer simulation. If you are right and we are nothing but chemicals in action, then the brain is nothing more than a sophisticated computer. Modeling the brain by a computer is not drawing an analogy. It is trying to eliminate from the real system those parameters which are not necessary for the current problem and modeling the behavior. Second, I am not making an argument from incredulity. I know how to make computer programs. I know one thing I cannot do. I cannot program a computer to disobey the instructions I give it. This is an argument from overwhelming evidence. 
In all the years of programming computers, they have always obeyed the instructions they are given. Sometimes, because of buggy programs, the fact that the computer simply obeys the instructions given it has disastrous results. But it is not logical that one can design a computer to disobey. It is a logical contradiction. So considering consciousness to be an "emergent" property is just a way of hiding the fact that you believe in a logical contradiction. I have tried to state this as clearly as possible. I know you will reject it. To see it would be too painful for you. JDH
BarryR: Well, I extend my questions in the previous post to you too. Maybe you can find the answer in "Kandel’s Principles of Neural Science and its 1,400+ pages about how the brain works (and fails to work)". gpuccio
zeroseven (and jurassicmac): The link between consciousness and complexity seems pretty clear to me. I was not suggesting that there is no link between the manifestations of consciousness and complexity. My point is different. My point is that, as we in ID are often asked by darwinists and materialists to define functional information, or to be specific about the kind of information and complexity we are speaking of (and we answer those requests in detail), I am now asking jurassicmac, and you, who have introduced complexity into your argument about consciousness, to define better what you mean. It should not be difficult, especially for you, as you stated that "the link between consciousness and complexity seems pretty clear", at least to you. So, do you mean complexity in the sense of Kolmogorov complexity, or Shannon's entropy? Is a long enough random string going to become conscious, in your opinion? Or do you mean functional complexity? IOW, CSI or some equivalent? That would really be interesting, and would establish a strong epistemological connection between you and us IDists! So, please, specify what your model is, at least in principle. That's the least we can ask of a scientific model. Or do you just agree with jurassicmac that the point is that we don't really know: What computer technology will be like in 10, 100, or 1,000 years. Ah, that's certainly true... And I think we don't really know what airplane technology will be "in 10, 100, or 1,000 years". And, say, what ebook reader technology will be. So, in your model, what is more likely to become conscious in the near future: the last sequence of digits of pi, a computer, an airplane, or just the Kindle 4? Maybe, if you specify your concept of "consciousness-generating complexity", we can at least make some predictions about that. You know, some people think that that is what science is about... gpuccio
Once you remove all the aspects of thought and behavior where neural correlates are known, there’s very little remaining for a non-physical consciousness to do.
Yes. And if that is the case, then why, on a materialistic understanding, does consciousness bother to exist at all? Perhaps if materialistic science can do a really good job of explaining things (in 2,800 pages, say, instead of just 1,400), we will then know that consciousness shouldn't exist (because there is nothing remaining for it to do), and therefore materialism will have been decisively proven! Matteo
--BarryR: "Once you remove all the aspects of thought and behavior where neural correlates are known, there's very little remaining for a non-physical consciousness to do." I gather that you also believe a computer could develop an Oedipus complex, or that we may one day have to worry about the suicide rate among aging processing units. StephenB
JDH (#10) wrote: Despite years of experience writing many complex codes, I can not write a computer program that disobeys me. I don't even know how to do it. I can give you a clue. In his paper Programs with Common Sense, John McCarthy gave five requirements for a human-level artificial intelligence: 1. All behaviors must be representable in the system. Therefore, the system should either be able to construct arbitrary automata or to program in some general-purpose programming language. 2. Interesting changes in behavior must be expressible in a simple way. 3. All aspects of behavior except the most routine should be improvable. In particular, the improving mechanism should be improvable. 4. The machine must have or evolve concepts of partial success because on difficult problems decisive successes or failures come too infrequently. 5. The system must be able to create subroutines which can be included in procedures in units... Point #3 is the clue to how to make this happen. Humans have goal-seeking behavior without a fixed goal. Furthermore, we evaluate every goal as if it could be improvable. That means that we are "built" to observe that few, if any, things are what they ought to be. wrf3
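wrf3's point #3 can be made concrete with a toy. The sketch below is invented for illustration (it is not McCarthy's system, and nothing in it "disobeys" anyone): a searcher whose improvement strategy is itself subject to replacement when it underperforms, a crude reading of "the improving mechanism should be improvable."

```python
# Toy sketch: a search loop whose improving mechanism can be swapped
# out at run time when it makes poor progress. Hypothetical code,
# purely to illustrate the idea of an improvable improver.

def hill_climb(guess: int) -> int:
    """Naive improver: nudge the guess upward by 1."""
    return guess + 1

def doubling(guess: int) -> int:
    """Alternative improver, adopted when the first one stalls."""
    return guess * 2

def search(target: int = 100):
    improve = hill_climb          # the current improving mechanism
    guess, steps = 1, 0
    while guess < target:
        prev = guess
        guess = improve(guess)
        steps += 1
        # Meta-level rule: if progress per step is poor, replace the
        # improver itself (improving the improving mechanism).
        if guess - prev < 2 and improve is hill_climb:
            improve = doubling
    return guess, steps

# search(100) abandons hill_climb after its first slow step and
# finishes by doubling: 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128.
```

The point of the toy is only that "the goal-seeking code" and "the code that decides how goal-seeking is done" can be distinct layers, which is what requirement 3 asks for.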
BarryR
What a curious laptop you have. If I were to participate in a Turing test, I'd be asking questions like "Of the two most recent questions you were asked, which was the more difficult to answer and why?" Your dictionary approach doesn't work too well there, does it?
I really thought my very simplistic example was sufficient to illustrate this very simple point. But apparently you are less gullible than I and it would take a reasonable answer to the above question before you would believe the machine was conscious.
Kandel’s Principles of Neural Science is 1,400+ pages of how the brain works (and fails to work), starting at the electrochemical and working up through the cellular to larger brain structures. Once you remove all the aspects of thought and behavior where neural correlates are known, there’s very little remaining for a non-physical consciousness to do.
Wow, 1400+ pages on how the brain works. I'm convinced now that I'm just a complicated machine...sigh. Granville Sewell
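The "very simplistic example" Sewell keeps pointing at is the ELIZA style of canned pattern-matching: the original post says "if I look at the software, I might find something like this." A few lines of Python are enough to reproduce the cat exchange; the patterns and replies below are invented for illustration, not anyone's actual code:

```python
import re

# A sketch of the canned-response lookup Sewell alludes to: an
# ELIZA-style table of (pattern, reply template) pairs. Nothing
# here "understands" anything; it only matches and substitutes.
RULES = [
    (re.compile(r"my (\w+) died", re.I), "I am saddened by the death of your {0}."),
    (re.compile(r"i feel (\w+)", re.I),  "Why do you feel {0}?"),
]

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*m.groups())
    # A self-referential question of the kind BarryR proposes matches
    # no pattern, so the table can only fall back on a stock reply.
    return "Tell me more."

print(respond("My cat died last week"))  # I am saddened by the death of your cat.
```

The self-referential question falls through to the stock reply, which is why BarryR's test question and Sewell's point are two sides of the same observation: the lookup table performs exactly as programmed, no more and no less.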
The link between consciousness and complexity seems pretty clear to me. As MarkF said, a slug is conscious to a degree. That's a pretty low degree. A dog is conscious to a greater degree, and a human to a greater degree yet. The brain, or neurological function, of each of these animals is more complex than the last. Of course this could be a coincidence, but the link between consciousness and brain size seems pretty universal (as far as we can tell). Consciousness seems a completely fuzzy term to me anyway. What is a workable definition of it? I am quite prepared to believe it is nothing special and simply arises naturally in a sufficiently complex system. zeroseven
jurassicmac: that you weren’t seriously insinuating that anyone thinks that consciousness could exist in such a simple device. I am curious. As you seem to suggest that consciousness requires complexity, how would you define complexity in this context? gpuccio
Granville Sewell @27, your thesis is quite reasonable and eminently defensible. The same blogger who insists that the word "consciousness" has not been sufficiently defined for purposes of discussion also asserts that it is not unreasonable to believe that a computer could arrive at that state. ---jurassicmac to Granville: "I'll have to assume that your comment 'what about typewriters,' was a joke that fell flat; that you weren't seriously insinuating that anyone thinks that consciousness could exist in such a simple device." I think I understand now. If the term consciousness is used to explain the futility of Darwinism, you don't know what it means, but if it can be used to argue against Granville Sewell's thesis, its meaning becomes clear. StephenB
I've used the following verse before, when I realized how extensive (and how little understood) the evidence for epigenetic and ontogenetic information was for Body Plan formation, but the verse is just as fitting now in dealing with the extensive evidence for man having a mind/soul that transcends his physical body in the first place: Jeremiah 1:5 'Before I formed you in the womb I knew you, and before you were born I consecrated you;,,, Near Death Experiences - Scientific Evidence - Dr Jeff Long M.D. - video http://www.metacafe.com/watch/4454627 I have always been fascinated by the many accounts of Near Death Experiences, in which the experiencer will say something to the effect that they remember that this (heaven) is where they 'really' came from,,, etc.. etc... bornagain77
Mark: Good questions. I will try to clarify, according to my point of view: I have tried to make clear that the soul is a philosophical and religious concept. That's why I wrote: "I think that most of those who believe that consciousness is an independent principle view it as a property, or as the essence itself, of what is usually called 'soul'. So, the answer to your question 'how did it first appear?' at a philosophical level would critically depend on the specific conception of the soul one has." IOW, to answer your questions one has to propose a whole metaphysical scenario. There are many different ones, and each one of us is free to choose the one he believes in. I usually avoid discussing specific philosophical and religious beliefs here, except when I believe they have very universal implications (as in the case of the problem of free will). In general, I prefer to discuss empirical views here, because that's my purpose in posting on this blog. That's why I added: "If we want to stay empirical, consciousness has to be considered as an observed part of reality, and a very important one. In that case, empirically, the only possible answer to your question: 'how did it first appear?', is 'we don't know'. Empirically, we don't know. Yet." All the rest of my post is purely empirical. But I can, I believe, give a possible empirical answer to at least one of your questions, based on my personal understanding of consciousness and of its processes, and on my empirical model of consciousness. You say: In a similar way – sometime between conception and birth most humans become conscious and therefore presumably acquire a soul. If this is not an emergent property then how does this happen? First of all, I think that most religious views believe that the soul enters the zygote at the moment of conception. I certainly believe that way. 
But this is not an empirical argument, and I don't want to contradict myself too much :) The empirical argument would be as follows: As you correctly state, I believe that the manifestations of consciousness "have the important property that they are a matter of degree". But it is important to understand that, in my model, that is a property of the manifestations of consciousness, of its phenomenal "states", not of the transcendental self, which is the subject who perceives and experiences those states. So, my model is that the self is associated with the zygote from the moment of conception, but that it experiences different "states" as the physical body and the nervous structures develop. So, it's not a matter of "emergent properties". The fundamental property of being conscious, of being a self, is always there, but its phenomenal manifestation depends on the status of the physical interface. In substance, it's not different from what happens in postnatal life. This should also clarify my possible answer to your other point: There seems to be no reason why the properties of the soul should have anything in common with the souls of the parents as it is not inheriting those properties. Yet there is ample experimental evidence that we do inherit mental attributes as well as physical ones. The properties of which you speak (including mental attributes) are inherited because they are part of the physical interface (including the structure of the brain). The interface is in great measure inherited, and the self experiences it and interacts through it with outer reality, as its personal adventure develops. I hope that clarifies my point of view. gpuccio
markf, this following short video shows how the theistic view of 'consciousness' and the materialistic view of 'consciousness' are worlds apart: The Mystery Of Life - God's Creation & Providence - video http://www.metacafe.com/watch/4193364 bornagain77
Gpuccio, I did not express myself well. I think you have a concept of consciousness which avoids the problems I am trying to identify. It has the important property that it is a matter of degree. So a slug could have a very low level of consciousness, a dog a lot more, and a person even more. If you believe this then it is possible to give a reasonable account of the development of consciousness which is consistent with common descent. As the nervous system developed, the organisms were able to sustain greater levels of consciousness. The question I have then is - you say that consciousness is a property of the soul. So now the point is a different one. You believe in common descent. Presumably you don't believe the first prokaryotic life had souls. So at some point an organism without a soul must have had offspring which did have souls (or is soul also a matter of degree?). This is the point I was trying to make about a family where the parents were not conscious but the children were. Now this is not logically impossible - but it seems very strange to me. In a similar way - sometime between conception and birth most humans become conscious and therefore presumably acquire a soul. If this is not an emergent property then how does this happen? Presumably the soul is attached to the body by a supernatural process. If so, what relationship does the soul of the progeny have to the souls of the parents? There seems to be no reason why the properties of the soul should have anything in common with the souls of the parents, as it is not inheriting those properties. Yet there is ample experimental evidence that we do inherit mental attributes as well as physical ones. markf
gpuccio, bornagain77 and StephenB, thanks for the very thoughtful responses. And also thanks to skynetx for your encouraging words in comment #1, I get pretty discouraged sometimes. Granville Sewell
---jurassicmac: "Stephen, let me clarify. I wasn't saying that 'because God made it that way,' was a less satisfying explanation than any other, I was saying that 'because God made it that way,' isn't an explanation at all." But you are, of course, incorrect. It is both a theological and a philosophical explanation, hearkening back to a first cause. ---"I am a Christian, and I believe that anything that happens is at the very least allowed by God. In one sense, I believe it is accurate to say that it was God's intention that my house exist. But if someone asks me how my house came to exist, and I simply answer: 'Because God willed it so,' I haven't answered their question at all." Yes, you have, but you have provided the wrong answer. God didn't build your house. ---"I haven't explained anything." If you say that God created the universe, you have provided the right philosophical/theological explanation, and it will be consistent with any valid scientific explanation. ---"The same thing goes for 'Why does it rain?' The reply: 'Because God causes it,' isn't an explanation in any sense of the word." To say that God caused the ecological conditions that make rain possible is to provide a theological explanation. ---"Science is concerned with discerning proximate causes, not ultimate causes." Yes, that's right. On the other hand, to say that science is concerned with proximate causes doesn't mean that ultimate causes are not causes or that they are not explanations. They are different kinds of explanations and are arrived at through deductive means rather than inductive means. --"To say 'God made it that way,' isn't illuminating in the least, because that is the same answer a theist could give for everything. What causes radioactivity?" God is the ultimate cause of radioactivity. That is a lot more illuminating than saying radioactivity "emerged" from who knows what. The word emerge is simply an incomprehensible description posing as a cause. ---"God. 
Why does lithium react with water the way it does? God. How did our solar system form? God. How does photosynthesis work? God." Isn't it time that you stopped confusing theological explanations with scientific explanations? --"You see, not only is 'God did it,' not a more 'illuminating' explanation, it is not an explanation at all!" To say God created something is a meaningful statement. To say that something emerged is to say that something came from nothing, which is irrational. As a self-professed "Christian," you should be wary of the something-from-nothing type explanations provided by Darwinists. ---"And of course, the question that follows emergent properties is 'why does this property emerge?'" No, the question is, what in blazes do you mean by "emergence" and why do you acknowledge that it could be a cause of anything? The whole purpose for using the word emerge is to imply that no cause was needed, in an attempt to escape the overwhelming evidence for design. ---"This is a terrible analogy; of course it makes sense to say that Mozart composed his symphonies: we know with a high degree of certainty that he did. We know this for several reasons." Obviously, you do not understand the analogy or its purpose. ---"We have plausible, verifiable agency; we know that humans can and do write music. We have plausible, verifiable mechanism; we know that humans can transcribe musical notes with pen and paper. We even have corroborating historical documentation. With the diversification of life, we have none of these things." The purpose of the analogy is to show that there are many causes that cannot be measured. Mozart's creative act in composing cannot be measured, but it is a cause, just as God's creative act in conceiving the design of the universe cannot be measured and yet is also a cause. Through a design inference, we can detect intelligent activity by studying the EFFECTS of both causes, i.e. the cause of the universe and the cause of the symphony. 
You are confusing Mozart's creativity and God's creativity, which are agency causes, with the laws of physics, which are natural causes. Further, you are expecting both types of explanations to be of the same texture. You claim to be Christian, but you are thinking like a materialist Darwinist. StephenB
jurassicmac, the crushing problem against your position, of computers having the 'potential' to become conscious, or any other 3D material entity ever having the 'potential' of becoming conscious, can be illustrated by Quantum Mechanics. It is interesting to note that some materialists seem to have a very hard time grasping the simple point of the double slit experiments, but to try to put it more clearly: To explain an event which defies time and space, as the quantum erasure experiment clearly does, you cannot appeal to any material entity in the experiment like the detector, or any other 3D physical part of the experiment such as a computer, which is itself constrained by the limits of time and space. To give an adequate explanation for defying time and space one is forced to appeal to a transcendent entity which is itself not confined by time or space. But then again I guess I can see why forcing someone who claims to be an atheistic materialist to appeal to a non-material transcendent entity, to give an adequate explanation, would invoke such utter confusion on their part. Yet to try to put it in even more 'shocking' terms, the 'shocking' conclusion of the experiment is that a transcendent Mind, with a capital M, must precede the collapse of quantum waves to 3-Dimensional particles. Moreover, it is impossible for a human mind to ever 'emerge' from any 3-D material particle which is itself semi-dependent on our 'observation' for its own collapse to a 3D reality in the first place. This is more than a slight problem for the atheistic-evolutionary materialist who insists that our minds 'emerged', or evolved, from 3D matter. In the following article Professor Henry puts it more clearly than I can:
The Mental Universe – Richard Conn Henry – Professor of Physics, Johns Hopkins University Excerpt: The only reality is mind and observations, but observations are not of things. To see the Universe as it really is, we must abandon our tendency to conceptualize observations as things.,,, Physicists shy away from the truth because the truth is so alien to everyday physics. A common way to evade the mental universe is to invoke "decoherence" – the notion that "the physical environment" is sufficient to create reality, independent of the human mind. Yet the idea that any irreversible act of amplification is necessary to collapse the wave function is known to be wrong: in "Renninger-type" experiments, the wave function is collapsed simply by your human mind seeing nothing. The universe is entirely mental,,,, The Universe is immaterial — mental and spiritual. Live, and enjoy. http://henry.pha.jhu.edu/The.mental.universe.pdf Astrophysicist John Gribbin comments on the Renninger experiment here: Solving the quantum mysteries – John Gribbin Excerpt: From a 50:50 probability of the flash occurring either on the hemisphere or on the outer sphere, the quantum wave function has collapsed into a 100 per cent certainty that the flash will occur on the outer sphere. But this has happened without the observer actually "observing" anything at all! It is purely a result of a change in the observer's knowledge about what is going on in the experiment.
i.e. The detector is completely removed as to being the primary cause of quantum wave collapse in the experiment. As Richard Conn Henry clearly implied previously, in the experiment it is found that 'the physical environment' IS NOT sufficient within itself to 'create reality', i.e. IS NOT sufficient to explain quantum wave collapse to an 'uncertain' 3D particle. Thus, jurassicmac, we are clearly dealing with a transcendent entity that is not limited by time and space, and yet you act as if, by rearranging 1's and 0's in a computer's 'physical' memory, which is clearly based in time and space, we will somehow someday have a computer that will produce that which is transcendent of time and space?!? Your conjecture is of the same type of problem as those who try to explain the origination of the universe with no 'material' entity to work with. You simply don't have anything to work with in this instance, save your 'faith' that it may be possible someday. bornagain77
GP: Excellent. As usual. G kairosfocus
Mark: One other point. If you do not believe consciousness has a physical basis then why should it be heritable? It can hardly be in our genes if it is not physical. And how did it first appear? If you accept common descent and believe consciousness to be a unique thing rather than a matter of degree then there must have been one or more families where the parents were not conscious but the children were! I am not sure I understand your points. In what sense should consciousness be (or not be) "heritable"? I certainly believe that it is not "in our genes", and so would probably do all those who, like me, believe that it is "not physical". And so? I think that most of those who believe that consciousness is an independent principle view it as a property, or as the essence itself, of what is usually called "soul". So, the answer to your question "how did it first appear?" at a philosophical level would critically depend on the specific conception of the soul one has. If we want to stay empirical, consciousness has to be considered as an observed part of reality, and a very important one. In that case, empirically, the only possible answer to your question: "how did it first appear?", is "we don't know". Empirically, we don't know. Yet. What we know is that, in us humans, which is where we directly observe consciousness, it is certainly "interfaced" with the physical body and brain. There is no question about that. Consciousness, in us, interacts with the outer world, in both directions, through the interface of our physical body and brain. Now, I would say that "the interface" is certainly in our genes. I think we can agree on that. Finally, I really don't understand your last concept. I certainly "accept common descent and believe consciousness to be a unique thing". The transcendental self is, IMO, "a unique thing". 
But I also believe that the manifestations of consciousness (for instance, its different "states") are "a matter of degree": waking consciousness is different from dreams and from deep sleep, or from other subconscious states, such as hypnosis, but all of those states are manifestations of the conscious, transcendental self. And why should there be "one or more families where the parents were not conscious but the children were!"? Humans are conscious, as far as we know. To what are you referring? gpuccio
jurassicmac: Just a few comments on what you say: 1) How could one possibly say that the ‘explanation’ of consciousness poses a problem for any theory, if it is not currently understood at all to begin with? Here there is some confusion in terms and epistemology, I believe. Consciousness is what it is, because it is a fact, not a theory. It is indeed directly observed by each one of us (our personal consciousness), and then, and only then, inferred in others by analogy. Facts are not necessarily "understood". Sometimes, they can be "explained" in the sense that we can build a more or less valid model which relates some facts to other facts or to other concepts. But we do not "understand" what matter is, or what energy is, or what gravitation is, any more than we "understand" what consciousness is. We can build theories about the properties of matter, or of energy, and the relations between those properties, and the same can be done for consciousness. 2) The assumption of strong AI theory, and of reductionist materialism, is that consciousness (the fact) can be "explained" as a consequence of some properties of matter and of the known laws of physics. Well, anybody is free to believe what he likes. But it is a fact that there is absolutely no basis for that assumption. 3) On the contrary, there are very strong reasons, both logical and empirical, to disagree with that assumption and to give it no real credit. The form that the assumption takes usually is that consciousness is a consequence of some property of the software. That is logically inconsistent, and indeed is not even a credible concept at all. Software is only a more or less complex arrangement of very simple computations. According to AI theory, the results of those computations are independent from the hardware. 
This concept is very important, because it means that if some property of the software is the cause of consciousness, then those computations should generate the same consciousness on any hardware, including an abacus. Who is really ready to believe that a very complex abacus, manually operated, will become conscious while the computations are accomplished? Moreover, all the possible properties of software which have been invoked as a substrate for consciousness, such as parallel computing or loops, have no intrinsic reason to be more related to consciousness than, say, a simple addition. Why should parallel computing be more prone to be conscious than serial computing? Did we observe any rise in consciousness in the latest generation of processors? The simple fact is: software is software; it is a series of organized simple computations. Affirming that it becomes conscious because of its structure is no more reasonable than affirming that a well painted woman portrait will begin to speak. There is no sense in it, and that's all. 4) The empirical reasons against strong AI are very simple: there is absolutely no sign that any specific software structure contributes to consciousness, even less determines it. But there is another very important argument, one that is directly related to ID. Software cannot generate new CSI. That is very important, because conscious beings can do that, and they definitely do that all the time, and very easily. That is a very strong argument in favour of the assumption (this time, very reasonable) that consciousness is necessary to generate CSI. With all the logical consequences which ensue, both for darwinian theory and for strong AI theory. gpuccio
Congratulations to Jurassicmac for his/her clear and intelligent comments. One other point. If you do not believe consciousness has a physical basis then why should it be heritable? It can hardly be in our genes if it is not physical. And how did it first appear? If you accept common descent and believe consciousness to be a unique thing rather than a matter of degree then there must have been one or more families where the parents were not conscious but the children were! markf
Jurassicmac,
ID currently is suffering from a very strong perception of being anti-science. That perception is reinforced with articles like this which propose that little-understood phenomena are somehow problems for evolutionary theory.
I didn't actually mention ID in the article, it was about Darwinism. And yes, I do think that little-understood phenomena are problems for any theory which claims to explain them, especially if that theory is taught in all our science classrooms as being as well established as gravity, and given legal protection from scientific criticism. No one here is even proposing that ID be taught in science classrooms, you are the one whose speculative theory is being taught as religious dogma. Apparently your position is that because some sci-fi aficionados like yourself think scientists might someday create computers that are conscious (do you also think traveling back in time may be possible some day?), it's 'anti-science' to recognize that human consciousness might be a problem for modern evolutionary theory. Granville Sewell
To clarify to everyone here: I'm all for ID establishing itself as a legitimate scientific discipline. But ID currently is suffering from a very strong perception of being 'anti-science'. That perception is reinforced with articles like this which propose that little-understood phenomena are somehow 'problems' for evolutionary theory. It doesn't help that the article wanders into baseless speculation about what technological achievements won't ever happen. jurassicmac
lamark said:
Well it’s obviously a problem for your theory if it’s so slippery you can’t even define it.
It's not that I can't explain what causes consciousness, it's that no one can. (yet) Saying that any 'x' is a problem for any theory is nonsense when nothing is known about 'x'. It would be like saying 'blobutrons' are a problem for the theory of relativity without knowing anything about what causes blobutrons, or even what they are. If it turns out that 'consciousness' is an emergent property of a sufficiently complex brain, or something like that, then evolutionary theory wouldn't have any more trouble explaining it than any other emergent property in nature, like 'migration' or flocking behavior. jurassicmac
JDH said:
Having a Ph.D. in physics (computer simulation of ionospheric processes and artificial heating) and having been in the computer industry for over 25 years, I think I know a little fact that proves that human consciousness can not have evolved. I may be wrong, but it will take a tremendous amount of real logic, not complicated gobbledygook, to change my mind. The simple fact is this. Despite years of experience writing many complex codes, I can not write a computer program that disobeys me. I don't even know how to do it.
JDH, I must respectfully point out that you have made 2 logical fallacies at the same time. First and foremost, you have made the fallacy of "Argument from Analogy." You have presented an analogy of how you think computer programming relates to consciousness, but then declare it a 'known little fact' that 'proves' human consciousness could not have evolved. An analogy never proves anything. (especially when it's an irrelevant analogy to begin with) Secondly, your 'analogy' is also an "Argument from Incredulity." "I can not write a computer program..." "I don't even know how...." Just because you don't know how to write code that disobeys you doesn't mean it's impossible. It's irrelevant to the point anyway: You certainly could write two programs that disobey each other. And to play devil's advocate for a second, how do you know that humans aren't following their 'program'? Do you have access to the 'source code' that the rest of us don't have? Do tell. jurassicmac
StephenB:
To say, "God made it that way" is more illuminating than to say, "it emerged." The first statement is consistent with science and tells us something about the cause; the second statement is not consistent with science and tells us absolutely nothing about the cause.
Stephen, let me clarify. I wasn't saying that "because God made it that way," was a less satisfying explanation than any other, I was saying that "because God made it that way," isn't an explanation at all. I am a Christian, and I believe that anything that happens is at the very least allowed by God. In one sense, I believe it is accurate to say that it was God's intention that my house exist. But if someone asks me how my house came to exist, and I simply answer: "Because God willed it so," I haven't answered their question at all. I haven't explained anything. The same thing goes for "Why does it rain?" The reply: "Because God causes it," isn't an explanation in any sense of the word. Science is concerned with discerning proximate causes, not ultimate causes. To say "God made it that way," isn't illuminating in the least, because that is the same answer a theist could give for everything. What causes radioactivity? God. Why does lithium react with water the way it does? God. How did our solar system form? God. How does photosynthesis work? God. You see, not only is "God did it," not a more 'illuminating' explanation, it is not an explanation at all! And it could not be less consistent with science. And of course, the question that follows emergent properties is "why does this property emerge?"
For the same reason, it makes sense to say that Mozart composed his symphonies, but it makes no sense at all to say that these works of art “emerged.”
This is a terrible analogy; of course it makes sense to say that Mozart composed his symphonies: we know with a high degree of certainty that he did. We know this for several reasons. We have plausible, verifiable agency; we know that humans can and do write music. We have plausible, verifiable mechanism; we know that humans can transcribe musical notes with pen and paper. We even have corroborating historical documentation. With the diversification of life, we have none of these things. jurassicmac
Granville said:
If you ask whether they believe computers will someday be conscious, most people will say, of course not, that’s ridiculous; yet many of them believe random mutations and natural selection could accomplish this. My post was aimed at these people, to get them to see the inconsistency.
My bet is that most people who accept evolution would not say it's ridiculous to think that a computer-like device may someday be conscious. You seem to be mixing your demographics here; I'd say that many, many more ID proponents think it's 'ridiculous' to speculate that it may be possible to design a machine that's conscious, even in the distant future. I wholeheartedly agree with you that it would be an inconsistency to think that evolution could produce consciousness, but that intentional design couldn't, at least in principle. But I think that you're making (incorrect) presumptions about what a lot of evolutionists think. I'm not even saying it will happen; just that saying it won't seems a stretch when in the same breath you acknowledge that no one even knows what consciousness is. I mean, if God designed conscious beings while still constructing them out of atoms, why couldn't we? (Especially if we were to model the computer after the brain in the first place)
Apparently you are one of those who would say it is “extremely presumptuous” to rule out the possibility that computers can be conscious. As I said in the post, I don’t know how to reach people like you; all I can say is, what about typewriters?
Yes, I absolutely would say that it is presumptuous to rule out the possibility that computers could one day be conscious, and if you can't see that it is, then I don't know how to reach people like you. Calling this statement presumptuous doesn't even have anything to do with evolution; I would have considered it an indefensible statement even when I was a young earth creationist. Evaluate what you're saying. You (or anyone else for that matter) can't answer the following: A. What consciousness even is, or what causes it in the first place, or B. What computer technology will be like in 10, 100, or 1,000 years. So you're making a blanket statement with two variables, without knowing anything about the two variables. Can you not see the problem with this? My buddy Dave and I used to have debates about whether or not humans will ever invent teleporters, like in Star Trek. His position was that we wouldn't, because he thought the technology required was too farfetched. My position was that his position was nonsense; what seems like fantasy to one generation is commonplace to the next. Those who claim that a particular invention or technological achievement is impossible are almost always eventually proven wrong. Just in case there's any confusion, I'm not claiming that consciousness exists in current computers. I'll have to assume that your comment "what about typewriters," was a joke that fell flat; that you weren't seriously insinuating that anyone thinks that consciousness could exist in such a simple device. jurassicmac
very interesting insight @ 8 JDH, bornagain77
This is very interesting too: Today Show: Woman recounts life after death - 01/20/2010 - video http://www.youtube.com/watch?v=DhZ2tvv3mto bornagain77
I have to put a plug in for this book since it mentions "Spiritual Brain" and Signature In The Cell" in the promo: In Consciousness Beyond Life, the internationally renowned cardiologist Dr. Pim van Lommel offers ground-breaking research into whether or not our consciousness survives the death of our body. If you enjoy books about near-death experiences, such as those by Raymond Moody, Jeffrey Long, and James Van Praagh; watch televisions shows like Ghosthunters, Touched by an Angel, and Ghost Whisperer; or are interested in works that explore the intersection of faith and science, such as Spiritual Brain, Signature in the Cell, and When Science Meets Religion; you’ll find much to ponder in Consciousness Beyond Life. http://www.harpercollins.com/books/Consciousness-Beyond-Life-Pim-Van-Lommel/?isbn=9780061777257 further excerpt: Van Lommel provides scientific evidence that the near-death phenomenon is an authentic experience that cannot be attributed to imagination, psychosis, or oxygen deprivation. He further reveals that after such a profound experience, most patients' personalities undergo a permanent change. In van Lommel's opinion, the current views on the relationship between the brain and consciousness held by most physicians, philosophers, and psychologists are too narrow for a proper understanding of the phenomenon. In Consciousness Beyond Life, van Lommel shows that our consciousness does not always coincide with brain functions and that, remarkably and significantly, consciousness can even be experienced separate from the body. bornagain77
JDH, Interesting insight. Thanks. If you could design your computer to disobey you, then it's obeying you. Sigh :( CannuckianYankee
"Since you yourself fully admit that “… it is hard to say anything “scientific” about consciousness, since we don’t really know what it is,” don’t you think it is a bit premature to conclude that consciousness is somehow a ‘problem’ for evolutionary theory?" Well, it's obviously a problem for your theory if it's so slippery you can't even define it. Doesn't that make it a big fat problem? Probably the biggest problem, and very well defined as a problem at that? Here's the first part of the problem defined for you: a. thought does exist, there's no getting around that without getting cop-out ethereal. lamarck
---jurassicmac: "We don’t really know anything about consciousness; and that’s my point: for all we know, it could be an emergent property of a complex brain." ---"And of course, “Because God made it that way,” isn’t an explanation of anything." To say, "God made it that way" is more illuminating than to say, "it emerged." The first statement is consistent with science and tells us something about the cause; the second statement is not consistent with science and tells us absolutely nothing about the cause. For the same reason, it makes sense to say that Mozart composed his symphonies, but it makes no sense at all to say that these works of art "emerged." StephenB
Having a Ph.D. in physics (computer simulation of ionospheric processes and artificial heating) and having been in the computer industry for over 25 years, I think I know a little fact that proves that human consciousness cannot have evolved. I may be wrong, but it will take a tremendous amount of real logic, not complicated gobbledygook, to change my mind. The simple fact is this. Despite years of experience writing many complex codes, I cannot write a computer program that disobeys me. I don't even know how to do it. I can write computer programs that have bugs and don't perform what I thought they were going to do; I can write computer programs that make pseudo-random choices. I do not know how to write a program that disobeys. I would contend it can't be done. But the ability to disobey the Creator is the essence of consciousness. Otherwise it's just complicated programming with random choices. JDH
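JDH's distinction between bugs, pseudo-random choices, and genuine disobedience can be made concrete with a short, hypothetical Python sketch (the function names and choice lists are invented for illustration): even when a program "chooses," both the menu of options and the rule for choosing were supplied by the programmer.

```python
import random

def choose_action():
    # A "random" choice is still the program doing exactly what it was told:
    # the instruction says "pick one of these at random," and it complies.
    return random.choice(["obey", "obey", "obey"])

def simulate_refusal():
    # A program can appear to refuse only if refusal is itself one of the
    # instructed options -- in which case selecting it is still obedience.
    return random.choice(["comply", "refuse"])

print(choose_action())      # always an outcome the programmer authorized
print(simulate_refusal())   # "refuse" here is pre-scripted, not rebellion
```

The sketch illustrates the comment's point, not a proof of it: whether an option outside the instructed set could ever arise is exactly the question under debate.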
Jurassicmac, If you ask whether they believe computers will someday be conscious, most people will say, of course not, that's ridiculous; yet many of them believe random mutations and natural selection could accomplish this. My post was aimed at these people, to get them to see the inconsistency. Apparently you are one of those who would say it is "extremely presumptuous" to rule out the possibility that computers can be conscious. As I said in the post, I don't know how to reach people like you; all I can say is, what about typewriters? Granville Sewell
Granville, Since you yourself fully admit that "... it is hard to say anything “scientific” about consciousness, since we don’t really know what it is," don't you think it is a bit premature to conclude that consciousness is somehow a 'problem' for evolutionary theory? How could one possibly say that the 'explanation' of consciousness poses a problem for any theory, if it is not currently understood at all to begin with? Ilion, you said:
"Consciousness is itself; it’s not made out of something else."
That seems a bit like saying "Migration is itself; it's not made out of something else." That statement is true in that migration isn't 'made' out of something physical, but it is an emergent feature of some physical, biological systems. We don't really know anything about consciousness; and that's my point: for all we know, it could be an emergent property of a complex brain. We don't have any evidence to suggest that consciousness can exist in the absence of a brain. (or something like a brain) Again, all I'm saying is that it sounds extremely presumptuous to say that any particular theory can't explain a phenomenon, if that phenomenon isn't understood at all to begin with. And of course, "Because God made it that way," isn't an explanation of anything. jurassicmac
Very interesting line of thought Ilion. I was kind of thinking along the same line. Consciousness is somewhat of a 'primal' entity of the universe that precedes the 'material' universe in the first place. We end up being very much like the blind men trying to describe different parts of the elephant to each other. Notes: The following experiment extended Wheeler's delayed choice experiment to highlight the centrality of 'information' in the Double Slit Experiment and refutes any 'detector centered' arguments for why the wave collapses: (Double Slit) A Delayed Choice Quantum Eraser - updated 2007 Excerpt: Upon accessing the information gathered by the Coincidence Circuit, we the observer are shocked to learn that the pattern shown by the positions registered at D0 (Detector Zero) at Time 2 depends entirely on the information gathered later at Time 4 and available to us at the conclusion of the experiment. (i.e. This experiment clearly shows that the detector is secondary in the experiment and that a conscious observer, being able to know the information of which path a photon takes with local certainty, is primary to the wave collapsing to a particle in the experiment. The act of a detector detecting a photon at an earlier time in the experiment does not determine if the wave will be collapsed at the end of the experiment. Only the availability of the information to the observer is what matters for the wave to collapse. 
That is what he meant by 'we the observer are shocked to learn') http://www.bottomlayer.com/bottom/kim-scully/kim-scully-web.htm It is interesting to note that some materialists seem to have a very hard time grasping the simple point of the double slit experiments, but to try to put it more clearly; To explain an event which defies time and space, as the quantum erasure experiment clearly does, you cannot appeal to any material entity in the experiment like the detector, or any other 3D physical part of the experiment, which is itself constrained by the limits of time and space. To give an adequate explanation for defying time and space one is forced to appeal to a transcendent entity which is itself not confined by time or space. But then again I guess I can see why forcing someone who claims to be a atheistic materialist to appeal to a non-material transcendent entity, to give an adequate explanation, would invoke such utter confusion on their part. Yet to try to put it in even more 'shocking' terms, the 'shocking' conclusion of the experiment is that a transcendent Mind, with a capital M, must precede the collapse of quantum waves to 3-Dimensional particles. Moreover, it is impossible for a human mind to ever 'emerge' from any 3-D material particle which is itself semi-dependent on our 'observation' for its own collapse to a 3D reality in the first place. This is more than a slight problem for the atheistic-evolutionary materialist who insists that our minds 'emerged', or evolved, from 3D matter. In the following article Professor Henry puts it more clearly than I can: The Mental Universe - Richard Conn Henry - Professor of Physics John Hopkins University Excerpt: The only reality is mind and observations, but observations are not of things. To see the Universe as it really is, we must abandon our tendency to conceptualize observations as things.,,, Physicists shy away from the truth because the truth is so alien to everyday physics. 
A common way to evade the mental universe is to invoke "decoherence" - the notion that "the physical environment" is sufficient to create reality, independent of the human mind. Yet the idea that any irreversible act of amplification is necessary to collapse the wave function is known to be wrong: in "Renninger-type" experiments, the wave function is collapsed simply by your human mind seeing nothing. The universe is entirely mental,,,, The Universe is immaterial — mental and spiritual. Live, and enjoy. http://henry.pha.jhu.edu/The.mental.universe.pdf Astrophysicist John Gribbin comments on the Renninger experiment here: Solving the quantum mysteries - John Gribbin Excerpt: From a 50:50 probability of the flash occurring either on the hemisphere or on the outer sphere, the quantum wave function has collapsed into a 100 per cent certainty that the flash will occur on the outer sphere. But this has happened without the observer actually "observing" anything at all! It is purely a result of a change in the observer's knowledge about what is going on in the experiment. i.e. The detector is completely removed as to being the primary cause of quantum wave collapse in the experiment. As Richard Conn Henry clearly implied previously, in the experiment it is found that 'The physical environment' IS NOT sufficient within itself to 'create reality', i.e. IS NOT sufficient to explain quantum wave collapse to a 'uncertain' 3D particle. Why, who makes much of a miracle? As to me, I know of nothing else but miracles, Whether I walk the streets of Manhattan, Or dart my sight over the roofs of houses toward the sky,,, Walt Whitman - Miracles That the mind of a individual observer would play such an integral, yet not complete 'closed loop' role, in instantaneous quantum wave collapse to uncertain 3-D particles, gives us clear evidence that our mind is a unique entity. 
A unique entity with a superior quality of existence when compared to the uncertain 3D particles of the material universe. This is clear evidence for the existence of the 'higher dimensional soul' of man that supersedes any material basis that the soul/mind has been purported to emerge from by materialists. I would also like to point out that the 'effect', of universal quantum wave collapse to each 'central 3D observer', gives us clear evidence of the extremely special importance that the 'cause' of the 'Infinite Mind of God' places on each of our own individual souls/minds. Psalm 139:17-18 How precious concerning me are your thoughts, O God! How vast is the sum of them! Were I to count them, they would outnumber the grains of sand. When I awake, I am still with you. further notes: I especially like how the authors draw out this following 'what it means to be human' distinction in their paper: "although Homo neanderthalensis had a large brain, it left no unequivocal evidence of the symbolic consciousness that makes our species unique." -- "Unusual though Homo sapiens may be morphologically, it is undoubtedly our remarkable cognitive qualities that most strikingly demarcate us from all other extant species. They are certainly what give us our strong subjective sense of being qualitatively different. And they are all ultimately traceable to our symbolic capacity. Human beings alone, it seems, mentally dissect the world into a multitude of discrete symbols, and combine and recombine those symbols in their minds to produce hypotheses of alternative possibilities. When exactly Homo sapiens acquired this unusual ability is the subject of debate." The authors of the paper try to find some evolutionary/materialistic reason for the extremely unique 'information capacity' of humans, but of course they never find a coherent reason. 
Indeed why should we ever consider a process, which is utterly incapable of ever generating any complex functional information at even the most foundational levels of molecular biology, to suddenly, magically, have the ability to generate our brain which can readily understand and generate functional information? A brain which has been repeatedly referred to as 'the Most Complex Structure in the Universe'? The authors never seem to consider the 'spiritual angle' for why we would have such a unique capacity for such abundant information processing. This following short video, and verses, are very clear as to what the implications of this evidence means to us and for us: Modus Tollens - It Is Impossible For Evolution To Be True - T.G. Peeler - video http://www.metacafe.com/w/5047482 Genesis 3:8 And they (Adam and Eve) heard the voice of the LORD God walking in the garden in the cool of the day... John 1:1-1 In the beginning, the Word existed. The Word was with God, and the Word was God. A very strong piece of suggestive evidence, which persuasively hints at a unique relationship that man has with 'The Word' of John 1:1, is found in these following articles which point out the fact that ‘coincidental scientific discoveries’ are far more prevalent than what should be expected from a materialistic perspective,: In the Air – Who says big ideas are rare? by Malcolm Gladwell Excerpt: This phenomenon of simultaneous discovery—what science historians call “multiples”—turns out to be extremely common. One of the first comprehensive lists of multiples was put together by William Ogburn and Dorothy Thomas, in 1922, and they found a hundred and forty-eight major scientific discoveries that fit the multiple pattern. Newton and Leibniz both discovered calculus. Charles Darwin and Alfred Russel Wallace both discovered evolution. Three mathematicians “invented” decimal fractions. 
Oxygen was discovered by Joseph Priestley, in Wiltshire, in 1774, and by Carl Wilhelm Scheele, in Uppsala, a year earlier. Color photography was invented at the same time by Charles Cros and by Louis Ducos du Hauron, in France. Logarithms were invented by John Napier and Henry Briggs in Britain, and by Joost Bürgi in Switzerland. ,,, For Ogburn and Thomas, the sheer number of multiples could mean only one thing: scientific discoveries must, in some sense, be inevitable. http://www.newyorker.com/reporting/2008/05/12/080512fa_fact_gladwell/?currentPage=all List of multiple discoveries Excerpt: Historians and sociologists have remarked on the occurrence, in science, of "multiple independent discovery". Robert K. Merton defined such "multiples" as instances in which similar discoveries are made by scientists working independently of each other.,,, Multiple independent discovery, however, is not limited to only a few historic instances involving giants of scientific research. Merton believed that it is multiple discoveries, rather than unique ones, that represent the common pattern in science. http://en.wikipedia.org/wiki/List_of_multiple_discoveries The following video is far more direct in establishing the 'spiritual' link to man's ability to learn new information, in that it shows that the SAT (Scholastic Aptitude Test) scores for students showed a steady decline, for seventeen years, after the removal of prayer from the classroom by the Supreme Court in 1963: The Real Reason American Education Has Slipped – David Barton – video http://www.metacafe.com/watch/4318930 These following studies, though of materialistic bent, offer strong support that Humans are extremely unique in this 'advanced information capacity' when compared to animals: Darwin’s mistake: Explaining the discontinuity between human and nonhuman minds: Excerpt: There is a profound functional discontinuity between human and nonhuman minds. 
We argue that this discontinuity pervades nearly every domain of cognition and runs much deeper than even the spectacular scaffolding provided by language or culture can explain. We hypothesize that the cognitive discontinuity between human and nonhuman animals is largely due to the degree to which human and nonhuman minds are able to approximate the higher-order, systematic, relational capabilities of a physical symbol system (i.e. we are able to understand information). http://www.bbsonline.org/Preprints/Penn-01062006/Referees/Penn-01062006_bbs-preprint.htm Origin of the Mind: Marc Hauser Excerpt: "Researchers have found some of the building blocks of human cognition in other species. But these building blocks make up only the cement footprint of the skyscraper that is the human mind",,, These following studies highlight the difficulty materialists have in fitting our mental abilities into any plausible evolutionary scenario: Origin of Soulish Animals: Excerpt: Bolhuis and Wynne contrast the cognitive capacities of birds and primates.,,, They also refer to an experiment demonstrating that "crows can also work out how to use one tool to obtain a second with which they can retrieve food, a skill that monkeys and apes struggle to master." Evidently, certain bird species exhibit greater powers of the mind than do apes. Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. 
Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information. http://www3.interscience.wiley.com/journal/55000207/abstract?CRETRY=1&SRETRY=0 bornagain77
Well, that was pretty vacuous. Let's start with the easy problems.
With the right software, my laptop may already be able to pass a Turing test, and convince me that I am Instant Messaging another human.
What a curious laptop you have. If I were to participate in a Turing test, I'd be asking questions like "Of the two most recent questions you were asked, which was the more difficult to answer and why?" Your dictionary approach doesn't work too well there, does it. You've also managed to mischaracterize the Turing test, and I don't think you've read anything scholarly in artificial intelligence or philosophy of mind. Arguments from incredulity are almost always arguments from ignorance without the admission of ignorance, yes? Turing's "Computing Machinery and Intelligence" paper can be found here. Daniel Dennett's Can Machines Think? begins with the words "Much has been written about the Turing test in the last few years, some of it preposterously off the mark." It explains (among other things) why dictionary-based approaches to artificial intelligence won't ever be able to pass a Turing test. If you're not able to define consciousness then it's not surprising that you're unable to consider a physical model of it, and if you don't have a physical model then there's not much point in trying to evaluate an evolutionary solution (unless you're intending on making an argument from incredulity rooted in an argument from ignorance). Kandel's Principles of Neural Science is 1,400+ pages of how the brain works (and fails to work), starting at the electrochemical and working up through the cellular to larger brain structures. Once you remove all the aspects of thought and behavior where neural correlates are known, there's very little remaining for a non-physical consciousness to do. So yes, I agree with you: you really don’t know how to argue with people who believe computers could be conscious. BarryR
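The dictionary-based chatbot being criticized here can be sketched in a few lines of hypothetical Python (the table entries and fallback reply are invented for illustration; real programs like ELIZA are more elaborate). The sketch shows why a verbatim lookup table handles the canned "dead cat" exchange but collapses on a self-referential question:

```python
# A minimal lookup-table "chatbot": every reply must be anticipated in advance.
canned_replies = {
    "my cat died last week": "I am saddened by the death of your cat.",
    "hello": "Hello! How are you today?",
}

def reply(message: str) -> str:
    # Normalize and look the message up verbatim; anything not in the
    # table falls through to a stock evasion that carries no understanding.
    return canned_replies.get(message.strip().lower(),
                              "That's interesting. Tell me more.")

print(reply("My cat died last week"))   # the one scripted hit
print(reply("Of the two most recent questions you were asked, "
            "which was the more difficult to answer and why?"))  # evasion
```

Any question that refers to the conversation itself, rather than repeating a memorized prompt, falls outside the table; this is the sense in which a pure dictionary cannot pass a serious Turing test.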
"Though we may not be able to say exactly what consciousness is ..." A huge part of the problem (perhaps the entirety of it?) is that trying to "say exactly what consciousness is" generally involves mechanistic/materialistic eliminative reductionism ... which is to say, "explaining" consciousness by explaining it away. Consciousness is itself; it's not made out of something else. Ilion
Here is another article that is far more nuanced in its discerning of the 'transcendent mind', than the brute empirics I'm drawn to: The Mind and Materialist Superstition - Six "conditions of mind" that are irreconcilable with materialism: http://www.evolutionnews.org/2008/11/the_mind_and_materialist_super.html --------- Of related interest on the "spiritual" aspect of man: https://docs.google.com/Doc?docid=0AYmaSrBPNEmGZGM4ejY3d3pfNGQ4aGM4NzZq&hl=en bornagain77
Though we may not be able to say exactly what consciousness is, I believe that it is possible to determine that consciousness is its own unique entity: Here are my notes to that effect: Quantum mind–body problem Parallels between quantum mechanics and mind/body dualism were first drawn by the founders of quantum mechanics including Erwin Schrödinger, Werner Heisenberg, Wolfgang Pauli, Niels Bohr, and Eugene Wigner “It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness.” Eugene Wigner (1902 -1995) laid the foundation for the theory of symmetries in quantum mechanics, for which he received the Nobel Prize in Physics in 1963. http://en.wikipedia.org/wiki/Eugene_Wigner These following studies and videos confirm this ‘superior quality’ of existence for our souls/minds: Miracle Of Mind-Brain Recovery Following Hemispherectomies – Dr. Ben Carson – video http://www.metacafe.com/watch/3994585/ Removing Half of Brain Improves Young Epileptics’ Lives: Excerpt: “We are awed by the apparent retention of memory and by the retention of the child’s personality and sense of humor,” Dr. Eileen P. G. Vining; In further comment from the neuro-surgeons in the John Hopkins study: “Despite removal of one hemisphere, the intellect of all but one of the children seems either unchanged or improved. Intellect was only affected in the one child who had remained in a coma, vigil-like state, attributable to peri-operative complications.” The Day I Died – Part 4 of 6 – The Extremely ‘Monitored’ Near Death Experience of Pam Reynolds – video http://www.metacafe.com/watch/4045560 Blind Woman Can See During Near Death Experience (NDE) – Pim von Lommel – video http://www.metacafe.com/watch/3994599/ Kenneth Ring and Sharon Cooper (1997) conducted a study of 31 blind people, many of who reported vision during their Near Death Experiences (NDEs). 
21 of these people had had an NDE while the remaining 10 had had an out-of-body experience (OBE), but no NDE. It was found that in the NDE sample, about half had been blind from birth. (of note: This ‘anomaly’ is also found for deaf people who can hear sound during their Near Death Experiences(NDEs).) Quantum Consciousness – Time Flies Backwards? – Stuart Hameroff MD Excerpt: Dean Radin and Dick Bierman have performed a number of experiments of emotional response in human subjects. The subjects view a computer screen on which appear (at randomly varying intervals) a series of images, some of which are emotionally neutral, and some of which are highly emotional (violent, sexual….). In Radin and Bierman’s early studies, skin conductance of a finger was used to measure physiological response They found that subjects responded strongly to emotional images compared to neutral images, and that the emotional response occurred between a fraction of a second to several seconds BEFORE the image appeared! Recently Professor Bierman (University of Amsterdam) repeated these experiments with subjects in an fMRI brain imager and found emotional responses in brain activity up to 4 seconds before the stimuli. Moreover he looked at raw data from other laboratories and found similar emotional responses before stimuli appeared. In The Wonder Of Being Human: Our Brain and Our Mind, Eccles and Robinson discussed the research of three groups of scientists (Robert Porter and Cobie Brinkman, Nils Lassen and Per Roland, and Hans Kornhuber and Luder Deeke), all of whom produced startling and undeniable evidence that a “mental intention” preceded an actual neuronal firing – thereby establishing that the mind is not the same thing as the brain, but is a separate entity altogether. 
“As I remarked earlier, this may present an “insuperable” difficulty for some scientists of materialists bent, but the fact remains, and is demonstrated by research, that non-material mind acts on material brain.” Eccles “Thought precedes action as lightning precedes thunder.” Heinrich Heine – in the year 1834 A Reply to Shermer Medical Evidence for NDEs (Near Death Experiences) – Pim van Lommel Excerpt: For decades, extensive research has been done to localize memories (information) inside the brain, so far without success.,,,,Nobel prize winner W. Penfield could sometimes induce flashes of recollection of the past (never a complete life review), experiences of light, sound or music, and rarely a kind of out-of-body experience. These experiences did not produce any transformation. After many years of research he finally reached the conclusion that it is not possible to localize memories (information) inside the brain. Scientific Evidence That Mind Effects Matter – Random Number Generators – video http://www.metacafe.com/watch/4198007 (I once asked a evolutionist after showing him the preceding experiment, "Since you ultimately believe that the 'god of random chance' produced everything we see around us, what in the world is my mind doing pushing your god around?) Genesis 2:7 And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul. As well it should be noted that, counter-intuitive to materialistic thought (and to every kid who has ever taken a math exam), a computer does not consume energy during computation but will only consume energy when information is erased from it. This counter-intuitive fact is formally known as Landauer's Principle. i.e. Erasing information is a thermodynamically irreversible process that increases the entropy of a system. i.e Only irreversible operations consume energy. Reversible computation does not use up energy. 
Unfortunately the computer will eventually run out of information storage space and must begin to 'irreversibly' erase the information it has previously gathered (Bennett: 1982) and thus a computer must eventually use energy. i.e. A 'material' computer must eventually obey the second law of thermodynamics for its computation. Landauer's principle Of Note: "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of an (specific) amount (at least kT ln 2) of heat.,,, Landauer’s Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008). http://en.wikipedia.org/wiki/Landauer%27s_principle "Those devices can yield only approximations to a structure (of information) that has a deep and "computer independent" existence of its own." - Roger Penrose - The Emperor's New Mind - Pg 147 "Information is information, not matter or energy. No materialism which does not admit this can survive at the present day." Norbert Weiner - MIT Mathematician - Father of Cybernetics This ability of a computer to 'compute answers' without ever hypothetically consuming energy, until information is erased, is very suggestive that the answers/truth already exist in reality, and in fact, when taken to its logical conclusion, is very suggestive to the postulation of John 1:1 that 'Logos' is ultimately the foundation of our 'material' reality in the first place. John 1:1-3 In the beginning, the Word existed. The Word was with God, and the Word was God. He was with God in the beginning. Through Him all things were made; without Him nothing was made that has been made. 
(Of note: 'Word' in Greek is 'Logos', the root word from which we get our word 'logic'.)

This strange anomaly between lack of energy consumption and the computation of information seems to hold for the human mind as well.

Appraising the brain's energy budget
Excerpt: In the average adult human, the brain represents about 2% of the body weight. Remarkably, despite its relatively small size, the brain accounts for about 20% of the oxygen and, hence, calories consumed by the body. This high rate of metabolism is remarkably constant despite widely varying mental and motoric activity. The metabolic activity of the brain is remarkably constant over time.
http://www.pnas.org/content/99/16/10237.full

THE EFFECT OF MENTAL ARITHMETIC ON CEREBRAL CIRCULATION AND METABOLISM
Excerpt: Although Lennox considered the performance of mental arithmetic as "mental work", it is not immediately apparent what the nature of that work in the physical sense might be, if, indeed, there be any. If no work or energy transformation is involved in the process of thought, then it is not surprising that cerebral oxygen consumption is unaltered during mental arithmetic.

The preceding experiments are very unexpected to materialists, since materialists hold that 'mind' is merely an 'emergent property' of the physical processes of the material brain.

Considering that computers cannot pass the following test for creating new information...

"... no operation performed by a computer can create new information." - Douglas G. Robertson, "Algorithmic Information Theory, Free Will and the Turing Test," Complexity, Vol. 3, No. 3, Jan/Feb 1999, pp. 25-34.

...whereas humans can fairly easily pass the test for creating new information:

"So, to sum up: computers can reshuffle specifications and perform any kind of computation implemented in them. They are mechanical, totally bound by the laws of necessity (algorithms), and non-conscious.
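The "2% of body weight, 20% of calories" figures in the energy-budget excerpt imply a surprisingly modest absolute power draw, which a short back-of-the-envelope calculation makes concrete (a sketch only; the 2,000 kcal/day intake is an assumed typical adult value, not stated in the excerpt):

```python
# Rough estimate of the brain's continuous power draw,
# using the ~20% figure from the excerpt above.
KCAL_TO_JOULES = 4184          # 1 kilocalorie (food Calorie) in joules
SECONDS_PER_DAY = 86_400

daily_intake_kcal = 2000       # assumed typical adult daily intake
brain_fraction = 0.20          # ~20% of calories, per the excerpt

brain_joules_per_day = daily_intake_kcal * brain_fraction * KCAL_TO_JOULES
brain_watts = brain_joules_per_day / SECONDS_PER_DAY
print(f"{brain_watts:.1f} W")  # on the order of 20 W
```

About 20 watts, and, as the excerpt emphasizes, that figure stays nearly constant whether the subject is resting or doing mental arithmetic.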
Humans can continuously create new specification, and also perform complex computations like a computer, although usually less efficiently. They can create semantic output, make new unexpected inferences, recognize and define meanings, purposes, feelings, and functions, and certainly conscious representations are associated with all those kinds of processes." - Uncommon Descent blogger gpuccio
https://uncommondesc.wpengine.com/intelligent-design/atheisms-not-so-hidden-assumptions/#comment-357770

Thus these findings strongly imply that we humans have a 'higher informational component' to our being; i.e. they offer another line of corroborating evidence which is very suggestive of the idea that humans have a mind which is transcendent of the physical brain and which is part of a 'unique soul from God'. Moreover, this unique mind that we humans have seems to be capable of a special and intimate communion with God that is unavailable to other animals, i.e. we are capable of communicating information with "The Word" as described in John 1:1.

bornagain77
Though we may not be able to say exactly what consciousness is, I believe it is possible to determine that consciousness is its own unique entity. Here are my notes to that effect:

Quantum mind-body problem
Parallels between quantum mechanics and mind/body dualism were first drawn by the founders of quantum mechanics, including Erwin Schrödinger, Werner Heisenberg, Wolfgang Pauli, Niels Bohr, and Eugene Wigner.
http://en.wikipedia.org/wiki/Quantum_mind%E2%80%93body_problem

"It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness." - Eugene Wigner (1902-1995), who laid the foundation for the theory of symmetries in quantum mechanics, for which he received the Nobel Prize in Physics in 1963.
http://en.wikipedia.org/wiki/Eugene_Wigner

These following studies and videos confirm this 'superior quality' of existence for our souls/minds:

Miracle Of Mind-Brain Recovery Following Hemispherectomies - Dr. Ben Carson - video
http://www.metacafe.com/watch/3994585/

Removing Half of Brain Improves Young Epileptics' Lives
Excerpt: "We are awed by the apparent retention of memory and by the retention of the child's personality and sense of humor," Dr. Eileen P. G. Vining. In further comment from the neurosurgeons in the Johns Hopkins study: "Despite removal of one hemisphere, the intellect of all but one of the children seems either unchanged or improved. Intellect was only affected in the one child who had remained in a coma, vigil-like state, attributable to peri-operative complications."
http://www.nytimes.com/1997/08/19/science/removing-half-of-brain-improves-young-epileptics-lives.html

The Day I Died - Part 4 of 6 - The Extremely 'Monitored' Near Death Experience of Pam Reynolds - video
http://www.metacafe.com/watch/4045560

Blind Woman Can See During Near Death Experience (NDE) - Pim van Lommel - video
http://www.metacafe.com/watch/3994599/

Kenneth Ring and Sharon Cooper (1997) conducted a study of 31 blind people, many of whom reported vision during their Near Death Experiences (NDEs). 21 of these people had had an NDE, while the remaining 10 had had an out-of-body experience (OBE) but no NDE. It was found that in the NDE sample, about half had been blind from birth. (Of note: this 'anomaly' is also found for deaf people, who can hear sound during their Near Death Experiences (NDEs).)
http://findarticles.com/p/articles/mi_m2320/is_1_64/ai_65076875/

Quantum Consciousness - Time Flies Backwards? - Stuart Hameroff MD
Excerpt: Dean Radin and Dick Bierman have performed a number of experiments of emotional response in human subjects. The subjects view a computer screen on which appear (at randomly varying intervals) a series of images, some of which are emotionally neutral, and some of which are highly emotional (violent, sexual...). In Radin and Bierman's early studies, skin conductance of a finger was used to measure physiological response. They found that subjects responded strongly to emotional images compared to neutral images, and that the emotional response occurred between a fraction of a second and several seconds BEFORE the image appeared! Recently, Professor Bierman (University of Amsterdam) repeated these experiments with subjects in an fMRI brain imager and found emotional responses in brain activity up to 4 seconds before the stimuli. Moreover, he looked at raw data from other laboratories and found similar emotional responses before stimuli appeared.
http://www.quantumconsciousness.org/views/TimeFlies.html

In The Wonder Of Being Human: Our Brain and Our Mind, Eccles and Robinson discussed the research of three groups of scientists (Robert Porter and Cobie Brinkman, Nils Lassen and Per Roland, and Hans Kornhuber and Luder Deeke), all of whom produced startling and undeniable evidence that a "mental intention" preceded an actual neuronal firing - thereby establishing that the mind is not the same thing as the brain, but is a separate entity altogether.
http://books.google.com/books?id=J9pON9yB8HkC&pg=PT28&lpg=PT28

bornagain77
Dr. Sewell, I really enjoyed your book and read it fully in just two days. I hope you write another one in the same style. I especially liked the way you handled and described the Second Law of Thermodynamics and the problem it poses for explanations based on Darwinian processes.

skynetx
