Uncommon Descent Serving The Intelligent Design Community

The e-YES Joke shows the power of manipulative framing in rhetoric, media and persuasion


This weekend, someone shared with me the e-YES joke now making the rounds, and I found it on YouTube:

[youtube 0Al_V7fsL7g]

At first it seems to be just a prank, but on a second look it shows us how framing distorts perceptions and how manipulative it can potentially be.

As we consider the various issues now before our civilisation, let us ponder the framing challenge, and let us ask ourselves how we may be being manipulated in cases more serious than e-YES vs EYE-s.

For instance, we have objectors to design theory who regularly come here and post long ASCII-coded text strings while proclaiming that they see no evidence warranting the design inference on observing FSCO/I or the more general case, CSI. Indeed, some objectors think these concepts are ill-formed rubbish. Never mind that every comment at UD beyond 73 ASCII characters is a case in point.

Time to think again, objectors. END

Comments
F/N: Been trying out e-YES on the ground here. It seems in part to be an intelligence and focus test: can you reframe on the fly and spot the framing issue? A lot of IQ test items really test that. But it readily snaps up those not paying close attention or who are "just average." That is, it is well suited for manipulative use by the clever, especially on topics where ordinary people look to reference figures for authority or leadership. Science and science education, economics, many policy issues and the like are just down that street. And of course, if you have got on blue-tinted sunglasses, everything takes on that cast. This underscores the need for balance and for de-spinning. So, highly relevant to the ID community. KF

PS: Finally got a Dell trackpad driver that works and turned off that stick pointer.

kairosfocus
November 24, 2016 at 9:25 AM
FordGreen: I appreciate all the responses, and I sort of kind of get where people are coming from, but I think some work is needed to make this accessible to laypeople without science backgrounds.
Douglas Axe makes such an attempt in his recent book "Undeniable":
THE UNIVERSAL DESIGN INTUITION Tasks that we would need knowledge to accomplish can be accomplished only by someone who has that knowledge. The design intuition is utterly simple. Can you make an omelet? Can you button a shirt? Can you wrap a present? Can you put sheets on a bed? Tasks like these are so ordinary that we give them little thought, and yet we weren’t born with the ability to do them. Most of the training we received occurred so early in life that we may struggle to recall it, but we have only to look at a young person still in the training years to be reminded that all of us had to be taught. Whether we taught ourselves these skills or were taught by others, the point is that knowledge had to be acquired in the form of practical know-how. ... According to the design intuition, neither bricks nor shoes get made unless someone makes them. As familiar as this intuition is, it turns out to have huge implications for biological origins, because the claimed exceptions are so concentrated there. And what dramatic exceptions they are! Bricks don’t get made until someone makes them (or today, until someone makes the machine that makes them), but somehow much more complex things, like dragonflies and horses, did get made without anyone making them, we are told. … Far beyond such simple things are the pinnacles of human technology, like robots and communications satellites and smartphones, which we also know can’t appear by accident. Finally, at the highest reaches of the complexity scale are the true masterpieces—things like hummingbirds and dolphins—all of them alive, all of them eluding our best efforts to understand them. [Douglas Axe, Undeniable, Ch.2]
Origenes
November 23, 2016 at 2:13 PM
FG, while some reading up will help, in fact much of the issue is right there in front of us. We just need to reframe, as the e-YES example shows. Text is FSCO/I, check. It can be reduced to a bits measure; in the case of blog comments, ASCII running at 7 bits/character (discounting the parity check bit). Where does complex text beyond 20 - 24 characters -- per observation -- come from? What does 73+ characters do to that observation? [The 500-bit threshold.] Why is it maximally implausible for blind chance and/or mechanical necessity to get to FSCO/I? Then, see how FSCO/I extends to 3-d objects such as a fishing reel via, say, AutoCAD. Then, extend to the systems we observe in biological life, starting with DNA. KF

kairosfocus
November 23, 2016 at 11:32 AM
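(To make the arithmetic in KF's comment above concrete, here is a minimal sketch; the 7-bit figure and the 500-bit threshold are the ones quoted in the comment, while the function name, constant names and sample string are my own illustrative choices.)

```python
# Sketch only: raw information capacity of an ASCII comment at 7 bits per
# character, compared against the 500-bit threshold discussed above.

THRESHOLD_BITS = 500
BITS_PER_CHAR = 7        # 7-bit ASCII, parity check bit discounted

def ascii_bits(text: str) -> int:
    """Information capacity of a text string at 7 bits per character."""
    return BITS_PER_CHAR * len(text)

comment = "x" * 73       # stand-in for any 73-character blog comment
print(ascii_bits(comment))                   # 73 * 7 = 511 bits
print(ascii_bits(comment) > THRESHOLD_BITS)  # True: past the 500-bit threshold
```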
KF - Thank you for taking the time to reply so extensively. I have a lot to think about (and read), and hopefully in time the penny will drop!

Fordgreen
November 23, 2016 at 9:00 AM
FG, ASCII etc. can be reduced to structured chains of y/n questions, in a description language or code; chains of two-option alternatives in effect; a sophisticated version of the parlor game animal, vegetable or mineral. (One ASCII character is 7 bits of info, requiring 7 structured y/n questions to specify, for general-purpose English text [as was proved by several decades of utter dominance, with Unicode in effect being a 16-bit extension involving other linguistic and comms contexts], the context of this discussion.) This is one reason why the bit is a fundamental unit of information, and the log metric implicit in that arises naturally from the required additive property.

Description languages extend this to 3-d functional entities, as AutoCAD etc. show, with the exploded view (a nodes-and-arcs mesh network . . . cf. the 6500 C3) leading onward to assembly instructions and bills of components. Note, the biological cell involves a lot of automated assembly and is a self-replicating automaton. This means 3-d entities can be viewed informationally as structured strings constrained by the need to function in a contextually relevant way as part of an overall complex system.

Durston et al follow a line of work in which functional strings can be analysed in terms of info content, using a step-by-step mathematical analysis. The metrics implicit in the discussion of FSCO/I here are in general simpler, but can be adjusted to use the Durston approach, which reckons with the empirically observed degree of redundancy within protein families. Albeit, just because a particular protein string is part of a given family does not necessarily mean it functions quite right in a particular animal: IIRC, pig insulin is not quite right for us, but the result of using it was better for diabetics than the alternative at the time; IIRC, they now synthesise human insulin. I add that codes do not work in isolation from information systems, which are further FSCO/I; that is, the evaluation of strings gives us a conservative index of the information and organisation involved. KF

PS: Dembski bridged the focus on functional complex organisation to the specified complexity discussed by Orgel back in 1973. He thus generalised to the search challenge of finding zones of interest T in spaces of possible lumped or scattered configurations W. It turns out functionality is one form of specification, and implies a wider context. That is, parts fit into wholes in their proper places as part of a configuration organised in order to effect due function. All of which ties in with design understood as intelligently directed configuration. Which is a matter of process, not identification of designers in themselves.

kairosfocus
November 22, 2016 at 11:11 PM
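(As a toy illustration of the "seven structured y/n questions per ASCII character" point in KF's comment above; this sketch is my own, not from the linked note, and simply reads each question as "is bit i of the 7-bit code set?".)

```python
# Toy illustration: seven yes/no answers pin down one 7-bit ASCII character,
# asked from the most significant bit down to the least significant.

def answers_for(ch: str) -> list[bool]:
    """The seven yes/no answers that specify a 7-bit ASCII character."""
    code = ord(ch)                                 # 0..127 for 7-bit ASCII
    return [bool(code & (1 << i)) for i in range(6, -1, -1)]

def char_from(answers: list[bool]) -> str:
    """Reassemble the character from the seven answers."""
    code = 0
    for a in answers:
        code = (code << 1) | int(a)
    return chr(code)

seven_answers = answers_for("A")
print(seven_answers)             # [True, False, False, False, False, False, True]
print(char_from(seven_answers))  # 'A' (binary 1000001 = 65)
```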
FG
SA - yes, I agree that even if we started with words (which are the natural building blocks of language) you still have to create sentences etc. But that might in itself be an interesting exercise, to see how FSCO/I comes out using that assumption.
Yes, it would be easier to achieve functional specificity if you start from words, but that's a short-cut. What we have already is evolutionists starting from full sentences and paragraphs and then observing small changes. But we're looking for the origin of the content. So, you have to start at the molecular level. Even there, chemical properties give kind of a rule book of grammar. We already give materialists that much, when we should ask where these properties originated (their short answer: the multiverse).
My point is that letters don’t come randomly together to form words, those words already exist as building blocks, therefore that should be the starting point (or perhaps at least parts of words – and there are semantic rules that exist too).
As above, the topic is the origin of such things. So, we can't start with pre-existing information structures. A dictionary of words is already evidence of intelligent design in itself.
I appreciate all the responses, and I sort of kind of get where people are coming from, but I think some work is needed to make this accessible to laypeople without science backgrounds. Or just acknowledge that specialized training and knowledge is needed to understand this, which is what I’m starting to think is probably the case here.
I would say yes and no. Yes - specialized training is needed to probe the depths of micro-biology and information science. But no - an amateur can pick up the basic concepts (not immediately but it doesn't take years either).
Yes, the ASCII thing is confusing to me - aren't there alternative (simpler/smaller) coding schemes that should be used instead? After all, wasn't ASCII invented for computers and not for FSCO/I calculations? Isn't ASCII too long (even at 7 bits)?
Durston's paper on mathematical measurement of functional data uses the common understanding of a "bit" of information. That's the smallest measure - a binary digit. But there are models, like Dawkins' weasel program, that use an alphabet.

Silver Asiatic
November 22, 2016 at 8:47 AM
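(For readers who have not seen it, here is a rough reconstruction of the kind of "weasel" program Silver Asiatic mentions above; this is my own sketch, not Dawkins' original code, and the mutation rate and population size are arbitrary illustrative values.)

```python
# Rough sketch of a weasel-style program: cumulative selection toward a
# fixed target phrase over a 27-symbol alphabet (A-Z plus space).
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05
OFFSPRING = 100

def score(candidate: str) -> int:
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    """Copy the string, randomising each character with small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while score(parent) < len(TARGET):
    generation += 1
    parent = max((mutate(parent) for _ in range(OFFSPRING)), key=score)

print(f"Reached the target in {generation} generations: {parent}")
```

Note that the target phrase is written into the program up front, which is the usual caveat: the selection filter already "knows" the answer, so this models cumulative selection toward a pre-specified target rather than a blind search.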
I wonder if Fordgreen's difficulty stems from the way so many comments here assume one is already familiar with the premises and abbreviations of previous conversations. I'm a longtime reader here, and still I am sometimes confused by the assumed knowledge in many posts. Going back to the first comment, where Fordgreen wondered why there was all the emphasis on bits and ASCII codes, perhaps the missing information is the concept from William Dembski and others that undirected processes in a universe of the size and age of ours could only produce the equivalent of about 500 bits of meaningful or functional information. Since that quantity is an easy thing to compare in text strings, they have become a frequent example of evidence showing an intelligence behind the origin of our universe and ourselves. Every cynical comment about lack of evidence for an intelligent designer of the universe or of living things, posted as it is in measurable text strings via computers and information networks, exhibits more functional information than could ever be produced by an undesigned, randomly operating universe. I hope that helps. Apologies in advance if I have misstated the argument somewhere.

DennisM
November 22, 2016 at 5:06 AM
FG, statistical thermodynamics is the underlying field for thermodynamics [roughly, the study of heat, matter, energy and related things]; it studies statistical properties of the micro-level of matter in light of the macro-level observables that define state (simple case: pressure, volume, temperature, etc.). As an example, temperature is an index of the average random kinetic energy per degree of freedom of particles at the molecular level in a body. In that context, the entropy of a system [roughly, a metric of its disorder] can be seen as a measure of the average information about particular micro-states [roughly, molecular-scale configuration/state] that is missing, given the macro-state. That opens up a school of thought, informational thermodynamics, which is closely related to information theory and onwards to the issues raised by functionally specific complex organisation and/or associated information. This is actually the road by which I came to accept the significance of the design inference, as the note linked through my handle documents.

I add: to see direct relevance, ponder what would have to happen in Darwin's little pond or a similar environment to get to first cell-based life, where it is precisely the chemistry and physics of molecules that one has to address, including thermodynamics. This shows the relevance from the root of Darwin's evolutionary tree of life.

The simple approach through the obviously informational nature of text allows a much more direct view into what is going on, especially when one reduces text to binary digits via structured chains of y/n questions. Then, we can look at AutoCAD and the like to understand that 3-d functional configs can be similarly reduced, so thinking in terms of s-t-r-i-n-g-s is WLOG. Thus, as Abel et al do (and Thaxton et al before them), we can distinguish orderly [ATATAT . . . AT], random [rterkbhiwshiprwhfdlkwufgw3 . . . ] and functionally specific [this sequence expresses a message . . . ] sequences, and use the bit measure as an index of complexity. Where, orderly sequences are readily compressed by identifying a unit cell and stipulating "repeat n times," random ones are essentially incompressible [one has to copy the sequence to express it], and functional ones are somewhat less resistant to compression but are also constrained by the requisite that they fulfill a function dependent on a specific, information-bearing configuration.

From this, we can measure informational complexity, and see why, beyond a threshold of 500 - 1,000 bits of complexity, it is maximally implausible for blind chance and/or blind mechanical necessity to find the islands of function in the space of possible configurations. Thus, when we see FSCO/I beyond a reasonable threshold of complexity, it is empirically and analytically well-warranted to infer to the only observed adequate cause (per trillions of cases): intelligently directed configuration, aka design. KF

kairosfocus
November 22, 2016 at 1:09 AM
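(A quick, informal way to see the orderly / random / functional distinction in KF's comment above is to compare compression ratios. This is my own illustrative sketch using zlib, not a metric from the linked note, and the exact ratios will vary somewhat from run to run.)

```python
# Compare rough compressibility of orderly, random and functional strings.
import random
import string
import zlib

def compression_ratio(s: str) -> float:
    """Compressed size over original size; lower means more compressible."""
    raw = s.encode("ascii")
    return len(zlib.compress(raw, 9)) / len(raw)

orderly = "AT" * 200                                    # ATATAT . . . AT
random_seq = "".join(
    random.choice(string.ascii_letters + string.digits + string.punctuation)
    for _ in range(400))
functional = ("Functionally specific text is constrained by spelling, grammar and "
              "the need to carry a contextually responsive message from a sender "
              "to a receiver, so it is neither a simple repeating pattern nor a "
              "random jumble of characters, and its compressibility typically "
              "falls between those two extremes.")

for name, s in [("orderly", orderly), ("random", random_seq), ("functional", functional)]:
    print(f"{name:10s}  length {len(s):3d}  compressed/original = {compression_ratio(s):.2f}")

# Typical result: orderly compresses dramatically, random text hardly at all,
# and functional English text lands in between.
```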
KF: "13 kairosfocusNovember 21, 2016 at 5:17 pm FG, that is the start of the pro grade level, with bleeding over into the informational view on statistical thermodynamics. Yes, there is a reason why very similar math crops up in information theory and stat mech; cf the note linked through my handle. The simple code string level is start point level, the obvious case. KF" No idea what statistical thermodynamics is. I guess I would have to Google it!Fordgreen
November 21, 2016 at 5:28 PM
Some examples: https://uncommondescent.com/intelligent-design/btb-q-where-does-the-fscoi-concept-come-from/ More: https://uncommondescent.com/intelligent-design/id-foundations/functionally-specific-complex-organisation-and-associated-information-fscoi-is-real-and-relevant/

kairosfocus
November 21, 2016 at 3:25 PM
FG, one of my favourite examples of FSCO/I is an Abu 6500 C3 mag round reel. The ribosome assembling a protein involves coded and non-coded FSCO/I. KF

kairosfocus
November 21, 2016 at 3:18 PM
FG, that is the start of the pro grade level, with bleeding over into the informational view on statistical thermodynamics. Yes, there is a reason why very similar math crops up in information theory and stat mech; cf the note linked through my handle. The simple code string level is start point level, the obvious case. KF

kairosfocus
November 21, 2016 at 3:17 PM
I looked at the paper you linked to, KF. But since I don't have a science degree, it's really not easy to understand (and forget the math...). SA - yes, I agree that even if we started with words (which are the natural building blocks of language) you still have to create sentences etc. But that might in itself be an interesting exercise, to see how FSCO/I comes out using that assumption. My point is that letters don't come randomly together to form words; those words already exist as building blocks, therefore that should be the starting point (or perhaps at least parts of words - and there are semantic rules that exist too). I appreciate all the responses, and I sort of kind of get where people are coming from, but I think some work is needed to make this accessible to laypeople without science backgrounds. Or just acknowledge that specialized training and knowledge is needed to understand this, which is what I'm starting to think is probably the case here. Yes, the ASCII thing is confusing to me - aren't there alternative (simpler/smaller) coding schemes that should be used instead? After all, wasn't ASCII invented for computers and not for FSCO/I calculations? Isn't ASCII too long (even at 7 bits)? That's why non-textual examples could work better, if they are available.

Fordgreen
November 21, 2016 at 2:45 PM
Fordgreen
But I would still like to understand why, in the text string example, the calculations are done at the byte/bit level and not at the natural building block of language - words.
Either way, it comes out the same. We're looking at the origin of an informational system. So, you could use letters, words, codes, ASCII - anything you want. It's not the specifics of the code that is the problem to solve, but what the code does. What we're talking about with functional complex specified information is a system of signalling, messages, instructions. Of course, the English language is only one such coding system that can communicate information. So is Morse code, or ASCII, or mathematical symbols. But what we're looking at is a messaging process that causes variable events to occur - information that drives various functions. And the problem here is: how do you describe the origin of such a system? An accidental, unintelligent process?
Again, remember language came first and if we are trying to detect a “designer” here we are looking for the presence of intelligent language.
Exactly. A coded language that communicates instructions - that indicates intelligence.
I think the focus on text is misleading - sure, if we calculate the probability that the first letter in a word is "T", then "H", then "E" etc., of course we are going to have astronomical probabilities. But language is made up of words (and I'm sure that's how it works in our brains too), not randomly thrown together letters. It may be that when the FSCO/I calculation is done at the word level the results would be the same; I don't know.
Well, it's an interesting point, and I'm grateful that you're open to the evidence wherever it leads - that is refreshing, actually. Most ID objectors are not even willing to explore the topic with an open mind, so it is appreciated. Regarding the use of words instead of letters, the first problem is that the words already contain informational content. So, you'd be starting with information - but the task is explaining the origin of that content. Letters, by contrast, don't have meaning on their own. Now we have to explain how they come together, purely randomly, to form meaningful words. After the words are formed -- then make sentences. Then have sentences that communicate functional information from sender to receiver. But even if you started with words (did they just randomly assemble into a dictionary of terms and meanings?) -- you'd still have to try to create sentences out of the jumble that you get randomly.

Silver Asiatic
November 21, 2016 at 2:28 PM
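(A back-of-the-envelope illustration of the point in the exchange above, that starting from whole words does not rescue the odds. The sentence length, alphabet size and dictionary size below are assumed round numbers, purely for illustration.)

```python
# Illustrative only: the chance of hitting one specific short sentence by blind
# sampling, first letter by letter, then word by word from a dictionary.
import math

letters_in_sentence = 100      # characters, 27-symbol alphabet (A-Z plus space)
words_in_sentence = 20         # the same sentence counted as words
dictionary_size = 10_000       # assumed working vocabulary

p_letters = (1 / 27) ** letters_in_sentence
p_words = (1 / dictionary_size) ** words_in_sentence

print(f"letter by letter: about 10^{math.log10(p_letters):.0f}")   # about 10^-143
print(f"word by word:     about 10^{math.log10(p_words):.0f}")     # 10^-80
```

Drawing whole words shrinks the exponent, but the odds stay astronomically small, and, as noted in the comment above, the dictionary itself is already pre-existing information.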
FG, I suggest you have a look here at Durston et al, 2007. KF

PS: Coded text strings are also used for machine code, and D/RNA as used in protein synthesis is a classic case in point. When you go up to addressing the implied rules for English text, the associated information and organisation involved shoot up through the roof. By counting bits in ASCII symbols, we are making a very conservative information estimate for a narrow purpose. One that so far you have distracted from rather than addressed.

PPS: I should note that the design inference is to design as process, not to designers. A material distinction.

kairosfocus
November 21, 2016 at 2:17 PM
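(For readers who want the shape of the Durston-style calculation referenced above without working through the paper, here is a simplified sketch of my own. The toy alignment is invented purely for illustration; the real method in Durston et al. 2007 works on large alignments of protein families and includes refinements, such as how the ground/null state is defined and weighted, that are omitted here.)

```python
# Simplified sketch: per-site Shannon entropy of a toy alignment of "functional"
# sequences, compared to an equiprobable 20-amino-acid ground state, summed
# over sites to give a rough functional-bits figure.
import math
from collections import Counter

AA_ALPHABET = 20
H_GROUND = math.log2(AA_ALPHABET)    # about 4.32 bits per site, equiprobable ground state

def site_entropy(column) -> float:
    """Shannon entropy (bits) of one aligned site across the functional sequences."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def functional_bits(alignment) -> float:
    """Sum over sites of (ground-state entropy minus functional-site entropy)."""
    return sum(H_GROUND - site_entropy(col) for col in zip(*alignment))

# Toy 'protein family': four aligned 8-residue sequences, invented for illustration.
toy_alignment = [
    "MKVLAGRS",
    "MKVLSGRS",
    "MKILAGRT",
    "MKVLAGKS",
]
print(f"Roughly {functional_bits(toy_alignment):.1f} functional bits for this toy alignment")
```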
Ford, I am not making judgments about your ability to learn and know. I would simply suggest your point about words and letters is somewhat misplaced, and perhaps it is standing in your way of grasping the larger issue. I suspect that a lot of people engage interesting topics in that same manner, and there is nothing particularly wrong with it (as long as you stay engaged long enough to get past it).

Upright BiPed
November 21, 2016 at 1:22 PM
UP: "All you have to have is the desire to know." Well I do, but then on the other hand I don't have much formal education in information science, mathematics or statistics. But I can usually learn by looking at examples which is why I think FSCO/I needs to be illustrated with as many examples as possible. I can follow up with looking at some of the people you mentioned. But I would still like to know understand why, in the text string example, the calculations are done at byte/bit level and not at the natural building block of language - words. So far nobody has addressed this point. Again, remember language came first and if we are trying to detect a "designer" here we are looking for the presence of intelligent language. I think the focus on text is misleading - sure, if we calculate the probablity that the first letter in a word is "T", then. "H", then "E" etc, of course we are going to have astronomical probabilities. But language is made up of words (and I'm sure that's how it works in our brains too), not randomly thrown together letters. It may be that when the FSCO/I calculation is done at the word level the results would be the same, don't know. I'm sure you're going to tell me I'm barking up the wrong tree, but I think FSCO/I if it is going to succeed is going to have to address the kinds of queries I have (and maybe I'm not the smartest person out there, but I can assure you I'm not a dummy either, at least not on paper...)Fordgreen
November 21, 2016 at 1:12 PM
Ford, I agree that the average person is all thumbs when it comes to representation and interpretation (two necessarily complementary aspects of information), and the same goes double for most biologists. I want to impress upon you that this is not because these things are not well understood. There is, in fact, a RICH history of understanding on the subject. Perhaps the best of these threads of understanding runs through Charles Sanders Peirce in the 1860s, to Turing in the 1940s, to von Neumann in the 1950s, to Pattee in the 1960s and 70s (and beyond). All you have to have is the desire to know.

Upright BiPed
November 21, 2016 at 1:01 PM
UB: "Ford, I don’t think you’ve quite conceptualized the issues at hand. No problem." More than likely. But I would bet I'm not the only one. I think to understand it better I would like to see more worked out examples - in different domains (and especially biology). I think applying FSCO/I to text strings which in turn is based on language (using different units of measurement if you will) is hard to comprehend. Then throwing in modern computer era ASCII codes in the mix doesn't help either. Think about how would you have to explain this 100 years ago, before the advent of ASCII?Fordgreen
November 21, 2016 at 12:44 PM
Ford, I don't think you've quite conceptualized the issues at hand. No problem. The reason written representations are of interest is that they have the required high capacity and they are memory. Life requires both. If you'd like to come up to speed, you might try the bibliography on Biosemiosis.org. Don't be intimidated; there is nothing there that is overly difficult to understand. Start at the front page if you'd prefer. Cheers

Upright BiPed
November 21, 2016 at 12:24 PM
I don't get it. Coded text (mostly) is a representation of language, which developed using a unit of words, not characters. Text developed only as a written notation of language. If the FSCO/I of a textual string is supposed to represent the complexity of a designer (in this case a human), I do not understand why calculating FSCO/I on characters is meaningful if that text is based on spoken language, which uses words as its primary unit (after all, spoken language originated long before written text). I'm not saying that FSCO/I is not real or does not have application, but I think using text strings as examples (at least for me) doesn't make any sense, yet that seems to be the primary example KF uses. Perhaps it's better to promote more examples from biology, which after all is the whole point, isn't it?

Fordgreen
November 21, 2016 at 11:57 AM
FG, the phenomenon of coded t-e-x-t-_-s-t-r-i-n-g-s is a manifestation of information, here ASCII code at 7 bits per character; that is, seven structured y/n questions suffice in a given context to code text, which happens to be the code typically used for English text. (Other contexts such as teletype may work differently; that only brings up the wider FSCO/I and irreducible complexity in the communication system required for messages to work in a technological context. Computers, the relevant communication devices, typically use ASCII or nowadays maybe Unicode, but that only adds to the implied information and organisation; it does not change the core issue.) This then instantiates functionally specific complex organisation, as the specific ordering of particular values at each character in turn is required to get to contextually responsive messages in English. This is a direct example of functionally specific complex organisation and associated information. Text strings are a particularly important example, as they directly manifest the source of such FSCO/I. (And this is a descriptive phrase.) On over a trillion cases just through Internet pages, we reliably know the source of such FSCO/I.

Where this becomes directly biologically relevant is in an observation made by Crick to his son Michael in a March 19, 1953 letter (which recently sold for US$ 6 mn, IIRC), when he had identified that DNA is a similar text string: "Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another) . . ." In this case, the elements are 4-state, and are clustered in threes, for 64 states each.

Going on, we may observe how things like AutoCAD function, using similarly structured strings in accord with a description language to specify 3-d entities that carry out functions dependent on their configuration. Thus, the strings specify elements, orientations and arrangement with coupling to form a functional whole. That is, analysis on strings is WLOG.

None of this is particularly new or difficult, nor is it useful to try to puff out a squid-ink cloud of distractions behind which one retreats, loudly proclaiming that FSCO/I is meaningless or incapable of reduction to metrics. (In point of fact, back in 1973, Orgel used exactly the chaining of y/n constraints as a metric, which is readily converted into bits . . . binary digits. Which is of course also known to be a log metric tied to probability analyses.) So, while many may have been subjected to rhetorical framing -- the context of usage in the OP above -- that sets up resistance to understanding, much less accepting, the concept FSCO/I (and the more general-purpose CSI), the actual concepts are quite obvious and obviously manifest to the tune of trillions of cases around us. (Every nut and bolt, gear train, etc. is a case in point.)

The onward concern is: what, reliably, do we see about the causal source of this phenomenon? Consistently, when we observe the origin, it is design. What is more, the text string case allows us to see why. A 500-bit string has 2^500 = 3.27*10^150 possible states, from 000 . . . 0 to 111 . . . 1 inclusive. The 10^57 atoms of our solar system, acting as observers, each assigned a tray of coins or the like shuffled 10^12 - 10^14 times per second, cannot observe a significant fraction of that configuration space in 10^17 s, comparable to the age of the observed cosmos on the usual timeline.

Go up to 1,000 bits, and the 10^80 atoms of the observed cosmos are even more overwhelmed. A simple calculation shows that if what they could do in that time were comparable to a needle, the size of the haystack to be searched would dwarf the observed cosmos. In short, the blind-search challenge of finding islands of function is not reasonably feasible on blind chance and/or mechanical necessity. The only empirically known force capable of such creation is the same one we see producing comments at UD all the time: intelligently directed configuration. AKA, design. FSCO/I is an empirically reliable, analytically plausible sign of design as cause. This is what allows us to put forward that, as we may only infer to the causes of the origin of life and of body plans from traces, we must revert to known adequate causes for such traces. Once the frame of a priori evolutionary materialism and/or its fellow travellers is broken, it becomes utterly simple to see that the living cell is designed, and that major body plans are designed. By who or what is a different question. But this breakthrough is already enough to refocus our entire approach. And, beyond, to lead us into reverse engineering and industrial transformation, development transformation and solar system colonisation over the next 100 - 200 years. KF

kairosfocus
November 21, 2016 at 9:55 AM
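(The arithmetic in the last two paragraphs of KF's comment above can be checked directly; the short script below simply takes the quoted physical figures at face value.)

```python
# Work the quoted figures for the 500-bit search-resources argument.
states_500_bits = 2 ** 500          # possible configurations of a 500-bit string
solar_atoms = 10 ** 57              # atoms in our solar system (figure quoted above)
obs_per_second = 10 ** 14           # generous per-atom observation rate (upper end quoted)
seconds_available = 10 ** 17        # roughly the age of the observed cosmos in seconds

total_observations = solar_atoms * obs_per_second * seconds_available   # 10^88
fraction_sampled = total_observations / states_500_bits

print(f"possible 500-bit configurations: {float(states_500_bits):.2e}")    # about 3.27e+150
print(f"observations available:          {float(total_observations):.2e}") # 1.00e+88
print(f"fraction of the space sampled:   {fraction_sampled:.2e}")          # about 3e-63
```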
Is this more an example of what psychologists call priming, rather than framing? (Maybe both are at work here.) As to FSCO/I and the use of ASCII characters as a measurement: assuming that we are talking about the workings of the human brain here, why are we using a fairly new computer representation of characters? Last time I checked, my brain doesn't have a 32- or 64-bit processor and doesn't run on an ASCII-based operating system. And wouldn't the basic character set (A-Z plus a few punctuation marks) be represented with 6 or even 5 bits? So why assume the calculations have to be based on how modern computers represent characters (which is an 8-bit code, although ASCII originated as a 7-bit code, and is a relatively new invention dating back to the '50s or '60s)? And why is the calculation using characters anyway? Isn't it true that the human brain doesn't deal with individual characters in composing sentences, but draws from a library of words? Characters only exist as a way to facilitate writing and reading? As I've said before, I'm not sure I'm understanding this all that well, but these are the questions that pop into my head...

Fordgreen
November 21, 2016 at 8:32 AM
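(On the question of 5- or 6-bit codes raised in the comment above: the minimum whole-bit code width for a few alphabet sizes can be computed directly. This little calculation is added for illustration and is not part of the original comment; the alphabet groupings are my own choices.)

```python
# Minimum whole-bit code width for several alphabet sizes.
import math

alphabet_sizes = {
    "A-Z only": 26,
    "A-Z, space and a little punctuation": 40,
    "64-symbol set": 64,
    "7-bit ASCII": 128,
}

for name, size in alphabet_sizes.items():
    bits = math.log2(size)
    print(f"{name:38s} log2({size:3d}) = {bits:4.2f} -> {math.ceil(bits)} whole bits per character")
```

So yes, a bare A-Z alphabet fits in 5 bits; the 7-bit figure used throughout the thread is simply the width of the ASCII code actually used to transmit the comments, rather than a theoretical minimum.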
