Uncommon Descent Serving The Intelligent Design Community

Is Standard Calculus Notation Wrong?


We usually think of basic mathematics, such as introductory calculus, as fairly settled. However, recent research by UD authors shows that calculus notation needs a revision.

Many people complain about ID by saying, essentially, “ID can’t be true because all of biology depends on evolution.” This is obviously a gross overstatement (biology as a field was just fine even before Darwin), but I understand the sentiment. Evolution is a basic part of biology (taught in intro biology), and therefore it would be surprising to biologists to find that fundamental pieces of it were wrong.

However, the fact is that fundamental aspects of various fields are often wrong. Surprisingly, this sometimes has little impact on the field itself. If premise A is faulty and leads to faulty conclusions, a workaround B can often be invoked to patch A’s problems. Thus, A keeps working as long as B is there to compensate.

Anyway, I wanted to share my own experience of this with calculus. Some of you know that I published a calculus book last year. My goal was mostly to counteract the dry, boring, and difficult-to-understand textbooks that dominate the field. However, when it came to the second derivative, I realized that not only is the notation unintuitive, there is literally no explanation for it in any textbook I could find.

For those who don’t know, the notation for the first derivative is dy/dx. The first derivative is the ratio of the change in y (dy) compared to the change in x (dx). The notation for the second derivative is d^2y/dx^2. However, there is no cogent explanation for this notation. I looked through 20 (no kidding!) textbooks trying to find an explanation for why the notation is the way it is.

Additionally, I found out that the notation itself is problematic. Although it is written as a fraction, the numerator and denominator cannot be separated without causing math errors. This problem is somewhat more widely known, and it has a standard workaround: Faa di Bruno’s formula.
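To see the problem concretely, here is a SymPy sketch (my own illustration, not code from the paper; the choices y = sin(x) and x = t^2 are arbitrary examples). If d^2y/dx^2 were a genuine fraction, then substituting x = x(t) should work by simple cancellation, giving (d^2y/dx^2)(dx/dt)^2; but the true second-order chain rule (the second-order case of Faa di Bruno's formula) carries an extra term, (dy/dx)(d^2x/dt^2):

```python
import sympy as sp

t = sp.symbols('t')
x = t**2          # x as a function of t (arbitrary example)
y = sp.sin(x)     # y as a function of x(t)

# True second derivative of y with respect to t
true_d2y_dt2 = sp.diff(y, t, 2)

# Derivatives of y with respect to x, evaluated at x(t)
xs = sp.symbols('xs')
d2y_dx2 = sp.diff(sp.sin(xs), xs, 2).subs(xs, x)
dy_dx   = sp.diff(sp.sin(xs), xs).subs(xs, x)

# Naive "fraction cancellation": (d^2y/dx^2) * (dx/dt)^2
naive = d2y_dx2 * sp.diff(x, t)**2

# Second-order chain rule adds the correction term (dy/dx) * (d^2x/dt^2)
correct = d2y_dx2 * sp.diff(x, t)**2 + dy_dx * sp.diff(x, t, 2)

print(sp.simplify(true_d2y_dt2 - naive))    # nonzero: the naive cancellation fails
print(sp.simplify(true_d2y_dt2 - correct))  # 0
```

The nonzero residual from the naive manipulation is exactly the term the fraction-like notation hides.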

My goal was to present a reason for the notation to my readers/students, so that they could more intuitively grasp the purpose of the notation. So, I decided that since no one else was providing an explanation, I would try to derive the notation myself.

Well, when I tried to derive it directly, it turns out that the notation is simply wrong (footnote – many mathematicians don’t like me using the terminology of “wrong”, but I would argue that a fraction that can’t be treated like a fraction *is* wrong, especially when there is an alternative that does work like a fraction). Most people forget that dy/dx is, in fact, a quotient. Therefore, the proper rule to apply to it is the quotient rule (a first-year calculus rule). When you apply the quotient rule to the actual first derivative notation, the notation for the second derivative (the derivative of the derivative) is actually d^2y/dx^2 - (dy/dx)(d^2x/dx^2). This notation can be fully treated as a fraction, and requires no secondary formulas to work with.
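The corrected second-derivative expression, d^2y/dx^2 - (dy/dx)(d^2x/dx^2), can be sanity-checked in SymPy (my own sketch, not code from the paper; I assume the reading discussed in the comments below, in which every differential is generated by an underlying parameter t with dt held constant, so du = u'(t) dt and d^2u = u''(t) dt^2). Manipulated as an honest fraction of differentials, it reproduces the standard parametric second derivative (y''x' - y'x'')/x'^3:

```python
import sympy as sp

t, dt = sp.symbols('t dt', positive=True)
x = sp.exp(t)   # arbitrary smooth example functions of the underlying parameter
y = sp.sin(t)

# Differentials with respect to the underlying parameter t (dt held constant)
dx,  dy  = sp.diff(x, t) * dt,       sp.diff(y, t) * dt
d2x, d2y = sp.diff(x, t, 2) * dt**2, sp.diff(y, t, 2) * dt**2

# The corrected second derivative, treated as an ordinary fraction of differentials
second = d2y / dx**2 - (dy / dx) * (d2x / dx**2)

# Standard parametric second derivative: (y'' x' - y' x'') / x'^3
xp, yp = sp.diff(x, t), sp.diff(y, t)
standard = (sp.diff(y, t, 2) * xp - yp * sp.diff(x, t, 2)) / xp**3

print(sp.simplify(second - standard))  # 0: every dt cancels exactly
```

The point of the check is that the dt factors cancel purely algebraically, with no auxiliary rule needed.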

What does this have to do with Intelligent Design? Not much directly. However, it does show that, in any discipline, asking good questions about fundamentals can lead to revisions of even the most basic aspects of the field. This is precisely what philosophy does, and I recommend the broader application of philosophy to science. Second, it shows that even newbies can make a contribution. In fact, I found this out precisely because I *was* a newbie. Finally, in a more esoteric fashion (but more directly applicable to ID), the forcing of everything into materialistic boxes limits the progress of all fields. The reason why this was not noticed before, I believe, is that, since the 1800s, mathematicians have not wanted to believe that infinitesimals are valid entities. Therefore, they were not concerned when the second derivative did not operate as a fraction – it didn’t need to, because it indeed wasn’t a fraction. Infinities and infinitesimals are the non-materialistic aspects of mathematics, just as teleology, purpose, and desire are the non-materialistic aspects of biology.

Anyway, for those who want to read the paper, it is available here:

Bartlett, Jonathan and Asatur Khurshudyan. 2019. Extending the Algebraic Manipulability of Differentials. Dynamics of Continuous, Discrete and Impulsive Systems, Series A: Mathematical Analysis 26(3):217-230.

I would love any comments, questions, or feedback.

Comments
hazel, Yes, it's exactly the same point. The first 2/3 of johnnyb's post #72 has thrown me for a loop, I confess. I am thinking of this situation strictly mathematically, where the independent variable is what it is, period. There is no philosophical musing about whether time is the "primary" independent variable. So if a problem is specified completely, then it should be crystal clear what d^2x/dx^2 is. Edit: As an example, suppose y = x^2 and x = 3t + 1. What is d^2x/dx^2? And what are d^2x and dx^2 separately?
daveS
April 13, 2019, 06:31 AM PDT
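As a quick check of daveS's example above, the pieces can be computed directly in SymPy under the reading floated in this thread, in which differentials are generated by an underlying variable t (a sketch, not an official answer from the paper):

```python
import sympy as sp

t, dt = sp.symbols('t dt')
x = 3*t + 1                      # daveS's example: x depends linearly on t
dx  = sp.diff(x, t) * dt         # dx   = 3*dt
d2x = sp.diff(x, t, 2) * dt**2   # d^2x = 0, since x is linear in t

print(dx, d2x, sp.simplify(d2x / dx**2))  # 3*dt, 0, 0
```

So under this reading d^2x/dx^2 = 0 here precisely because x is linear in t. Had x been, say, t^2, we would get d^2x = 2*dt**2 and dx^2 = 4*t**2*dt**2, so d^2x/dx^2 = 1/(2*t^2), which is not zero.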
H: I see something:
H, 79: dy/dx is not meant to indicate a division, but the symbol is appropriate because it is being used to represent the ratio of the instantaneous, infinitely small, changes in x and y at a point
By definition, a point is without scale. So, could it be that there are no changes at a point, though there may be changes around it?* That is, we have x, and with it f(x), which has associated a slope that is expressed in the tangent at the point. That tangent-slope itself varies in general as x takes different values so we may identify with f, f '(x) a flow function, then higher order flow functions. I put a space to allow clear visibility of the prime. In that context, we are looking at changes that occur so close to x that they are closer to x than any value (x + 1/k) once k is finite. That is, we have a hyperreal cloud surrounding x and are looking at values like (x + 1/K), K being beyond any finite k but connected to the reals by some sort of transfinite extension sufficiently close that m = 1/K is a value in the continuum around 0 but closer than any 1/k, k finite can give us. Is this part of what we are fishing for? KF

*PS: We then ask, how are we moving around x, which brings in questions of onward influence. I don't think we can avoid them, though in effect we may stipulate that x is considered to change smoothly similar to a steadily flowing time domain. I used to talk in terms of road cuts, where we can see how the height varies along the road, but that already smuggles in comparing distinct locations along the road. Something is allowing us to hop in location, and we should allow it to surface. Total differential dy reflects its influences. What does dx really mean, especially where x is independent?
kairosfocus
April 13, 2019, 06:30 AM PDT
Dave, at 85 you write, “Aren’t there always infinitely many other variables that could be involved, in principle? y could be a function of x, then x is a function of t1, t1 is a function of t2, etc.” I think this point is the same as I was making at 80 when I wrote, “But x is the independent variable. If you want to say that x is dependent on some other variable, you have pushed the problem one step back, but haven't made it go away.”
hazel
April 13, 2019, 06:19 AM PDT
DS, is a total differential dy really total? What lurks behind it? KF
kairosfocus
April 13, 2019, 06:07 AM PDT
DS, it may be worse than that, we KNOW dy is bound up in the issue that quasi-spacetime is all mutually bound up. A total differential -- per the basic expression from partial differentiation -- is generally bound to a context of underlying influence variables with their own change behaviours and so we have to reckon with classical limiting cases. Indeed, dy = curly-dy/dx times dx plus a chain of similar factors suggests interesting questions. For example, is it meaningful to reduce that to one factor, x only? And, is x as independent as we thought? Where, too, what does it really mean for x to be independent -- what does it cumulatively imply? Also, spatial variables are tied to state space trajectories that are influenced by time or other similar underlying parameters. KF
kairosfocus
April 13, 2019, 06:02 AM PDT
Folks, Money shot comment by JB:
JB, 74: we have Arbogast’s D() notation that we could use, but we don’t. Why not? Because we want people to look at this like a fraction. If we didn’t, there are a ton of other ways to write the derivative. That we do it as a fraction is hugely suggestive, especially, as I mentioned, there exists a correct way to write it as a fraction.
This is pivotal: WHY do we want that ratio, that fraction? WHY do we think in terms of a function y = f(x), which is continuous and "smooth" in a relevant domain, then for some h that somehow trends to 0 but never quite gets there -- we cannot divide by zero -- then evaluate:
dy/dx is lim h --> 0 of [f(x + h) - f(x)]/[(x + h) - x]
. . . save that, we are looking at the tangent value for the angle the tangent-line of the f(x) curve makes with the horizontal, taken as itself a function of x, f'(x) in Newton's fluxion notation. We may then consider f-prime, f'(x) as itself a function and seek its tangent-angle behaviour, getting to f"(x), the second order flow function. Then onwards. But in all of this, we are spewing forth a veritable spate of infinitesimals and higher order infinitesimals, thus we need something that allows us to responsibly and reliably conceive of and handle them. I suspect, the epsilon delta limits concept is more of a kludge work-around than we like to admit, a scaffolding that keeps us on safe grounds among the reals. After all, isn't there no one closest real to any given real, i.e. there is a continuum? But then, is that not in turn something that implies infinitesimal, all but zero differences? Thus, numbers that are all but zero different from zero itself considered as a real? Or, should we be going all vector and considering a ring of the close in C? In that context, I can see that it makes sense to consider some K that somehow "continues on" from the finite specific reals we can represent, let's use lower case k, and confine ourselves to the counting numbers as mileposts on the line: 0 - 1 - 2 . . . k - k+1 - k+2 - . . . . - K - K+1 - K+2 . . . {I used the four dot ellipsis to indicate specifically transfinite span} We may then postulate a catapult function so 1/K --> m, where m is closer to 0 than ANY finite real or natural we can represent by any k can give. Notice, K is preceded by a dash, meaning there is a continuum back to say K/2 and beyond, descending and passing mileposts as we go: K-> K-1 --> K-2 . . . K/2 - [K/2 - 1] etc, but we cannot in finite successive steps bridge down to k thence to 1 and 0. Where, of course, we can reflect in the 0 point, through posing additive inverses and we may do the rotation i*[k] to get the complex span. 
Of course, all of this is to be hedged about with the usual non standard restrictions, but here is a rough first pass look at the hyperreals, with catapult between the transfinite and the infinitesimals that are all but zero. Where the latter clearly have a hierarchy such that m^2 is far closer to 0 than m. And, this is also very close to the surreals pincer game, where after w steps we can constrict a continuum trough in effect implying that a real is a power series sum that converges to a particular value, pi or e etc. then, go beyond, we are already in the domain of supertasks so just continue the logic to the transfinitely large domain, ending up with that grand class. Coming back, DS we are here revisiting issues of three years past was it: step along mile posts back to the singularity as the zeroth stage, then beyond as conceived as a quasi-physical temporal causal domain with prior stages giving rise to successors. We may succeed in finite steps from any finitely remote -k to -1 to 0 and to some now n, but we have no warrant for descent from some hyperreal remote past stage - K as the descent in finite steps, unit steps, from there will never span to -k. That is, there is no warrant for a proposed transfinite quasi-physical, causal-temporal successive past of our observed cosmos and its causal antecedents. Going back to the focus, if 0 is surrounded by an infinitesimal cloud closer than any k in R can give by taking 1/k, but which we may attain to by taking 1/K in *R, the hyperreals, then by simple vector transfer along the line, any real, r, will be similarly surrounded by such a cloud. For, (r + m) is in the extended continuum, but is closer than any (r + 1/k) can give where k is in R. The concept, continuum is strange indeed, stranger than we can conceive of. So, now, we may come back up to ponder the derivative. 
If a valid, all but zero number or quantity exists, then -- I am here exploring the logic of structure and quantity, I am not decreeing some imagined absolute conclusion as though I were omniscient and free of possibility of error -- we may conceive of taking a ratio of two such quantities, called dy and dx, where this further implies an operation of approach to zero increment. The ratio dy/dx then is much as conceived and h = [(x +h) - x] is numerically dx. But dx is at the same time a matter of an operation of difference as difference trends to zero, so it is not conceptually identical. Going to the numerator, with f(x), the difference dy is again an operation but is constrained by being bound to x, we must take the increment h in x to identify the increment in f(x), i.e. the functional relationship is thus bound into the expression. This is not a free procedure. Going to a yet higher operation, we have now identified that a flow-function f'(x) is bound to the function f(x) and to x, all playing continuum games as we move in and out by some infinitesimal order increment h as h trends to zero. Obviously, f'(x) and f"(x) can and do take definite values as f(x) also does, when x varies. So, we see operations as one aspect and we see functions as another, all bound together. And of course the D-notation as extended also allows us to remember that operations accept pre-image functions and yield image functions. Down that road lies a different perspective on arithmetical, algebraic, analytical and many other operations including of course the vector-differential operations and energy-potential operations [Hamiltonian] that are so powerful in electromagnetism, fluid dynamics, q-mech etc. Coming back, JB seems to be suggesting, that under x, y and other quasi-spatial variables lies another, tied to the temporal-causal domain, time. Classically, viewed as flowing somehow uniformly at a steady rate accessible all at once everywhere. dt/dt = 1 by definition. 
From this, we may conceive of a state space trajectory for some entity of interest p, p(x,y,z . . . t). At any given locus in the domain, we have a state and as t varies there is a trajectory. x and y etc are now dependent. This brings out the force of JB's onward remark to H:
if x *is* the independent variable, and there is no possibility of x being dependent on something else, then d^2x (i.e., d(d(x))) IS zero
Our simple picture breaks if x is no longer lord of all it surveys. Ooooopsie . . . Trouble. As, going further, we now must reckon with spacetime and with warped spacetime due to presence of massive objects, indeed up to outright tearing the fabric at the event horizon of a black hole. Spacetime is complicated. A space variable is now locked into a cluster of very hairy issues, with a classical limiting case. Now, in that context, could JB draw out further what he is pondering? KF
kairosfocus
April 13, 2019, 05:47 AM PDT
johnnyb, Thanks, this is more involved than I thought. I guess the idea is that if you perform algebraic manipulations such as dy/dx = 2x => dy = 2xdx (separating dy and dx), then the dy has no "memory" of where it came from, that is, whether it was the numerator of dy/dx or dy/dt, e.g. Am I in the right ball park? Edit: Referring to #77, aren't there always infinitely many other variables that could be involved, in principle? y could be a function of x, then x is a function of t1, t1 is a function of t2, etc.
daveS
April 13, 2019, 05:46 AM PDT
Hazel - Yes, that is what I'm saying. DaveS - The difference matters, because it prevents you from using the differential algebraically. Otherwise, you would need to carry around the variable with which you are differentiating. For instance, instead of dy, you would need d_x(y)/d_x(x), because the d(y) would be a *different* d(y) than the one for a d_t(y)/d_t(t). This is similar to what I'm developing for partial differentials.
johnnyb
April 12, 2019, 09:55 PM PDT
hazel, That's an interesting question. And even if x did depend on t, we're differentiating with respect to x, so the t dependence would be irrelevant.
daveS
April 12, 2019, 08:19 PM PDT
But if x is the independent variable, which is what we assume when we write y = f(x) and then y' = dy/dx, then the last term of your derivation is zero, so your derivation just reduces to the standard notation for the second derivative. Are you saying your notation is only different than the standard notation if x is not the independent variable, and some other unnamed variable is?
hazel
April 12, 2019, 08:03 PM PDT
DaveS - Yes, if x *is* the independent variable, and there is no possibility of x being dependent on something else, then d^2x (i.e., d(d(x))) IS zero. Hazel - Ratios and quotients have the same rules for manipulation. So, if you are thinking of it as a ratio vs a quotient, we aren't actually in disagreement.
johnnyb
April 12, 2019, 07:22 PM PDT
One more comment, which is similar to Dave's: Johnny, you write, "Now what d^2x/dx^2 actually *is* is a different story. It actually depends on what x is itself dependent on." But x is the independent variable. If you want to say that x is dependent on some other variable, you have pushed the problem one step back, but haven't made it go away. I think at some point you need a better response to the objection that you mention at the bottom of page 222. Saying that d^2x/dx^2 reduces to zero is “not necessarily true”, but not being able to say that it is definitely not zero, and why, is a problem for your approach that needs to be solved, I think.
hazel
April 12, 2019, 07:21 PM PDT
Glad to see the revived discussion, although lots of points are being covered at once. I'd like to start with this one: Johnny, you write, "However, in the case of dy/dx, there is literally no reason for the symbology of division, except to make people think you are dividing."

Both kf and I have outlined the standard way one introduces the meaning of the definition of derivative f'(x) = limit (as h -> 0) (f(x + h) - f(x))/h by thinking of the slope between the fixed point P = (x, f(x)) and Q = (x + h, f(x + h)), and then letting Q approach P by letting h -> 0. Slope is delta y/delta x, a ratio, and so dy/dx means the limit of that ratio.

We often use mathematical symbols to mean related things: all algebra I teachers know the problems in explaining the three meanings of the negative sign: subtract, negative of, and negative number, although they all revolve around the same idea. Likewise, a/b can be thought of as division but it can also be thought of as a ratio between numbers. Just because we write dy/dx as a ratio using the slash and not, for instance, the colon a:b, doesn't mean that there is no difference between its meaning as a ratio and its meaning as a quotient.

So I disagree with your statement above: dy/dx is not meant to indicate a division, but the symbol is appropriate because it is being used to represent the ratio of the instantaneous, infinitely small, changes in x and y at a point.
hazel
April 12, 2019, 07:09 PM PDT
johnnyb, If there are no other variables, so x is _the_ independent variable, I don't know how to evaluate d(dx) I guess. That is, I don't know how to plug x + ε and x into dx. On the other hand, if dx were a function, it should be constant, I would assume? If so, this would imply d(dx) = 0. Is this correct?
daveS
April 12, 2019, 07:01 PM PDT
Dave @ 75 - If you mean that you need to know about other possible involved variables to understand d^2x/dx^2, then, yes, that is what I mean.
johnnyb
April 12, 2019, 06:44 PM PDT
Dave @ 68 - Yes, d(x) is *both* a function (more like an operator) of x which, in the normal circumstances of calculus (smooth/continuous/etc) normally yields an infinitesimal value. The way to think of it is this. Imagine a variable "q" which really is the independent variable. The d() evaluates its interior both at "q" and "q + epsilon" and subtracts them. d(x) then refers to whatever change happens in x between "q" and "q + e". d(x^2) = 2x dx, which means that the difference of x^2 depends on both where "x" is at the moment, and the output of d(x).
johnnyb
April 12, 2019, 06:42 PM PDT
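The d() operator johnnyb describes above can be sketched numerically (my own illustration; a finite e stands in for the infinitesimal increment). The exact difference for x^2 is d(x^2) = 2*x*e + e^2, and the e^2 residual is a higher-order term that vanishes relative to e:

```python
# Finite-difference sketch of the d() operator described in the comment above:
# d(u) = u(q + e) - u(q), evaluated here with x itself as the underlying variable.
def d(f, x, e):
    return f(x + e) - f(x)

x = 2.0
for e in [1e-2, 1e-4, 1e-6]:
    exact = d(lambda v: v * v, x, e)
    # The exact difference is 2*x*e + e^2; the residual beyond 2*x*e is e^2,
    # which shrinks quadratically while e shrinks linearly.
    print(e, exact, exact - 2 * x * e)
```

This is why dropping the e^2 term and writing d(x^2) = 2x dx is harmless at first order, while second-derivative manipulations, which are exactly about those higher-order terms, are more delicate.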
johnnyb, et al,
Now what d^2x/dx^2 actually *is* is a different story. It actually depends on what x is itself dependent on.
So I guess the "value" of d^2x/dx^2 is not clear at this point? I can see that, as d^2x/dx^2 is not the second derivative of x wrt x in your notation.
daveS
April 12, 2019, 06:35 PM PDT
Hazel @ 63 - I think I answered the d^2x just above @ 72, but for the question of whether dy/dx are infinitesimals, (a) yes I treat them that way, but (b) I'm not sure if that is required for my notation to be correct.

Dave @ 64 - Good question. If you look at the paper carefully, notice that all of the "d"s are in roman type, and all of the variables are in italic. This is to visually prevent confusion (though I'm not sure how much it helps). Usually, standard functions are written in roman type and variables in italic. Just like many people don't put parentheses around sin(x), but instead typeset "sin" in roman font and "x" in italic. I feel that, with dx, it is acting sufficiently like its own variable to warrant being stuck together as a unit, but I do like typesetting the "d" and the "x" differently so that it is clear that it is really "d(x)". It *is* true that, sometimes, in mathematics, you have to double-up symbology. However, in the case of dy/dx, there is literally no reason for the symbology of division, except to make people think you are dividing. That is, we have Arbogast's D() notation that we could use, but we don't. Why not? Because we want people to look at this like a fraction. If we didn't, there are a ton of other ways to write the derivative. That we do it as a fraction is hugely suggestive, especially, as I mentioned, there exists a correct way to write it as a fraction.
johnnyb
April 12, 2019, 06:34 PM PDT
Hazel @ 55 - I teach infinitesimals to my high school students, but at the end of the course on calculus. It might be fun, though, to try one year to do it first. I'm not sure which one would be more straightforward. My present approach is to use real-ish numbers at the beginning (I DON'T teach limits at the beginning), and then make it rigorous at the end with infinitesimals. It might be interesting to teach infinitesimals first, because the rules are actually really easy and straightforward. I'll have to think on that approach. Anyway, for anyone wanting an overview of my approach and how it differs from normal, take a look here.
johnnyb
April 12, 2019, 06:24 PM PDT
32-36, mostly for steve_h: Most people mistakenly think (as I once did as well), that (d^2x/dx^2) = 0. This is based on the idea that it would be the derivative of dx/dx, which is 1. However, the actual derivative of dx/dx is "(d^2x/dx^2) - (dx/dx)(d^2x/dx^2)", which is obviously 0 by inspection. Now what d^2x/dx^2 actually *is* is a different story. It actually depends on what x is itself dependent on. There's actually a really interesting idea from George Montanez on what this can be used for, but I haven't had the time to look into it or the money to pay someone to.

The reason why d^2x/dx^2 is thought to be zero is that calculus' main application point is physics, and, in 19th century physics, the primary independent variable was time, and the conception of time was that it had a constant flow. Therefore, dt was considered constant, and, since it was a constant, its differential was 0, so d^2t/dt^2 *would* be zero. Then the tradition became that the bottom differential was always considered to be a "constant" differential, and therefore zero. I can see the possibility that a truly independent variable's second differential should always be zero, but I don't know if I'm fully convinced of that yet. Still trying to decide. As for "infinitesimal constants", it's not a contradiction in terms. There are infinitely many infinitesimal constants.

36 - Kairosfocus: "I have seen a suggestion that m such that m^2 ~ 0, is a good yardstick for what an infinitesimal is". This is probably "smooth infinitesimal analysis". I don't really like this version, as they explicitly deny the law of the excluded middle. Non-standard analysis doesn't require this. Instead, you have a "standard part" function, which yields the closest real number. If "e" is your base infinitesimal unit, you can have 3e/6e, and the "e"s cancel, yielding 1/2. However, if you have "(3e + 4e^2)/(6e + 2e^2)", that also equals 1/2, because e^2 is infinitely smaller than e. However, if you just had 5e^2/4e^2 that would yield 5/4 as the e^2's would cancel (which they wouldn't if they were equal to zero). I haven't seen how SIA treats second derivatives, but I am indeed curious.
johnnyb
April 12, 2019, 06:16 PM PDT
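johnnyb's worked examples with a base infinitesimal e can be mimicked in SymPy, using the limit e -> 0+ as a stand-in for the standard-part function (a sketch of the idea only, not non-standard analysis proper):

```python
import sympy as sp

e = sp.symbols('e', positive=True)  # plays the role of a base infinitesimal

def st(expr):
    # "Standard part", modeled here by simplifying and then letting e -> 0+
    return sp.limit(sp.simplify(expr), e, 0, '+')

print(st((3*e) / (6*e)))                     # 1/2: the e's cancel
print(st((3*e + 4*e**2) / (6*e + 2*e**2)))   # 1/2: the e^2 terms are negligible next to e
print(st((5*e**2) / (4*e**2)))               # 5/4: the e^2's cancel, so they are not zero
```

The last line is the key contrast with smooth infinitesimal analysis: a quantity with m^2 literally equal to 0 could not support the 5/4 cancellation.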
johnnyb, Thanks for the reply---I'll take a look at your derivation again and see if I can understand how the third derivative version would go.
daveS
April 12, 2019, 05:59 PM PDT
Sorry for my delay getting back. To start off with, Dave @ 31, I think the formula would be tied pretty closely to Faa di Bruno's formula, probably using similar components. I have not really dug into this, though. Unfortunately, my time is limited, and I tend to have to pay people to get the help I need :( I found my coauthor through Upwork, and it cost quite a bit of money (for me, anyway) to get him to help me flesh out the idea fully. I continue to go to him when I need mathematical work done (his work is excellent and original), but I'm almost entirely self-funded, so I can't always do this. As for Weyl algebras, I have not gotten to those, but it sounds interesting. I googled it, and didn't understand it, but I also didn't spend a lot of time on it.
johnnyb
April 12, 2019, 05:51 PM PDT
Dave, you write, "presumably these two usages end up being consistent (or I missed something)" Or perhaps, in fact, Johnny's ideas aren't really justified because they aren’t consistent, even if his algebraic manipulations work out. At Mind Matters, the article says,
Correcting the notation will also likely open doors in fundamental calculus research. Better notation will improve the ability of mathematicians to do advanced work within calculus. Some of those fruits are already apparent, as the authors have already been using the new notation in published work with fruitful results.
It would be interesting to see these fruitful results, even though there is a reasonable possibility they would be beyond me without more study than I was interested in doing. When I taught, I always tried to explain the meaning of formulas and procedures. I think good notation should help support good understanding of the underlying concepts, if possible, as well as efficient manipulation for algebraic purposes. I think that is why I’m puzzled, and perhaps skeptical, of the value of Johnny’s formulation. But, again, if it does lead to something new, or a better way of understanding and working with something old, then it has some value.
hazel
April 12, 2019, 05:26 PM PDT
Hazel, Yes, that's a good point. I understood dx to stand for an infinitesimal in dy/dx, but it also seems to mean an operator applied to x, as in d(x); presumably these two usages end up being consistent (or I missed something). I can't read the paper now to check the details unfortunately.
daveS
April 12, 2019, 04:38 PM PDT
Correction: When I ask "Does this distinction . . . ", I should have started with, "Is this distinction . . . "
PaV
April 12, 2019, 03:26 PM PDT
re 64: Hi Dave. Yes, johnny himself writes d(dx) as d^2(x) in his derivation using the quotient rule. But, as steve_h points out, we take the derivative of a function, and dx is not a function, so I don’t know what the derivative of dx, taken by itself, could mean. When we write d(dx), the d’s aren’t standing for the same thing, it seems to me, so this is not making sense to me. Maybe johnny will have time this weekend to explain and/or discuss.
hazel
April 12, 2019, 12:20 PM PDT
Johnnyb @ 23: In the paper linked in the footnote, you write:
Therefore, when a compact representation of higher order derivatives is needed, this paper will use Arbogast’s notation for its clarity and succinctness.
This is the distinction I was referring to. Does this distinction, that is, the 'need' for a "compact representation of higher order derivatives" germane to 'analysis'? Is this when Arbogast's D notation becomes important?
PaV
April 12, 2019, 11:20 AM PDT
hazel, Regarding your second question, I guess the "numerator" is d^2(x) or d(dx). I wonder if turnabout is fair play here---should we insist that dx is a product, therefore we need to use the product rule to evaluate d(dx)? In that case, I guess d^2x = d(dx) = d(d)*x + d*d(x) = 2d^2x. Well, maybe not. :-)
daveS
April 12, 2019, 10:44 AM PDT
The article at Mind Matters doesn't allow comments, so it's hard to say whether people's reactions have been supportive, questioning (as in this thread), or what, nor whether people have mostly reacted to the article text, or whether they have looked at and evaluated the paper itself. I think that perhaps the article is a bit misleading when it writes,
that elementary calculus contains a longstanding flaw that has been present for over a century. ...The flaw they discovered is one of notation. Now, you may be thinking, how can notation be wrong? Well, notation can be wrong when it implies untrue things, especially when notation exists that implies the correct things.
I am not clear what the “untrue things” are. Probably it means that what is wrong is writing the second derivative as a fraction when it can’t be treated algebraically as a fraction. However, as Dave has pointed out, maybe the mistake is taking dy/dx as a fraction that can be manipulated algebraically. Also, as Steve_h pointed out, the quotient rule is, and I quote Wikipedia, “a method of finding the derivative of a function that is the ratio of two differentiable functions.” I’m not sure it is correct to think of dy and dx as differentiable functions, so I’m not sure using the quotient rule is appropriate. Also, Dave and I have wondered about the explanation at the bottom of page 222: what exactly does d^2(x)/dx^2 mean if it doesn’t mean the second derivative of x, which would be zero.

The other possible “untrue thing” that the article might be referring to is related to this: “because no one wanted to give differentials that same ontological status as other numbers …” I think maybe this points to the difference between seeing dx as a limit of delta x as delta x goes to zero, as in the standard formulation of calculus, and seeing dx as an “infinitesimal” (is that the same as “differential” in the above quote?) as is done in a non-standard approach to calculus. However, if thinking of dy and dx as infinitesimals includes incorporating the hyperreal system, as in the textbook kf linked to, I think it’s a judgment call as to which is best, but not an issue of which is true or not, nor of any relative difference in “ontological status”, whatever that means.

So, to help clarify, here are a few questions for Johnny:
1. Is your approach thinking of dy and dx as infinitesimals in the hyperreal sense?
2. How do you understand the meaning of d^2(x)/dx^2?
3. For that matter, how do you understand the meaning of d^2(y)/dx^2, if it doesn’t mean the second derivative?
Very interesting discussion, by the way.
hazel
April 11, 2019, 05:12 PM PDT
The issue is going viral: https://uncommondescent.com/intelligent-design/ud-authors-suggested-correction-to-calculus-goes-viral/
kairosfocus
April 11, 2019, 01:34 PM PDT
