From Eurekalert:
The team, led by UA astronomer Peter A. Milne, discovered that type Ia supernovae, which have been considered so uniform that cosmologists have used them as cosmic “beacons” to plumb the depths of the universe, actually fall into different populations. The findings are analogous to sampling a selection of 100-watt light bulbs at the hardware store and discovering that they vary in brightness.
“We found that the differences are not random, but lead to separating Ia supernovae into two groups, where the group that is in the minority near us are in the majority at large distances — and thus when the universe was younger,” said Milne, an associate astronomer with the UA’s Department of Astronomy and Steward Observatory. “There are different populations out there, and they have not been recognized. The big assumption has been that as you go from near to far, type Ia supernovae are the same. That doesn’t appear to be the case.”
The discovery casts new light on the currently accepted view of the universe expanding at a faster and faster rate, pulled apart by a poorly understood force called dark energy. This view is based on observations that resulted in the 2011 Nobel Prize for Physics awarded to three scientists, including UA alumnus Brian P. Schmidt. More.
Rob Sheldon kindly writes to say,
The 2011 Nobel Prize in Physics was awarded to Perlmutter, Schmidt and Riess for “proving” that the universe is accelerating. Since the Big Bang happened 13.7 billion years ago, the only term that might account for this acceleration is one whose effect grows with the volume of the universe. Another way to say it is that if spacetime itself has a pressure, then the bigger the universe got, the faster it expanded. Yet another way to view this term is as an “anti-gravity” term that balances gravity. That is why Einstein put it into his equations in the first place: he wanted the universe to be unchanging, eternal and static. De Sitter showed that such a balance was unstable, collapsing at the slightest perturbation, and Hubble showed that the galaxies were moving away from each other, i.e., not static at all. Einstein removed the term, reportedly calling it his “biggest blunder.”

But by the 1970s, theorists were putting it back in. For one thing, the galaxies were distributed unevenly across the sky like a lace tablecloth, and no one knew why there were these gigantic holes, or voids, with no galaxies. Only by putting some “anti-gravity” back in was it possible to reproduce the voids in the cosmology models.

Therefore, when Perlmutter and colleagues used the “calibrated” Type Ia supernovae to estimate distances, since every SNIa was supposed to have the same intrinsic brightness, they could map distance (redshift) against brightness and see what the universe had been doing. Sure enough, the most distant, most red-shifted supernovae were also slightly fainter than expected, and Perlmutter argued that this could only be caused by the acceleration of the universe and anti-gravity.
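The standard-candle logic Sheldon describes can be sketched numerically. The toy calculation below uses round illustrative values I am assuming (Ωm = 0.3, ΩΛ = 0.7 for the accelerating model, Ωm = 1 for a matter-only model, H0 = 70 km/s/Mpc); it integrates the Friedmann expansion rate to get a luminosity distance and shows that, at a given redshift, a standard candle looks fainter in an accelerating universe. This is the kind of dimming Perlmutter’s team reported:

```python
import math

def E(z, om, ol):
    """Dimensionless Hubble rate H(z)/H0 for a flat universe."""
    return math.sqrt(om * (1 + z)**3 + ol)

def lum_dist_mpc(z, om, ol, h0=70.0, steps=1000):
    """Luminosity distance in Mpc, via trapezoidal integration of dz/E(z)."""
    c = 299792.458  # speed of light, km/s
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        z1, z2 = i * dz, (i + 1) * dz
        integral += 0.5 * dz * (1 / E(z1, om, ol) + 1 / E(z2, om, ol))
    return (1 + z) * (c / h0) * integral

def mu(z, om, ol):
    """Distance modulus: 5 * log10(D_L / 10 pc)."""
    return 5 * math.log10(lum_dist_mpc(z, om, ol) * 1e6 / 10)

z = 0.5
# Positive difference means the supernova appears fainter (larger distance
# modulus) in the accelerating model than in the matter-only model.
dimming = mu(z, 0.3, 0.7) - mu(z, 1.0, 0.0)
print(f"At z = {z}, a standard candle appears {dimming:.2f} mag fainter "
      f"in the accelerating model")
```

The point is qualitative: if some of that extra faintness is instead intrinsic to the supernovae themselves, as the Milne result suggests, then the amount of acceleration inferred from the dimming shrinks.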
Now, many people asked: What about dust? What about variations in SNIa brightness? What about magnetic fields? Maybe stars were less “metallic” in the distant past, and this made them fainter? “No,” said Perlmutter, “we took all that into account and it doesn’t explain it. The only alternative is anti-gravity.”
This is argument by exhaustion. It is why global warming is true: “We took into account all other natural forms of warming, so the remainder must be anthropogenic.” It is why Darwinism is true: “We took into account all other explanations of change over time, and the remainder must be evolution.” It is why there are no man-eating tigers in my town: I have a lucky charm that prevents them from coming within 10 miles of me.
Anyway, about ten years after Perlmutter’s paper was peer reviewed and the Nobel Prize awarded, we now have evidence that not all SNIa are created equal: some are intrinsically brighter than others, and the ones he was looking at were intrinsically fainter.
Thoughts?
Note: Man-eating tigers in the Ottawa, Canada region had better have several really heavy winter coats of hair. Nothing else will save them.
See also: Dark Energy Mission
I wonder if Dr. Sheldon would be willing to comment on what effect, if any, this finding will have on the 1 in 10^120 fine-tuning of the cosmological constant.
Here are the 9 lines of evidence that Dr. Ross mentioned in the preceding video
Here is the paper from the atheistic astrophysicists, that Dr. Ross referenced in the preceding video, that speaks of the ‘disturbing implications’ of the finely tuned expanding universe (1 in 10^120 cosmological constant):
Besides the evidence that Dr. Ross listed for the 1 in 10^120 finely tuned expansion of the universe, the following paper clearly indicates that we do live in a universe with a ‘true cosmological constant’, a cosmological constant that is not reducible to a materialistic basis. Thus, the atheistic astrophysicists are at a complete loss to explain why the universe expands in such a finely tuned way, whereas Theists are vindicated once again in their belief that the universal constants truly transcend any possible materialistic explanation!
Also of note, the following paper finds that the proton-electron mass ratio does not vary in strong gravitational fields either:
Here are the verses from the Bible which Dr. Ross listed, which were written well over 2000 years before the discovery of the finely tuned expansion of the universe, that speak of God ‘Stretching out the Heavens’; Job 9:8; Isaiah 40:22; Isaiah 44:24; Isaiah 48:13; Zechariah 12:1; Psalm 104:2; Isaiah 42:5; Isaiah 45:12; Isaiah 51:13; Jeremiah 51:15; Jeremiah 10:12. The following verse is my favorite out of the group of verses:
Less dark energy (ie smaller cosmo constant), more dark matter, same normal matter. Although I’ve put on a couple pounds.
BA@1
Hugh Ross, before he started his apologetics group, was a post-doc in astronomy at Caltech. Despite his conversion at a church there, astronomy was his first love, and we all know how significant that is. So I would assume that many of his astronomical statements come from identifying with and approving of the “standard science” of the astronomy community. That is, one can find outliers like Sir Fred Hoyle or the Burbidges, but these are not the people Ross identifies with. For whatever reason, I’m an eternal outsider, so frankly I identify more with Sir Fred than, say, with Baron Martin Rees. (Even Hoyle saw himself as a quixotic knight compared to the establishment Rees.)
So that addresses the psychology of why Ross might prefer the “standard” Lambda-CDM model. If we were discussing the physics, I would argue that the current spate of computer models has far too many adjustable dials to be metaphysically convincing. For reasons that Peter Woit discusses in his book “Not Even Wrong”, the various branches of physics have had far too much money thrown at hastily constructed models. Unlike the Manhattan Project, which also had a lot of money thrown at it, the current crop of models has little data behind it, and nothing as momentous as winning a war with Japan to motivate it. As a consequence, physicists are swayed by prestige and grant money into buying an establishment “model” that often has little data to support it. In case you think I’m exaggerating: Global Climate Models have received $1 billion a year for the past 20 years, gravitational-wave detection (LIGO) is a $1bn pair of observatories now seeking funding for a satellite version, and CERN just spent $10bn finding the Higgs boson (a booby prize for not finding SUSY particles).
With those kinds of money pots, it is no wonder that science gets derailed, and I am 99.9% sure that this has happened to the Lambda-CDM “standard” cosmology model.
So what do I make of the 10^120 mistake in estimating dark energy? Well, since I don’t believe in dark energy, the mistake is not quantitative, but qualitative; it’s not a gazillion percent wrong, it’s infinitely wrong. Or to borrow Pauli’s phrase, “it isn’t even wrong”.
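For readers wondering where the “1 in 10^120” figure comes from: it is the ratio of the vacuum energy density that a naive quantum field theory estimate predicts (with a Planck-scale cutoff) to the dark-energy density actually inferred from observation. A back-of-the-envelope version, using standard physical constants plus assumed round values of H0 = 70 km/s/Mpc and ΩΛ = 0.7, gives a mismatch of roughly 10^123; the commonly quoted 10^120 depends on exactly where the cutoff is placed:

```python
import math

# Physical constants (SI, rounded)
c    = 2.998e8        # speed of light, m/s
hbar = 1.055e-34      # reduced Planck constant, J*s
G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

# Naive QFT vacuum energy density with a Planck-scale cutoff (~1e113 J/m^3)
rho_planck = c**7 / (hbar * G**2)

# Observed dark-energy density: Omega_Lambda * critical density * c^2
H0 = 70e3 / 3.086e22                      # 70 km/s/Mpc converted to 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)  # critical mass density, kg/m^3
rho_obs = 0.7 * rho_crit * c**2           # ~6e-10 J/m^3

print(f"prediction/observation mismatch: "
      f"10^{math.log10(rho_planck / rho_obs):.0f}")
```

Whether one calls this a fine-tuning or, as Sheldon does, a sign that the dark-energy term does not belong in the model at all, the size of the discrepancy is not in dispute.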
As to my question in post 1:
As long as the universe is accelerating at all, it appears that this finding will have no effect on the 1 in 10^120 fine-tuning for the cosmological constant, since the accelerating expansion of the universe only ‘implies’ (i.e. empirically verifies) a slight positive value for the cosmological constant and is not the means by which the fine-tuning is derived.
Thanks Dr. Sheldon for your answer at 3. As you can see from my reference at 4, (while I don’t personally believe in ‘Dark Energy’ either), I believe the 1 in 10^120 fine-tuning holds for the cosmological constant as long as any acceleration of the expansion of the universe is confirmed to any degree.
Perhaps, if I read him right, it even holds without any acceleration?!?
I’m not versed enough in the issue to say for sure whether the fine-tuning would hold without any acceleration, but it seems that even prior to the supposed discovery of acceleration, i.e. ‘Dark Energy’, the materialistic models trying to account for the 1 in 10^120 fine-tuning were falling apart.
from the reference
semi OT: Here is a fairly recent lecture by Gerald L. Schroeder
R&B Science: Lecture by Professor Gerald L. Schroeder on “Proof of God in Two Steps”
https://www.youtube.com/watch?v=M0H6zS-cxCU
Crev.info has a good write-up on this press release:
How many other assumptions, on which we build our MODELS, are false and we just don’t know it?
“How many other assumptions, on which we build our MODELS, are false and we just don’t know it?”
History tells us every single one to a varying degree, Tjguy. But NS + RM is way way off the mark. Way.