Uncommon Descent Serving The Intelligent Design Community

Energy transformation vs. the ghost of Malthus


Our civilisation is haunted by the ghost of the Rev. Thomas Malthus: his core vision of resource exhaustion and population crashes still grips our imaginations. As the BBC profiles him in brief:

The Rev. Thomas Robert Malthus [HT: Wiki]

Malthus’ most well known work ‘An Essay on the Principle of Population’ was published in 1798, although he was the author of many pamphlets and other longer tracts including ‘An Inquiry into the Nature and Progress of Rent’ (1815) and ‘Principles of Political Economy’ (1820). The main tenets of his argument were radically opposed to current thinking at the time. He argued that increases in population would eventually diminish the ability of the world to feed itself and based this conclusion on the thesis that populations expand in such a way as to overtake the development of sufficient land for crops. Associated with Darwin, whose theory of natural selection was influenced by Malthus’ analysis of population growth, Malthus was often misinterpreted, but his views became popular again in the 20th century with the advent of Keynesian economics.

Ehrlich’s alarmist book cover

A key to this thinking is that cropland extension is at best linear, by addition of land, and would face diminishing returns as it drives into increasingly marginal areas; all the while, population growth tends to be exponential, overtaking the growth in land and thus forcing either voluntary limitation of reproduction or else a crash on hitting the limits:

That the increase of population is necessarily limited by the means of subsistence,
That population does invariably increase when the means of subsistence increase, and,
That the superior power of population is repressed by moral restraint, vice and misery.

[Malthus, Thomas Robert. An Essay on the Principle of Population. Oxfordshire, England: Oxford World’s Classics. p. 61.]
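The arithmetic behind Malthus's three propositions can be made concrete with a toy simulation, geometric (exponential) population increase racing against arithmetic (linear) growth in subsistence. The starting values and growth rates below are illustrative assumptions, not Malthus's own figures:

```python
# A minimal sketch of the Malthusian dynamic: population compounding
# geometrically while subsistence grows only arithmetically.
# All starting values and rates are illustrative assumptions.

def years_until_shortfall(pop0=1.0, food0=2.0, pop_growth=0.03,
                          food_step=0.05, max_years=500):
    """Return the first year in which population exceeds the food supply
    (both in arbitrary 'units of subsistence'), or None if it never does."""
    pop, food = pop0, food0
    for year in range(1, max_years + 1):
        pop *= 1 + pop_growth   # geometric (exponential) increase
        food += food_step       # arithmetic (linear) increase
        if pop > food:
            return year
    return None
```

However generous the head start given to subsistence, any positive compound rate eventually overtakes any linear schedule; the only question is when. That is the engine of the "population bomb" fear discussed below.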

This line of thought should seem familiar; it lies behind the well-known nightmare of a global “population bomb.” Indeed, even Darwin’s theory turns on appeal to similar dynamics, save that, of course, moral restraint would not apply, and that extinction and replacement through descent with modification leading to preservation of “favoured races” is held to drive unlimited variation, leading to ecosystems filled with the various life forms we see.

Currently, the picture looks more or less like this:

Why the “Bomb” has not yet exploded . . .

This chart brings out a key flaw in much Malthusian thinking: the assumption that resources and technologies are more or less static, or even face diminishing returns, so that exponential population growth will inevitably overtake resources unless a radical solution is implemented. Similarly, in the pollution-population variant, despoliation of our environment multiplies the challenge . . . with, of course, anthropogenically driven climate change as the current headlined villain of the piece. As a rule, those “radical solution[s]” being put forward favour centralised national and global control by largely unaccountable bureaucracies.

(And BTW, arguably, the current wave of Anglo-American Atlantic basin “populism” that is widely decried and “resisted” by media elites and by established power circles is a C21 version of old fashioned peasant uprisings, expressed so far by ballot box rather than pitchfork and pruning bill — or, AR15. Let’s show a couple of maps.

Peasant uprising 1, US Election 2016:

Peasant uprising 2, UK Election 2019 (the Brexit Election):

See the pattern of urban areas vs hinterlands?)

A better view is to see that we are in an era where generation(s)-long — Kondratiev — waves of technological, economic and financial transformation dramatically shift the carrying capacity of both regions and our planet as a whole. For example, one of the arguments used by British officials to restrict Jewish settlement in Mandatory Palestine was carrying capacity; yet today almost 10 million people, at least three to four times the former population, live in the same zone at a far higher standard of living than eighty years ago.

Where, natural population growth is triggered by public health improvements which reduce especially infant mortality. But as standards of education and living rise, a demographic transition then leads to declining birth rates. Currently, the North is trending well below replacement level, ~2.1 children per woman. From some reports out there, China seems to be at 1.5 and India at 2.1, both following the downward trend; those are the two most populous countries, which are also shortly going to dominate global GDP growth. A result on trend — trends are made to be broken — is that we could peak at 9+ billion in mid-century and decline thereafter, with the attendant issues of aging and the want of the energy and optimism of youth, etc.
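The carrying-capacity picture can be sketched with a toy logistic model: population saturates toward a ceiling K, and a technology wave then lifts K so growth resumes toward the new ceiling. All numbers here are illustrative assumptions, not demographic projections:

```python
# Toy logistic model: growth saturating at carrying capacity K,
# with K shifted upward partway through to represent a technology wave.
# All values are illustrative assumptions, not demographic data.

def logistic_step(pop, r, K):
    """One year of logistic growth toward carrying capacity K."""
    return pop + r * pop * (1.0 - pop / K)

def simulate(pop0, r, phases):
    """phases: list of (years, K) pairs; returns the full yearly trajectory."""
    traj = [pop0]
    for years, K in phases:
        for _ in range(years):
            traj.append(logistic_step(traj[-1], r, K))
    return traj

# Population (arbitrary units) saturates near K = 4; then a technology
# wave lifts the ceiling to K = 10 and growth resumes.
trajectory = simulate(1.0, 0.05, [(200, 4.0), (200, 10.0)])
```

The point of the sketch: the "limit" in a Malthusian argument is not a constant of nature but a parameter, and each Kondratiev-style wave effectively rewrites it.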

All of this is in a sense preliminary.

The key issue is that technology transforms carrying capacity, and that energy is a key resource driver for economies and technological possibilities. The industrial revolution was fed by coal and steam, then by electricity; oil drove the post-war era. So, the logical breakthrough point is energy transformation. For, it is energy that allows us to use organised force to transform raw materials into products that allow us to operate in a C21 economy.

There is of course a green energy push, highlighting renewables. However, by and large, those renewables are intermittent and fluctuating, leading to disruptive effects as we try to match production to consumption; ponder solar photovoltaic and wind. That leaves on the table two major renewable sources, geothermal and large-scale hydroelectricity. The former is high-risk in exploration; the latter is targeted as environmentally destructive. (Of course, we have also tended to forget the issue of large-scale flooding, which was a material factor in promoting dams.)
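A back-of-envelope calculation shows the scale of the intermittency problem. Assume, purely for illustration, a flat demand and a fixed daily sun window with all supply from solar; the storage needed just to ride through the dark hours is then easy to compute:

```python
# Back-of-envelope sketch of the intermittency problem for a solar-only
# grid. Flat demand and a fixed sun window are simplifying assumptions.

def overnight_storage_gwh(avg_demand_gw=1.0, sun_hours=8.0):
    """GWh of storage needed to cover the hours with no solar output."""
    return avg_demand_gw * (24.0 - sun_hours)

def required_solar_gw(avg_demand_gw=1.0, sun_hours=8.0):
    """Average solar output needed during the sun window to supply the whole day."""
    return avg_demand_gw * 24.0 / sun_hours
```

On these assumptions, every gigawatt of flat demand needs 16 GWh of storage and 3 GW of average daytime solar output; cloudy days, seasonal variation and losses only worsen the ratios. Dispatchable sources avoid the problem entirely.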

The other megawatt- and gigawatt-class dispatchable, baseload and load-following energy source/technology is even more objected to: nuclear energy. In the form of fusion, it runs the universe, stars being fusion reactors. Indeed, the atoms in our bodies and in the planet underfoot by and large (hydrogen being the main exception) came out of stellar furnaces; at least, that is the message of both cosmology and astrophysics. Fission is what has driven reactor technologies since the 1940s and is targeted as a cause of radioactive wastes, meltdown risks, etc.

We could get into endless arguments on this, but it is more profitable to point out that while fusion is still under development, we have two major reactor technologies that across this century could drive global prosperity, break the water crisis [e.g. through desalination] and power the first phases of solar system colonisation.

We have already seen an example of how the winning of key metals by reduction can be transformed, once we have the electricity. And currently, electric and hybrid-electric vehicles are a major trend. In short, technology again looks like driving a breakthrough that takes Malthusian-style population crashes off the table.

A first technology is a development of the High Temperature Gas Reactors of the 1960s to 80s: pebble bed [modular] reactors. As Wikipedia summarises:

The pebble-bed reactor (PBR) is a design for a graphite-moderated, gas-cooled nuclear reactor. It is a type of very-high-temperature reactor (VHTR), one of the six classes of nuclear reactors in the Generation IV initiative. The basic design of pebble-bed reactors features spherical fuel elements called pebbles. These tennis ball-sized pebbles are made of pyrolytic graphite (which acts as the moderator), and they contain thousands of micro-fuel particles called TRISO particles. These TRISO fuel particles consist of a fissile material (such as 235U) surrounded by a coated ceramic layer of silicon carbide for structural integrity and fission product containment. In the PBR, thousands of pebbles are amassed to create a reactor core, and are cooled by a gas, such as helium, nitrogen or carbon dioxide, that does not react chemically with the fuel elements.
This type of reactor is claimed to be passively safe;[1] that is, it removes the need for redundant, active safety systems. Because the reactor is designed to handle high temperatures, it can cool by natural circulation and still survive in accident scenarios, which may raise the temperature of the reactor to 1,600 °C. Because of its design, its high temperatures allow higher thermal efficiencies than possible in traditional nuclear power plants (up to 50%) and has the additional feature that the gases do not dissolve contaminants or absorb neutrons as water does, so the core has less in the way of radioactive fluids.

In the case of He-4, the gas is effectively alpha particles that have captured electrons, so the coolant fluid itself will not become radioactive.
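The "up to 50%" efficiency claim in the Wikipedia excerpt follows from thermodynamics: the Carnot ceiling on any heat engine rises with the hot-side temperature. A minimal sketch, using a ~750 °C gas outlet temperature as an assumed pebble-bed figure against the ~315 °C PWR figure quoted later in this post:

```python
# Carnot efficiency ceiling for a heat engine, illustrating why
# high-temperature gas reactors can beat water-cooled plants.
# The 750 C outlet and 30 C heat sink are illustrative assumptions.

def carnot_limit(t_hot_c, t_cold_c=30.0):
    """Ideal (Carnot) efficiency ceiling; temperatures in degrees Celsius."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

pwr_ceiling = carnot_limit(315.0)     # water-cooled reactor, ~48% ceiling
pebble_ceiling = carnot_limit(750.0)  # high-temperature gas reactor, ~70% ceiling
```

Real plants fall well short of the Carnot limit, but the gap between the two ceilings is why ~33% is typical for a PWR while ~50% becomes plausible for a high-temperature gas cycle.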

Modular designs have been proposed, and it is easy to see designs scaled to relevant sizes for various needs: hundreds of kW, 1 to 100 MW, etc.


Similarly, when nuclear power plants were first developed, the US Navy played a key role. A major consideration was the desire for bomb-making materials, U-235 and/or Pu. That led away from a major alternative, thorium, which is a common byproduct of mining rare earth metals, themselves key for modern motors, magnetic devices and even IC designs. But thorium-based molten salt reactors were in fact demonstrated in the 1960s and are quite feasible. Illustrating:

A Schematic for a Molten Salt Reactor (HT: US Dept of Energy)

(This type of reactor was actually envisioned for a so-called atomic aircraft, i.e. it is again highly scalable and modular.)

As the World Nuclear Association summarises and suggests:

Molten salt reactors operated in the 1960s.
They are seen as a promising technology today principally as a thorium fuel cycle prospect or for using spent LWR fuel.
A variety of designs is being developed, some as fast neutron types.
Global research is currently led by China.
Some have solid fuel similar to HTR fuel, others have fuel dissolved in the molten salt coolant.

Molten salt reactors (MSRs) use molten fluoride salts as primary coolant, at low pressure. This itself is not a radical departure when the fuel is solid and fixed. But extending the concept to dissolving the fissile and fertile fuel in the salt certainly represents a leap in lateral thinking relative to nearly every reactor operated so far. However, the concept is not new, as outlined below.

MSRs may operate with epithermal or fast neutron spectrums, and with a variety of fuels. Much of the interest today in reviving the MSR concept relates to using thorium (to breed fissile uranium-233), where an initial source of fissile material such as plutonium-239 needs to be provided. There are a number of different MSR design concepts, and a number of interesting challenges in the commercialisation of many, especially with thorium.

The salts concerned as primary coolant, mostly lithium-beryllium fluoride and lithium fluoride, remain liquid without pressurization from about 500°C up to about 1400°C, in marked contrast to a PWR which operates at about 315°C under 150 atmospheres pressure.

The main MSR concept is to have the fuel dissolved in the coolant as fuel salt, and ultimately to reprocess that online. Thorium, uranium, and plutonium all form suitable fluoride salts that readily dissolve in the LiF-BeF2 (FLiBe) mixture, and thorium and uranium can be easily separated from one another in fluoride form. Batch reprocessing is likely in the short term, and fuel life is quoted at 4-7 years, with high burn-up. Intermediate designs and the AHTR have fuel particles in solid graphite and have less potential for thorium use.

Graphite as moderator is chemically compatible with the fluoride salts.


During the 1960s, the USA developed the molten salt breeder reactor concept at the Oak Ridge National Laboratory, Tennessee (built as part of the wartime Manhattan Project). It was the primary back-up option for the fast breeder reactor (cooled by liquid metal) and a small prototype 8 MWt Molten Salt Reactor Experiment (MSRE) operated at Oak Ridge over four years to 1969 (the MSR program ran 1957-1976). In the first campaign (1965-68), uranium-235 tetrafluoride (UF4) enriched to 33% was dissolved in molten lithium, beryllium and zirconium fluorides at 600-700°C which flowed through a graphite moderator at ambient pressure. The fuel comprised about one percent of the fluid.
The coolant salt in a secondary circuit was lithium + beryllium fluoride (FLiBe).* There was no breeding blanket, this being omitted for simplicity in favour of neutron measurements.

* Fuel salt melting point 434°C, coolant salt melting point 455°C. See Wong & Merrill 2004 reference.

The original objectives of the MSRE were achieved by March 1965, and the U-235 campaign concluded. A second campaign (1968-69) used U-233 fuel which was then available, making MSRE the first reactor to use U-233, though it was imported and not bred in the reactor. This program prepared the way for building a MSR breeder utilising thorium, which would operate in the thermal (slow) neutron spectrum.

According to NRC 2007, the culmination of the Oak Ridge research over 1970-76 resulted in a MSR design that would use LiF-BeF2-ThF4-UF4 (72-16-12-0.4) as fuel. It would be moderated by graphite with a four-year replacement schedule, use NaF-NaBF4 as the secondary coolant, and have a peak operating temperature of 705°C . . .
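As a side note on the fuel salt just quoted: converting the LiF-BeF2-ThF4-UF4 (72-16-12-0.4) mole-per-cent split into weight per cent shows that thorium dominates the salt by mass even though LiF dominates the mole count. The molar masses below are standard values; the conversion itself is my sketch, not part of the WNA text:

```python
# Convert the quoted fuel-salt composition from mole per cent to weight
# fractions. Molar masses (g/mol) are standard values; the 72-16-12-0.4
# split is from the WNA text quoted above.
MOLAR_MASS = {"LiF": 25.94, "BeF2": 47.01, "ThF4": 308.04, "UF4": 314.03}
MOLE_PCT = {"LiF": 72.0, "BeF2": 16.0, "ThF4": 12.0, "UF4": 0.4}

def weight_fractions(mole_pct=MOLE_PCT, molar_mass=MOLAR_MASS):
    """Weight fraction of each salt species from its mole percentage."""
    masses = {s: mole_pct[s] * molar_mass[s] for s in mole_pct}
    total = sum(masses.values())
    return {s: m / total for s, m in masses.items()}
```

On these figures the salt is roughly 57% ThF4 and only ~2% UF4 by weight, consistent with the thorium-breeding purpose of the design.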

Yes, there are significant technical challenges and there are major politics and perception issues. No surprise. The real point, though, is that we have to break the global energy bottleneck, and the sooner we do so, the better. That is, we face a familiar, Machiavellian change challenge:

Onward, of course, such technologies can open up solar system colonisation, the long-term solution for the human population. To give an idea, here is Cosmos Magazine on a high-thrust ion drive rocket engine:

Plasma propulsion engine
These engines are like high-octane versions of the ion drive. Instead of a non-reactive fuel, magnetic currents and electrical potentials accelerate ions in plasma to generate thrust. It’s an idea half a century old, but it’s not yet made it to space.
The most powerful plasma rocket in the world is currently the Variable Specific Impulse Magnetoplasma Rocket (VASIMR), being developed by the Ad Astra Rocket Company in Texas. Ad Astra calculates it could power a spacecraft to Mars in 39 days.

[ . . . ]

Thermal fission
A conventional fission reactor could heat a propellant to extremely high temperatures to generate thrust.

Though no nuclear thermal rocket has yet flown, the concept came close to fruition in the 1960s and 1970s, with several designs built and tested on the ground in the US.

The Nuclear Engine for Rocket Vehicle Application (NERVA) was deemed ready for integration into a spacecraft, before the Nixon administration shelved the idea of sending people to Mars and decimated the project’s funding . . .
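The "39 days to Mars" figure quoted above can be sanity-checked with a constant-acceleration "flip-and-burn" (brachistochrone) calculation: accelerate for half the trip, decelerate for the rest. The 0.52 AU figure is an assumed close-approach Earth-Mars distance, and the sketch ignores orbital mechanics and solar gravity entirely:

```python
# Sanity check on a 39-day Mars transit under constant acceleration:
# accelerate half way, flip, decelerate half way. Assumes a straight-line
# 0.52 AU close-approach distance and ignores orbital mechanics.

def brachistochrone(distance_m, time_s):
    """Return (required acceleration in m/s^2, total delta-v in m/s)
    for an accelerate-half / decelerate-half trajectory."""
    accel = 4.0 * distance_m / time_s ** 2  # d = a * t^2 / 4
    delta_v = accel * time_s
    return accel, delta_v

AU = 1.496e11  # metres
a, dv = brachistochrone(0.52 * AU, 39 * 86400)
```

The answer, roughly 0.03 m/s² sustained and ~92 km/s of total delta-v, is hopeless for chemical rockets but is exactly the regime high-specific-impulse plasma drives such as VASIMR aim at, which is why the claim is not absurd on its face (the unresolved problem is the multi-megawatt power source, which loops back to nuclear energy).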

Later, DV. END

F/N: Why this focus? Because, part of what energises what we are dealing with across the board is the ghost of Malthus. Indeed, that starts with Darwin's argument that favoured races are preserved in the struggle for life leading to descent with unlimited modification (i.e. examine the sub-title of Origin of Species). Then, in the population bomb- pollution- eco crisis- climate change form, we deal with a Hegel-Marx thesis-antithesis, convenient synthesis pattern of policy argument and linked power centralisation in the hands of unaccountable mandarins (aka, the international deep state's chief swamp dragons), often multiplied by cultural marxist agit prop, street theatre, media amplification and lawfare. Therefore, it is advisable to re-think our options and to put on the table alternatives that are more likely to address real problems and open up better opportunities for a free people . . . the Mandarins are inherently enemies of freedom. KF kairosfocus
PPPS: They continue:
A fourth reason for underassessing the power of technical change to raise living standards is that contemporaries pay most of its costs—in terms of such things as lost jobs, lost values of physical and human capital, and environmental effects associated with the teething troubles of new products and processes—while the benefits of the technologies in use (including other technologies that are built on it) are enjoyed by some in the present and all in future generations. We still benefit, for example, from the wheel, much of Greek mechanics, and the dynamo. This temporal asymmetry in costs and benefits tends to skew assessments. Everyone benefits from past technological change and few would want to undo advances that have been made in the past. But not everyone gains from current changes. Some—for example, a fifty-eight-year-old man who loses his job and the full value of the large investment in his now obsolete human capital—might have been better off if technological change had stopped just before it impinged so unfavourably on him. Indeed, it is possible that a self-interested contemporary electorate would vote to prevent some proposed new technological advance because the losers outnumbered the gainers, while the same technology would win overwhelming support in a vote taken fifty years hence because most of the losers would then be dead while the gains persisted . . . economic growth and increases in a sense of well-being are not necessarily perfectly correlated.
PPS: Note also from Lipsey et al:
A third reason why the power of growth is often underassessed is because the growth of 1 or 1.5 per cent per annum changes per capita GDP so slowly that people barely notice its variations from year to year and hence do not regard variations in growth rates (over their normal range) as a big force in their lives. But anyone who was taken back 50 or 100 years [--> 170% to 340% increment in per capita income across 100 years, where of course we must further note that at say 1.5% growth per year of population, there would also be 4.4 times the population now being supported at the higher level, i.e. per capita growth is lower than gross growth in GDP once population is rising] would see the enormous power of such growth to alter living standards and to reduce the blight of poverty.
LM, 2:
The reason that we don’t have hundreds of small Thorium reactors buried all around the country is because of the Nuclear Regulatory Commission and its attendant bureaucratic inertia. No administrator ever got ahead by advancing radical ideas. Not safe for either job or pension. They always play it cautious and insist on study after study, test after test before even a modicum of progress can be made. We need another Manhattan project directed at getting us a good safe design and getting them widely implemented.
The over regulated, ideologically driven state with its media publicists certainly can and will try to kill outside innovations. They want centralised power under the unaccountable mandarins whose public images will be ever so carefully manicured to present themselves in the best light. However, their power can be broken, and that is what we are seeing in the two hinterland revolts by ballot box pointed out in the OP. In the case of these technologies, pebble bed reactors extend the high temperature gas reactor technology pioneered in Germany in the 1960's. A demonstration molten salt reactor, likewise, was operated in the same decade. By the early 1970's a serious design for high power ion drive rockets was on the table, projected to be able to take a mission to the Moon in 39 days. Other countries have been working on reactor designs in recent years, including not only China but even South Africa. There is no reason why a consortium of universities, research institutes, corporations and DARPA or the like could not move the ball forward. Forty or fifty years were already effectively tossed away. Where, I believe Yucca Flats was identified as a suitable geologically stable repository. Use the rail network to transfer what would need to go to such. Many wastes can be managed on site. And sea ports can be used to move relevant materials. The point of the OP -- in part -- is to say, we have alternatives to a Malthus-Ehrlich panic and handing unlimited power into the hands of inevitably corrupt and abusive mandarins. I think Richard Lipsey et al, in their fairly recent exploration of technology-led growth, have some sobering words for us:
Although many of the alleged harmful effects of technological change have substance, many others are based on misinformation and misunderstanding. As outlined above, technological change is responsible for all the new products, processes, and forms of organization that have raised material living standards 10-fold over the last century. Also, the number of new jobs created by all previous technological changes has far exceeded the number of old jobs destroyed. So, in spite of recurring worries, technological change has not so far been a net destroyer of jobs. Despite the valid points they make about the many harmful side effects of technological change, few of even the most vociferous critics of the effects of modern technology would be willing to go back to the technologies of 1900, foregoing all twentieth-century products and processes. Because they are never faced with such a stark choice, many of the critics of technological change underassess the power of the growth that it drives. This leads to many misguided policy views, including the belief that past technological change has been harmful on balance and that further growth is undesirable. We mention four of the many reasons for this underassessment of the benefits of growth. For more than a century most economists paid little attention to the importance of technological change. In spite of Schumpeter’s strong criticisms (1943) of the excessive emphasis that economists gave to static efficiency and their relative neglect of the economics of technological change, the profession continued largely to ignore his criticism. Although today there is more interest in economic growth and technological change than there was fifty years ago, the typical introductory economics course still spends far more time on the static theory of market allocation than on economic growth.
Furthermore, if students do take a course on growth, it typically starts and ends with mathematical growth models in which technology is hidden in the black box of the neoclassical aggregate production function. As a result, students all too often learn almost nothing about technology and technological change when learning about ‘growth’. They can also come away with some serious misconceptions of what has really happened in the history of long-term growth. One common misconception is that the upheavals that have beset the world over the last few decades of the twentieth century, and that are associated with new ICTs are unique. In fact, large economic, social, and political upheavals due to new technologies have occurred episodically ever since humans first abandoned their nomadic hunter-gatherer existence 10,000 or so years ago. ‘New economies’ are not new to human experience and the changes wrought by the current new economy are in many ways repeats of those wrought by previous ‘new economies’. Among other things, our study of technological shocks provides material that may help to guard against some common but mistaken beliefs about the actual record of growth and innovation—some of which are crude misconceptions, while others are quite subtle. A second reason why many dismiss the importance of technological change is that the majority of young people, naturally enough, take for granted the massive alterations that technology has wrought. It is hard for the youth of any recent generation to imagine the world in which their parents and grandparents grew up, let alone the world of their more distant forebears a century or two ago . . . [Economic Transformations, OUP, 2005, pp. 7 - 8.]
Hopefully, such can help us rebalance our thoughts. Not to overlook, highlighting the strategic significance of sci-tech in breaking through resource bottlenecks and Malthus-Ehrlich media-fanned panics and the machinations of those who hope to be our new mandarins. KF PS: It is Schumpeter, of course, who highlighted the significance of Kondratiev's point regarding long economic growth-waves. As these are generally sigmoid, that points to adoption waves, underscoring the role of breakthrough technologies. In my opinion the 2008-9 great recession and aftermath with lingering effects marks the end of the ICT 1 wave, with 2 now struggling to be born. AI is an obvious key part, but an industrial-energy transformation that can work is also pivotal. Hence why I have started with metal winning and energy from a source with centuries of potential, also highlighting the Malthus-Ehrlich panic and how such can serve the new mandarins. Where, a manufactured version of a Hegel-Marx thesis-antithesis "crisis" is now almost a predictable trope. kairosfocus
F/N: While I am at it, a note on high N steels:
high nitrogen steels were developed to take advantage of what are now well known improved characteristics of the material including high strength and a much improved corrosion resistance. In this article it’s possible to see some of the latest developments led by CMRDI utilizing both open air and controlled atmosphere technologies for the production of several grades of advanced high nitrogen steels. The significant application of nitrogen as an alloying element commenced in the 80s of the past century. The steels produced then, which contained 0.5–1.0% of nitrogen, were called High-Nitrogen Steels (HNS) or nitrogen “hyperequilibrium” steels. At such its contents, nitrogen imparts unique properties to the steel; for example, stainless high-nitrogen steels are characterized by high strength and high corrosion resistance at the same time, and therefore the high-nitrogen steels have initiated a new branch in physical metallurgy. In the case of 9-12 % chromium steels higher nitrogen content supports the precipitation of particles of vanadium nitride, VN, which leads to increase the creep resistance of these steels with increasing nitrogen content. In the case of two-phase, austenitic-ferritic steel, nitrogen affects corrosion resistance of these steels, mechanical properties and has significant influence on the phase composition, i.e. the ratio of austenite and ferrite. Most often the ferrite content in these steels is between 40 and 60 percent. An increase of nitrogen in the duplex microstructure has several significant effects on the phase diagram. Nitrogen additions have a strong stabilization effect on austenite when considering the high temperature ferritic transformation. Considering the fact that nitrogen is element, which is able to stabilized austenite, it can be used as an inexpensive substitute for other more expensive elements. 
Nitrogen-alloyed steel grades can be used in various fields such as:

Transportation (cables, blades of reactors [--> turbines?], landing parts of aircraft, wheels for trains, car bodies, double shells for fuel tankers, etc.)
Environment technologies (safety in oil pipelines, petrol prospection, etc.)
Industrial plants and equipment (mechanical industry, car industry, nuclear reactors, control devices, cutting machines, paper industry, etc.)
Civil engineering
Leisure and sport industry (high demand for extreme mechanical resistance and lightness)
Defense and space industry

Nitrogen steel can be produced in open air (electric arc and induction furnace, electroslag remelting), under nitrogen pressure (pressurized induction furnace, pressure electroslag remelting (PESR)), by powder metallurgy and by surface alloying methods.
Part of the tech transformation focus. KF PS: Simple review on steel microstructure: https://www.thefabricator.com/thefabricator/article/metalsmaterials/a-review-of-steel-microstructures
pure iron is extremely soft and is not used in structural applications. However, iron with up to 2 percent carbon is known as steel, making it the most widely used engineered material in the world. Microscopically, pure iron can be thought of as a 3-D lattice of stacked billiard balls. For most low-carbon steels, more than 99 percent of the microstructure is still iron, with all other elements combining to form typically less than 1 percent of the overall composition. No matter how well the billiard balls are packed, some gaps will always be found in between. These small gaps are known as interstices. The smallest elements like carbon and nitrogen can fit in these gaps. Larger atoms like manganese, magnesium, silicon, and phosphorus substitute for iron in the lattice (see Figure 1). When a very small fraction of the interstices in between the iron lattice is occupied by carbon atoms, this interstitial-free (IF) steel is said to have a microstructure of ferrite. Ferrite has a body-centered cubic (BCC) crystal structure (see Figure 2a). Ferrite is a microstructural phase that is soft, ductile, and similar to pure iron. There is a limit on how much carbon can fit in the gaps in the ferrite structure: 0.02 percent carbon at 1,340 degrees F (725 degrees C), but dropping to 0.006 percent (60 PPM) carbon at room temperature. The gaps are a little larger in a phase known as austenite, which has a face-centered cubic (FCC) crystal structure (see Figure 2b). At around 2,100 degrees F (1,150 degrees C), up to 2 percent carbon can fit into the austenite microstructure. As the steel slowly cools from this temperature and carbon is forced out of solution, the austenite transforms into a combination of ferrite and another phase called cementite, also known as iron carbide, which has the chemical composition of Fe3C. The amount of cementite that forms is a function of how much carbon is in the steel. 
Because ferrite cannot contain more than about 60 PPM carbon at room temperature, the rest of the carbon winds up as cementite. Unlike ferrite, cementite has the characteristics of a ceramic: very hard and brittle, with low toughness and little resistance to crack initiation and propagation. The mixture of ferrite and cementite is called pearlite, named because it looks like mother of pearl under a microscope, with alternating layers of ferrite and cementite.

Martensite Enters the Picture

With faster cooling, different dynamics occur. Above a critical cooling rate (typically faster than 86 degrees F per second, but dependent on the alloy), the excess carbon of the FCC austenite does not have time to diffuse out of the crystal structure and form cementite. Instead, the carbon is trapped in with the now nearly pure iron and forced into the interstitial locations that are not large enough to accommodate the carbon atoms. This distorts and strains the crystal matrix into a body-centered tetragonal (BCT) structure (see Figure 2c), forming a hard phase called martensite. At higher carbon levels, more carbon is frozen into the BCT structure, further straining the crystal matrix. This is why the hardness of martensite increases with carbon level. The volume of the BCT martensite structure is larger than that of the FCC austenite, so the freshly transformed martensite is compressed by the surrounding matrix. If martensite is heated, carbon has the opportunity to diffuse out from the BCT structure, reducing the distortion of the crystal matrix, leading to decreased hardness and increased toughness. This heat treatment produces a microstructure of ferrite and iron carbide (Fe3C) called tempered martensite. The highly strained martensitic matrix results in an increased amount of Fe3C nucleation sites in tempered martensite, which leads to a more dispersed distribution of Fe3C than seen in the lamellar (layered) structure of pearlite.
The volume of the BCC ferrite is smaller than that of the BCT martensite, so when martensite is tempered, some of the residual compressive stresses from the austenite-to-martensite transformation are relieved. Retained austenite is the term for austenite that does not transform to martensite during quenching. The amount of retained austenite depends on several factors, including carbon content and alloying chosen specifically to promote retention of the austenitic structure; for example, austenitic stainless steels like 304 and 316 are engineered to be fully austenitic at room temperature.

Bainite is another microstructure that can form when austenite is cooled. It typically consists of a combination of ferrite, cementite, and retained austenite. Because the cooling rate that forms bainite is slower than the rate needed to form martensite, carbon has some opportunity to diffuse out of the FCC austenite, allowing BCC ferrite to form. The remaining austenite is enriched with carbon, which leads to cementite precipitation. However, the slow cooling that produces the flat, layered, brittle structure of pearlite does not occur; the faster cooling that produces bainite gives the harder constituents of the microstructure enough energy to take on a more rounded shape.

Bainitic microstructures have the best balance of strength and ductility: the cooling rate is fast enough to increase strength, while the rounded hard constituents are not as prone to crack initiation and propagation as they would be if they were flat and elongated. This strength-toughness balance is why an increasing number of automotive wheels and suspension arms are being made from bainitic steels.
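The cooling-rate regimes described in the excerpt can be summarized in a toy sketch. The ~86 degrees F/s martensite threshold is from the text; the bainite lower bound below is an invented illustrative number, since real transformation behavior is read off alloy-specific continuous-cooling-transformation (CCT) diagrams.

```python
# Toy mapping from cooling rate to transformation product, using the
# ~86 degrees F/s critical rate quoted in the text. The bainite threshold
# is an assumption for illustration only; real steels follow
# alloy-specific CCT diagrams.
MARTENSITE_CRITICAL_F_PER_S = 86.0
BAINITE_THRESHOLD_F_PER_S = 10.0  # hypothetical illustrative value

def transformation_product(cooling_rate_f_per_s):
    if cooling_rate_f_per_s >= MARTENSITE_CRITICAL_F_PER_S:
        return "martensite"  # carbon trapped in a strained BCT lattice
    if cooling_rate_f_per_s >= BAINITE_THRESHOLD_F_PER_S:
        return "bainite"     # partial diffusion; rounded hard constituents
    return "pearlite"        # near-equilibrium lamellar ferrite + cementite

for rate in (150.0, 30.0, 1.0):
    print(rate, "->", transformation_product(rate))
```

The point of the sketch is only that one chemistry can yield very different microstructures, and hence properties, purely as a function of how fast it is cooled.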
EG, it's not hard to understand, in a post-Ehrlich world, why we are haunted by the ghost of Malthus. The Malthusian contrast of at-best-linear growth of cropland vs exponential growth of population raises real issues, but it overlooks how technological transformation has driven huge growth in standards of living across the world. Currently, energy is a major potential breakthrough, opening up transformation of Earth and colonisation of the solar system (building on new metal-winning technologies). That points to needed baseload-carrying, load-following energy sources, especially modern nukes. Two key technologies we need to ponder are on the table, pebble bed and molten salt reactors; we already looked at the FFC Cambridge molten-salt electrolysis approach to winning key metals and alloys (including high-nitrogen, powder-alloying techniques for super steels). Note that these can also feed into high-power ion drive rockets with the potential to be maybe four times as efficient as chemical rockets. As in, 39 days to Mars. It is time for re-thinking. KF kairosfocus
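The "four times as efficient" claim can be given a feel via the Tsiolkovsky rocket equation. This is a hedged sketch: I read "four times as efficient" as roughly four times the specific impulse, and the numbers (450 s for a chemical stage, an illustrative 1,800 s ion value, a 6 km/s transfer delta-v) are assumptions for illustration, not figures from the comment.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass_fraction(delta_v_m_s, isp_s):
    """Tsiolkovsky rocket equation: fraction of initial mass spent as propellant."""
    return 1.0 - math.exp(-delta_v_m_s / (isp_s * G0))

DV = 6_000.0  # illustrative Mars-transfer delta-v, m/s (assumed)
chem = propellant_mass_fraction(DV, 450.0)    # typical chemical upper stage
ion = propellant_mass_fraction(DV, 1_800.0)   # assumed "4x" ion/plasma drive
print(f"chemical: {chem:.1%} propellant, ion: {ion:.1%} propellant")
```

Because the propellant requirement is exponential in delta-v over exhaust velocity, a 4x specific impulse does far better than 4x on propellant mass, which is why high-power electric drives change the mission trade-offs so sharply.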
KF, my apologies. My comment was not about what you were saying, it was just that I was having a very hard time figuring out what you were saying. Ed George
EG, you would not believe just who it made a lot of sense to. I suggest further reading here https://web.archive.org/web/20090422201700/http://www.ne.doe.gov/genIV/documents/gen_iv_roadmap.pdf KF kairosfocus
KF@5, I’m sure this makes sense to someone, but my tinfoil hat must be slipping. Can you provide an executive summary? Ed George
LM, this is about a critical enabling foundational element for a viable future: energy. DV, I will go on to other elements soon. We have to exorcise the ghost of Malthus, for a host of reasons, if our civilisation is to go forward in a healthy manner. KF PS: I am afraid the sort of deep polarisation we have seen will proceed apace. PPS: I suspect concrete and/or glass encapsulation and burial in tunnels in stable geological areas will be effective. I would clearly mark the zone and post notice, using skull-and-crossbones symbols and explanations in stone; bronze might be stolen. The real overall volume of true wastes in the end will be manageable. BTW, radioactivity liberated to the general environment through burning coal etc. is probably significantly higher than the cumulative impact of nuke plants. As I noted elsewhere, our brains and nerves are appreciably radioactive, as are bananas, due to the K-40 isotope. kairosfocus
KF: Yeah, aware of that. Proof of concept. My guess is that fabrication and design could probably be much improved. Certainly controllers are much better now. Build them underground, safe from terrorists and out of sight of the anti-nuke nutters. Put a park on top...Hyman Rickover Park...heh! Latemarch
LM, more later, but they actually built and ran a molten salt reactor in the 60's. KF kairosfocus
Thanks for the posting. The reason we don't have hundreds of small thorium reactors buried all around the country is the Nuclear Regulatory Commission and its attendant bureaucratic inertia. No administrator ever got ahead by advancing radical ideas; it's not safe for either job or pension. They always play it cautious and insist on study after study, test after test, before even a modicum of progress can be made. We need another Manhattan Project directed at getting us a good safe design and getting such reactors widely implemented.

Once you have cheap and plentiful electricity, it becomes practical to use that electricity to boost your way out of the gravity well. Laser Propulsion (Thermal Rocket) A few dollars of electricity is sufficient to move the mass of a man to low earth orbit.

Do I remember correctly? Can't some of our accumulated high-level nuclear waste be inserted into the fuel stream and burned? Two birds with one stone and all that. A lot better than burying it. Speaking of which, why aren't we glassing the high-level waste that we can't use and burying it under the silt at the bottom of the ocean in an active subduction zone? Such a project would provide technology that could lead to further exploration and use of the ocean. A frontier, poorly explored, right here next to us at the bottom of the gravity well.

Now that I have that faculty-lounge rant off my chest... None of this will happen. As you point out in the OP, most of the world is now either at or below replacement levels. Empty Planet. New estimates show the planet peaking at about 9 billion and then going into decline. Decline brings its own set of problems, because you start to lose the people necessary for the division of labor, as well as the institutional memory, to keep an industrial civilization alive.
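The electricity-to-orbit claim can be sanity-checked with a back-of-envelope floor: the ideal (lossless) mechanical energy of a kilogram in low Earth orbit. The orbital speed and altitude below are illustrative round numbers I am assuming, not figures from the linked talk, and real launch costs sit far above this floor because of drag, gravity losses, and conversion inefficiency.

```python
# Back-of-envelope: ideal (lossless) mechanical energy to low Earth orbit.
# All figures are illustrative assumptions.
G = 9.81              # m/s^2, surface gravity (treated as constant)
V_ORBIT = 7_800.0     # m/s, approximate LEO orbital speed
ALTITUDE = 400_000.0  # m, approximate LEO altitude

def ideal_energy_per_kg():
    """Kinetic plus potential energy per kilogram, ignoring all losses."""
    return 0.5 * V_ORBIT**2 + G * ALTITUDE

e_mj = ideal_energy_per_kg() / 1e6
e_kwh = ideal_energy_per_kg() / 3.6e6
print(f"~{e_mj:.0f} MJ/kg, ~{e_kwh:.1f} kWh/kg of payload")
```

At roughly 10 kWh/kg, grid electricity at typical retail rates would cost on the order of a dollar per kilogram of payload before any losses, which is the kind of floor laser-launch proposals aim toward.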
Combine that with Genetic Entropy, which may be a contributing cause of the decline in fertility, and you have a one-two punch that I don't think we will be able to survive....looking for the Lord's return before then. Latemarch
Moving Civilisation forward, 2: Energy transformation vs. the ghost of Malthus kairosfocus
