The idea that low-entropy matter-energy is the ultimate natural resource requires some explanation. This can be provided easily by a short exposition of the laws of thermodynamics in terms of an apt image borrowed from Georgescu-Roegen. Consider an hour glass. It is a closed system in that no sand enters the glass and none leaves. The amount of sand in the glass is constant—no sand is created or destroyed within the hour glass. This is the analog of the first law of thermodynamics: there is no creation or destruction of matter-energy. Although the quantity of sand in the hour glass is constant, its qualitative distribution is constantly changing: the bottom chamber is filling up and the top chamber becoming empty. This is the analog of the second law, that entropy (bottom-chamber sand) always increases. Sand in the top chamber (low entropy) is capable of doing work by falling, like water at the top of a waterfall. Sand in the bottom chamber (high entropy) has spent its capacity to do work. The hour glass cannot be turned upside down: waste energy cannot be recycled, except by spending more energy to power the recycle than would be reclaimed in the amount recycled. As explained above, we have two sources of the ultimate natural resource, the solar and the terrestrial, and our dependence has shifted from the former toward the latter.
Daly and Cobb, FOR THE COMMON GOOD
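[Editorial note: the hourglass analogy is simple enough to run. The sketch below (grain counts invented for illustration) shows both laws at once: the total amount of sand is conserved, while the bottom-chamber "entropy" can only grow.]

```python
# Minimal hourglass model of the two laws of thermodynamics:
# total sand is conserved (first law); bottom-chamber sand,
# the analog of entropy, only ever increases (second law).

def tick(top, bottom, grains=1):
    moved = min(grains, top)      # sand can only fall, never rise
    return top - moved, bottom + moved

top, bottom = 10, 0               # illustrative starting state
total = top + bottom
for _ in range(15):
    top, bottom = tick(top, bottom)
    assert top + bottom == total  # first law: nothing created or destroyed

print(top, bottom)  # 0 10 -- all capacity to do work has been spent
```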
STORAGE, EMERGY AND TRANSFORMITY
“The literature on evaluation of nature is extensive, much of it reporting ways of estimating market values of the storehouses and flows in environmental systems. In recent approaches to environmental evaluation (Repetto 1992), monetary measures were sought for the storages of nature. Others have used the simple physical measures of stored resources, especially energy.
“Shown in Figure 12.1a is a storage of environmentally generated resources. Energy sources from the left are indicated with the circular symbol. Energies from sources are used in energy transformation processes to produce the quantities stored in the tank. Following the second law, some of the energy is degraded in the process and is shown as ‘used energy’ leaving through the heat sink, incapable of further work. Also due to the second law the stored quantity tends to disperse, losing its concentration. It depreciates, with some of its energy passing down the depreciation pathway and out through the used energy heat sink.
“To build and maintain the storage of available resources, work requiring energy use and transformation has to be done. Work is measured by the energy that is used up, but energy of one kind cannot be regarded as equivalent to energy of another kind. For example, one joule of solar energy has a smaller ability to do work than one joule of energy contained in coal, since the coal energy is more concentrated than the solar energy. A relationship between solar and coal energy could be calculated by determining the number of joules of solar energy required to produce one joule of coal energy. The different kinds of energy on earth are hierarchically organized with many joules of energy of one kind required to generate one joule of another type. To evaluate all flows and storages on a common basis, we use solar emergy (Odum 1986; Scienceman 1987) defined as follows:
“Solar emergy is the solar energy availability used up directly and indirectly to make a service or product. Its unit is the solar emjoule.
“Although energy is conserved according to the first law, according to the second law, the ability of energy to do work is used up and cannot be reused. By definition, solar emergy is only conserved along a pathway of transformations until the ability to do work of the final energy remaining from its sources is used up (usually in interactive feedbacks).
“Solar transformity is defined as follows:
“Solar transformity is the solar emergy required to make one joule of a service or product. Its unit is solar emjoules per joule.” [pp. 201-203]
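[Editorial note: Odum's two definitions reduce to a single multiplication, emergy = energy × transformity. The sketch below uses an assumed coal transformity of roughly 40,000 sej/J purely for illustration; it is not a figure stated in the excerpt.]

```python
# Emergy bookkeeping sketch. The coal transformity used here
# (~40,000 sej/J) is an illustrative assumption, not a value from the text.

def emergy(energy_joules, transformity_sej_per_j):
    """Solar emergy (sej) = energy (J) x solar transformity (sej/J)."""
    return energy_joules * transformity_sej_per_j

SOLAR_TRANSFORMITY = 1.0   # by definition: 1 sej per joule of sunlight
COAL_TRANSFORMITY = 4.0e4  # assumed: ~40,000 sej used up per joule of coal

# One joule of coal "carries" far more solar emergy than one joule of sunlight:
print(emergy(1.0, COAL_TRANSFORMITY))   # 40000.0 sej
print(emergy(1.0, SOLAR_TRANSFORMITY))  # 1.0 sej
```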
Table 12.1. EMERGY OF SOME GLOBAL STORAGES (NATURAL CAPITAL)*

| Storage | Replacement time (yr, order of magnitude) | Solar emergy (sej) | Value, 1992 Em$** |
| World infrastructure*** | 100 | 9.44 E26 | 6.3 E14 |
| Freshwaters | 200 | 1.89 E27 | 1.26 E15 |
| Terrestrial ecosystems | 500 | 4.7 E27 | 3.1 E15 |
| Cultural & technol. information | 1 E4 | 9.44 E28 | 6.3 E16 |
| Atmosphere | 1 E6 | 9.44 E30 | 6.3 E18 |
| Ocean | 2 E7 | 1.89 E32 | 1.25 E20 |
| Continents | 1 E9 | 9.44 E33 | 6.3 E21 |
| Genetic information of species | 3 E9 | 2.8 E34 | 1.86 E22 |

* Product of annual solar emergy flux, 9.44 E24 sej/yr, and the order-of-magnitude replacement times in column 1.
**See notes in A.
***Highways, bridges, pipelines, etc. [p. 209]
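[Editorial note: the starred footnote defines each emergy entry as annual flux times replacement time. The arithmetic checks out; the sej-per-dollar conversion of ~1.5 E12 sej/$ used below is inferred from the table's own ratios, not stated in the excerpt.]

```python
# Reproduce Table 12.1's arithmetic: storage emergy = annual flux x replacement time.
# The emergy/money ratio (~1.5e12 sej/$) is inferred from the table, not given.

ANNUAL_FLUX = 9.44e24      # sej/yr, global solar emergy flux (from the footnote)
SEJ_PER_DOLLAR = 1.5e12    # assumed 1992 emergy/money ratio

def storage_emergy(replacement_time_yr):
    return ANNUAL_FLUX * replacement_time_yr

def em_dollars(emergy_sej):
    return emergy_sej / SEJ_PER_DOLLAR

freshwater = storage_emergy(200)  # 1.888e27 sej, matching the table's 1.89 E27
print(f"{freshwater:.3g} sej = {em_dollars(freshwater):.3g} Em$")
```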
INVESTING IN NATURAL CAPITAL, ISBN 1-55963-316-6 published by The International Society for Ecological Economics
ENTROPY, ENVIRONMENT AND RESOURCES (Second Edition), M. Faber, H. Niemes, and G. Stephan; Springer-Verlag, 1995
Preface to the Second Edition:
This book has been used as a text in the Department of Economics at the University of Heidelberg (FRG) during the last decade and the University of Bern (Switzerland) during the last seven years. We therefore were glad when Dr. Muller of Springer-Verlag offered to publish a soft cover version of the second edition, to make the text economically more accessible to students.
In Part II we develop our natural science starting point (cf. Fig. 0.1). Since the notion of entropy is very difficult to understand and at the same time of central importance for our approach, we devote the larger part of Chap. 3 to its introduction It is well known that economics has been strongly influenced by classical mechanics for about a century. The development of thermodynamics since the beginning of the nineteenth century, however, has remained largely unnoticed by economists (cf. MIROWSKI 1984). For this reason we have chosen to present in detail the thermodynamic relationships that are of importance for us. We hope that in this way we can highlight the difference between classical mechanics and thermodynamics. Thermodynamic processes are irreversible and thus process-dependent with respect to time; CLAUSIUS noticed this temporal aspect and introduced the notion of entropy, which stems from the Greek verb “turn over” (turn back, change). It can be argued that it was from classical mechanics that economists derived the attitude that economic processes are fully controllable once they have been fully described. Thus, in many models of growth theory the initial conditions and the growth rate suffice for a determination of the values of all variables at all times. The study of thermodynamic processes, however, shows that there are also uncontrollable variables in addition to controllable ones. Economists, of course, have noticed this, too. The following remark by LEONTIEF (1953 :14), however, still applies to many economic analyses even today:
“In principle at least, it has long been recognized that the ultimate determinants of the structural relationships which govern the operation of the economic system are to be sought outside the narrowly conceived domain of economic science. Notwithstanding their often expressed desire to cooperate with the adjoining disciplines economists have more often than not developed their own brand of psychology, their special versions of sociology, and their particular ‘laws’ of technology.”
It remains to the critics to decide how far this is also true for our Part II. Here, we only wish to mention that Chap. 3 was written for economists and may – except for Sects. 3.5, 3.8 and 3.9 – be skipped by readers with a natural science background.
In Chap. 4 we use the notion of entropy to establish a relationship between economic activities and the environment. We shall interpret the separation process in the extraction of resources as a reversed diffusion process. Thereafter we shall derive relationships between resource quantities, resource concentration, entropy change, energy, and factor inputs. We shall use these in order to show how changes in the environment influence the economic production process. We shall establish the relationship between the economic system and the environment as a supplier of resources by way of the resource concentration. We thus directly utilize a variable of nature. With our entropy approach we extend the resource problem beyond the quantitative problem, by the inclusion of aspects of the distribution of resources within the environmental sector and the specific conditions within resource deposit sites.
These two aspects are explicitly taken into consideration in Part III, which deals with “The Use of Scarce Resources with Decreasing Resource Concentration”. In Chap. 5 we integrate the resource problem into our capital-theoretic approach, using the same model structure as in Chap. 2. The common basic model is, however, extended by a resource sector. The waste treatment problem, on the other hand, remains temporarily outside of the analysis. We shall, however, be taking into consideration changes of resource quantities and concentration within the environmental sector. In Chap. 6 – similarly to Chap. 2 – we investigate the properties of our model by analyzing the effects of a rearrangement of production on the temporal distribution of the supply of consumption goods. In doing so, we are also interested in the replacement of techniques as a function of resource availability. We then derive optimality conditions for the temporal use of the environment as a supplier of resources. With the help of the variable ‘resource concentration’ we are able to show how the long-run increase of resource extraction costs can be explained as the result of technological and ecological conditions.
In Part IV we analyze interdependencies between environmental protection and resource use. For this purpose we join the environmental model of Chap. 2 with the resource model of Part III in a five sector model. With the examples of the recovery of resources from waste materials (recycling), and the controlled deposition of waste materials in the environmental sector, we show how our approach can be used to simultaneously investigate both environmental protection measures and resource use. [pp. 6-8]
From the back cover:
In this book the authors analyze environmental protection and resource use in a comprehensive framework where not only economic but also natural scientific aspects are taken into consideration. With this interdisciplinary procedure an attempt is made to incorporate the irreversibility of economic processes. The special features of the book are (i) that the authors utilize a natural scientific variable, entropy, to characterize the economic system and the environment, (ii) that environmental protection and resource use are analyzed in combination, and (iii) that a replacement of techniques over time is analyzed. A novel aspect is that resource extraction is interpreted as a reversed diffusion process. Thus a relationship between entropy, energy and resource concentration is established. The authors investigate the use of the environment both as a supplier of resources and as a recipient of pollutants with the help of thermodynamic relationships. The book therefore provides a new set of tools for environmentalists and economists.
ENERGY AND THE ECOLOGICAL ECONOMICS OF SUSTAINABILITY, John Peet
“The fifth statement of the second law of thermodynamics is not so obvious as the previous ones, but it brings in some of the points just discussed: In spontaneous processes, concentrations tend to disperse, structure tends to disappear, and order becomes disorder.
“In thermodynamics, there is a concept called entropy that is a measure of the amount of energy no longer capable of conversion into work after a transformation process has taken place. It is thus a measure of the unavailability of energy. Entropy can also be shown to be a measure of the level of disorder of a system. Thus, the pile of broken pieces of china on the floor has a greater entropy than would the same plate, unbroken, on the floor, and the plate on the floor has a greater entropy than did the same plate on the table. If the second law is restated in yet another way, still equivalent to the others, in order to bring in the concept of entropy, it becomes this: All physical processes proceed in such a way that the entropy of the universe increases.” [p. 43]
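[Editorial note: Peet's broken-plate picture has a standard statistical reading, Boltzmann's S = k ln W, where W counts the microscopic arrangements compatible with what we observe. The sketch below uses invented microstate counts for the plate; only the formula and the constant are standard.]

```python
import math

# Boltzmann entropy S = k ln W: more compatible arrangements (W) means
# higher entropy. The "plate" microstate counts are purely illustrative.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(microstates):
    return K_B * math.log(microstates)

# An intact plate is compatible with essentially one arrangement of its
# material; a shattered plate is compatible with astronomically many.
s_intact = boltzmann_entropy(1)      # single arrangement: S = 0
s_broken = boltzmann_entropy(1e30)   # vast number of arrangements: S > 0
assert s_broken > s_intact           # disorder means higher entropy
```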
INFORMATION AND ENTROPY
by Alan McGowan
Adaptive information (e.g. genetic information) does represent stocks of high biological order* bought with energy degradation (photosynthesis driving natural selection) over long periods of time.
In fact, in H.T. Odum’s emergy theory of value**, genetic information turns out to have far and away the largest emergy, orders of magnitude more than human economic infrastructure. [Emergy is the amount of photosynthetic energy (NPP) that went into producing a biological or cultural structure.]
But the people who urge that information can provide a free lunch (technological Cargoists) are not talking about adaptive information, which results from eons of selection within the biological systems hierarchy (coevolution)—or, at the very least, from expensive cultural selection processes such as scientific discovery. They are talking about the fact that the cultural noise level is rising—e.g. we have more TV shows and computer games than we used to. They assume that this mostly junk information—which is growing exponentially as the population and economy do—is “just as good” as (or even better than!) adaptive information made by evolution—so they think this information somehow promotes our survival. In other words, they have mistaken the growth of mere cultural variation for the evolutionary adaptation that is brought about by selection acting on variation. But just as lower fitness gene variants are not weeded out when a population is released from selection and expands exponentially, likewise junk cultural information is tolerated better under growth conditions: ecologically limited cultures can’t waste excessive human information bandwidth on nonadaptive or maladaptive cultural information.
(*) Note that a crystal has very low entropy/high order, but it also has low dynamical freedom. Living systems have vast amounts of dynamical freedom of their internal chemical states, but keep within the much smaller subset of phase space where biological integrity is maintained. Limits to maintenance of integrity are the manifestations of the second law in biology. There are three sorts of limits to ecosystem integrity:
1) Functional/physiological constraints and tolerances. These include the most basic energetic constraints within life, e.g. the efficiency of photosynthesis, glycolysis, DNA replication and repair rates, the rates of mitosis and meiosis, of cellular detoxification systems, etc. These are very ancient bits of frozen historicity, largely established long before the Cambrian during the evolution of single-celled life. They ultimately set the rates of energy flow and nutrient recycling through ecosystems. They set the objective time of life at its most basic, biochemical level.
2) Evolutionary constraints. I.e. has evolution produced species with the ability to exploit or tolerate a particular set of conditions (and thereby provide a nutrient flow)?
3) Historical constraints. I.e. has the historical process (e.g. sweepstakes migrations, local extinctions) that produced the mix of species at a site actually provided the site with species able to exploit a particular set of conditions?
Actually, these are all “historical constraints”—just from very different depths of history. The functional constraints come in some cases from as far back as the beginning of life. The evolutionary constraints are the adaptive limits of the biota on the current page of geological time, in the given geographical region. And the historical constraints are those of the assembly of the ecosystem at the site—what species actually wound up there. These constraints limit function and resilience at the site.
(**) See Odum, H.T. 1994. The emergy of natural capital. In: A. M. Jansson, M. Hammer, C. Folke, and R. Costanza (eds), Investing in Natural Capital: The Ecological Economics Approach to Sustainability. Island Press.
THE ENTROPY CONCEPT IN BIOLOGY
by Alan McGowen
In ecology: Ecosystem function (nutrient cycling) arises from the physiologies of organisms. The functions are a network of nutrient pathways (a food web), the topology of which is determined by ecological strategies of the species, and is an essentially fixed evolutionary feature on the time scale of ecosystem processes. The rate of flow at a point in the network is determined by population growth of the species there, which is constrained by nutrient supply and physiological rates (also essentially fixed by evolution).
Thus, chemical thermodynamics applies, via physiology, to the energetics of ecosystem function. The entropy concept is imported from chemistry, which in turn derives it from physical (statistical) thermodynamics.
In evolution: Here the problem is to explain the energetics of the production of genetic information. Hierarchical Information Theory (HIT) which is based on Shannon entropy is the tool, and the microcanonical ensemble is defined on a space of genetic variation. It is well known that *formal* analogies exist between HIT and statistical thermodynamics. Whether these formal analogies reflect shared abstractions requires further evidence. Brooks and Wiley, 1988, argue that in the case of biological information, the shared abstraction analogies are strong enough to accept HIT entropy as physical entropy.
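[Editorial note: the Shannon entropy that HIT builds on is a short computation. The allele-frequency numbers below are invented for illustration; only the formula H = −Σ p log₂ p is standard.]

```python
import math

# Shannon entropy of a distribution of genetic variants, the quantity
# Hierarchical Information Theory builds on. Frequencies are illustrative.

def shannon_entropy(probs):
    """H = -sum(p * log2 p), in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A population fixed on one variant carries no variation (H = 0);
# an even mix of four variants carries 2 bits per locus.
print(shannon_entropy([1.0]))                     # 0.0
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```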
The entropy concept in ecological economics: Ecological economics is not concerned with the time scales of macroevolution. But it should be concerned with those of microevolution, since microevolutionary processes maintain the adaptive potential/resilience of natural capital over time horizons we can reasonably hope will support human societies and economies. [Though not necessarily the same societies and economies throughout the whole interval.]
Thus for ecological economics, the ecological (chemical) entropy concept needs to be supplemented with HIT entropy enough to account for the maintenance of evolutionary potential, including speciation potential, but not enough to account for the production of whole biotas over geological time. Additionally, suitable economic and cultural forms of “entropy” presumably need to be added to the picture—but the biological systems within which economies function (or fail to be functional) fall in the arena of evolutionary ecology, and this is the domain of the entropy concept into which an economic entropy concept should be fitted.
Brooks, D. R. and Wiley, E. O. 1988. Evolution as Entropy: Toward a Unified Theory of Biology. University of Chicago.
Weber, B. H., DePew, D. J. and Smith, J. D, eds. 1988. Entropy, Information and Evolution: New Perspectives on Physical and Biological Evolution. MIT Press.
Wicken, J. S. 1987. Evolution, Thermodynamics and Information: Extending the Darwinian Program. Oxford University Press.
MICROSOFT ENCARTA ENCYCLOPEDIA: Second Law of Thermodynamics
“The second law of thermodynamics gives a precise definition of a property called entropy. Entropy can be thought of as a measure of how close a system is to equilibrium; it can also be thought of as a measure of the disorder in the system. The law states that the entropy—that is, the disorder—of an isolated system can never decrease. Thus, when an isolated system achieves a configuration of maximum entropy, it can no longer undergo change: It has reached equilibrium. Nature, then, seems to ‘prefer’ disorder or chaos. It can be shown that the second law stipulates that, in the absence of work, heat cannot be transferred from a region at a lower temperature to one at a higher temperature.
“The second law poses an additional condition on thermodynamic processes. It is not enough to conserve energy and thus obey the first law. A machine that would deliver work while violating the second law is called a ‘perpetual-motion machine of the second kind,’ since, for example, energy could then be continually drawn from a cold environment to do work in a hot environment at no cost. The second law of thermodynamics is sometimes given as a statement that precludes perpetual-motion machines of the second kind.”
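[Editorial note: the "no work from a single cold reservoir" statement has a quantitative counterpart in the Carnot limit, which caps the fraction of heat any engine can turn into work. The temperatures below are illustrative.]

```python
# Carnot efficiency: the second-law ceiling on converting heat to work.
# An engine working off a single-temperature environment (t_hot == t_cold)
# has efficiency 0 -- a "perpetual-motion machine of the second kind"
# is precisely the claim of doing better than this bound.

def carnot_efficiency(t_hot_k, t_cold_k):
    if t_hot_k <= 0 or t_cold_k <= 0:
        raise ValueError("temperatures must be absolute (kelvin, > 0)")
    return 1.0 - t_cold_k / t_hot_k

print(carnot_efficiency(600.0, 300.0))  # 0.5: at most half the heat becomes work
print(carnot_efficiency(300.0, 300.0))  # 0.0: no temperature difference, no work
```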
IS CAPITALISM SUSTAINABLE?, O’Connor (ed.); The Guilford Press, 1994
Energy and the Forces of Production
“Bioeconomists view economic processes from the point of view of the principles of thermodynamics, insisting that these principles apply both to natural systems and to systems rearranged or transformed by man. The second law of thermodynamics highlights a key aspect of all productive processes: economic activity, intended to satisfy human needs, runs against the general tendency of the universe to move toward a state of greater disorder, of higher entropy. Human labor runs against this tendency toward increasing disorder of the physical world. It sets into motion the energy sleeping within nature, converts ‘wild’ energy into ‘domesticated,’ useful energy. But to make this useful energy available, a certain amount of human energy must be expended, either in the form of energy stored in machines or in the form of living human labor.
“By definition, the overall increase of entropy associated with any process of production is always greater than the local decrease of entropy corresponding to this process. Economic activity therefore does not escape the laws of physics: the organizational status of the economy only increases insofar as that of the universe as a whole decreases. As Nicholas Georgescu-Roegen observes, ‘In the perspective of entropy, every action of a human or of an organism, and even every process of nature, can lead only to a deficit for the overall system.’ ‘Not only,’ he continues, ‘does the entropy of the environment increase with every liter of gasoline in the tank of your car, but a substantial part of the free energy contained in the gasoline, instead of driving your car, will be reflected in a further increase of entropy…. When we produce a copper sheet from copper ore, we reduce the disorder (entropy) of the ore, but only at the price of a further increase of entropy in the rest of the universe.’ Living beings too are subject to this law. Every living organism, including human beings, strives to maintain its own entropy at a constant level by drawing low-entropy energy from its environment, particularly in the form of food. According to Georgescu-Roegen, ‘In terms of entropy, the cost of any economic or biological undertaking is always greater than the product, in such a way that activities are necessarily reflected in thermodynamic deficit.’
“Labor is not, all on its own, the primary self-renewing power conceived by Marxist theory. Its reproduction depends totally on a continuous input of low-entropy energy. This energy derives from the sun directly (rays, heat) or indirectly (wind, hydraulics), from solar radiation stored in fossil fuels (oil, coal, gas), and, in small part, from geothermal flows and nuclear energy. Energy cannot be created by labor or machines: it is always drawn from the environment. Even this extraction is governed by certain constraints. Just as labor is necessary to produce labor, energy is necessary to extract energy from the environment. And just as in a growth economy labor can produce more than what is necessary for its own reproduction, so the energy extracted from nature is generally greater than the energy expended for its extraction. The ratio of labor obtained to labor expended is a critical magnitude in economics: it is imperative that it be greater than 1. Similarly, the surplus corresponding to the difference between energy obtained and energy invested is net energy.” [pp. 42-43]
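[Editorial note: the passage's two quantities, the obtained/expended ratio and net energy, are a one-line computation each. The figures below are illustrative assumptions, not data from the text.]

```python
# Net energy and the obtained/expended ratio from the passage.
# The numbers are illustrative assumptions, not figures from the text.

def net_energy(obtained_j, invested_j):
    """Surplus: energy obtained minus energy invested in extraction."""
    return obtained_j - invested_j

def energy_return_ratio(obtained_j, invested_j):
    """Ratio of energy obtained to energy expended; must exceed 1 to be worthwhile."""
    return obtained_j / invested_j

obtained, invested = 50.0, 5.0  # assumed: 50 J extracted per 5 J spent
assert energy_return_ratio(obtained, invested) > 1
print(net_energy(obtained, invested))  # 45.0 J of surplus (net) energy
```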
VALUING THE EARTH, Daly and Townsend; MIT Press, 1993.
“Erwin Schrödinger (1945) has described life as a system in steady-state thermodynamic disequilibrium that maintains its constant distance from equilibrium (death) by feeding on low entropy from its environment—that is, by exchanging high-entropy outputs for low-entropy inputs. The same statement would hold verbatim as a physical description of our economic process. A corollary of this statement is that an organism cannot live in a medium of its own waste products.” [p. 253]
FOR THE COMMON GOOD, Daly and Cobb; Beacon Press, 1989
“Reflection on the use of energy leads immediately to the second law of thermodynamics. The law asserts that entropy is increased when work is done. The notion of entropy is often misunderstood, so it requires a brief explanation.
“The first law of thermodynamics declares that energy (or matter-energy) can neither be created nor destroyed. This seems to suggest that the use of energy will not reduce the amount of energy available to be used again. But this is not the case. The second law declares that whenever work is done, whenever energy is used, the amount of usable energy declines. The decline of usable energy is the increase of entropy (the increase of sand in the bottom chamber of the hour glass, to recall the analogy in the Introduction). For example, when a piece of coal is burned, the energy in the coal is transformed into heat and ash. This, too, is energy, and the amount of energy in the heat and ashes equals that previously in the coal. But now it is dispersed. The dispersed heat cannot be used again in the way it was originally used. Furthermore, any procedure for reconcentrating this energy would use more energy than it could regenerate. In other words, the dispersal of previously concentrated energy would increase. There is no way of reversing this process. Burning a piece of coal changes the low-entropy natural resource into high-entropy forms capable of much less work. In spite of the circular flow celebrated by economists, there is something that is irrevocably used up, namely capacity for rearrangement. The economic process (production followed by consumption) is entropic. Raw materials from nature are equal in quantity to the waste materials ultimately returned to nature. But there is a qualitative difference between the equal quantities of raw and waste material. Entropy is the physical measure of that qualitative difference. It is the quality of low entropy that makes matter-energy receptive to the imprint of human knowledge and purpose. High-entropy matter-energy displays resistance and implasticity. We cannot with any currently imaginable technology power a steamship with the heat contained in the ocean, immense though that amount of heat is. Nor can windmills be made of sand or ashes.
“When nature and its resources for human use are viewed as concentrations of usable energy instead of as passive matter, it will no longer be possible to ignore the fund-flow model of Nicholas Georgescu-Roegen, to whom we owe the path-breaking analysis of The Entropy Law and the Economic Process (1971), which we have freely drawn from.
“Georgescu-Roegen’s fund-flow model begins with the recognition that nature’s contribution is a flow of low-entropy natural resources. These raw materials are transformed by a fund of agents (laborers and capital equipment), which do not themselves become physically embodied in the product. Labor and capital funds constitute the efficient cause of wealth, and natural resources are the material cause. Labor and capital funds are ‘worn out’ and replaced over long periods of time. Resource flows are ‘used up’ or rather transformed into products over short periods of time. While there may be significant substitutability between the two funds, labor and capital, or among various resource flows, for example aluminum for copper or coal for natural gas, there is very little substitutability between funds and flows. You can build the same house with fewer carpenters and more power saws, but no amount of carpenters and power saws will allow you to reduce very much the amount of lumber and nails. Of course one can use brick rather than wood, but that is the substitution of one resource flow for another rather than the substitution of a fund for a flow. Funds and flows, efficient and material causes, are complements, not substitutes, in the process of production.
“From this commonsense perspective it is very difficult to understand the current neoclassical models of production which (a) often do not include resources at all, depicting production as a function of labor and capital only; (b) if they do include resources, assume that ‘capital is a near perfect substitute for land and other natural resources’; and (c) fail to recognize any physical balance constraint, that is, do not rule out cases where output constitutes a greater mass than the sum of the masses of all inputs (which would be a violation of the First Law of Thermodynamics). Some recognition of the last problem exists and some efforts have been made to limit substitution by a mass balance constraint on production functions. Economists are occasionally embarrassed by their infractions of the first law, but their more egregious violations of the second law have induced very little shame so far.
“Georgescu-Roegen argues that all resources, and indeed all items of value, are characterized by low entropy; but not all items characterized by low entropy have economic value. Value cannot be explained in only physical terms, but neither can it be explained purely in psychic terms of utility without reference to entropy, as neoclassical economics attempts to do. Since we neither create nor destroy matter-energy it is clear that what we live on is the qualitative difference between natural resources and waste, that is, the increase in entropy. We can do a better or worse job of sifting this low entropy through our technological sieves so as to extract more or less want satisfaction from it, but without that entropic flow from nature there is no possibility of production. Low-entropy matter-energy is a necessary but not sufficient condition for value. It is critically important, therefore, to analyze the sources of low entropy (the physical common denominator of usefulness), and their patterns of scarcity.
“As noted in the Introduction we basically have two sources of low entropy: the solar and the terrestrial. They differ significantly in their patterns of scarcity. The solar source is practically unlimited in its stock dimension, but is strictly limited in its flow rate of arrival to earth. The terrestrial source (minerals and fossil fuels) is strictly limited in its stock dimension, but can be used at a flow rate of our own choosing, within wide limits. Industrialism represents a shift away from major dependence on the stock-abundant solar source toward major dependence on the stock-scarce terrestrial source in order to take advantage of the variable (expandable) rate of flow at which we can use it. On the basis of this elementary consideration alone, it was possible for Georgescu-Roegen to predict, back in the 1960s when most economists were talking about feeding the world with petroleum, that exactly the opposite substitution would happen: we would be fueling our cars with alcohol from food crops that gather current sunshine. In Brazil this has already happened. Homo sapiens brasiliensis has entered into direct competition with Mechanistra automobilica for a place in the sun. Sugar cane for fuel is displacing rice and beans for food.
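[Editorial note: the stock/flow distinction in the paragraph above can be sketched in a few lines. Solar income is capped by its arrival rate; a terrestrial stock can be drawn down at any chosen rate until it is gone. All numbers are illustrative.]

```python
# Flow-limited vs stock-limited sources, per Daly and Cobb's distinction.
# Units and quantities are invented for illustration.

def solar_supply(years, flow_per_year):
    """Flow-limited: cumulative use is capped by the arrival rate, never by stock."""
    return years * flow_per_year

def terrestrial_supply(stock, draw_per_year, years):
    """Stock-limited: we choose the draw rate, but the stock eventually runs out."""
    return min(stock, draw_per_year * years)

print(solar_supply(100, 10))              # 1000: scales indefinitely with time
print(terrestrial_supply(500, 10, 100))   # 500: capped by the finite stock
```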
“Returning to the issue of the substitutability of capital for resources, our approach is to consider the amount of capital needed in two scenarios: a world of extensive resource depletion and high capital accumulation versus a world of conserved resources and reduced capital accumulation. It is evident that more capital is needed in a world in which renewable resources have become scarce. Food may be produced hydroponically, but this requires far more capital than producing the same amount of food in naturally fertile soil. Note that here we are speaking of substitution of humanly created capital stock for natural capital stock (soil), and not the substitution of capital for a resource flow. A carrot produced hydroponically embodies just as much matter and energy as one grown in the garden. The extra humanly created capital in hydroponics is not merely a matter of direct costs of equipment, chemicals, and water. It also involves supplying water that will be more expensive than at present. Deforestation will reduce stream flow, increase flooding, hasten the silting of dams, and speed up aquifer depletion. Capital will then be needed for flood control, new dams, diversion of distant rivers, and desalinization of ocean water.” [pp. 194-197]
** See also STEADY-STATE ECONOMICS, Daly
VISION 2020, Laszlo; Gordon and Breach, 1994
“The third possible category is that in which systems are far from thermal and chemical equilibrium. Such systems are nonlinear and pass through indeterminate phases. They do not tend toward minimum free energy and maximum specific entropy but amplify certain fluctuations and evolve toward a new dynamic regime that is radically different from stationary states at or near equilibrium.
“Prima facie the evolution of systems in the far-from-equilibrium state appears to contradict the famous Second Law of Thermodynamics. How can systems actually increase their level of complexity and organization, and become more energetic? The Second Law states that in any isolated system organization and structure tend to disappear, to be replaced by uniformity and randomness. Contemporary scientists know that evolving systems are not isolated, and thus that the Second Law does not fully describe what takes place in them—more precisely, between them and their environment. Systems in the third category are always and necessarily open systems, so that change of entropy within them is not determined uniquely by irreversible internal processes. Internal processes within them do obey the Second Law: free energy, once expended, is unavailable to perform further work. But energy available to perform further work can be “imported” by open systems from their environment: there can be a transport of free energy—or negative entropy—across the system boundaries. * When the two quantities—the free energy within the system, and the free energy transported across the system boundaries from the environment—balance and offset each other, the system is in a steady (i.e., in a stationary) state. As in a dynamic environment the two terms seldom balance each other for any extended period of time, in the real world systems are at best “metastable”: they tend to fluctuate around the states that define their steady states, rather than settle into them without further variation.
* Change in the entropy of the system is defined by the well-known Prigogine equation dS = d_iS + d_eS. Here dS is the total change of entropy in the system, while d_iS is the entropy change produced by irreversible processes within it and d_eS is the entropy transported across the system boundaries. In an isolated system dS is always positive, for it is uniquely determined by d_iS, which necessarily grows as the system performs work. However, in an open system d_eS can offset the entropy produced within the system and may even exceed it. Thus dS in an open system need not be positive: it can be zero or negative. The open system can be in a stationary state (dS = 0), or it can grow in complexity (dS < 0). The latter condition requires d_eS < –d_iS, so that d_iS + d_eS < 0; that is, the entropy produced by irreversible processes within the system is more than offset by entropy shifted into the environment. [p.p. 106-107]
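The footnote's entropy balance can be made concrete with a small sketch. The function names and numbers here are illustrative choices of ours, not Laszlo's:

```python
# A small illustration of the Prigogine entropy balance dS = d_iS + d_eS.

def entropy_change(d_i, d_e):
    """Total entropy change dS = d_iS + d_eS.

    d_i: entropy produced by irreversible internal processes (always positive).
    d_e: entropy transported across the system boundary (negative when
         free energy, i.e. negative entropy, is imported from outside).
    """
    if d_i <= 0:
        raise ValueError("d_iS is always positive while the system performs work")
    return d_i + d_e

def classify(d_i, d_e):
    dS = entropy_change(d_i, d_e)
    if dS > 0:
        return "running down"          # isolated-system behavior
    if dS == 0:
        return "stationary state"      # imports exactly offset internal production
    return "growing in complexity"     # imports exceed internal production

print(classify(1.0, 0.0))    # isolated: d_eS = 0, so dS > 0
print(classify(1.0, -1.0))   # steady state: d_eS = -d_iS
print(classify(1.0, -1.5))   # far-from-equilibrium growth: dS < 0
```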
OUR ECOLOGICAL FOOTPRINT, Wackernagel and Rees; New Society Pub., 1996
“The Second Law of Thermodynamics (the ‘entropy law’) states that the entropy of an isolated system always increases. This means that the system spontaneously runs down. All available energy is used up, all concentrations of matter are evenly dissipated, all gradients disappear. Eventually, there is no potential for further useful work—the system becomes totally degraded and ‘disordered.’ This has significant implications for sustainability:
“Non-isolated systems (such as the human body or the economy) are subject to the same forces of entropic decay as are isolated ones. This means that they must constantly import high-grade energy and material from the outside, and export degraded energy and matter to the outside, to maintain their internal order and integrity. For all practical purposes, this energy and material ‘throughput’ is unidirectional and irreversible.
“Modern formulations of the Second Law therefore argue that all highly-ordered, far-from-equilibrium, complex systems necessarily develop and grow (increase their internal order) ‘at the expense of increasing disorder at higher levels in the systems hierarchy.’ *
“The human economy is one such highly-ordered, complex, dynamic system. It is also an open sub-system of a materially closed, non-growing ecosphere, i.e., the economy is contained by the ecosphere. Thus the economy is dependent for its maintenance, growth and development on the production of low entropy energy/matter (essergy) by the ecosphere and on the waste assimilation capacity of the ecosphere.
“This means that beyond a certain point, the continuous growth of the economy (i.e., the increase in human populations and the accumulation of manufactured capital) can be purchased only at the expense of increasing disorder (entropy) in the ecosphere.
“This occurs when consumption by the economy exceeds production in nature and is manifested through the accelerating depletion of natural capital, reduced biodiversity, air/water/land pollution, atmospheric change, etc.”
* E. Schneider and J. Kay. 1992. Life as a Manifestation of the Second Law of Thermodynamics. Preprint from: Advances in Mathematics and Computers in Medicine. (Waterloo, Ont.: University of Waterloo Faculty of Environmental Studies, Working Paper Series). [p. 43]
NATURAL CAPITAL AND HUMAN ECONOMIC SURVIVAL, Thomas Prugh with Robert Costanza, John H. Cumberland, Herman Daly, Robert Goodland & Richard B. Norgaard; The International Society for Ecological Economics, 1995.
Things fall apart; the center cannot hold; Mere anarchy is loosed upon the world….
—W.B. Yeats, The Second Coming
Things fall apart because it’s the law—the second law of thermodynamics. (The first law of thermodynamics is the law of conservation, which says that matter and energy cannot be created or destroyed, only transformed. Matter is itself a form of energy, as is shown by Einstein’s famous equation, E = mc².) The second law was developed in connection with steam engines in 1824 by French physicist Sadi Carnot. Carnot realized that using energy to do work (move matter through space) depended on the machine’s temperature gradient, i.e., the difference between the hottest and coolest parts. As the work is performed, it reduces the temperature differences. Although the energy total remains constant, it becomes less available to do further work (Boulding 1981a).
More generally, using energy makes it less available. The latent chemical energy in fireplace logs is highly available until it is released by burning. Thereafter, although the amount of energy in the heat, gases and ashes is the same as the amount that was in the wood, it is scattered (unavailable). In theory, it is possible to reassemble the components and reconcentrate the energy, but doing so would take more energy than it would yield (Daly and Cobb 1989).
Another way to express the entropy law is that in an isolated system objects and subsystems tend to disintegrate over time. They break, break down, break up, rust, die, decay, wear out or generally move from a state of higher organization to one of lower organization, from order to disorder. As far as is known, this process always moves in the same direction. (Irritatingly, since entropy is a measure of the disorder in a system, a highly organized system is said to be low-entropy, while a disordered system is said to be high-entropy. Entropy increases as order decreases.) The breaking down and wearing out of a system or object can be stopped if it is an open system capable of receiving inputs of matter and energy—maintenance—from outside. Even a closed system, which allows only inputs and outputs of energy, can maintain order over time. In an isolated system (one in which there are no inputs or outputs, i.e., no throughput), disorder must increase.
Is life an exception? Doesn’t life create order out of disorder? Anyone with small children will immediately doubt this. Yet life appears to be an example of movement from a state of lower organization to one of higher organization. Human beings, for example, gradually grow out of utter helplessness to relative independence, learning to survive in the world and do things of extraordinary complexity, up to and including writing novels, symphonies and arcane mathematical tracts on things like entropy. In fact, doesn’t evolution in general, with its vast, eons-long procession of movement from creatures of one-celled simplicity to blue whales, prove that entropy can be beaten?
Yes and no. On a local scale, yes: life has indeed evolved marvels of increasing organizational complexity. But in terms of the big picture, no. Living creatures exist only by being able to “import” highly complex, low-entropy matter (i.e., to eat food), extract useful energy and materials from it, and “export” wastes of much lower complexity (higher entropy). All life on Earth recycles itself through the ecosphere in this manner, each creature using something from its surroundings (usually including other creatures) to sustain and recreate itself. Matter is not created or destroyed, only broken apart and reassembled to be used again in some other form. As physicist Erwin Schrodinger once put it, life (and evolution) can be seen as the segregation of entropy: “[T]he device by which an organism maintains itself stationary at a fairly high level of orderliness (= a fairly low level of entropy) really consists in continually sucking orderliness from its environment” (Schrodinger 1967, p. 79). Life creates pockets of order at the cost of disorder elsewhere. Evolution is pollution (Boulding 1981a,b).
Humans and other living things are thus clearly open systems. However, the biosphere and the Earth itself are closed systems. Matter is essentially constant; little comes into the system except the occasional stray chunk of comet or meteorite, and little goes out except space probes. But in terms of energy, the flow of solar radiation coming in (balanced by the flow of reradiated heat) is continuous and crucial. It is the ultimate answer to the question, How does the economy (and the world) work? Hazel Henderson (1981) tells of a paper delivered by English Nobelist Frederick Soddy in 1921, in which he used the steam locomotive as a metaphor, asking “What makes it go?”:
In one sense or another the credit for the achievement may be claimed by the so-called engine-driver, the guard, the signalman, the manager, the capitalist, or the shareholder—or, again, by the scientific pioneers who discovered the nature of fire, by the inventors who harnessed it, by Labor, which built the railway and the train. The fact remains that all of them by their united efforts could not drive the train. The real engine driver is the coal. So, in the present state of science, the answer to the question how men live, or how anything lives, or how inanimate nature lives, in the senses in which we speak of the life of a waterfall or of any other manifestation of continued liveliness, is, with few and unimportant exceptions, BY SUNSHINE (p. 225).
“Needless to say,” Henderson writes, “Soddy was considered a crank.” But he was right: the steady imports of solar energy drive the life processes of Earth. If the Earth were closed to the solar flow, which is low-entropy energy made generally available to the biosphere through photosynthesis, all life would eventually cease. Of course, the sun is not exempt from the entropy law either; it is slowly running down as it burns up its nuclear fuel and will come apart, spectacularly, in a few billion years.
What has entropy got to do with economics?
The laws of thermodynamics are relevant to the economy because economic activity is entropic. Natural resources (low-entropy matter energy) are gathered, processed to separate out the useful parts from the rest, manufactured into goods and transported to the point of sale. Wastes are produced and energy is used up (and made less available) every step of the way. The quantity of raw materials is equal to the quantity of wastes (plus the products, which eventually become wastes), but the two amounts are qualitatively different. The difference is measured in terms of entropy. Economic production is utterly dependent on the availability of low-entropy inputs (Daly and Cobb 1989).
These inputs come from two sources. As noted above, one is the sun. The other is the Earth, which yields useful minerals, plant and animal life, and fossil fuels. There are obvious differences between the nature of the solar and terrestrial inputs, but perhaps even more important is the radical difference in their availability. The solar inputs are essentially unlimited (at least on time scales relevant to human beings), but they flow to the Earth in a comparative trickle, on the order of 100 to 200 watts per square meter of the Earth’s surface. Earthly stocks of renewable resources are, in turn, limited by the availability of solar energy. Earthly stocks of non-renewable resources (especially fossil fuels, which are really deposits of solar energy laid down millions of years ago) are finite, but by means of technology we can extract them from the ground and pour them into the economy at enormous rates (Daly and Cobb 1989).
Technology has enabled the human economy to temporarily suspend its dependence on the low-flow solar source of low-entropy inputs—which is what marked the threshold between pre-industrial and industrial society. But technology cannot abolish the entropy law. [p.p. 42-45]
A SURVEY OF ECOLOGICAL ECONOMICS, Krishnan, Harris and Goodwin; Island Press, 1995.
This is a really excellent book that has an entire chapter on entropy and how it relates to economics. Here are two selections:
Summary of Recycling, Thermodynamics, and Environmental Thrift
by R Stephen Berry
[Published in Bulletin of the Atomic Scientists 28 (May 1972): 8-15. From the Bulletin of the Atomic Scientists. © 1972 by the Educational Foundation for Nuclear Science, 6042 South Kimbark, Chicago, IL 60637, USA. A one-year subscription to the Bulletin is $30.]
As environmental considerations become more important in policy decisions and planning, a compelling need has emerged for reliable and robust indices of environmental use. This is particularly true when choosing between alternative policies, which requires the identification of variables that can be quantified, that are general enough to allow comparison between quite different sorts of processes, that provide key measures or indices, and that yield true measures of the amount of use of the environment. Toward this end, the quantities derived from thermodynamics are the most obvious and natural, and they meet all of these criteria.
Thermodynamic potential is a fundamental measure of a system’s capacity to perform work. The science of thermodynamics enables us to determine the minimum expenditure of thermodynamic potential to achieve a given physical change. Since every process requires the consumption of some thermodynamic potential, we are able to compare different processes and select that which is the most thermodynamically efficient. The change in thermodynamic potential associated with a process will measure all of the energy exchanged as well as the effects upon the degree of disorder or dilution, i.e., the entropy of the system.
The two essential forms of stored potential are energy and order. There are multiple forms of energy storage, including hydroelectric facilities, fossil fuels, solar energy, and nuclear technologies. Order is used when, for example, we obtain materials from concentrated ore bodies rather than by finding them distributed evenly over the planet’s surface. Some forms of stored potential are readily accessible, while others require considerable effort and energy expenditure before they can be used. Measuring the total stored potential can be quite difficult and involves a considerable amount of guesswork. However, it is possible to measure accurately the change in potential associated with different processes, so that the thriftiest process can be identified and adopted.
This approach is different in practice from the money-based “least cost” method of optimizing production, so it is important to stress the differences between economic and thermodynamic analysis. Economic analysis is based upon perceptions of present value and scarcity as expressed in the marketplace, where the supply and demand framework is modeled on an instantaneous evaluation of the popular perception of shortages. However, “one cannot take seriously using a short-term market analysis to decide, say, in the year 2171, whether all the remaining fossil fuel should be reserved for the chemical industry.” But if economists were to determine their estimates of shortage by undertaking increasingly long-term analyses, even with discounting, their estimates would come closer and closer to those made by thermodynamicists. In a sufficiently long time frame, it becomes evident that the most important scarcity is of thermodynamic potential; thus thermodynamic analysis becomes essential.
Our system is one in which the manufacture of goods consumes materials and other resources from the environment. To calculate the real thermodynamic cost of a manufactured object, we evaluate the amount of thermodynamic potential that was extracted from the environment to produce the good and then subtract the amount of thermodynamic potential that remains stored in the object. In the unrealizable, idealized thermodynamic limit, the thermodynamic potential that resides in an object is identical to the potential extracted from the environment, the net change in potential is zero, and the process has merely transformed one form of potential into another. However, this naive ideal can never be reality; the net costs are always greater than zero, and there is always a loss of potential both in producing the good and in discarding it. This net loss from production is a true loss as it cannot be recovered.
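Berry's net-cost accounting reduces to a subtraction; a minimal sketch, with hypothetical kWh figures of our own rather than Berry's:

```python
def net_thermodynamic_cost(potential_extracted, potential_stored):
    """Real thermodynamic cost of a manufactured object: the potential
    drawn from the environment minus the potential still stored in the
    object.  Zero only in the unrealizable ideal limit; always positive
    in practice."""
    if potential_stored > potential_extracted:
        raise ValueError("an object cannot store more potential than was extracted")
    return potential_extracted - potential_stored

# Hypothetical figures, in kWh: 6000 extracted, 30 retained in the object.
print(net_thermodynamic_cost(6000.0, 30.0))  # 5970.0
```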
As an example of this thermodynamics-based approach, the thermodynamics associated with the manufacture of automobiles can be examined. Specifically, we can consider the amount of thermodynamic potential consumed in mining and manufacturing from “new” raw materials, the amount consumed in recycling processes, and the minimum requirements for an ideally efficient process. The criterion used is one of “thermodynamic thrift,” i.e., the idea that it is desirable to minimize the consumption of thermodynamic potential in achieving any particular goal. There are three policies to consider in this regard: (1) maximizing recycling, (2) extending the useful life of goods, and (3) developing more thermodynamically efficient processes for producing the goods in the first instance.
Each step of the manufacturing process involves the transformation of matter from one state to another, via transformation processes that include mining and smelting, manufacturing, normal use, recycling, junking, and natural degradation. Through numerous, complex calculations, actual figures for loss of thermodynamic potential have been calculated in units of total kilowatt-hours (kWh) per automobile. An estimate of 5000-6525 kWh per automobile emerges. The estimate of the ideal thermodynamic potential requirement for producing an automobile, on the other hand, is only about 30 kWh.
The enormous magnitude of the gap between actual and ideal thermodynamic potential costs is striking. From this it is evident that our current manufacturing and mining processes “are reflections of the historically developed means of production and transport, rather than of the thermodynamic requirements for creating the ordered structure of an operable machine.” The staggering inefficiency manifest in these figures clearly implies the existence of possibilities for vast savings in thermodynamic potential. Even modest improvements in productive processes could generate savings of thousands of kilowatt-hours per vehicle.
The potential savings from the alternative policy approaches of recycling or extending product life are smaller but significant. Recycling might save between zero and a little over 1000 kWh per vehicle at best. One limitation on these savings is that new-car manufacture still requires some new materials, mostly to maintain the strength of the vehicles, so the savings figures should be halved. Furthermore, even these savings may not be realizable with current recycling technologies. This assessment could change, however, with improved recycling technologies or an increase in the energy costs of mining and smelting.
The savings associated with an extension of the useful life of a product—for example, through enhanced precision in the manufacturing process itself, or improved maintenance procedures—are somewhat harder to quantify. It is certain, however, that the increased costs of more durable manufacture would be somewhat less than the costs associated with the manufacture of a new product. Doubling or tripling the useful life of an automobile could reduce the overall manufacturing costs by perhaps 1000 kWh, and when the reduced mining and smelting needs are factored in, the net savings increase to 2750-4500 kWh per vehicle.
These figures provide a compelling picture of the differences between these three choices: given current technologies, recycling provides small savings at best when compared to those associated with extending product life, which are in turn small compared with the possible savings from new technologies. However, while it is clear which policy would maximize thermodynamic thrift, the relative ease of adopting one policy over another must also be considered. A policy to encourage maximum recycling would require a relatively small perturbation of existing processes. The extension of useful product life, however, would be more difficult, as it requires a change in both manufacturing techniques and consumer attitudes. The basic technologies to implement the ideal system probably do not yet exist, and the costs of developing and especially of implementing them will be very large indeed. However, the potential savings from their development are so vast that the costs will be insignificant in comparison. For example, it is estimated that saving 1000 kWh per vehicle would equal the output of 8 to 10 large power generation facilities.
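The three policy options can be lined up with the per-vehicle figures quoted above. This is illustrative arithmetic only: the recycling range is halved as the summary suggests, and the “ideal technology” savings are taken as actual cost minus the roughly 30 kWh ideal requirement:

```python
# Per-vehicle savings ranges in kWh, assembled from the figures above.
actual_cost = (5000, 6525)   # current loss of potential per automobile
ideal_cost = 30              # ideal thermodynamic requirement

policies = {
    "recycling":        (0, 1000 // 2),   # halved: some new material still needed
    "extended life":    (2750, 4500),
    "ideal technology": (actual_cost[0] - ideal_cost,
                         actual_cost[1] - ideal_cost),
}

# Rank the policies by their best-case savings.
for name, (low, high) in sorted(policies.items(), key=lambda kv: kv[1][1]):
    print(f"{name:>16}: saves {low}-{high} kWh per vehicle")
```

The ranking reproduces Berry's conclusion: recycling saves least, extended life an order of magnitude more, and near-ideal technology most of all.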
It is clear from the example of automobile manufacturing that a policy of thermodynamic thrift ought to be pursued as a national goal. A three-stage course seems desirable: to encourage recycling, to develop extended life machines, and to pursue the longer term goal of developing technologies that would operate with efficiencies closer to the ideal limits. However, the policy implications of this last and most crucial goal are at odds with much current federal policy. We should include in the training of scientists and engineers a specific orientation to conducting this type of research. We should also direct public funds and effort into the development of these technologies since, like military and space technologies, the requisite scale of development is too vast for the private sector. [p.p. 194-197]
Summary of Energy and the U.S. Economy
A Biophysical Perspective by Cutler J. Cleveland, Robert Costanza, Charles A.S. Hall, and Robert Kaufmann
Originally published in Science 225 (31 August 1984): 890-897. © 1984 by the AAAS
Between the mid-1940s and the early 1970s, the U.S. economy showed generally good performance. Since 1973, however, performance indicators such as labor productivity, inflation, and growth rates have been relatively disappointing, and mainstream economic models cannot entirely explain this shift and its underlying causes. A theoretical perspective that recognizes the importance of natural resources, especially fuel energy, may help; some economic problems can be understood more clearly by explicitly accounting for the physical constraints imposed on economic production.
In this perspective, the focus is on the production process, i.e., the economic process that upgrades the organizational state of matter into lower entropy goods and services. This process involves a unidirectional, one-time throughput of low-entropy fuel that is eventually lost as waste heat. Production is a work process, and like any work process it will depend on the availability of free energy. The quality of natural resources is also important to this process, because lower quality resources will always require more work to upgrade them into final goods and services.
Based on this biophysical perspective, four hypotheses are presented and discussed below.
Energy and Economic Production
Hypothesis 1: A strong link between fuel use and economic output exists and will continue to exist.
Rather than viewing the economy as a closed system, it must be seen as an open system embedded within a larger global system that depends on solar energy. The global system produces environmental services, foodstuffs, and fossil and atomic fuels, all of which are derived from solar and radiation energies in conjunction with other important resources. Fossil and other fuels are used by the human economy to empower labor and to produce capital. Fuel, capital, and labor are then used to upgrade natural resources to produce goods and services. Production is a process using energy to add order to matter. Since fuels differ in the amount of economic work they can do per unit heat equivalent, both quantity and quality of fuel play a role in determining levels of economic production. An important quality of fuels is the amount of energy required to locate, extract, and refine the fuel to a socially useful state. This can be measured by a fuel’s Energy Return on Investment (EROI), which is the ratio of the gross fuel extracted to the economic energy required directly and indirectly to deliver the fuel in a useful form.
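The EROI ratio defined here is simple enough to sketch. The example inputs are normalized to one unit of energy invested, using the Louisiana natural gas figures quoted later in this summary:

```python
def eroi(gross_energy_delivered, energy_invested):
    """Energy Return on Investment: gross fuel energy delivered in a
    useful form, divided by the direct and indirect economic energy
    required to locate, extract, and refine it."""
    if energy_invested <= 0:
        raise ValueError("energy invested must be positive")
    return gross_energy_delivered / energy_invested

# Louisiana natural gas, per the figures cited below (inputs normalized):
print(f"1970: {eroi(100.0, 1.0):.0f}:1")  # 100:1
print(f"1981: {eroi(12.0, 1.0):.0f}:1")   # 12:1
```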
Standard economic theory views fuel and energy as just one set of inputs that is fully substitutable with other inputs, but this is incorrect. Free energy upgrades and organizes all other inputs, and it is a complement in the production process that cannot be created by combining the other factors of production. The specific amount of energy needed to produce goods and services is called the embodied energy.
If one considers the last one hundred years of the U.S. experience, fuel use and economic output are highly correlated. An important measure of fuel efficiency is the ratio of energy use to the gross national product, E/GNP. The E/GNP ratio has fallen by about 42% since 1929. We find that the improvement in energy efficiency is due principally to three factors: (1) shifts to higher quality fuels such as petroleum and primary electricity; (2) shifts in energy use between households and other sectors; and (3) higher fuel prices. Energy quality is by far the dominant factor.
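The E/GNP measure itself is just a quotient; a sketch with made-up index numbers chosen to reproduce the roughly 42% decline reported above (not actual data):

```python
def e_gnp_ratio(energy_use, gnp):
    """Energy use per unit of gross national product."""
    return energy_use / gnp

def percent_decline(old, new):
    return 100.0 * (old - new) / old

# Hypothetical index numbers (1929 = 100 for both series).
ratio_1929 = e_gnp_ratio(100.0, 100.0)   # 1.00
ratio_1984 = e_gnp_ratio(174.0, 300.0)   # 0.58: output grew faster than energy use
print(f"decline in E/GNP: {percent_decline(ratio_1929, ratio_1984):.0f}%")
```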
Labor Productivity and Technical Change
Hypothesis 2: A large component of increased labor productivity over the past 70 years has resulted from increasing the ability of human labor to do physical work by empowering workers with increasing quantities of fuel, both directly and as embodied in industrial capital equipment and technology.
Economic models generally present technological advances as a means of increasing labor and capital productivity. These effects of technological change are measured as a residual after accounting for all tangible factors; energy and natural resources are not considered tangible factors, thus leaving a large residual. From an energy perspective, however, the increases in labor productivity are actually driven by increased fuel use per worker-hour. In the pre-1973 period, when fuel prices were falling relative to the price of labor (the wage rate), labor productivity rose as fuel was substituted for labor in response to the change in relative prices. In the post-1973 period, as the price of fuel rose relative to wage rates, the data indicate declining labor productivity.
Energy and Inflation
Hypothesis 3: The rising real physical cost of obtaining energy and other resources from the environment is one important factor that causes inflation.
High inflation rates can be explained by the linkages between fuel use and money supply. If the money supply is increased, stimulating demand beyond levels that can be satisfied by existing fuel supplies, then prices will rise. This implies that when the costs of obtaining fuel are high, fiscal and monetary policies may not be successful in stimulating economic growth.
Energy Costs and Technological Change
Hypothesis 4: The energy costs of locating, extracting, and refining fuel and other resources from the environment have increased and will continue to increase despite technical improvements in the extractive sector.
It has been argued that technological innovations for mining low-quality ores can address the problems associated with the depletion of high-quality mineral deposits. Evidence of this is seen in the constant or declining amount of inputs used per unit output in the extractive sector during this century.
From a physical perspective, however, such a sanguine view of the depletion and scarcity of important natural resources is unwarranted. The extraction of lower quality ores requires the use of more energy-intensive capital and labor inputs. Over the last few decades, there has been an increase in the direct fuel input per unit of output of fuels and minerals. The present rising energy costs of fuel extraction do not bode well for future exploitation of nonrenewable resources.
The EROIs for natural gas, petroleum and coal have fallen dramatically over time in the continental United States. In Louisiana, the EROI for natural gas declined from 100:1 in 1970 to 12:1 in 1981, and a similar decline was observed in the petroleum industry. Nationally, the EROI for coal has fallen from 80:1 in the 1960s to 30:1 in 1977. Another indicator of the increasing cost of fuel extraction is the rise in the real dollar value of the mining sector share of real GNP, from 3-4% over most of this century to about 10% by 1982. Continued economic growth depends on our ability to develop sources of energy with more favorable EROIs.
Declining EROIs for fuels and increasing energy costs for nonfuel resources will have a negative impact on economic growth, productivity, inflation, and technological change. To maintain current levels of economic growth and productivity, we will need either to develop alternative fuel technologies with EROI ratios comparable to those of petroleum today or to increase the efficiency with which fuel is used to produce economic output. #1
#1. Author’s note: The empirical analyses in this article have been enriched and updated. An additional decade of information substantiates the basic conclusions of the article. The interested reader is referred to Robert K. Kaufmann, “A Biophysical Analysis of the Energy/GDP Ratio,” Ecological Economics 6 (July 1992): 35-56; and Robert K. Kaufmann, “The Relation Between Marginal Product and Price: An Analysis of Energy Markets,” Energy Economics 16 (1994): 145-48. [p.p 211-214]