
PHYSICAL SIGNIFICANCE
OF ENTROPY
OR OF THE SECOND LAW

BY

J. F. KLEIN

Professor of Mechanical Engineering,
Lehigh University

NEW YORK

D. VAN NOSTRAND COMPANY
23 MURRAY AND 27 WARREN STREETS
1910

COPYRIGHT, 1910,
BY
JOSEPH FREDERICK KLEIN

THE SCIENTIFIC PRESS
ROBERT DRUMMOND AND COMPANY
BROOKLYN, N. Y.

PREFACE

In this little book the author has in the main sought to present the interpretation reached by BOLTZMANN and by PLANCK. The writer has drawn most heavily upon PLANCK, for he is at once the clearest expositor of BOLTZMANN and an original and important contributor. Now these two investigators reach the result that the entropy of any physical state is the logarithm of the probability of the state, and this probability is identical with the number of "complexions" of the state. This number is the measure of the permutability of certain elements of the state, and in this sense entropy is the "measure of the disorder of the motions of a system of mass points." To realize more fully the ultimate nature of entropy, the writer has, in the light of these definitions, interpreted some well-known and much-discussed thermodynamic occurrences and statements. A brief outline of the general procedure followed will be found on p. 3, while a fuller synopsis is of course given in the accompanying table of contents.

J. F. Klein.

Lehigh University, October, 1910.

[Pg iii]

TABLE OF CONTENTS


    PAGE
  INTRODUCTION  
  Purpose, acknowledgments, the two methods of approach and outline of treatment  1
  PART I  
  THE DEFINITIONS, GENERAL PRELIMINARIES, DEVELOPMENT, CURRENT AND PRECISE STATEMENTS OF THE MATTERS CONSIDERED  
  SECTION A  
  (1) The "state" of a body and its "change of state"  5
  The two points of view; the microscopic and the macroscopic observer; the micro-state and macro-state or aggregate  5
  The selected and the rejected micro-states; the use of the hypothesis of "elementary chaos"  7
  PLANCK'S fuller description of what constitutes the state of a physical system 10
  (2) Further elucidation of the essential prerequisite, "elementary chaos." Sundry aspects of haphazard 11
  BOLTZMANN'S service to science in this field and his view of what constitute the necessary features of haphazard 12
  BURBURY'S simplification of haphazard necessary and his example of "elementary chaos" 15
  Haphazard as expressed by a system possessing an extraordinary number of degrees of freedom 17
  (3) Settled and unsettled states; distinction between final stage of "elementary chaos" and its preceding stages 18
  Each stage has sufficient haphazard; examples and characteristics of the settled and unsettled stages of "elementary chaos"; all micro-states not equally likely; the assumed state of "chaos" does not eliminate adequate haphazard; two anticipatory remarks 19
  SECTION B  
  CONCERNING THE APPLICATION OF THE CALCULUS OF PROBABILITIES  
  (1) The probability concept, its usefulness in the past, its present necessity, and its universality 22
  Popular objection to its use; Boltzmann's justification of this concept; its usefulness in the past and in other fields; some of its good points; the haphazard features necessary for its use 23
  (2) What is meant by probability of a state? Example 27
  SECTION C  
  (1) The existence, definition, measure, properties, relations and scope of irreversibility and reversibility 29
  Inference from experience; inference from the H-theorem or calculus of probabilities; definitions of irreversible and reversible processes; examples of each 30
  (2) Character of process decided by limiting states 32
  Nature's preference for a state; measure of this preference 33
  Entropy both the criterion and the measure of irreversibility 33
  (3) All the irreversible processes stand or fall together 34
  (4) Convenience of the fiction, the reversible processes 35
  Entropy the only universal measure of irreversibility. Outcome of the whole study of irreversibility 36
  SECTION D  
  (1) The gradual development of the idea that entropy depends on probability or number of complexions 37
  Why it is difficult to conceive of entropy. Origin and first definition due to CLAUSIUS; some formulas for it available from the start. Its statistical character early appreciated; lack of precise physical meaning; its dependence on probability; number of complexions a synonym for probability 37
  (2) PLANCK'S formula for the relation between entropy and the number of complexions 40
  Certain features of entropy 41
  SECTION E  
  Equivalents of change of entropy in more or less general physical terms or aspects 41
  Not surprising that its many forms should have been a reproach to the second law 41
  General principles for comparing these aspects. Various aspects of growth of entropy from the experiential and from the atomic point of view 42
  SECTION F  
  More precise and specific statements of the second law 44
  General arrangement and the principles for comparison 44
  Ten different statements of the law and comments thereon 44
  PART II  
  ANALYTICAL EXPRESSIONS FOR A FEW PRIMARY RELATIONS  
  Procedure followed 48
  SECTION A  
  Maxwell's law of distribution of molecular velocities 48
  Outline of proof, illustration, and consequences of this law 48
  SECTION B  
  Simple analytical expression for dependence of entropy on probability 53
  PLANCK'S derivation; illustration, limitations, consequences, features and comments 53
  SECTION C  
  Determination of a precise, numerical expression for the entropy of any physical configuration 56
  BOLTZMANN'S pioneer work, PLANCK'S exposition, and the six main steps 56
  Step a  
  Determination of the general expression for the probability W of a given configuration of a known aggregate state 57
  Step b  
  Determination of the general expression for the entropy S of a given configuration of a known aggregate state 63
  Step c  
  Special case of (b), namely, expression for the entropy S of the state of thermal equilibrium of a monatomic gas 63
  Step d  
  Confirmation, by equating this value of S with that found thermodynamically and then deriving known results 64
  Step e  
  PLANCK'S conversion of the expressions of (b) and (c) into more precise ones by finding the numerical value of k 66
  Step f  
  Determination of the dimensions of the universal constant k and therefore also of entropy in general 67
  PART III  
  THE PHYSICAL INTERPRETATIONS  
  SECTION A  
  Of the simple reversible operations in thermodynamics  
  Isometric, isobaric, isothermal, and isentropic change 69
  SECTION B  
  Of the fundamentally irreversible processes  
  Heat conduction, work into heat of friction, expansion without work, and diffusion of gases 72
  SECTION C  
  Of negative change of entropy  
  Some of its physical features and necessary accompaniments 78
  SECTION D  
  Physical significance of the equivalents for growth of entropy given on pp. 42-43 80
  SECTION E  
  Physical significance of the more specific statements of second law given on pp. 44-47 81
  PART IV  
  SUMMARY OF THE CONNECTION BETWEEN PROBABILITY, IRREVERSIBILITY, ENTROPY, AND THE SECOND LAW  
  SECTION A  
  (1) Prerequisites and conditions necessary for the application of the theory of probabilities  
  (a) Atomic theory; (b) like particles; (c) very numerous particles; (d) "elementary chaos" 83
  (2) Differences in the states of "elementary chaos" 85
  (3) Number of complexions, or probability of a chaotic state 86
  SECTION B  
  Irreversibility 86
  SECTION C  
  Entropy 87
  SECTION D  
  The Second Law  
  Its basis and best statements; it has no independent significance 88
  PART V  
  REACH OR SCOPE OF THE SECOND LAW  
  SECTION A  
  Its extension to all bodies  
  PLANCK'S presentation; fifteen steps in the proof 91
  SECTION B  
  General conclusion as to entropy changes 98

THE PHYSICAL SIGNIFICANCE OF ENTROPY
AND OF THE SECOND LAW

[There is no difference between change of Entropy and Second Law, when each is fully defined.]


INTRODUCTION

PURPOSE, ACKNOWLEDGMENTS, THE TWO METHODS OF APPROACH
AND OUTLINE OF TREATMENT

THIS article is intended for those students of engineering who already have some elementary knowledge of thermodynamics. It is intended to clear up a difficulty that has beset every earnest beginner of this subject. The difficulty is not one of application to engineering problems, although here too there have been widespread misconceptions,[1] for the expressions developed by CLAUSIUS are simple, have long been known and much used by engineers and physicists. The difficulty is rather as to the ultimate physical meaning of entropy. This term has long been known as a sort of property of the state of the body, has long been surmised to be of essentially a statistical nature, but with it all there was a sense that it was a sort of mathematical fiction, that it was somehow unreal and elusive, so it is no wonder that in certain engineering quarters it was dubbed the "ghostly quantity."

Now this instinct of the true engineer to understand things [Pg 1] down to the bottom is worthy of all encouragement and respect. For this reason and because the matter is of prime importance to the technical world, the final meaning of entropy (i.e., of the Second Law) must be clarified and realized. Indeed, we may well go beyond this somewhat narrow view and say that this is well worth doing because change of entropy constitutes the driving motive in all natural events; it has therefore a reach and a universality which even transcends that of the First Law, or Principle of the Conservation of Energy.

In striving to present the physical meaning of entropy and of the Second Law, the writer cannot lay claim to any originality; he has simply tried here to put in logical order the somewhat scattered propositions of the leading investigators of this subject and in such a way that the difficulties of apprehension might be minimized; in other words, to present the solutions of his own difficulties, in the hope that the solutions may be helpful to other students of engineering and thermodynamics. In overcoming these difficulties, the writer owes everything to the books and papers by PLANCK and BOLTZMANN, pre-eminently to PLANCK, who has so clearly and appreciatively interpreted the life work of BOLTZMANN.[2] The writer furthermore wishes to say that he has not hesitated here to quote verbatim from both these investigators and not always so that their own statements can be distinguished from his own. If any part of this presentation is particularly clear and exact the reader will be safe in crediting it to one or the other of these two investigators and expositors, although it would not be right to consider them responsible for everything contained in this little book.

In considering the proper approach to the matter in hand we must remember that[3] "in physical science there are two more or [Pg 2] less distinct modes of attack, namely, (a) a mode of attack in which the effort is made to develop conceptions of the physical processes of nature, and (b) a mode of attack in which the attempt is made to correlate phenomena on the basis of sensible things, things that can be seen and measured. In the theory of heat the first mode is represented by the application of the atomic theory to the study of heat phenomena, and the second mode is represented by what is called thermodynamics." In solving the special problem before us, as to the physical meaning of entropy and of the Second Law, our main dependence must be on the first mode of attack.

The second mode will furnish checks and confirmations of the results developed by the first, or we may say that the combination of the two modes will give the well-established characteristic equations and relations of bodies and their physical elements.

The whole discussion will now be taken up in a non-mathematical way, without the full proof required by a complete presentation, and about in this order:

(a) The definitions, general preliminaries and current statements of the matters considered.

(b) More or less precise statement of the primary relations and theorems.

(c) The physical interpretations.

(d) Summary of the connection between probability, irreversibility, entropy and the Second Law.

(e) Reach or scope of the Second Law.

On account of the difficulty which every student experiences in realizing the physical nature of entropy, we will in the main confine our attention here to gases, and indeed to their simplest case, the monatomic gas, and will as usual assume that the dimensions of an atom or particle are very small in comparison with the average distance between two adjacent particles; that for atoms approaching collision the distance within which they exert a significant influence on each other is very small as compared with the [Pg 3] mean distance between adjacent atoms; and that the mean length of the particle's path between collisions is great in comparison with the average distance between the particles. Later on we will indicate in a very general and brief way how the entropy idea may be extended to other states of aggregation and to other than purely thermodynamic phenomena. Mostly, therefore, we will only consider states and processes in which heat phenomena and mechanical occurrences take place.
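
As a rough modern check on these relative magnitudes, the following sketch (ours, not the author's) compares the three length scales for a gas near standard conditions. The molecular diameter and number density are commonly quoted order-of-magnitude figures inserted as illustrative assumptions, not data from the text.

```python
import math

# Illustrative order-of-magnitude figures (assumed, not from the text):
d = 3e-10   # effective molecular diameter, about 3 Angstrom units
n = 2.7e25  # molecules per cubic meter at 0 deg C, 1 atm (Loschmidt's number)

spacing = n ** (-1.0 / 3.0)                      # mean distance between neighbors
mfp = 1.0 / (math.sqrt(2) * math.pi * d**2 * n)  # kinetic-theory mean free path

print(f"diameter       ~ {d:.1e} m")        # ~ 3e-10 m
print(f"mean spacing   ~ {spacing:.1e} m")  # ~ 3e-9 m: about ten diameters
print(f"mean free path ~ {mfp:.1e} m")      # ~ 1e-7 m: about thirty spacings
```

The resulting ordering, diameter far smaller than spacing and spacing far smaller than mean free path, is exactly the set of assumptions stated above.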

[1]See Entropy, by JAMES SWINBURNE; this author has called attention to necessary corrections and duly emphasized the engineering aspect.

[2]BOLTZMANN, Gas Theorie; PLANCK, Thermodynamik, Theorie der Wärmestrahlung, and Acht Vorlesungen über Theoretische Physik.

[3]Professor W. S. FRANKLIN, The Second Law of Thermodynamics: its basis in Intuition and Common Sense. Pop. Science Monthly, March, 1910.

[Pg 4]

PART I

DEFINITIONS, GENERAL PRELIMINARIES, DEVELOPMENT, CURRENT AND PRECISE STATEMENTS OF THE MATTERS CONSIDERED

SECTION A

(1) The "State" of a Body and its "Change of State"

As we will make constant use of the terms contained in this heading and as they here represent fundamentally important conceptions, we will seek to make them clear by presenting them in the various forms into which they have been cast by the different investigators, even at the risk of being considered prolix.

In the Introduction to this article we called attention to the two distinct modes of attacking any physical problem. Now the conception "state of a body" varies with the chosen mode of attack. Of course, as both modes are legitimate and lead to correct results, these differences in the conception of "state" can be reconciled and a broader definition reached. We can illustrate these different methods of approach, as PLANCK has done, by assuming two different observers of the state of the body, one called the microscopic-observer and the other the macroscopic-observer. The former possesses senses so acute and powers so great that he can recognize each individual atom and can measure its motion. For this observer each atom will move exactly according to the elementary laws prescribed for it by General Dynamics. These laws, so far as we know them, also at once permit of exactly the opposite course of each event. Consequently there can be here no question of probability, of entropy or of its growth. On the other hand, the "macro-observer" (who perceives the atomic host, say as a homogeneous gas, and consequently applies to its mechanical and thermal [Pg 5] events the laws of thermodynamics) will regard the process as a whole as an irreversible one, in accordance with the Second Law.... Now a particular change of state cannot at the same time be both reversible and irreversible; but the micro-observer's conception of "change of state" differs from that of the macro-observer. What then is "change of state?" The state of a physical system can probably not be rigorously defined otherwise than as the conception, as a whole, of all those physical magnitudes whose instantaneous values, under given external conditions, also uniquely determine the sequence of these changing values.

BOLTZMANN'S statement is much more clear, namely, "The state of a body is determined, (a) by the law of distribution of the particles in space and (b) by the law of distribution of the velocities of the particles; in other words, a body's condition is determined (a) by the number of particles which lie in each elementary realm of the space and (b) by a statement of the number of particles which belong to each elementary velocity group. These elementary realms are all equal and so are the elementary velocity groups equal among themselves. But it is furthermore assumed that each elementary realm and each elementary velocity group contains very many particles."

Now if we ask the aforesaid two observers what they understand by the state of the atomic host or gas under consideration, they will give entirely different answers. The micro-observer will mention those magnitudes which determine the location and the velocity condition of all the individual atoms. This would mean in the simplest case, in which the atoms are regarded as material points, that there would be six times as many magnitudes as atoms present, namely, for each atom there would be three co-ordinates of location and three of velocity components; in the case of composite molecules there would be many more such [Pg 6] magnitudes. For the micro-observer, the state and the sequence of the event would not be determined until all these many magnitudes had been separately given. The state thus defined we will call the "micro-state." The macroscopic-observer on the other hand gets along with much fewer data; he will say that the state of the contemplated homogeneous gas is already determined by the density, the visible velocity and the temperature at each place of the gas and he will expect, when these magnitudes are given, that the course of the physical events will be completely determined, namely, will occur in obedience to the two laws of thermodynamics and therefore be bound to show an increase in entropy. The state thus defined we will call the "macro-state." The difference in the two observers is that one sees only the atomic events and the other the occurrences in the aggregate. The former would have the absolute mechanical idea of state and the latter the statistical idea. Before attempting to reconcile their apparently conflicting conclusions, we will here call attention to some necessary relations between the micro-state and the macro-state. In the first place we must remember that all a priori possible micro-states are not realized in nature; they are conceivable but never attain fruition. How shall we select what may be called these natural micro-states? The principles of general dynamics furnish no guide for such selection and so recourse may be had to any dynamic hypothesis whose selection will be fully justified by experience.
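
The two descriptions can be set side by side in a minimal sketch of our own devising (not PLANCK'S): the micro-state of N point-atoms is 6N separately given magnitudes, while the macro-state is the handful of mean values the macroscopic observer measures.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of point-atoms (an arbitrary illustrative choice)

# Micro-state: all 6N magnitudes must be given separately.
positions = rng.uniform(0.0, 1.0, size=(N, 3))   # three location co-ordinates each
velocities = rng.normal(0.0, 1.0, size=(N, 3))   # three velocity components each
micro_magnitudes = positions.size + velocities.size   # 6N = 600,000 numbers

# Macro-state: a few statistical data suffice.
density = N / 1.0                                # molecules per unit volume
visible_velocity = velocities.mean(axis=0)       # ~ (0, 0, 0): no streaming
mean_kinetic_energy = 0.5 * (velocities**2).sum(axis=1).mean()  # temperature-like

print(micro_magnitudes, density, visible_velocity, mean_kinetic_energy)
```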

Now PLANCK says: "In order to traverse this path of investigation, we must evidently first of all keep in mind all the conceivable positions and velocities of the individual atoms, which are compatible with particular values of the density, the velocity and the temperature of the gas, or, in other words, we must consider all the micro-states which belong to a particular macro-state and must examine all the different events which follow from the different micro-states according to the fixed laws of dynamics. Now up to this time, the closer calculation and combination [Pg 7] of these minute elements has always given the important result that the vast majority of these micro-states belong to one and the same macro-state or aggregate, and that only comparatively few of the said micro-states furnish an anomalous result, and these few are characterized by very special and far-reaching conditions existing between the locations and the velocities of adjacent atoms. And, furthermore, it has appeared that the almost invariably resulting macro-event is just the very one perceived by the macroscopic observer, the one in which all the measurable mean values have a unique sequence, and consequently and in particular satisfies the second law of thermodynamics."

"Herewith is revealed the bridge of reconciliation between the two observers. The micro-observer needs only to take up in his theory the physical hypothesis, that all such particular cases (which premise very special, far-reaching conditions between the states of adjacent and interacting atoms) do not occur in Nature; or in other words, the micro-states are in 'elementary disorder' (elementar ungeordnet). This secures the unique (unambiguous) character of the macroscopic event and makes sure that the Principle of the Growth of Entropy will be satisfied in every direction."

Before elaborating all that is implied in this hypothesis of "elementary disorder" we will again point out that for each macro-state (even with settled values of density and temperature) there may be many micro-states which satisfy it in the aggregate.

According to PLANCK, "it is easy to see that the macro-observer deals with mean values; for what he calls density, visible velocity, temperature of the gas, are for the micro-observer certain averages, statistical data, which have been suitably obtained from the spatial arrangement and the velocities of the atoms. But with these averages the micro-observer at first can do nothing even if they are known for a certain time, for thereby the sequence of events is by no means settled; on the contrary, he can easily [Pg 8] with said given averages ascertain a whole host of different values for the location and velocities of the individual atoms, all of which correspond to said given averages, and yet some of these lead to wholly different sequences of events even in their mean values," events which do not at all accord with experience. It is evident, if any progress is to be made, that the micro-observer must in some suitable way limit the manifold character of the multifarious micro-states. This he accomplishes by the hypothesis of "elementary disorder" about to be more fully defined.

In passing we may here note for future use, that what has just been said concerning macro-states (aggregates) with "settled" mean velocity, density and temperature, applies also to states unsettled in the aggregate, so far as concerns the manifold character of the conceivable constituent micro-states and the differences in the mean character of their sequences. Even after the above limiting hypothesis removes all illegitimate micro-states, an enormously greater number of legitimate ones will be left to constitute the number of complexions properly belonging to the state contemplated. We may also add that it seems quite evident that the numbers representing these complexions will be different in the settled and unsettled states even if the latter should ultimately possess the mean velocity, density and temperature of the former.

On the other hand, we also point out that for one and the same set of external conditions the macro-state may itself vary very greatly. When it has a settled density and temperature, it is said to be in a stationary state, to be in thermal equilibrium, and, anticipating, we may add that it then has maximum entropy; in short we may say it is in a "normal" condition. But, the external conditions remaining the same, before attaining to said "normal" ultimate state, it may pass through a whole series of so-called "abnormal" states after it leaves its initial condition. While it is in any one of these "abnormal" states, it may be said to be in a more or less turbulent condition; [Pg 9] it may then possess whirls and eddies; it may have different densities and temperatures in its different parts, and then it will be difficult or impossible to measure these external physical features of its state as a whole. All this implies ever-varying atomic locations and velocities, but does not indicate any such special far-reaching regularities between adjacent and interacting particles as would vitiate at any stage our hypothesis of "elementary disorder" (elementar ungeordnet) or "molecular chaos."

Before going into more detail concerning this particular chaotic condition of the particles we will give PLANCK'S somewhat fuller statement of what constitutes the "state" of a physical system at a particular time and under given external conditions. It is "the conception as a whole of all those mutually independent magnitudes which determine the sequence of events occurring in the system so far as they are accessible to measurement; the knowledge of the state is therefore equivalent to a knowledge of the initial conditions. For example, in a gas composed of invariable molecules the state is determined by the law of their space and velocity distribution, i.e., by the statement of the number of molecules, of their co-ordinates and velocity components which lie within each single small region. The number of molecules in any one of these different regions is in general entirely independent of the number in any other region, for the state need not be a stationary one nor one of equilibrium; these numbers should therefore all be separately known if the state of the gas is to be considered as given in the absolute mechanical sense. On the other hand, for the characterization of the state in the statistical sense, it is not necessary to go into closer detail concerning the molecules present in each elementary space; for here the necessary supplement is supplied by the hypothesis of molecular chaos, which in spite of its mechanically indeterminate character guarantees the unambiguous sequence of the physical events." [Pg 10]



(2) Further Elucidation of this Essential Condition of "Elementary Chaos." Sundry Aspects of Haphazard

To gain as complete an understanding as possible of this fundamental idea we will now give the views of the several investigators as to the physical features of this chaotic state. We have seen how PLANCK, the chief expositor of BOLTZMANN, boldly excludes from consideration all cases leading to anomalous results, because of the very special conditions existing between the molecular data, by assuming that these cases do not occur in Nature. PLANCK reminds the physicists who object to the hypothesis of elementary disorder because they feel it is unnecessary or even unjustifiable, that the hypothesis is already much used in Physics, that tacitly or otherwise it underlies every computation of the constants attached to friction, diffusion and the conduction of heat. On the other hand he reminds others, those inclined to regard the hypothesis of "elementary disorder" as axiomatic, of the theorem of H. POINCARÉ, which excludes this hypothesis for all times from a space surrounded with absolutely smooth walls. PLANCK says that the only escape from the portentous sweep of this proposition is that absolutely smooth walls do not exist in Nature.

The foregoing thought PLANCK has also put in a slightly different way. Since all mechanically possible simultaneous arrangements and velocities of molecules are not realized in Nature, the concept of "elementary disorder" implies one limitation of the conceivable molecular states, namely, that between the numerous elements of a physical system there exist no other relations than those conditioned by the existing measurable mean values of the physical features of the system in question.

Another, briefer but equivalent, definition is that: "In Nature all states and processes which contain numerous independent (unkontrollierbar) constituents are in 'elementary disorder' (elementar ungeordnet)." The constituents are molecular elements [Pg 11] in mechanics and in thermodynamics and the energy elements in radiation.

The German word "unkontrollierbar"[4] here used may also with some justice be translated as unconditioned, undetermined, unmeasurable, unregulated, uncorrelated, ungovernable, or haphazard. But whichever term is best, PLANCK, mechanically speaking, meant by it the confused, unregulated, and whirring intermingling of very many atoms.

Either of these two equivalent definitions implies that such elementary disorder or chaos is a condition of sufficiently complete haphazard to warrant the application of the Theory of Probabilities to the unique (unambiguous) determination of the measurable physical features of the process viewed as a whole.

The foregoing ideas more or less tacitly underlie the whole of BOLTZMANN'S great pioneer work in this vast field. He it was who clearly showed that the Second Law could be derived from mechanical principles; that entropy was a property of every state, turbulent or otherwise; that the entropy idea could be emancipated from all thought of human experimental skill; and who thereby raised the Second Law to the position of a real principle. He did all this by a general basing of the idea of entropy on the idea of probability. Consequently we find much attention paid in all his work to haphazard molecular conditions. He first used the terms "molekular-geordnet" (molecularly ordered, or arranged) and "molekular-ungeordnet" (molecularly disordered or disarranged), which latter phrase we must regard as synonymous with the term "elementar ungeordnet" (elementary disorder or chaos) with which we have already become acquainted in PLANCK'S presentation. We will, therefore, confine ourselves here to BOLTZMANN'S illustrations of these terms, for his work does not, in these particulars, contain any [Pg 12] sharp definitions. Indeed he may have feared over-precision and may have trusted to the use he made of the terms at different times to convey their meaning.

Concerning some of the characteristics of BOLTZMANN'S haphazard motion we take the following from Vol. I of his "Vorlesungen über Gas Theorie."

If in a finite part of a gas the variables determining the motion of the molecules have different mean values from those in another finite part of the gas (for example if the mean density or mean velocity of a gas in one-half of a vessel is different from those in the other half), or more generally, if any finite part of a gas behaves differently from another finite part of a gas, then such a distribution is said to be "molar-geordnet" (in molar order). But when the total number of molecules in every unit of volume exists under the same conditions and possesses the same number of each kind of molecules throughout the changes contemplated, then the same number of molecules will leave a unit volume and will enter it so that the total number ever present remains the same; under such conditions we call the distribution "molar-ungeordnet" (in molar disorder) and that finite distribution is one of the characteristics of the haphazard state to which the Theory of Probabilities is applicable. [As another illustration of the excluded molar-geordnet states we may instance the case when all motions are parallel to one plane.]

But although in passing from one finite part to another of a gas no regularities (of average character) can be discerned, yet infinitesimal parts (say of two or more molecules) may exhibit certain regularities, and then the distribution would be "molekular-geordnet" (molecularly-ordered) although as a whole the gas is "molar-ungeordnet." For example (to take one of the infinite number of possible cases) suppose that the two nearest molecules always approached each other along their line of centers, or if a molecule moving with a particularly slow speed always had ten (10) slow neighbors, then the distribution [Pg 13] would be "molekular-geordnet." But then the locality of one molecule would have some influence on the locality of another molecule, and then in the Theory of Probabilities the presence of one molecule in one place would not be independent of the presence of some other molecule in some other place. Such dependence is not permissible by the Theory of Probabilities. Before, however, we can further describe what is here perhaps the most important term (molekular-ungeordnet), we must point out that BOLTZMANN considers the number of molecules m of one kind whose component velocities along the co-ordinate axes are confined between the limits (1) ξ and ξ + dξ, η and η + dη, ζ and ζ + dζ, and also the number of molecules m₁ of another kind whose component velocities similarly lie between the limits (2) ξ₁ and ξ₁ + dξ₁, η₁ and η₁ + dη₁, ζ₁ and ζ₁ + dζ₁; then, considering the chances that a molecule m shall have velocities between the limits specified in (1) and a molecule m₁ shall have velocities between the limits specified in (2), BOLTZMANN intimates that these chances are independent of the relative position of the molecules. Where there is such complete independence, or absence of all minute regularities, the distribution, according to BOLTZMANN, is "molekular-ungeordnet" (molecularly-disordered).
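
The distinction may be made concrete with a toy construction of our own (not BOLTZMANN'S): in a "molekular-ungeordnete" distribution a molecule's speed is drawn independently of its position, while in a "molekular-geordnete" one the slow molecules are deliberately gathered at one side of the vessel.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

x = rng.uniform(0.0, 1.0, N)                         # one position co-ordinate
speed_disordered = np.abs(rng.normal(0.0, 1.0, N))   # drawn independently of x
speed_ordered = x * np.abs(rng.normal(0.0, 1.0, N))  # slow molecules near x = 0

# Independence shows itself as vanishing correlation; order as a strong one.
print(np.corrcoef(x, speed_disordered)[0, 1])   # ~ 0.00
print(np.corrcoef(x, speed_ordered)[0, 1])      # clearly positive
```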

BOLTZMANN furthermore informs us that, as soon as in a gas, the mean length of path is great in comparison with the mean distance between two adjacent molecules, the neighboring molecules will quickly become different from what they formerly were. Therefore it is exceedingly probable that a "molekular-geordnete" (but molar-ungeordnete) distribution would shortly pass into a "molekular-ungeordnete" distribution.

Furthermore, it results from the constitution of a gas that the place where a molecule collided is entirely independent of the spot where its preceding collision took place. Of course, this [Pg 14] independence could be maintained for an indefinite time only by an infinite number of molecules.

The place of collision of a pair of molecules must in our Theory of Probabilities be independent of the locality from which either molecule started.

From all the preceding we must infer what measure of haphazard BOLTZMANN considers necessary for the legitimate use of the Theory of Probabilities.

BOLTZMANN in proving his H-Theorem,[5] which establishes the one-sidedness of all natural events, makes the explicit assumption that the motion at the start is both "molar- und molekular-ungeordnet" and remains so. Later on, he assumes the same things but adds that if they are not so at the start they will soon become so; therefore said assumption does not preclude the consideration by Probability methods of the general case or the passage from "ordnete" to "ungeordnete" conditions which characterizes all natural events.

In fact these very definitions show solicitude for securing the uninterrupted operation of the laws of probability. BOLTZMANN intimates his approval of S. H. BURBURY'S statement of the condition of independence underlying his work.

Here S. H. BURBURY[6] simplifies the matter by assuming that any unit of volume of space contains a uniform mixture of differently speeded molecules and then says:

"Let  be the velocity of the center of gravity of any pair of molecules and  their relative velocity. Then the following condition (here called  ) holds: For any given direction of  before collision, all directions of  after collision are equally probable. Then BOLTZMANN'S H-theorem proves that if condition  be satisfied, then if all directions of the relative velocity  for given  are not equally likely, the effect of collisions [Pg 15] is to make  diminish." [In essence BURBURY'S condition  says no more than that Theory of Probabilities is applicable for finding number of collisions.] Furthermore, "any actual material system receives disturbances from without, the effect of which coming at haphazard without regard to state of system for the time being is, pro tanto, to renew or maintain the independence of the molecular motions, that very distribution of co-ordinates (of collision) which is required to make  diminish. So there is a general tendency for  to diminish, though it may conceivably increase in particular cases. Just as in matters political, change for the better is possible, but the tendency is for all change to be from bad to worse." Here BURBURY states what is practically true in all actual cases and thus furnishes an additional reason, if that were needed, for the legitimacy of the Probability method pursued by Boltzmann, and, another explanation of why the results obtained are in such perfect accord with experience.

As BURBURY'S remarks with respect to the nature of "elementary chaos" under consideration are always illuminating, we will, at the risk of repeating something already said, quote the following:

"The chance that the spheres approaching collision shall have velocities within assigned limits is independent of their relative position, and of the positions and velocities of all other spheres, and also independent of the past history of the system except so far as this has altered the distribution of the velocities inter se. In the following example this independence is satisfied for the initial state and, for the assumed method of distribution, has no past history.

"Example. A great number of equal elastic spheres, each of unit mass and diameter  , are at an initial instant set in motion within a field  of no force and bounded by elastic walls. The initial motion is formed as follows: (1) One person assigns component velocities  to each sphere according to any law subject to the conditions that  and that [Pg 16]  given constant. (2) Another person, in complete ignorance of the velocities so assigned, scatters the spheres at haphazard throughout  . And they start from the initial positions so assigned by (2) with the velocities assigned to them respectively by (1)."

The system thus synthetically constructed would without doubt, at the start be "molekular-ungeordnet"—in fact, it is as near an approach to chaos as is possible in an imperfect world. But there is reason to doubt if it would continue to be thus "molekular-ungeordnet." For the distribution of velocities is according to any law consistent with the above-mentioned conditions and some such laws would lead to results hostile to the Second Law, and then we may safely say such laws of velocity distribution would never occur in Nature and would therefore belong to the cases which have been specially excepted.

Now there are mechanical systems which possess the entropy property, and it has been truly said that the Second Law and irreversibility do not depend on any special peculiarity of heat motion, but only on the statistical property of a system possessing an extraordinary number of degrees of freedom. In this sense Professor J. W. GIBBS treated Mechanics statistically and showed that then the properties of temperature and entropy resulted. This matter has already been touched upon, but as the possession of numerous degrees of freedom is a feature of the "elementary chaos" under consideration it deserves repetition here and more than a passing mention.

Illustration of Degrees of Freedom. Refer a body's motion to three rectangular axes, X, Y, Z. If a body has as general a motion as possible, it may be resolved into translations parallel to these axes and into rotations about them. Each of these two sets furnishes three components of motion, or a total of six components; then we say that the perfectly unconstrained motion of the body has six degrees of freedom. If a body moves parallel to one of the co-ordinate planes, we say it has two degrees of freedom. When [Pg 17] we come to consider molecular motion in general and the independence which characterizes the motion of each of the many molecules we see that altogether we have here an extraordinary number of degrees of freedom, and composed of such is the realm of our "elementary chaos."

If we go to the other extreme and think of only one atom, we see at once that we cannot properly speak of its disorder. But the case is different with a moderate number of atoms, say, a hundred or a thousand. Here we surely can speak of disorder if the co-ordinates of location and the velocity components are distributed by haphazard among the atoms. But as the process as a whole, the sequence of events in the aggregate, may not with this comparatively small number of atoms take place before a macroscopic observer in a unique (unambiguous) manner, we cannot say that we have here reached a true state of "elementary chaos." If we now ask as to the minimum number of atoms necessary to make the process an irreversible one, the answer is, as many as are necessary to form determinate mean values which will define the progress of the state in the macroscopic sense. Only for these mean values does the Second Law possess significance; for these, however, it is perfectly exact, just as exact as the theorem of probability, which says that the mean value of numerous throws with one cubical die is equal to 3½.
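
The die comparison closing this paragraph is easily verified; a short trial (ours) shows the mean of numerous throws settling on (1 + 2 + 3 + 4 + 5 + 6)/6 = 3½ with exactly the kind of exactness claimed.

```python
import random

random.seed(0)
throws = [random.randint(1, 6) for _ in range(1_000_000)]

# The mean of numerous throws of one cubical die approaches 3.5.
print(sum(throws) / len(throws))   # ~ 3.500..., closer as the throws multiply
```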

We may now properly infer from all these views that the state of "elementary chaos" (or "molekular ungeordnete" motion) is the necessary condition for adequate haphazard and makes the application of the Theory of Probabilities possible.

[4]On p. 133 of Wärmestrahlung PLANCK says, "only measurable mean values are kontrollierbar," and this may help us to get the meaning here.

[5]In BOLTZMANN'S H-Theorem we have a process (consisting of a number of separately reversible processes) which is irreversible in the aggregate.

[6]Nature, Vol. LI, p. 78, Nov. 22, 1894.



(3) Settled and Unsettled States; Distinction between Final Stage
of Elementary Chaos and its Preceding Stages

The immediate purpose in the next few pages is (a) to establish the distinction between the successive stages of "elementary disorder" (chaos) as they develop in their inevitable passage [Pg 18] from "abnormal" conditions to the final and so-called "normal" condition of thermal equilibrium; (b) to show that each of these stages is "elementar-ungeordnet"; and (c) to show that in each one sufficient haphazard prevails to permit the legitimate application of the Theory of Probabilities.

We will first describe the unsettled (abnormal) and settled (normal) states, respectively. When we consider the general state of a gas "we need not think of the state of equilibrium, for this is still further characterized by the condition that its entropy is a maximum. Hence in the general or unsettled state of the gas an unequal distribution of density may prevail, any number of arbitrarily different streams (whirls and eddies) may be present, and we may in particular assume that there has taken place no sort of equalization between the different velocities of the molecules. We may assume beforehand, in perfectly arbitrary fashion, the velocities of the molecules as well as their co-ordinates of location. But there must exist (in order that we may know the state in the macroscopic sense), certain mean values of density and velocity, for it is through these very mean values that the state is characterized from the macroscopic standpoint." The differences that do exist in the successive stages of disorder of the unsettled state are mainly due to the molecular collisions that are constantly taking place and which thus change the locus and velocity of each molecule.

We may now easily describe the settled state as a special case of the unsettled one. In the settled state there is an equal distribution of density throughout all the elementary spaces, there are no different streams (whirls or eddies) present, and an equal partition of energy exists for all the elementary spaces. In it thermal equilibrium exists, the entropy is a maximum, and the temperature of the state has now a definite meaning, because temperature is the mean energy of the molecules for this state of equilibrium. The condition is said to be a "stationary" or permanent one, for the mean values of the density, [Pg 19] velocity, and temperature of this particular aggregate no longer change, although molecular collisions are still constantly occurring.

Well-known examples of the unsettled state of a system are: The turbulent state with its different streams, whirls, and eddies, the state in which the potential and kinetic energy is unequally distributed; for instance, when one part is at a high pressure and another part at a lower pressure, when one part is hotter than another part, and when unmixed gases are present in a communicating system.

A more specific feature of the unsettled state may be found in the accompaniment to BURBURY'S condition A (already mentioned at the bottom of p. 15), where it is intimated that (at the start and after collision) all directions of the relative velocity R may not be equally likely.

When such differences have all disappeared to the extent that equal elementary spaces possess their equal shares of the different particles, velocities, and energies, the system will be a settled one, be in thermal equilibrium, and will possess a maximum entropy and a definite temperature. Moreover, BURBURY'S condition A is here fully satisfied.

At this point we again call attention to the fact that in both the unsettled and settled states of a system all conceivable micro-states are not equally likely to obtain. On p. 19 mention was made that the unsettled and the settled state each possessed a host of conceivable micro-states which agreed with the characteristic averages of their respective macro-states (the unsettled and the settled ones), and yet in each set some of these led subsequently to events which did not accord with experience. Therefore for both the unsettled and the settled state we must limit the manifold character of their micro-states by eliminating all those micro-states which lead to results contrary to experience. This is accomplished by assuming the hypothesis of "elementary-disorder" (elementar-ungeordnet) to obtain for the unsettled as well as the settled state. Now so far as the haphazard [Pg 20] character of the remaining motions is concerned, we might stop right here, for the very nature of this hypothesis insures results in harmony with experience, i.e., with the undisturbed operation of the laws of probability.

But if we do not stop here, preferring to examine some of the special features of fortuitous motion, as detailed on pp. 10, 13, 14 and 17, we still see that by this hypothesis we have not removed the haphazard character of the remaining motions in either the unsettled or the settled state. For instance, we have not removed BURBURY'S condition A. We must remember, too, that in PLANCK'S briefest statement of "elementary disorder" (bot. of p. 11), two important features of haphazard are emphasized, viz., the independence and the great number of the constituents. BOLTZMANN in his Gas Theorie of course considers the special features which underlie the application of the Calculus of Probabilities; thus he says they are the great number of molecules and the length of their paths, which together make the laws of the collision of a molecule in a gas independent of the place where it collided before. Neither has the introduction of the hypothesis of "elementary disorder" done away with these special features. There have simply been excluded from consideration such pre-computed and prearranged regularities in the paths and directions of molecules as purposely interfere with the operation of the laws of probability. We are still free to consider all the imaginable positions and velocities of the individual molecules which are compatible with the mean velocity, density, and temperature properly characteristic of each stage of the passage from the unsettled to the settled state. For adequate haphazard we only need the assumption that the molecules fly so irregularly as to permit the operation of the laws of probability. Such a presentation as this of course calls for complete trust that all the specified requirements have been adequately met, and BOLTZMANN'S eminence as a mathematical physicist and the endorsement of his peers must be our guarantee for such confidence and trust. [Pg 21]

Before closing this discussion of unsettled and settled states we will insert here two remarks, really anticipatory in their nature at this stage. The first is that, under the limitation imposed by our supplementary hypothesis of "elementary chaos," the very sharpest definition of any macro-state is the number of its possible micro-states. This is evidently the number of permutations possible with the given locus and velocity elements under the restriction imposed above. Later on we will find that this number of possible micro-states is smaller for the unsettled state than for the settled one. This gives us a clean-cut distinction between the two states contemplated. The second remark is that the inevitable change in the system as a whole is always from the less probable to the more probable, is a passage from an unsettled state of the system to its settled state, and this is here synonymous with the growth of the number of possible micro-states. It is this difference between the initial and final states which constitutes the universal driving motive in all natural events.
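
Both remarks can be given numerical shape. Counting micro-states as permutations gives W = N!/(n₁! n₂! ...) for prescribed occupation numbers n₁, n₂, ...; a sketch with cell counts of our own choosing shows the settled, uniform distribution commanding far more complexions than an unsettled, lopsided one.

```python
from math import lgamma

def log_W(occupation):
    """Logarithm of N!/(n1! n2! ...), the number of complexions."""
    N = sum(occupation)
    return lgamma(N + 1) - sum(lgamma(n + 1) for n in occupation)

settled = [25, 25, 25, 25]    # 100 molecules spread evenly over four cells
unsettled = [70, 20, 8, 2]    # the same 100 molecules crowded to one side

print(log_W(settled))      # ~ 131.7
print(log_W(unsettled))    # ~  79.7, far fewer complexions
```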

SECTION B

THE APPLICATION OF THE CALCULUS OF PROBABILITIES
IN MOLECULAR PHYSICS.

(1) The Probability Concept, its Usefulness in the Past, its Present
Necessity, and its Universality.

An indication of its essential value in this physical discussion is the fact that we have almost unwittingly been forced to refer to it constantly in all of our preliminaries. But when this concept is first broached to a student, he feels about it much as does the "man in the street," who regards it as a matter of chance and hence of uncertainty and unreliability; moreover, the latter knows in a vague way that the subject has to do with averages, that it is often of a statistical nature, and knows that statistics in general are widely distrusted. The student is at [Pg 22] first likely to share these views with said man in the street, and at best feels that its introduction is of remote interest, far-fetched, and tends to hide and dissipate the kernel of the matter. The student must disabuse himself of these false notions by reflecting how much there is in Nature that is spontaneous, in other words, how many events there are in which there is a passage from a less probable to a more probable condition, and that he cannot afford to despise or ignore a Calculus which measures these changes as exactly as possible.

In this connection BOLTZMANN says (W. S. B. d. Akad. d. Wiss., Vol. LXVI B, 1872, p. 275):

"The mechanical theory of heat assumes that the molecules of gases are in no way at rest but possess the liveliest sort of motion, therefore, even when a body does not change its state, every one of its molecules is constantly altering its condition of motion and the different molecules likewise simultaneously exist side by side in most different conditions. It is solely due to the fact that we always get the same average values, even when the most irregular occurrences take place under the same circumstances, that we can explain why we recognize perfectly definite laws in warm bodies. For the molecules of the body are so numerous and their motions so swift that indeed we do not perceive aught but these average values. We might compare the regularity of these average values with those furnished by general statistics which, to be sure, are likewise derived from occurrences which are also conditioned by the wholly incalculable co-operation of the most manifold external circumstances. The molecules are as it were like so many individuals having the most different kinds of motion, and it is only because the number of those which on the average possess the same sort of motion is a constant one that the properties of the gas remain unchanged. The determination of the average values is the task of the Calculus of Probabilities. The problems of the mechanical theory of heat are therefore problems in this calculus. It would, however, be a mistake to think any uncertainty [Pg 23] is attached to the theory of heat because the theorems of probability are applied. One must not confuse an imperfectly proved proposition (whose truth is consequently doubtful) with a completely established theorem of the Calculus of Probabilities; the latter represents, like the result of every other calculus, a necessary consequence of certain premises, and if these are correct the result is confirmed by experience, provided a sufficient number of cases has been observed, which will always be the case with Heat because of the enormous number of molecules in a body."

To become more specific, we will mention some of the problems to which the Theory of Probabilities has been profitably applied: in business, to life and fire insurance; in engineering, to reducing the inevitable errors of observation by the Method of Least Squares; and in physics, to the determination of MAXWELL'S law of the distribution of velocities. The results thus obtained are universally trusted and accepted by experts. Why then should this Calculus not be applicable to the more general natural events?

In this connection consider some of its good points: (a) It eliminates from a problem the accidental elements if the latter are sufficiently numerous; (b) it deals legitimately with averages; (c) it involves combination considerations other than averages; (d) it is available for non-mechanical as well as mechanical occurrences and thus (e) has a capacity for covering the whole range of natural events, giving it a character of universality which is now its most valuable asset.

As an example of this we may instance BOLTZMANN'S deservedly famous H-theorem, which establishes the one-sidedness of all natural events.[7] Concerning it, this master in mathematical physics says:

"It can only be deduced from the laws of probability that, if the initial state is not especially arranged for a certain purpose, the [Pg 24] probability that  decreases is always greater than that it increases. In this connection we may add that BOLTZMANN looked forward to a time, "when the fundamental equations for the motion of individual molecules will prove to be merely approximate formulas, which give average values which, according to the Theory of Probabilities, result from the co-operation of very many independently moving individuals constituting the surrounding medium, for example, in meteorology the laws will refer only to average values deduced by the Theory of Probabilities from a long series of observations. These individuals must of course be so numerous and act so promptly that the correct average values will obtain in millionths of a second."

To further strengthen our faith we may point out that the probability method has been successfully used to determine unique results from complicated conditions and has been employed for the general treatment of problems. In the case before us it has solved the entropy puzzle which has exercised physicists, as well as engineers, for decades, and it has thereby emancipated the Second Law from all anthropomorphism, from all dependence on human experimental skill. When we take the broadest possible view of its character, this Calculus enables us to read the present riddle of our universe, namely, why it is in its present improbable state. We have therefore in this Calculus an engine for investigation which is of great power and is likely to play a large part in the future in the ascertainment of physical truth. Of course it must then be in the hands of masters. It is they and they alone who can properly and adequately interpret such a physical problem as the one before us. In scientific work our last court of appeal must be Nature, and we therefore say: The best justification for the use of the Theory of Probabilities in our problem is that its results are in such complete accord with the facts.

In dealing with this physical engine of investigation, we must again call attention to some of the features of haphazard necessary [Pg 25] for its legitimate application. Of course the statement of these features will vary with the mechanical or non-mechanical character of the problem to which it is applied. As we are here dealing mainly with the former, we will limit ourselves to its features: (a) The elements dealt with must be very numerous, strictly speaking, infinite; (b) as a phase of (a) we may say also that when we speak of the probability of a state we express the thought that it can be realized in many different ways; (c) when we speak of the relative directions of a pair of molecules all possible directions must be considered; (d) we must so weight the elements, say in (a), (b), and (c), that they are equally likely; (e) every one of the entering elements must possess constituents of which each individual is independent of every other; for instance, (f) in a gas the place where a molecule collided must be independent of the place where it collided before. In our physical problem all of these features are not always realized; for instance, the number of particles of gas is only finite instead of being infinite; again, all relative velocities after collision of a pair of molecules are not equally likely; BOLTZMANN and BURBURY provide for these shortcomings by very truly asserting that in actual cases we are not dealing with isolated systems, that the surrounding walls are not impervious to external influences, and that the latter come at haphazard without regard to the internal state of the system at the time, thus renewing and maintaining the desired state of haphazard.

Methods. This Calculus works largely by the determination of averages and its results must be interpreted accordingly. Moreover, for the present we will take a popular, practical view of these results and consider a very great improbability as equivalent to an impossibility. Numerical computations are essential in most uses of this Calculus, but here they will be entirely omitted.

[7]The H-theorem considers a process (consisting of a number of separate, reversible processes) which is irreversible in the aggregate.

[Pg 26]



(2) What is Meant by the Probability of a State? Example

To come back to the matter in hand we will now show what is here meant by the probability of any state.

When we speak of the probability W of a particular "elementar-ungeordnete" state, we thereby imply that this state may be variously realized. For every state (which contains many like independent constituents) corresponds to a certain "distribution," namely, a distribution among the gas molecules of the location co-ordinates and of the velocity components. But such a distribution is a permutation problem, is always an assignment of one set of like elements (co-ordinates, velocity components) to a different set of like elements (molecules). So long as only a particular state is kept in view, it is of consequence how many elements of the two sets are thus interchangeably assigned to each other and not at all which individual elements of the one set are assigned to particular individual elements of the other set.[8] Then a particular state may be realized by a great number of assignments individually differing from one another, but all equally likely to occur.[9] If with PLANCK we call such an assignment a "complexion,"[10] we may now say that in general a particular state contains a large number of different, but equally likely, complexions. This number, i.e., the number of the complexions included in a given state, can now be defined as the probability W of the state.[11] Let us present the matter in still another form. BOLTZMANN derives the expression for the magnitude of the probability by at [Pg 27] once distinguishing between a state of a considered system and a complexion of the considered system. A state of the system is determined by the law of locus and velocity distribution, i.e., by a statement of the number of particles which lie in each elementary district of space and the number of particles which lie in each elementary velocity realm, assuming that among themselves these districts and realms are alike and each such infinitesimal element still harbors very many particles. Accordingly a particular state of the system embraces a very large number of complexions. For if any two particles belonging to different regions swap their co-ordinates and velocities, we get thereby a new complexion, but still the same state. Now BOLTZMANN assumes all complexions to be equally probable, and therefore the number of complexions included in a particular state furnishes at the same time the numerical value for the probability of the state in question. An illustration may be taken from the simultaneous throwing of two ordinary cubical dice. Suppose that the sum is to be 4 for a throw; then this can be realized by the following three complexions:

First cube shows 1, the second cube shows 3;
First cube shows 2, the second cube shows 2;
First cube shows 3, the second cube shows 1.

The requirement that the sum on the two cubes shall be 2, however, involves but one complexion. Under the circumstances, therefore, the probability of throwing the sum 4 is three times as great as that of throwing the sum 2.
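
A minimal sketch in Python of this dice illustration; the helper name complexions is ours, not the text's:

    from itertools import product

    # Enumerate the equally likely complexions (ordered pairs of faces) of
    # two dice that realize a given sum; each ordered pair is one complexion.
    def complexions(total, dice=2, faces=6):
        return [c for c in product(range(1, faces + 1), repeat=dice)
                if sum(c) == total]

    print(complexions(4))   # [(1, 3), (2, 2), (3, 1)] -- three complexions
    print(complexions(2))   # [(1, 1)]                 -- one complexion
    # The sum 4 is accordingly three times as probable as the sum 2.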

In closing this part of our presentation, we may make what is now an almost obvious remark. The long-lasting difficulty in giving a physical meaning to entropy and the Second Law is due to their intimate dependence on considerations of probability. It is only quite recently that such considerations have attained the dignity of a great working principle in the domain of Physics.

[8]For an example of such permutations see pp. 28 and 61, 62.

[9]LIOUVILLE'S theorem is the criterion for the equal possibility or equal probability of different state distributions.

[10]A happy term, but one not in vogue among English-speaking physicists.

[11]The identity of entropy with the logarithm of this probability W of the state is established by showing that both are equal to the same expression. It seems an easy step from this derivation to BOLTZMANN'S definition of entropy as the "measure of the disorder of the motions in a system of mass points."

[Pg 28]

SECTION C

(1) Existence, Definition, Measure, Relations, Properties, and Scope of Irreversibility and Reversibility.

In establishing the existence of irreversibility, we can use one or both of the two general methods of approaching any physical problem (see Introduction, pp. 2, 3): we can approach by way of the atomic theory or by considering the behavior of aggregates in Nature. Enough has already been said in this presentation of atomic behavior and arrangements to justify the statement that irreversibility is not inherent in the elementary procedures themselves but in their irregular arrangement. The motion of each atom is by itself reversible, but their combined mean effect is to produce something irreversible.[12]

This has been rigorously demonstrated by BOLTZMANN'S H-theorem for molecular physics, and when sufficiently general co-ordinates are substituted it is also available for the other domains of natural events. When we consider the behavior of aggregates we recognize at once a general, empirical law, which has also been called the one physical axiom, namely, that all natural processes are essentially irreversible. When we use this method of approach we confessedly rest entirely on experience, and then it does not make any logical difference whether we start with one particular fact or another, whether we start with a fact itself or its necessary consequence: For instance we may recognize that the universe is permanently different after a frictional event from what it was before, or we may start, as PLANCK does, by putting forward the following proposition:

"It is impossible to construct an engine which will work in [Pg 29] a complete cycle,[13] and produce no effect except the raising of a weight and the cooling of a heat reservoir."[14]

Now up to this time no natural event has contradicted this theorem or its corollaries. The proof for it is cumulative, wholly experiential and therefore exactly like that for the law of conservation of energy.

Returning to irreversibility, the matter for immediate discussion, we premise that it will here clarify and simplify our ideas if we consider all the participating bodies as parts of the system experiencing the contemplated process. It is in this sense that we must understand the statement: Every natural event leaves the universe different from what it was before. Speaking very generally, we may say that in this difference lies what we call irreversibility.

Now irreversibility is what really does exist, everywhere in Nature, and our idea of reversibility is only a very convenient and fruitful fiction; our conception of reversibility must, therefore, ultimately be derived from that of irreversibility.

"A process which can in no way be completely reversed is termed irreversible, all other processes reversible. That a process may be irreversible, it is not sufficient that it cannot be directly reversed. This is the case with many mechanical processes which are not irreversible (See p. 32). The full requirement is, that it be impossible, even with the assistance of all agents in Nature, to restore everywhere the exact initial state when the process has once taken place."

Examples of irreversible processes, which involve only heat and mechanical phenomena, may be grouped in four classes:

(a) The body whose changes of state are considered is in contact with bodies whose temperature differs by a finite amount [Pg 30] from its own. There is here flow of heat from the hotter to the colder body and the process is an irreversible one.

(b) The body experiences resistance from friction which develops heat; it is not possible to effect completely the opposite operation of restoring the whole system to its initial state.

(c) The body expands without at the same time developing an amount of external energy which is exactly equal to the work of its own elastic forces. For example, this occurs when the pressure which a body has to overcome is essentially (i.e., finitely) less than the body's own internal tension. In such a case it is not possible to bring the whole system (of which the body is a part) completely back into its initial state. Illustrations are: steam escaping from a high-pressure boiler, compressed air flowing into a vacuum tank, and a spring suddenly released from its state of high tension.

(d) Two gases at the same pressure and temperature are separated by a partition. When this is suddenly removed, the two gases mix or diffuse. This too is an essentially irreversible process.

Outside of chemical phenomena, we may instance still other examples of irreversible processes: flow of electricity in conductors of finite resistance, emission of heat and light radiation, and decomposition of the atoms of radio-active substances.

"Numerous reversible processes can at least be imagined, as, for instance, those consisting throughout of a succession of states of equilibrium, and therefore directly reversible in all their parts. Further, all perfectly periodic processes, e.g., an ideal pendulum or planetary motion, are reversible, for, at the end of every period the initial state is completely restored. Also, all mechanical processes with absolutely rigid bodies and incompressible liquids, as far as friction can be avoided, are reversible. By the introduction of suitable machines with absolutely unyielding connecting-rods, frictionless joints, and bearings, inextensible belts, etc., it is always possible to work the machine in such a [Pg 31] way as to bring the system completely into its initial state without leaving any change in or out of the machines, for the machines of themselves do not perform any work."

Other examples of such reversible processes are: Free fall in a vacuum, propagation of light and sound waves without absorption and reflection and unchecked electrical oscillations. All the latter processes are either naturally periodic, or they can be made completely reversible by suitable devices so that no sort of change in Nature remains behind; for example, the free fall of a body by utilizing the velocity acquired to bring the body back to its original height, light and sound waves by suitably reflecting them from perfect mirrors.

[12]This would seem to imply the existence of a broader principle: the properties of systems as a whole are not necessarily found in their parts.

[13]Such an engine, if it would work, might be called "perpetual motion of the second kind."

[14]The term perpetual is justified because such an engine would possess the most esteemed feature of perpetual motion—power production free of cost.



(2) Character of Process Decided by the Limiting States

"Since the decision as to whether a particular process is irreversible or reversible depends only on whether the process can in any manner whatsoever be completely reversed or not, the nature of the initial and final states, and not the intermediate steps of the process, entirely settle it. The question is, whether or not it is possible, starting from the final state, to reach the initial one in any way without any other change.... The final state of an irreversible process is evidently in some way discriminate from the initial state, while in reversible processes the two states are in certain respects equivalent.... To discriminate between the two states they must be fully characterized. Besides the chemical constitution of the systems in question, the physical conditions, viz., the state of aggregation, temperature, and pressure in both states, must be known, as is necessary for the application of the First Law."

"Let us consider any process whatsoever occurring in Nature. This conducts all participating bodies from a particular initial condition  to a certain final condition  . The process is either reversible or irreversible, any third possibility being [Pg 32] excluded. But whether it is reversible or irreversible depends solely and only on the constitution of the two states  and  , not upon the other features of the course; after state  has been attained, we must here simply answer the question whether the complete return to  can or cannot be effected in any manner whatsoever. Now if such complete return from  to  is not possible then evidently state  in Nature is somehow distinguished from state  . Nature may be said to prefer state  to state  . Reversible processes are a limiting case; here Nature manifests no preference and the passage from the one to other can take place at pleasure, in either direction. [In the common case of isentropic expansion from  to  , there is no exchange of heat with the outside; external work is performed at the expense of the inner energy of the expanding body. When state  is attained we can effect a complete return to  by compressing isentropically, thus consuming the external work performed on the trip from  to  and restoring the internal energy of the body.]

"Now it becomes a question of finding a physical magnitude whose amount will serve as a general measure of Nature's preference for a state. This must be a magnitude which is directly determined by the state of the contemplated system, without knowing anything of the past history of the system, just as is the case when we deal with the state's energy, volume, etc. This magnitude would possess the property of growing in all irreversible processes, while in all reversible processes it would remain unchanged. The amount of its change in a process would furnish a general measure for the irreversibility of the process."

"Now R. CLAUSIUS really found such a magnitude and called it entropy. Every bodily system possesses in every state a particular entropy, and this entropy designates the preference of nature for the state in question; in all the processes which occur in the system, entropy can only grow, never diminish. If we wish to consider a process in which [Pg 33] said system is subject to influences from without, we must regard the bodies exerting such influences as incorporated with the original system and then the statement will hold in the above given form."

From what has gone before it is evident that the following commonly drawn conclusions are correct:

An irreversible process is a passage from a less probable to a more probable state of the system.

An irreversible process is a passage from a less stable to a more stable state of the system.

An irreversible process is essentially a spontaneous one, inasmuch as once started it will proceed without the help of any external agency.

We have in a general way reached the conclusion that entropy is both the criterion and the measure of irreversibility. But now let us become more specific and consider certain details, namely, the features common to all irreversibility. The property of irreversibility is not inherent in the elementary occurrences themselves, but only in their irregular arrangement. Irreversibility depends only on the statistical property of a system possessing many degrees of freedom, and is therefore essentially based on mean values; in this connection we may repeat an earlier statement: the individual motions of atoms are in themselves reversible, but their result in the aggregate is not.



(3) All the Irreversible Processes Stand or Fall Together

This is proved with the help of the theorem (p. 30) which denies the possibility of perpetual motion of the second kind.[15] The argument is this: take any case in any one of the four classes of irreversible processes given on p. 31. Now if this [Pg 34] selected case is in reality reversible, i.e., if a method were discovered of completely reversing this process and thus leaving no other change whatsoever, then the direct course of the process, combined with this reversed process, would constitute a cyclical process which would effect nothing but the production of work and the absorption of an equivalent amount of heat. But this would be perpetual motion of the second kind, which is denied by the empirical theorem on p. 30. But for the sake of the argument we may just now waive said impossibility; then we would have an engine which, co-operating with any second (so-called) irreversible process, would completely restore the initial state of the whole system without leaving any other change whatsoever. Then under our definition on p. 30 this second process ceases to be irreversible. The same result will obtain for any third, fourth, etc., so that the above proposition is established: "All the irreversible processes stand or fall together." If any one of them is reversible all are reversible.[16]

[15]At this stage we appreciate that any irreversible process is a passage from a state A of low entropy to a state B of high entropy. We may simplify our proof by considering the return passage from B to A to occur in part isothermally and in part isentropically; then external agencies must produce work and absorb an equivalent amount of heat.

[16]With the help of the preceding footnote this argument can be followed through in detail for each of the cases enumerated on p. 31; only the complicated case of diffusion presents any difficulty.



(4) Convenience of the Fiction, the Reversible Processes

A reversible process we have declared to be only an ideal case, a convenient and fruitful fiction which we can imagine by eliminating from an irreversible process one or more of its inevitable accompaniments like friction or heat conduction. But reversible (as well as irreversible) processes have common features. "They resemble each other more than they do any one irreversible process. This is evident from an examination of the differential equations which control them; the differential with respect to time is always of an even order, because the essential sign of time can be reversed. Then too they (in whatever domain of physics they may lie) have the common property that the Principle of Least Action [Pg 35] can represent all of them completely and uniquely determines the sequence of their events." They are useful for theoretical demonstration and for the study of conditions of equilibrium.

There is a certain, limited, incomplete sense in which we say that we can change from one state of equilibrium to another in a reversible manner. For example, we can, considering only the one converting (or intermediate) body, effect said change by a successive use of isentropic and isothermal change. But this ignores all but one of the participating bodies and this is not permissible if we strictly adhere to the true definition of complete reversible action.

We must remember too that no other universal measure of irreversibility exists than entropy. "Dissipation" of energy has been put forward as such a measure, but we know already of two irreversible cases where there is no change of energy, namely, diffusion and expansion of a gas into a vacuum. [Unavailable, distributed, scattered energy are terms which could be used here, free from all objection.]

But of course, the full equivalent of entropy can be substituted as a universal measure of irreversibility. On p. 27 we have pointed out that the number of complexions included in a given state can be defined as the probability W of the state; then, in a footnote, attention is called to the identity of entropy with the logarithm of this probability W, that is, with the logarithm of the number of complexions of the state. This makes entropy a function of the number of complexions, so that one may in this sense be regarded as the equivalent of the other. We may now properly speak of the number of complexions of a state as the universal measure of its irreversibility. The physical meaning of irreversibility becomes apparent when put in this form. The greater the number of complexions included in a state, the more disordered is its elementary condition and the more difficult (more nearly impossible, so to speak) is it to directly so influence the constituents of the whole that they will reverse the sequence of the mean values the aggregate tends of itself to assume. An illustration [Pg 36] will help to make this clear: the irreversible case in which work is converted by friction into heat. "For example, the direct reversal of a frictional process is impossible because this would presuppose the existence of an elementary order among adjacent, mutually interacting molecules. For then it must predominantly be the case that the collisions of each pair of molecules must bear a certain distinguishable character inasmuch as the velocities of two colliding molecules must always depend in a determinate manner on the place where they meet. Only thereby can it be attained that there will result from the collisions predominantly like directed velocities."

The outcome of the whole study of irreversibility is the briefly stated law: "There exists in Nature a quantity which changes always in the same sense in all natural processes."

This boldly asserts the essential one-sidedness of Nature. The proposition stated in this general form may be correct or incorrect; but whichever it may be it will remain so independently of human experimental skill.

SECTION D

(1) The Gradual Development of the Idea that Entropy Depends on Probability

Entropy is difficult to conceive, in that, as it does not directly affect the senses, there is nothing physical to represent it; it cannot be felt like temperature. It has no analogue in the whole of Physics; Zeuner's heat weight will perhaps serve as such for reversible states, but is inadequate for irreversible ones. This is not surprising when we consider the outcome, namely, that it depends on probability considerations.

CLAUSIUS coined the term Entropy from the Greek, from a word meaning transformation; with him the transformation value was equal to the difference between the entropy of the final and initial states. As there is a general expression for entropy, we [Pg 37] can readily write the equivalent of any transformation between two particular states.

Strictly speaking, however, entropy by itself depends only on the state in question, not on any change it may experience, nor on its past history before reaching the state contemplated. Of course, this was appreciated by such a master mind as CLAUSIUS, and, indeed, he defined the entropy as the algebraic sum of the transformations necessary to bring a body into its existing state. Moreover, as the formula for it was in terms of other more or less sensible thermodynamic quantities, its relation to these was at first more readily grasped, could be represented diagrammatically, and had to do duty for the true, but still unknown, physical idea of entropy itself. It was early understood, too, that growth of entropy was closely connected with the degradation or waste of energy; that it was identical with the Second Law. The frequently given, but not always valid, relation dQ = T dS[17] led to entropy being called a factor of energy. But all these were change relations and did not go to the root of the difficulty, as to what constituted the physical nature of unchanged entropy.

Quite early, too, there was a realization of the fact that entropy had somehow a statistical character, that it had to do with mean values only. This was well brought out by the long known, and much quoted, "demon" experiment suggested by MAXWELL, in which a being of superhuman power separated, without doing any work, the colder and hotter particles of a gas, thus effecting an apparent violation of the Second Law. This, to be sure, was getting close to the crux of the whole matter, but still lacked much to give entropy a precise physical meaning. Nevertheless, we see here a notable approach to the fundamental [Pg 38] requirement that entropy must be tied down to the condition of "elementary chaos" (elementare Unordnung).

We have already dwelt somewhat fully on this hypothesis of "elementary chaos."

"It follows from this presentation that the concepts of entropy and temperature in their essence are tied to the condition of "elementare Unordnung." Thus a purely periodic absolute plane wave possesses neither entropy nor temperature because it contains nothing whatever in the way of uncheckable, non-measurable magnitudes, and therefore cannot be "elementar-ungeordnet," just as little as can be the case with the motion of a single rigid atom. When there is [an irregular co-operation of many partial oscillations of different periods, which independently of each other propagate themselves in the different directions of space, or] an irregular, confused, whirring intermingling of many atoms, then (and not till then) is there furnished the preliminary condition for the validity of the hypothesis of "elementare Unordnung and consequently for the existence of entropy and of temperature."

"Now what mechanical or electro-dynamic magnitude represents the entropy of a state? Evidently this magnitude depends in some way on the "Probability" of the state. For because "elementare Unordnung" and the lack of every individual check (or measurement) is of the essence of entropy it follows that only combination or probability considerations can furnish the necessary foothold for the computation of this magnitude. Even the hypothesis of "elementare Unordnung" by itself is essentially a proposition in Probability, for, out of a vast number of equally possible cases, it selects a definite number and declares they do not exist in Nature."

Now since the idea of entropy, and likewise the content of the Second Law, is a universal one, and since, moreover, the theorems of probability possess no less universal significance, we may conjecture (surmise) that the connection between Entropy and Probability will be a very [Pg 39] close one. We therefore place at the head (forefront) of our further presentation the following proposition: "The Entropy of a physical system in a definite condition depends solely on the probability of this state." The permissibility and fruitfulness of this proposition will become manifest later in different cases. A general and rigorous proof of this proposition will not be attempted at this place. Indeed, such an attempt would have no sense here, because without a numerical statement of the probability of a state it could not be tested numerically.

[17]This relation is not a valid one, unless the external work performed by a gas during its change is equal to p dv.



(2) Planck's Formula for the Relation between Entropy and the Number of Complexions

Now we have already seen, from the permutation considerations presented on p. 27, that the Theory of Probabilities leads very directly to the theorem, "The number of complexions included in a given state constitutes the probability W of that state." The next step (omitted here) is to identify the thermodynamically found expression for entropy of any state with the logarithm of its number of complexions.

PLANCK'S formula for entropy S is: S = k log W + const.; here const. is an arbitrary constant without physical significance and can be omitted at pleasure; the numerical value of k in the first term of the second member is the quotient of an energy (expressed in ergs) divided by a temperature (expressed in degrees). This certainly gives a physical definiteness and precision to entropy which leaves nothing to be desired.

PLANCK, in reproducing from probability considerations the dependence of entropy S on probability W, finds the relation S = k log W + const., where the dimensions of S evidently depend on those of the constant k. [Pg 40]

Here log W is BOLTZMANN'S value -H, which always changes in one direction only; k is the universal integration constant, which is the same for a terrestrial as for a cosmical system, and when it is known for one, it is known for the other; when k is known for radiant phenomena it is also known, and is the same, for molecular motions.
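
A short numerical aside, assuming the modern identification of k as the gas constant reckoned per molecule (an identification this text does not itself state):

    # Universal constant k of PLANCK'S formula, in C.G.S. units, computed
    # from the gas constant R and AVOGADRO'S number N_A as k = R / N_A.
    R  = 8.314462618e7       # gas constant, erg per degree per mole
    NA = 6.02214076e23       # molecules per mole
    k  = R / NA
    print(k)                 # ~1.380649e-16 erg per degree: energy/temperature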

There are some general statements which indicate more or less rigorously some of the properties or features of the entropy of a state.

(a) Entropy is a universal measure of the "disorder" in the mass points of a system.

(b) Entropy is a universal measure of the irreversibility of a state and is its criterion as well.

(c) Entropy is a universal measure of nature's preference for the state.

(d) Entropy is a universal measure of the spontaneity with which a state acts when it is free to change.

(e) Entropy of a system can only grow.

(f) Entropy asserts the essential one-sidedness of Nature.

(g) There exists in Nature a magnitude which always changes in the same sense.

(e), (f), and (g) imply change and therefore, strictly speaking, should not be mentioned here but postponed to a later section.

SECTION E

EQUIVALENTS OF CHANGE OF ENTROPY IN MORE OR LESS GENERAL PHYSICAL TERMS

Here we are really considering the Second Law, for change of entropy is the kernel of this law, in fact is identical with it. It will be profitable, however, to view this law in all its many physical aspects. To be sure, in times past it has been accounted a reproach to the Second Law that it should be stated in so many different forms,[18] [Pg 41] but now that we know precisely that it stands for the growth in the number of complexions we can more easily trace the connection between any of these rather vague statements and the present precise definition. As we have in the main reserved physical interpretations to a later section we need here only bear in mind certain general principles of comparison:

Any complete summary of the premises necessary for establishing the inevitable growth of the number of complexions of a system is a valid statement of the second law.

Any general corollary from said growth is a valid statement of the second law.

When instituting any comparison we must keep in mind also the two principal points of view of regarding any physical problem, namely, the view of it in the aggregate and that which sees it in its constituent parts.

While we cannot here sharply separate these two points of view, we have on the whole sought to present first those statements which are based on experience and next those based on the atomic theory.

(1) Growth of entropy is a passage from more to less available energy. By available is here meant energy which we can direct into any required channel. With the growth in the number of complexions we can readily see there is greater inability, on the part of the molecules, for that concerted and co-operative action which is necessary for the putting forth of the energy of a system.

(2) Growth of entropy is a passage from a concentrated to a distributed condition of energy. Energy originally concentrated variously in the system is finally scattered uniformly in said system. In this aggregate [Pg 42] aspect it is a passage from variety to uniformity.

(3) Net growth of entropy in all bodies participating in an occurrence means that the system as a whole has experienced an irreversible change of state. This change is of course in harmony with the first law of energy, but this growth gives additional information as it indicates the direction in which a natural process occurs.

(4) Growth of entropy is from less probable to more probable states.

Growth of entropy is passage to a state more greatly preferred by nature.

Growth of entropy is what obtains whenever a natural process occurs "spontaneously."

Growth of entropy is a passage from less probable to more probable conditions.

All these statements are conspicuously based on the theory of probabilities.

(5) Growth of entropy is a passage from a somewhat regulated to a less regulated state. It represents, in a certain sense, Nature's escape from thralldom.

Growth of entropy is a passage from a somewhat ordered to a less ordered molecular arrangement.

Growth of entropy is an increase in the disorder of a system of mass points.

Growth of entropy corresponds to an increase in the number of molecular complexions.

(6) Finally we give a mathematical concept which covers the whole domain of physics: "Any function whose time variation always has the same sign until a certain state is reached and is then zero, may be called an entropy function."

[18]This need cause no surprise, for it is only very recently that the conviction is gaining ground that the Second Law has no independent significance, but that its full content will only be grasped when its roots are sought in the Theorems of the Calculus of Probabilities.

[Pg 43]

SECTION F

MORE PRECISE AND SPECIFIC STATEMENTS OF THE SECOND LAW

We have here classified these statements in the same way as that followed in the preceding section, when grouping the general equivalents of the Second Law under the head of change of entropy. In making comparisons we must, here as there, bear in mind the following three helpful propositions:

(a) The summary of all the necessary prerequisites (or conditions) for determining entropy may be regarded as a complete and valid statement of the second law.

(b) Any general consequence of any one correct statement of the second law may be regarded as itself a valid and complete statement of the second law.

(c) All cases of irreversibility stand or fall together; if any one of them can be completely reversed all can be so reversed.

In the preceding section we have already given the most precise physical statement of the Second Law, namely, when all the participating bodies of the system are considered, every natural event is marked by an increase in the number of complexions of the system. We have numbered the following statements of the second law, for convenience of reference:

(1) J. W. GIBBS. "The impossibility of an uncompensated decrease in entropy seems to be reduced to an improbability." This of course considers all the participating bodies of the system.

(2) All changes in nature involve a net growth in entropy; when such a change is measured in reversible ways, the growth is indicated by the summation ∑ dQ/T ≥ 0, where the = sign refers to processes which on the whole are completely reversible. Of course it is now thoroughly understood that the latter case is a purely [Pg 44] ideal one, which is really never realized in nature and is only a convenient and fruitful fiction in theoretical demonstrations.
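
A minimal numerical sketch of statement (2); the figures are ours, chosen only for illustration:

    # Heat Q passes irreversibly from a hot body at T1 to a colder body at
    # T2; measuring each change of state along a reversible path, the
    # summation of dQ/T comes out positive -- a net growth of entropy.
    Q, T1, T2 = 1000.0, 400.0, 300.0   # calories; absolute temperatures
    dS = -Q / T1 + Q / T2              # entropy lost by hot, gained by cold
    print(dS)                          # +0.833... calories per degree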

(3) M. PLANCK. "It is not possible to construct a periodically functioning motor which effects nothing more than the lifting of a load and the cooling of a heat reservoir."

The proof of this is purely experimental and cumulative and in this respect is exactly like that for the First Law, the Conservation of Energy, and has exactly the same sort of validity.

(4) Perpetual motion of an isolated system, such as a mechanism with friction, is impossible and not even approximately realizable.

This refers to perpetual motion of the second class, a clear illustration of which is given on p. 8 of Goodenough's Notes on Thermodynamics: "A mechanism with friction is inclosed in a case through which no energy passes. Let the mechanism be started in motion. Because of friction work is converted into heat which remains in the system, since no energy passes through the case. Suppose that the heat thus produced could be completely transformed into work; then this work would be used again to overcome friction and the heat thus produced would be again transformed into work. We should then have perpetual motion in a mechanism with friction without the addition of energy from an external source." This can be shown to be equivalent to, but not identical with, the "perpetual motion of the second kind" touched upon on p. 30; the latter does confessedly draw on external energy and furnishes a surplus of power for use, say, in technical service.

Nominally, such a machine is a case of perpetual motion, but not in the usually accepted sense, for it furnishes no surplus of power; it is the getting of something for nothing, of getting cost-free power, which has always been the attractive feature of so-called perpetual motion. Still this machine is as much at variance with experience as PLANCK'S perpetually working motor of the [Pg 45] second kind. The former may be readily reduced to the latter, for it is easy to conceive of such legitimate modification of the former as will make it only a special case of the latter.

(5) The following statements are by distinguished physicists and had better here be considered as confined to events occurring in closed cycles.

CLAUSIUS. It is impossible for a self-acting machine unaided by any external agency to convey heat from one body to another of higher temperature.

CLERK MAXWELL. It is impossible by the unaided action of natural processes to transform any part of the heat of a body into mechanical work, except by allowing heat to pass from that body to another of lower temperature.

THOMSON. It is impossible by means of inanimate material agency to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of surrounding objects.

(6) The efficiency of a perfect engine is independent of the working fluid.

(7) Waste of energy once incurred cannot be diminished in the universe, or in any part of it which neither takes in nor gives out energy.

We understand here by waste that residual part of heat of which none can be elevated back into work.

The measure of such waste = T0 × ΔS, where T0 = the lowest available temperature and ΔS = the change of entropy in the process. This brings out emphatically that the Second Law is not a law of conservation; it is a law of waste, a law of wasted opportunities for utilizing technically available energy.
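
A brief sketch of the waste measure, with illustrative figures of our own choosing:

    # With the surroundings at the lowest available temperature T0, the
    # waste T0 * dS is heat that can never be elevated back into work.
    T0 = 280.0               # lowest available absolute temperature
    dS = 0.8333              # entropy growth of the process, cal per degree
    print(T0 * dS)           # ~233 calories of technically unavailable energy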

(8) The second law and irreversibility do not depend on any special peculiarity of heat motion, but only on the statistical property of a system possessing an extraordinary number of degrees of freedom.

(9) M. PLANCK. The second law, in its objective physical [Pg 46] form (freed from all anthropomorphism) refers to certain mean values which are formed from a great number of like "chaotic" elements.

(10) When all the participating bodies of the system are considered, every natural event is marked by an increase in the number of complexions of the system. We repeat, this is the most precise physical statement of the second law and covers the whole domain of science.

We will not comment further on these statements at this time, leaving such discussion of their relations to the section on physical interpretations. [Pg 47]

PART II

ANALYTICAL EXPRESSIONS FOR A FEW PRIMARY RELATIONS

At the beginning of this presentation we disclaimed any purpose of giving a rigorous proof for any of the many formulas with which this subject bristles. We propose only to give in some cases an outline of the main steps of the demonstration and merely for the purpose of getting a clearer physical insight into certain states and relations. Pre-eminent in importance is the state of thermal equilibrium (see pp. 19, 52, 53) and we will therefore consider first its main characteristic:

SECTION A

MAXWELL'S LAW OF DISTRIBUTION OF MOLECULAR VELOCITIES

Without giving a full proof of the law we will give the main steps which lead to its analytical statement, in so doing following the presentation given by HANS LORENZ on pp. 526-529 of his "Technische Wärmelehre," and will then point out its main features and consequences.

We suppose the gas to contain in a unit of volume N molecules, each possessing a different velocity and direction. Let there be a system of three co-ordinate axes, x, y, z. A fraction f(u) du of the total number of molecules will possess a velocity component in the x direction whose values lie between u and u + du. The number of molecules which at the same time possess velocity components in the y direction, lying between v and v + dv, will be N f(u) f(v) du dv, since no preference can be given to either the x or the y direction. Similarly and finally the [Pg 48] number of molecules whose velocity co-ordinates concurrently lie between u and u + du, v and v + dv, and w and w + dw will be represented by the product dN = N f(u) f(v) f(w) du dv dw, where the only thing known about the function f is that the sum of the fractions f(u) du extended over all the values of u must = unity, so that ∫ f(u) du = 1, the integral extending from -∞ to +∞.

Now if we suppose all of the velocities of the N molecules to be laid off as vectors from a pole O, the three directions x, y, z will constitute about O a perfectly arbitrary system of co-ordinates in which du dv dw designates a volume element[19] and the velocity c of a molecule is given by c^2 = u^2 + v^2 + w^2. [Pg 49]

Now if we put through the origin O another system of co-ordinates of which one axis coincides with any arbitrarily chosen velocity c, then in this axis the above-found product will be N f(c) f(0) f(0) du dv dw, because the two other co-ordinates (outside of c) of the volume element du dv dw will equal zero and no preference can be given to any direction. Then it can be shown that the form of the function is given by f(u) = a e^(-h u^2), where a, h are integration constants which stand in a certain relation to each other, namely, a = √(h/π).

Further mathematical manipulation eliminates the different velocity directions and gives dN = 4 N √(h^3/π) c^2 e^(-h c^2) dc . . . (5) for the number of molecules possessing absolute velocities between c and c + dc.

This expression (5) is called MAXWELL'S Law of Distribution; it is identical with that found for the probable distribution of error in a great number of observations and is graphically shown by the following figure, with the maximum number of molecules at the velocity c_w = 1/√h. The constant c_w is therefore a velocity from which [Pg 50] most of the molecules differ but little. The development shows that this self-same distribution exists for every straight line that can be drawn in the volume under consideration.

[Figure: curve of MAXWELL'S law of distribution (5), with its maximum at the velocity c_w.]

BOLTZMANN, in his Gas Theorie, has shown that for such a state the "number of complexions" is a maximum, that is, the entropy is then a maximum.

From the preceding expression (5) it follows that the kinetic energy E of the system is E = (1/2) N m C^2, where C^2 is the mean square of the velocity and m the mass of a molecule. Integration gives C^2 = 3/(2 h), and therefore E = 3 N m/(4 h) . . . (7)

It is known that the measure of temperature is the mean kinetic energy of the individual molecule and not simply the mean square of its velocity, and we possess here therefore a perfectly precise [Pg 51] definition of temperature. We see also from (7) that temperature in a particular gas is directly proportional to either C^2 or 1/h.
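
A numerical check of expression (5) and its consequences; the values of N, h, and m are arbitrary illustrative choices:

    import numpy as np

    # The distribution (5) should integrate to N, peak at c_w = 1/sqrt(h),
    # and give a mean square velocity 3/(2h), whence E = 3Nm/(4h) as in (7).
    N, h, m = 1.0, 2.0, 1.0
    c  = np.linspace(0.0, 10.0, 200001)
    dN = 4.0 * N * np.sqrt(h**3 / np.pi) * c**2 * np.exp(-h * c**2)

    print(np.trapz(dN, c))                          # ~1.0, i.e., all N molecules
    print(c[np.argmax(dN)], 1.0 / np.sqrt(h))       # most probable velocity c_w
    mean_sq = np.trapz(c**2 * dN, c) / N
    print(mean_sq, 3.0 / (2.0 * h))                 # mean square velocity C^2
    print(0.5 * N * m * mean_sq, 0.75 * N * m / h)  # kinetic energy E, eq. (7)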

MAXWELL further shows, without any assumption as to the nature of the molecules or the forces acting between them, that the derived law of distribution is valid for any gas mixture, but that it is modified when the gas is exposed to the action of external forces.

BOLTZMANN found (Wien. Akad. Sitzber. LXXII B, 1875, p. 443) for monatomic gases that in spite of the effect of external forces (a) the velocity of any molecule is equally likely to have any direction whatever, (b) the velocity distribution in any element of space is exactly like that in a gas of equal density and temperature upon which no external forces act, the effect of the external forces consisting only in varying the density from place to place as in hydrodynamics.

BOLTZMANN says this "normal" state is permanent (stationary) for given external conditions because magnitude  does not vary; such a normal state has many configurations, but all agree in having same number of complexions.

Also, "MAXWELL'S Velocity Distribution is not a state which assigns to each molecule a particular place (locus) and a particular velocity, which are reached say by the locus and velocity of each molecule asymptotically approaching said assigned locus and velocity. With a finite number of molecules MAXWELL'S state will never be exactly but only approximately realized. MAXWELL'S velocity is not a singular one which is confronted by an immense number of non-Maxwellian velocity distributions. On the contrary, among the immense number of possible velocity distributions by far the greater number possess the characteristics of the MAXWELL velocity distribution."

MAX PLANCK (Festschrift, p. 113) lucidly dwells on thermal equilibrium, entropy and temperature, as follows:

"The mechanical significance of the temperature idea is most closely connected with the mechanical significance of entropy, [Pg 52] for the two are connected by  . By answering one of these questions we at the same time settle the other."

In the earlier days interest was naturally centered in the directly measurable magnitude temperature and entropy appeared as a more complicated idea which was to be derived from the former. Nowadays this relation is rather reversed and the prime question is to first explain entropy mechanically and this will then define temperature. The reason for this change of attitude is that in all such explanatory efforts to present Thermodynamics mechanically and give temperature a complete mechanical definition it is necessary to come back to the peculiarities of "thermal equilibrium." But the full significance of this equilibrium conception is only to be reached from the standpoint of irreversibility. For thermal equilibrium can only be defined as the final state toward which all irreversible processes strive. In this way the question as to temperature leads necessarily to the nature of irreversibility and this in turn is solely founded on the existence of the entropy function. This magnitude is therefore the primary, general conception which is significant for all kinds of states and changes of state, while temperature emerges from this with the help of the special condition of thermal equilibrium, in which condition the entropy attains its maximum.

[19]In MAXWELL'S distribution the molecules are assumed to be uniformly scattered throughout the unit volume; it is the velocities only that are variously distributed in the different elementary regions. To realize the haphazard character (necessary in the Calculus of Probabilities) of the motions of the molecules, we must bear in mind that each of the molecules in the unit volume has a different velocity and direction; here no direction has preference over another, i.e., one direction of a molecule is as likely as another. Here at first we write the expression for the number of molecules whose velocities parallel to the co-ordinate axes are respectively confined between the velocity limits u and u + du, v and v + dv, w and w + dw. To find the number of molecules thus limited the procedure given above is essentially as follows: Expressed as a fraction, f(u) du = probability of velocities parallel to the x axis having values between u and u + du; and expressed as a number, N f(u) du = number of molecules having such velocities between the assigned limits; similarly, f(v) dv = probability of velocities parallel to the y axis having values between v and v + dv. As these are two independent sets of velocities, the probability of their concurrence is the product f(u) f(v) du dv, and the number of molecules thus concurring is equal to N f(u) f(v) du dv. Similarly, the number of molecules concurrently possessing velocities parallel to each of the three axes is N f(u) f(v) f(w) du dv dw. The problem is simpler in this Maxwellian case than in the more general case of any state of the body in which there is an unequal distribution in space of the molecules.

SECTION B

SIMPLE ANALYTICAL EXPRESSION FOR DEPENDENCE OF ENTROPY ON PROBABILITY

Here also we will dispense with a full proof and content ourselves with the main steps which lead to the desired expression. We will follow PLANCK'S elegant presentation on pp. 136-148 of his Wärmestrahlung. On p. 22 we have dwelt on the usefulness and the necessity for the probability idea in general physics and in this particular case. We can start, therefore, with PLANCK'S theorem: [Pg 53]

"The entropy of a physical system in a particular state depends solely on the probability of this state."

No rigorous proof is here attempted, nor any numerical computations; for present purposes it will suffice to fix in a general way the kind of dependence of entropy on probability.

Let S designate the entropy and W the probability of a physical system in a particular state; then the above theorem enunciates that S = f(W), . . . (8) where f signifies a universal function of the argument W. Now, however W may be defined, we can certainly infer from the Calculus of Probabilities that the probability of a system composed of two entirely independent systems is equal to the product of the separate probabilities of the individual systems. For example, if we take for the first system any terrestrial body whatever and for the second system any hollow space on Sirius which is traversed by radiations, then the probability W, that simultaneously the terrestrial body will be in a particular state 1 and said radiation in a particular state 2, will be given by W = W1 · W2, . . . (9) where W1, W2 respectively represent the separate probabilities of said two states. Now let S1, S2 respectively represent the entropies of the separate systems corresponding to said states 1 and 2; then, according to Eq. (8), we have S1 = f(W1) and S2 = f(W2). But, according to the Second Law of Thermodynamics, the total entropy of the two independent systems is S = S1 + S2, and consequently, according to (8) and (9), f(W1 · W2) = f(W1) + f(W2). [Pg 54]

From this functional equation f may be determined. After successive differentiation there is obtained a differential equation of the second order, and its general integral is S = k log W + const., . . . (10) which determines the general dependence of entropy on probability. The universal integration constant k is the same for a terrestrial system as for a cosmical system, and when its numerical value is known for either system it will be known for the other; indeed, this constant k is the same for physically unlike systems, as above, where concurrence between a molecular and a radiating system was assumed. The last, additive, constant has no physical significance, because entropy has an arbitrary additive constant, and therefore this constant in (10) may be omitted at pleasure.
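
A numerical look at the functional equation just used; the candidate functions and tolerances are our own illustrative choices:

    import math, random

    # f(W1*W2) = f(W1) + f(W2): the logarithm satisfies this identity, while
    # other simple candidates do not -- which is why the general integral of
    # the resulting differential equation is S = k log W + const.
    candidates = {
        "k*log W": lambda W: 1.4e-16 * math.log(W),
        "k*W":     lambda W: 1.4e-16 * W,
        "k*W^2":   lambda W: 1.4e-16 * W * W,
    }
    pairs = [(random.uniform(2.0, 1e6), random.uniform(2.0, 1e6))
             for _ in range(100)]
    for name, f in candidates.items():
        additive = all(abs(f(a * b) - (f(a) + f(b))) <= 1e-9 * abs(f(a * b))
                       for a, b in pairs)
        print(name, additive)    # only the logarithmic form is additive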

Relation (10) contains a general method of computing the entropy S from probability considerations. But the relation becomes of practical value only when the magnitude W of the probability of a system for a certain state can be given numerically. The most general and precise definition of this magnitude is an important physical problem and first of all demands closer insight into the details of what constitutes the "state" of a physical system. [This has been adequately done in the earlier part of this presentation. Later, on pp. 27, 28, permutation considerations led us to define the probability W of a state as the number of complexions included in the given state.] [Pg 55]

SECTION C

DETERMINATION OF A PRECISE, NUMERICAL, EXPRESSION FOR THE ENTROPY OF ANY PHYSICAL CONFIGURATION

PLANCK modestly says that everything essential in the determination of this expression has been done by L. BOLTZMANN, in his wide range of physical investigation. PLANCK'S discussion, however, is so compact and lucid that it is best suited for our purpose. Keeping this purpose in mind we will here also abbreviate by dispensing with parts of PLANCK'S fuller proof and content ourselves with the main steps which lead to the desired expression. These main steps are as follows:

(a) Determination of the general expression for the number of complexions W of a given physical configuration of a known macroscopic state;

(b) Determination of the general expression for the Entropy S of a given physical configuration of a known macroscopic state;

(c) Special case of (b), namely, expression for the Entropy S of the state of thermal equilibrium of a monatomic gas;

(d) Confirmation, by equating this value of S with that found thermodynamically and then deriving well-known results.

(e) PLANCK'S conversion of the expressions of (b) and (c) into more precise ones by finding numerical values of the constant k in C. G. S. units and in F. P. S. units;

(f) Determination of the dimensions of the universal constant k and therefore also of entropy in general. [Pg 56]



Step a
Determination of the Number of Complexions of a given Physical Configuration of a Known Macrostate

We will, for simplicity's sake, consider here an ideal gas in a given macro-state and consisting of N like, monatomic molecules. By generalizing the meaning of our co-ordinates, the following presentation could be made equally applicable to the more general case of Physics contemplated under this heading.

Of course we must here have clearly in mind what is meant by the state of a gas. For this we may refer to p. 10 (lines 13 to 24) and to p. 19 (lines 8 to 24). The conditions there imposed are all fulfilled if we suppose the state given in such a way that we know: (1) The number of molecules in any macroscopically small space (volume element); and (2) the number of molecules which lie in a certain macroscopically small velocity region (soon to be more fully described). To have the Calculus of Probabilities applicable, each of the tiny regions contemplated under (1) and (2) must still contain a large number of molecules and their motions must besides have all the features of haphazard detailed on pp. 25, 26; all this is necessary in order that the contemplated motions may possess all the characteristics of "elementary chaos."

Before proceeding further on our main line, we will define more fully what is meant by the two elementary regions in which lie respectively the molecules and their velocity ends. After this has been done we will, for convenience, combine these two regions into a fictitious elementary region, say, a six-dimensional one.

First there is the volume element dσ = dx dy dz, in which any molecule having co-ordinates lying between x and x + dx, y and y + dy, z and z + dz is located; this element can be conceived as a parallelopipedon [Pg 57] whose edges are parallel to the co-ordinate axes; this is the simplest of the elementary regions here to be considered. To conceive of the elementary region dτ containing the velocity ends of the molecules, let us suppose any origin O for velocities in a unit volume and from this as a pole lay off as vectors the molecular velocities lying between the limits u and u + du, v and v + dv, w and w + dw, where u, v, w are the components of said velocities parallel to the respective co-ordinate axes. Then, under the velocity limitations imposed, the end of the velocity of each such molecule will lie in the elementary parallelopiped dτ = du dv dw, one vertex of this parallelopiped having of course the co-ordinates u, v, w. This parallelopiped can be regarded as a constructed volume within which the velocity end must lie. We have therefore here two elementary volumes dσ and dτ, which are independent of each other. Now let us remember that the probability of any properly endowed molecule being found in one of these volumes is in each case measured by the number of molecules belonging or corresponding to the volume considered. Assuming, for the moment, an equal distribution of molecules and velocities throughout the whole volume, we may say that the number of molecules occurring in each of the said elementary volumes is proportional to their respective sizes; this is here equivalent to saying that the probability of any molecule thus occurring in said elementary volumes is proportional to their respective sizes. Having stated the probability of each contemplated occurrence, we can now say the probability of these two events concurring is equal to the product of the probabilities of said two separate occurrences. Moreover, as the probability of each occurrence is proportional to the [Pg 58] size of its own elementary volume, the product of said probabilities will likewise be proportional to the product dσ · dτ of the two elementary volumes. Here dσ · dτ can be regarded as a sort of fictitious volume or region, constructed, say, in a six-dimensional space.[20]

The extent of such an elementary region is very minute in comparison with the total space under consideration, but still it must be conceived as sufficiently large to embrace many molecules, otherwise its state would not be one of "elementary chaos." On account of the equivalence here of probability and number of concurring molecules, we may for the present say that the number of the latter is proportional to the magnitude of this elementary region dσ · dτ. But before we proceed further this last statement must be subjected to a correction, for we temporarily assumed above that there was an equal distribution of molecules and velocities throughout the whole volume. Now at the start, in defining the contemplated state, it was distinctly announced that there was an unequal distribution of such elementary conditions, the law of their distribution being given by the known number of molecules in each elementary volume dσ and in each constructed elementary velocity volume dτ. This correction is effected by the introduction of a finite proportionality factor, f, which can be any given function of the location and velocity co-ordinates, so long as it fulfills the one condition (put in abbreviated form) ∑ f · dσ dτ = N . . . (12), where the summation extends over all the elementary regions and N is the whole number of molecules.
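
A rough sketch in Python of how the occupation numbers of equal six-dimensional elementary regions might be tallied; the molecule count, cell grid, and sampling are illustrative assumptions of ours, not the text's:

    import numpy as np

    # Bin molecules into equal six-dimensional elementary regions (a space
    # cell d_sigma times a velocity cell d_tau); the occupation numbers of
    # the cells are what define the state of the gas.
    rng = np.random.default_rng(0)
    n_mol = 100000
    pos = rng.uniform(0.0, 1.0, (n_mol, 3))    # co-ordinates in a unit volume
    vel = rng.normal(0.0, 1.0, (n_mol, 3))     # haphazard velocity components

    pos_edge, vel_edge = [0.5], [0.0]          # 2 cells per axis, 2^6 cells
    idx = [np.digitize(pos[:, i], pos_edge) for i in range(3)] + \
          [np.digitize(vel[:, i], vel_edge) for i in range(3)]
    cells = np.ravel_multi_index(idx, (2,) * 6)
    occupation = np.bincount(cells, minlength=2 ** 6)
    print(occupation)    # the numbers of molecules in each elementary region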

[Pg 59]

Strictly speaking, the expression $d\sigma\,d\tau$ for the fictitious elementary region, formed by the product of $d\sigma$ and the constructed-volume element $d\tau$, should be replaced by the expression $dx\,dy\,dz\,d(m\xi)\,d(m\eta)\,d(m\zeta)$, where $m$ is the mass of a molecule. The reason for this substitution is found in the fact that the magnitude of the constructed-volume element $d\tau$ varies with time, owing to the variation of velocities effected by molecular collisions. Now this variation of magnitude is not permissible with the probability considerations which here obtain. For the probability of a state which necessarily follows from another state must be equal to that of the latter. As the momenta after collision are the same as before collision, we have in the momenta co-ordinates which do not vary with time as their constituent velocities do. Therefore if we substitute in (12) for the velocities $\xi$, $\eta$, $\zeta$ their corresponding momenta $m\xi$, $m\eta$, $m\zeta$, the variation with time of the constructed volume will cease and the objection cited will no longer be a valid one.

Now let us take up the determination of the number of complexions $W$ in the given state. For this purpose think of this whole state as represented by the sum total of all these equal elementary regions $d\sigma\,d\tau$; for convenience of reference let us call this whole state the "state-region." The probability that a particular molecule will belong to a particular elementary region is equally great for all the elementary regions. Let $P$ represent the number of these equal elementary regions. Now we will proceed with the help of a parallel case. Let us think of as many dice ($N$) as there are molecules and let each die be provided with $P$ faces. On each of these faces we will write in their order the digits 1, 2, 3, ... $P$, so that each of the $P$ faces will designate a particular elementary region. Then each throw of the dice will result in representing a particular state of the gas, because the number of dice which show uppermost a particular digit will give the number of molecules belonging to the elementary region represented by said digit. In this parallel case each die is equally likely to show up any one of the digits 1 to $P$, corresponding [Pg 60] to the circumstance that each individual molecule is equally likely to belong to any one of the elementary regions. The desired probability $W$ of the given state of the molecules corresponds therefore to the number of different kinds of throws (complexions) by which the given distribution can be realized. For example, if we take $N = 10$ molecules (dice) and $P$ elementary regions (dice faces), and assume that the state is so given that it is represented by the numbers $N_1, N_2, \ldots N_P$ of molecules in the several regions, then this state can be realized by one throw, in which the 10 dice show a row of digits wherein each digit $i$ occurs just $N_i$ times.

Under each of the 10 dice stands the digit shown uppermost in the throw, and the row of digits so written down, which we will designate by (15), completely records the throw.

In like manner the same state can be realized by many other such complexions. The desired number of all possible complexions can be found by considering the digit row designated above by (15). For, since the number of molecules (dice) is given, [Pg 61] the digit row will have a particular number of places ($N = 10$). Besides, since the number of molecules belonging to each elementary space is given, each digit will occur equally often in the row in all permissible complexions. Moreover every change in digit arrangement effects a new complexion. The number of the possible complexions, or the probability $W$ of the given state, is therefore equal, under the conditions specified, to the possible "permutations with repetition."[21] In the simple example chosen we have for such permutations, according to a known formula, $$W = \frac{10!}{N_1!\,N_2! \cdots N_P!}.$$ Consequently, in the general case, we have $$W = \frac{N!}{\prod (N_i!)}, \tag{16}$$ where the sign $\prod$ signifies the product extended over all the $P$ elementary regions.
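The counting expressed by formula (16) is easily verified numerically. The following minimal sketch in the Python language, assuming a small distribution of our own choosing (six dice over four regions), computes (16) directly and confirms the count by enumerating every possible throw:

    # Number of complexions of a given state, by formula (16):
    # W = N! / (N1! N2! ... NP!), checked by direct enumeration.
    from math import factorial
    from itertools import product

    def complexions(counts):
        # counts[i] = number of dice showing digit i+1 (molecules in region i+1)
        w = factorial(sum(counts))
        for c in counts:
            w //= factorial(c)
        return w

    def brute_force(counts):
        # Enumerate all throws of N dice with P faces; count the throws
        # realizing the given distribution (feasible only for tiny N and P).
        n, p = sum(counts), len(counts)
        return sum(
            1
            for throw in product(range(p), repeat=n)
            if all(throw.count(i) == counts[i] for i in range(p))
        )

    state = (3, 1, 0, 2)          # an assumed state: 6 dice over 4 regions
    print(complexions(state))     # 60
    print(brute_force(state))     # 60, agreeing with formula (16)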

The result contained in (16) is equally true for any other physical system, say, one involving radiant energy; for the conditions and the variables are similar to those of the molecular system just employed. The chosen model, the dice system, which served as an easily conceived parallel case, would be equally serviceable in dealing with the elements of radiation.

[20]Such a fictitious space does not occur in the proof of MAXWELL'S distribution, because there the conditions are simpler. See footnote to p. 49.

[21]Compare with pp. 27 and 28 where this permutation process is discussed somewhat fully.

[Pg 62]



Step b
Determination of a General Expression for the Entropy S of any given Natural State

This step is an easy one. We have in Eq. (10) the relation $S = k \log W + \text{const.}$ expressing the universal dependence of entropy $S$ on probability $W$. Substituting and writing out the logarithm of the quotient given in (16), we have $$S = k\left(\log N! - \sum \log N_i!\right). \tag{17}$$

The summation $\sum$ must be extended over all the elementary regions $d\sigma\,d\tau$. With the help of STIRLING'S formula, and remembering that both the total number $N$ of molecules and the number $P$ of elementary regions are constant for all changes of state, the above expression (17) is reduced to the form $$S = -k \sum f \log f \, d\sigma\,d\tau + \text{const.} \tag{18}$$ This magnitude is numerically the same as BOLTZMANN'S function $-kH$, for which BOLTZMANN proved that it changed in a one-sided way in all changes of state. We must bear in mind too that function $f$ represents, for every state of the gas, the given space and velocity distribution of the gas molecules. The permanent, stationary state of the gas known as thermal equilibrium is only a special case of the general case (18), this special case being widely known as MAXWELL'S Law of Distribution of Velocities.
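The passage from (17) to (18) by STIRLING'S formula may itself be tried numerically. The following sketch in Python, with an assumed occupation list of our own choosing, compares the exact value of $\log W$ with its Stirling form; for large occupation numbers the two nearly agree:

    # Exact log W of Eq. (17) against the Stirling form behind Eq. (18),
    # both expressed in units of the universal constant k.
    from math import lgamma, log

    def logW_exact(counts):
        # log N! - sum log Ni!, computed through the log-gamma function
        n = sum(counts)
        return lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)

    def logW_stirling(counts):
        # -sum Ni log(Ni/N): the "f log f" form underlying Eq. (18)
        n = sum(counts)
        return -sum(c * log(c / n) for c in counts if c > 0)

    occupations = [400, 300, 200, 100]    # assumed molecules per region
    print(logW_exact(occupations))        # ~1269.8
    print(logW_stirling(occupations))     # ~1279.9: close, and relatively
                                          # closer still as the Ni grow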



Step c
Special Case of (b), Namely, Determination of Entropy S for the Thermal Equilibrium of a Monatomic Gas

This case PLANCK derives very easily from the general case represented by (18). As the desired result has already been found in another way in pp. 48-53 when dealing with MAXWELL'S [Pg 63] Law (of distribution of molecular velocities), we will not repeat PLANCK'S derivation of the law from (18). It will suffice here to give the results: The law of distribution is given by function $$f = \alpha\,e^{-\beta(\xi^2+\eta^2+\zeta^2)}, \tag{19}$$ where $\alpha$ and $\beta$ are constants, determined by the total number of molecules and by $E$, the total energy. As this expression for function $f$ is free from all location co-ordinates $x$, $y$, $z$, we see that this state of thermal equilibrium is independent of these space co-ordinates and conclude that in this state the molecules are uniformly distributed in space; only the velocities are variously distributed, all of which accords with the earlier presentation. Substituting the results of (19) in the general equation (18) there results for the entropy S of the state of equilibrium of a monatomic gas $$S = kN\left(\log V + \tfrac{3}{2}\log T\right) + \text{const.} \tag{20}$$ To make Eq. (20) practically serviceable we need to know the constants $k$ and $N$, and they will be found later on.



Step d
Confirmation, by Equating this Probability Value of S with that found Thermodynamically and Securing well-known Results

We know from Thermodynamics that the change of entropy is defined in a perfectly general way (for physical changes)[22] by $$dS = \frac{dU + p\,dV}{T}. \tag{21}$$

Deriving the partial differential coefficients and making use of (20), there follows: $$pV = nRT, \qquad U = \tfrac{3}{2}\,nRT, \tag{22}$$ [Pg 64] where $n$ = number of gram-molecules (referred to O = 16) and $R$ = absolute gas constant [1545 in F.P.S. system]. Here the first of Eqs. (22) represents the combined laws of BOYLE, GAY-LUSSAC, and AVOGADRO. We get besides, from the equating of (20) and (21), the additional relations for the specific heats, $$c_v = \frac{3}{2}\,\frac{R}{mJ}, \qquad c_p = \frac{5}{2}\,\frac{R}{mJ},$$ where $m$ is the molecular weight and the mechanical equivalent $J = 4.19\times10^7$ ergs per calorie in the C.G.S. system.

From this follows $$\frac{c_p}{c_v} = \frac{5}{3},$$ as is known for monatomic gases.

Furthermore, we find for the mean kinetic energy $\bar{L}$ of a molecule $$\bar{L} = \tfrac{3}{2}\,kT. \tag{25}$$

We also have, for the mean square of the molecular velocity, $$\overline{c^2} = \frac{3kT}{m}.$$

With the help of the specific heats and the characteristic equation of the gas, the whole thermodynamic behavior of the gas is disclosed. All this has resulted from the identification of the mechanical and thermodynamic expressions for entropy and is an indication of the fruitfulness of the method pursued. PLANCK also shows that this method leads to the finding of results heretofore unknown.

[22]This differential equation is valid only for changes of temperature and volume of the body but not for its changes of mass and of chemical composition, for in defining entropy nothing was said of these latter changes.

[Pg 65]



Step e
Conversion of the General Expressions in (b) and (c) into more Precise ones by Finding and Inserting the Numerical Value of the Universal Constant k; Some of the Results

From the consideration of certain phenomena of radiation PLANCK found $$k = 1.346\times10^{-16} \text{ erg per degree centigrade}, \tag{27}$$ the two radiation constants entering its computation being found by experiment, while the remaining factors are exactly known values, mathematically derived. The present accuracy of (27) therefore rests on the accuracy of the experiments from which said radiation constants were found. In discussing Eq. (10) it was pointed out that $k$ was a universal constant, applicable to all physical systems, and consequently it may be used for the molecular configurations mainly considered in this presentation. But before introducing the numerical value of $k$ in the general expressions contained under headings $b$ and $c$, we will add other numerical values of interest.

PLANCK gives $2.76\times10^{19}$ = number of molecules in 1 ccm. of an ideal gas at freezing-point ($0°$ C.) and atmospheric pressure; he also gives $6.175\times10^{23}$ = number of molecules per $m$ grams, where $m$ is the molecular weight; the corresponding numbers in English units are, approximately, $7.8\times10^{23}$ = number of molecules in one cubic foot of an ideal gas and $2.8\times10^{26} \div m$ = number of molecules in one pound of an ideal gas. Assuming air to be an ideal gas and its "apparent" molecular weight about 28.88, the number of molecules [Pg 66] in one pound of air would be about $9.7\times10^{24}$.
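The English values follow from the metric ones by mere unit conversion, as the following Python sketch shows; PLANCK'S constants are assumed as quoted above, and the conversion factors are the usual ones:

    # Converting Planck's molecular counts into English units.
    N_CCM = 2.76e19       # molecules per ccm at 0 deg C. and 1 atm. (assumed)
    N_MOL = 6.175e23      # molecules per gram-molecule (assumed, Planck)

    CCM_PER_CU_FT = 28316.8   # cubic centimetres in one cubic foot
    G_PER_LB      = 453.59    # grams in one pound
    M_AIR         = 28.88     # "apparent" molecular weight of air

    per_cu_ft  = N_CCM * CCM_PER_CU_FT      # molecules in one cubic foot
    per_lb_air = N_MOL * G_PER_LB / M_AIR   # molecules in one pound of air
    print('%.2e' % per_cu_ft)               # ~7.8e23
    print('%.2e' % per_lb_air)              # ~9.7e24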

Substituting the numerical value of universal constant  in Eq. (10) we get Eq. (28).

For the C. G. S. system, the Entropy of any natural state, Eq. (28), is $$S = 1.346\times10^{-16}\,\log W \quad \text{(erg per degree centigrade)}.$$

For the F. P. S. system, the Entropy of any natural state, Eq. (28), is the same expression with the coefficient converted into foot-pounds per degree Fahrenheit.

To each of these may be added an arbitrary constant. In Eq. (20) we may substitute directly the equivalent $nR$ of the product $kN$, found from Eq. (22), and then get for the entropy $S$ of a monatomic gas in the state of thermal equilibrium $$S = nR\left(\log V + \tfrac{3}{2}\log T\right) + \text{const.}$$

When the volume $V$ is known we can now readily find $N$ and then the product $kN$ numerically, and place this number as a coefficient in Eq. (20).



Step f
Determination of the Dimensions of k or of the Entropy S

It is at once evident from an inspection of the perfectly general Eq. (10) that the dimensions of Entropy $S$ depend solely on those of the universal constant $k$. The relation given in Eq. (21) shows at once that the dimensions of $S$ depend upon the quotient found by dividing energy by temperature, and the relations given in Eqs. (22) and (25) show that the dimensions of constant $k$ also depend on this same quotient. The dimensions of Entropy $S$ and of constant $k$ are therefore identical, and this might suffice to show that here neither $S$ nor $k$ is to be regarded as a mere ratio or abstract [Pg 67] number. A word further in this connection may, however, be helpful. In reversible processes we have the well-known relation $dS = \frac{dQ}{T}$. To simplify matters, let us suppose heat $Q$ supplied while volume is kept constant; then $$S_2 - S_1 = \int_1^2 \frac{dQ}{T}, \quad \text{or, for a sensibly constant temperature,} \quad S_2 - S_1 = \frac{Q}{T}. \tag{30}$$ Here Entropy $S$ has the same dimensions as the quotient $\frac{Q}{T}$; now in the relation $Q = T(S_2 - S_1)$, if we regard entropy as an abstract number, then, in order that the equation shall be homogeneous, the factor $T$ must represent heat energy like $Q$, and this is sometimes done; in such case (if $T$ instead retains its ordinary meaning) the quotient $\frac{Q}{T}$ in Eq. (30) is no longer a mere ratio or abstract number, but a quotient of the dimensions of energy divided by temperature. On the other hand, if entropy be regarded as of the dimensions of the quotient of energy divided by temperature, then we may consider $T$ in (30) as retaining its ordinary meaning and the quotient $\frac{Q}{T}$ as of the same dimensions as $S$. When an absolute system of units is employed, which possesses as one of its features the expression of temperature in units of energy, then $S$, $k$ and $\frac{Q}{T}$ will all be mere ratios or abstract numbers.[23]

[23]See C. V. BURTON'S article in Philosophical Transactions, Vol. 23-24, 1887.

[Pg 68]

PART III

PHYSICAL INTERPRETATIONS

SECTION A

OF THE SIMPLE REVERSIBLE OPERATIONS IN THERMODYNAMICS

Change under Constant Volume

We found above that the entropy of a state was precisely defined in a physical way by the number of complexions of that state. Now let us see what happens to this number of complexions when an ideal gas experiences some of the simpler changes, of a reversible (non-cyclical) character. We will begin with the case in which the volume of the gas remains constant while its temperature rises, the final state of the gas having a higher temperature than its initial state.

We see from Eq. (7), p. 51, that with the rise of temperature the mean kinetic energy of the molecules grows, and from Eq. (4), p. 50, that the coefficient of the law of distribution diminishes. MAXWELL'S Law, given by Eq. (5), p. 50, then shows for a given velocity $c$ that the number $n$ of molecules possessing the given velocity is less in the final state than it was in the initial state, and as the total number $N$ of molecules in the gas is unchanged, there will be a greater variety of velocities in the final state. This makes the number of possible permutations greater in the final state, thus increasing the number of complexions; consequently, as entropy varies with the logarithm of the number of complexions, we see that the entropy of the final state is greater than in the initial state, and this agrees with experience. [Pg 69]
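The growth may be made tangible by a rough numerical sketch in Python, using the bracketed form of Eq. (20) for a monatomic gas; the sample (the molecules of one cubic centimetre warmed from 273 to 373 degrees absolute) is merely assumed:

    # Entropy growth at constant volume, by Eq. (20):
    # S = kN (log V + 3/2 log T) + const.
    from math import log

    k = 1.346e-16          # erg per degree, the universal constant
    N = 2.76e19            # assumed sample: the molecules in 1 ccm

    def S(V, T):
        return k * N * (log(V) + 1.5 * log(T))   # variable part of Eq. (20)

    dS = S(1.0, 373.0) - S(1.0, 273.0)   # volume fixed, temperature raised
    print(dS)        # ~1.7e3 erg per degree: positive, as experience requires
    print(dS / k)    # = log(W2/W1): the complexions grow enormously in number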



Isobaric Change

Next we interpret how the number of complexions is affected by isobaric change during a reversible process, again assuming that the temperature in the final state is greater than in the initial one. Here the steps and the conclusion are exactly the same as in the preceding case. In both cases just the opposite result is reached when there is a fall in temperature.

As the $pV$ diagram contains the co-ordinates $p$ and $V$, and represents mainly the mechanical changes in the body under consideration, we can, by suitable combination, similarly interpret any other reversible change of state represented in this $pV$ diagram.



Isothermal Change

However, because of its general importance and because of its bearing on the temperature-entropy diagram, we will here also tell, in the same physical terms, what happens when our ideal gas undergoes isothermal change with increase of volume. As the temperature in the final state is equal to that in the initial one, the mean kinetic energy of the molecules does not change, and therefore the law of distribution does not change, nor (see Eq. (5), p. 50) does the number $n$ of molecules possessing the velocity $c$ change. The variety of velocities in the final state is therefore the same as in the initial state and does not at all contribute to that necessary increase in the number of complexions (configurations) for which we are looking.

The direction of the velocity of a molecule would be another variety element, but as the final volume evidently possesses as many velocity directions as the initial volume, this element or co-ordinate will not contribute to increased complexity in the final state. But, as the volume has increased, the final state will contain more unit volumes (and these can be taken as small [Pg 70] as we please) than the initial state. As it is here equally likely that a particular molecule will be found in any one of these unit volumes, it is evident that the increase of volume will add increased variety to the location or configuration of the molecules and, by indulging in the swapping of places inherent in the production of complexions, we see that said increment of volume will make the number of complexions in the final state greater than in the initial state, which in turn means that the entropy in the final state is also greater. This accords with experience, but it can also be seen from the formula $$S = kN\left(\log V + \tfrac{3}{2}\log T\right) + \text{const.}, \tag{31}$$ derived by PLANCK (p. 63 of Vorl. ü. Theor. Physik), from probability considerations, for the state of thermal equilibrium. Here $k$ is the universal constant (see p. 66) and the other terms have the same meaning as before.
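In figures: for an isothermal doubling of volume the bracket of Eq. (31) grows by $\log 2$, so that $W$ is multiplied by $2^N$. A minimal Python sketch, with the same assumed sample as before:

    # Isothermal increase of volume, by Eq. (31): dS = kN log(V2/V1).
    from math import log

    k, N = 1.346e-16, 2.76e19     # assumed sample as before
    dS = k * N * log(2.0)         # doubling the volume at constant T
    print(dS)                     # ~2.6e3 erg per degree
    print(dS / (k * log(2.0)))    # = N: one factor of 2 in W per molecule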



Isentropic Change

The last reversible process, to be here physically interpreted, is isentropic change from the initial state of thermal equilibrium to its final state. Evidently only the physical elements underlying the bracketed term in Eq. (31) need to be considered.

As we are considering isentropic change ($dS = 0$), it does not make any difference whether on the one hand we think of this isentropic change as accompanied by an increase in temperature and decrease in volume, or on the other hand think of said change as taking place with decrease of temperature and increase of volume. Suppose we assume the latter kind of change. Then from what has preceded we know that increase of volume by itself would increase the number of complexions of the final state; also, from what has gone before, we know that the drop in temperature by itself will lead to decrease in the number of complexions in the final state. These two influences acting [Pg 71] simultaneously therefore tend to neutralize each other and if they exist in the proper ratio, derivable from the bracketed quantity in Eq. (31), they will completely balance each other and produce no change whatever in the number of complexions while passing from the initial to the final state of equilibrium, i.e., will produce no change whatever in the entropy of the gas under consideration. In isentropic change Nature has no preference for its various states.
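The exact balance may be exhibited numerically: along the path $T^{3/2}V = \text{const.}$ the bracket of Eq. (31) is unchanged. A short Python sketch with assumed numbers:

    # Isentropic change: volume up, temperature down, bracket unchanged.
    from math import log

    def bracket(V, T):
        return log(V) + 1.5 * log(T)      # the bracketed term of Eq. (31)

    V1, T1 = 1.0, 300.0                   # assumed initial state
    V2 = 2.0                              # volume doubled ...
    T2 = T1 * (V1 / V2) ** (2.0 / 3.0)    # ... temperature drops to match
    print(bracket(V1, T1))                # the two brackets agree:
    print(bracket(V2, T2))                # dS = 0, no change in complexions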

The temperature-entropy diagram considers mainly thermal changes, and as we have considered the influence of both of its co-ordinates in the number of complexions, we can ascertain by proper combination, for any reversible change of state, the corresponding character of the change in the number of complexions. It is evident, too, that in the diagram any reversible change of state is equivalent, so far as the change of entropy in the one body is concerned, to an isentropic change combined with an isothermal change, the latter only affecting the result, so far as change in number of complexions is concerned.

SECTION B

OF THE FUNDAMENTALLY IRREVERSIBLE PROCESSES

If we consider only heat and mechanical phenomena and do not include electrical occurrences, the irreversible processes may be grouped in four classes:

(a) The body whose changes of state are considered is in contact with one or more bodies whose temperatures differ by a finite amount from its own. There is here flow of heat from hot to cold and the process is an irreversible one.

(b) When the body experiences friction which develops heat it is not possible to effect completely the opposite operation.

(c) The third group includes those changes of state in which a body expands without at the same time developing an amount of external energy which is exactly equal to the work of its elastic [Pg 72] forces. For example this occurs when the pressure which a body has to overcome is essentially (that is, finitely) less than the body's own internal tension. In such a case it is not possible to bring said body back to its initial state by a completely opposite procedure. Examples of this group are: steam escaping from a high-pressure boiler, compressed air flowing into a vacuum tank and a spring suddenly released from its state of high tension.

(d) Suppose two gases existing at the same pressure and temperature are on opposite sides of a partition; when the partition is quickly removed the two gases will diffuse or mix. These gases will not unmix of themselves and the diffusion process is an irreversible one and is somewhat like the process considered under (c).

The foregoing facts and propositions have in the main already been stated in this presentation and it will be profitable to make comparisons with the definition of irreversible and reversible events given on p. 30 and with the examples on pp. 31, 32.



HEAT CONDUCTION

The group under head (a) represents the irreversible processes which perhaps occur most often, namely, the direct passage of heat, by conduction or radiation, from a hot body to a cold body, here say from a hot gas to a cold gas. The former loses in heat energy what the latter gains. As radiation phenomena have very special features of their own and for the present may be said to be outside of our selected province, we will confine our attention to heat conduction alone. Moreover, for our present purpose, we will suppose said flow or change to take place without alteration of volume of either the hot or the cold gas. Then will the hot gas experience a drop in temperature and the cold one a rise in temperature. We have already treated such isometric changes and know that the number of complexions is thereby diminished in the originally hotter body and increased in the originally colder [Pg 73] one. If this increment is greater than the accompanying decrement, then the final outcome of this direct passage from hot to cold is an increase in the total number of complexions of the two gases. There will then, by our precise definition, be a corresponding increase in the total entropy of the two systems. It is foreign to our present purpose to prove, in an independent, purely mechanical way, that such excess does finally exist, and we will here content ourselves with the well-known and simple thermodynamic expression for this excess, $$Q\left(\frac{1}{T_2} - \frac{1}{T_1}\right),$$ where $Q$ is the heat energy thus directly transferred from the hot to the cold body, $T_1$ the absolute temperature of the hot body and $T_2$ that of the cold body.
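For definiteness, an assumed numerical case in Python; the excess is positive whenever the heat passes from the hotter to the colder body:

    # Entropy excess for direct conduction of heat Q from T1 to T2.
    Q  = 1000.0     # heat transferred, in any energy unit (assumed)
    T1 = 400.0      # absolute temperature of the hot body (assumed)
    T2 = 300.0      # absolute temperature of the cold body (assumed)

    excess = Q * (1.0 / T2 - 1.0 / T1)
    print(excess)   # ~0.83: positive, so the total entropy has grown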



THE WORK OF FRICTION IS CONVERTED INTO HEAT

The group under head (b) contains a class of events which usually attends, in one form or another, most natural phenomena.

We will here consider an interesting (but perhaps too special) case, namely, the experiment performed by W. THOMSON and JOULE on the flow of gas through a porous plug. The plug obstructed the uniform, non-conducting passage through which the gas was forced without sensibly changing its velocity of flow (see LORENZ' Technische Wärmelehre, p. 275). It can easily be shown (in L., p. 274) that with an ideally perfect gas the temperature is the same on both sides of the plug, $T_2 = T_1$. As a matter of fact, there was an actual though slight drop in temperature found to exist with the most perfect gases available. Evidently the process was a throttling one, reducing the larger initial pressure to the smaller final one, which reduction was of course accompanied by a corresponding increase in volume. [Pg 74]

Assuming that an ideally perfect gas was employed in the experiment, and that the final state for our consideration is that corresponding to its attainment of thermal equilibrium, we see that because of the unchanged temperature there is here no loss of internal energy, for the work consumed by the friction of the porous plug is all returned to the gas by the heat developed by said friction. Moreover, the + and - external work in this experiment also balance. Now although there has been no loss of energy there has been a growth of entropy corresponding to the evident increase in the number of complexions. This increase is exactly equal to that found for reversible isothermal change of state when accompanied by an increase in volume, and the discussion is therefore not repeated here.

One phase of the above process is the conversion of mechanical work into heat through the medium of friction.



INCREASE OF VOLUME WITHOUT PERFORMANCE OF EXTERNAL WORK BY ELASTIC FORCES OF THE GAS

This case of an irreversible process comes into group c. We will consider here JOULE'S well-known experiment with the air tanks, in which the compressed air, initially stored in the one tank, was allowed to discharge into the other tank which, at the start, contained only a vacuum. At the end of the experiment, when thermal equilibrium obtained, the temperature in the two, now connected, tanks was the same as originally existed in the compressed-air tank. Here of course it is assumed that the air exchanged no heat whatever with the outside.

As the final state, like the initial state, is in thermal equilibrium, and possesses the same temperature, we can ascertain the total change in the number of complexions as we did when discussing isothermal and reversible changes and because of the accompanying increase in the volume of the air, find that here as there the number of complexions has increased and that therefore the entropy of the air has increased in this case. [Pg 75]

We might rest satisfied with this conclusion, but additional light will be shed on the significance of entropy if we consider in more detail the intermediate stages of this evidently irreversible process. The rush of air from the full to the empty tank produces whirls and eddies of a finite character, and it is only when these have subsided, by the conversion of the visible or sensible kinetic energy of their particles into heat, that thermal equilibrium obtains. But at each intermediate stage (while still visibly whirling and eddying) the gas possesses entropy, even while in the turbulent condition. This is clear from our present physical definition of entropy, namely, the logarithm of the number of complexions of the state, for it is evident that even in this turbulent state it possesses a certain number of complexions, however difficult it may actually be to find this number mathematically. BOLTZMANN found an expression for the entropy of any condition; PLANCK gave it the form of Eq. (18), p. 63, $$S = -k \sum f \log f \, d\sigma\,d\tau + \text{const.}, \tag{33}$$ where $k = 1.346\times10^{-16}$ erg per degree (in the C.G.S. system) is a universal constant, function $f$ is the law of distribution of the particles and their velocity elements and $d\sigma\,d\tau$ is a sort of fictitious elementary region in a six-dimensional space. From its derivation and definition the value given for entropy $S$ in Eq. (33) depends only on the state of the body at the instant in question and does not at all depend on its history preceding this instant.

The difference between the value of $S$ for the final state (say, as given for a gas by Eq. (20)) and the value of $S$ as given by Eq. (33) for the instant constitutes the driving motive which urges the gas toward thermal equilibrium. A similar difference or driving motive is the underlying impelling cause of all natural phenomena. [Pg 76]
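Eq. (33) can be put to work on any assumed instantaneous distribution, settled or turbulent. The following Python sketch, with occupation densities chosen quite arbitrarily, evaluates it for the instant in question (the arbitrary additive constant is simply dropped):

    # Entropy of an arbitrary instantaneous state, by Eq. (33):
    # S = -k sum f log f dsigma dtau + const.
    from math import log

    k    = 1.346e-16                 # erg per degree
    cell = 1.0                       # assumed size of each region dsigma dtau
    f    = [4e18, 3e18, 2e18, 1e18]  # assumed distribution function values

    S = -k * sum(fi * log(fi) * cell for fi in f)
    print(S)   # a definite number for this instant, whatever the history;
               # its absolute value has no meaning, the constant being dropped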



OF THE DIFFUSION OF GASES

This case of an irreversible process comes under group d. Concerning this phenomenon J. W. GIBBS established the following proposition:

"The entropy of a mixture of gases is the sum of the entropies which the individual gases would have, if each at the same temperature occupied a volume equal to the total volume of the mixture."

That the total entropy will be larger as a result of the mixing detailed under d, p. 73, may be inferred from the following consideration: When two gases are thus brought together, it is more probable that in any part of the total space available for this mixture there will be found both kinds of molecules than only one kind of these molecules.

But this irreversible process can be explained in a more distinctly physical way. The two gases are originally at the same pressure and temperature; they mix without other changes occurring in surrounding bodies; the mixture (when diffusion is completed) is at the same pressure and temperature as the original gaseous constituents. Considering each gas by itself, what has happened as the result of diffusion is that each gas in its final state occupies a larger volume than in its initial, unmixed, state. The presence of the other gas in the mixture in no wise changes this fact. Of course this increment in volume is accompanied by a corresponding decrement in its pressure, without change in temperature. A sort of isothermal change of state has taken place in the passage from one condition of thermal equilibrium to the other. We have already seen that then the number of complexions of the gas increases and consequently also its entropy. The sum of the increments of the number of complexions separately experienced by the two diffusing gases constitutes an increase in the total number of complexions over and above the total number of complexions existing in [Pg 77] both gases before diffusion. There is of course a corresponding increase in entropy due to such diffusion.
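The increment may be computed as two simultaneous isothermal expansions, one for each gas, into the common volume; a Python sketch with an assumed case of two equal tanks, one gram-molecule in each:

    # Entropy of diffusion as the sum of two isothermal expansions.
    from math import log

    R = 8.31e7            # absolute gas constant, ergs per degree per mol
    n1, V1 = 1.0, 1.0     # assumed: one gram-molecule in each of two
    n2, V2 = 1.0, 1.0     # equal tanks of unit volume
    V = V1 + V2           # the common volume after the partition is removed

    dS = n1 * R * log(V / V1) + n2 * R * log(V / V2)
    print(dS)             # 2 R log 2 > 0: the gain in total entropy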

All these irreversible processes are passages from less stable to more stable conditions, from less probable to more probable states, or summarizing:

There is in Nature a constant tendency to equalize temperature differences, to convert work into heat, to increase disgregation and to promote diffusion.

This tendency has also been described as the tendency in Nature to pass from concentrated to distributed conditions of energy.

The four irreversible processes just discussed are all spontaneous ones, i.e., they occur without the help of agencies external to the bodies directly engaged in the transformations.

It is evident that the foregoing statements are really identical, expressing the same thought in different ways.

SECTION C

NEGATIVE CHANGE OF ENTROPY; SOME OF ITS PHYSICAL FEATURES OR NECESSARY ACCOMPANIMENTS

A negative transformation in any part of a system is the diminution of entropy which it experiences, and this we know means a diminution in the number of complexions of the part considered. But there are some features of such negative transformations which, while they do not in themselves constitute any additional principle, deserve special mention.

Before we make such mention, however, we will anticipate a little, and state the Second Law in forms which will make said features obvious:

In an irreversible cycle the sum of the changes of entropies experienced by all the bodies concerned is greater than zero. When the cycle is reversible in all of its parts, then said sum of entropy changes is equal to zero.

A corollary from this theorem is that, in a cycle, [Pg 78]

All the negative transformations present are together equal to or less than all the positive transformations that occur.

When there is simply a process without the cyclic feature, then the sum of the entropies of all the bodies participating in any one occurrence is, at the end of the change of condition, equal to or greater than that at the beginning.

From this we see that a negative change of entropy always keeps company with an equal or greater positive change of entropy.

Again, for the sake of simplicity, let us use a gas as an illustration; then we may say: (1) Every possible negative transformation in a gas is always accompanied by a net positive transformation in the other and necessary external agencies. (2) All possible negative transformations in a gas are reversible ones. We here use the word possible because there is an impossible class of negative transformations, namely, those which, so far as order and directness are concerned, are the very opposites of the so-called spontaneous changes of state.

It will suffice here to enumerate these opposites: Without external help (a) to pass heat from a cold to a hot body, (b) to decrease the volume of a gas, (c) to convert the heat of friction directly back into the work which called it forth, (d) to separate the gaseous constituents of a mixture.

By way of contrast we may add, that the so-called spontaneous (irreversible) processes were all positive transformations which took place without any change whatever in surrounding bodies. [Pg 79]

SECTION D

PHYSICAL SIGNIFICANCE OF THE EQUIVALENTS FOR GROWTH OF ENTROPY GIVEN ON PAGES 42-43

According to equivalent (1) growth of entropy is a passage from more to less available energy. The comment already made on p. 42 indicates sufficiently that this increase in unavailability is due to the growth of the ungovernable features of molecular motions as number of complexions increases.

Equivalent (2) states growth of entropy to be a passage from a concentrated to a distributed condition of energy. In this scattered state the energy is certainly less controllable and for the same reason as that given concerning equivalent (1).

Equivalent (3) is based on the idea of irreversibility, and we saw on p. 36 that the growth in the number of complexions is the measure as well as the criterion of irreversibility. This growth is therefore a sufficient and necessary feature of this equivalent.

The equivalents grouped under (4) are all based on the theory of probabilities. We have seen on pp. 36, 62, and elsewhere, that the probability $W$ of a state is measured by the number of complexions of the state. This number is therefore a necessary feature of this set of equivalents and hence constitutes its physical significance.

The set of equivalents grouped under (5) are all closely related, their dependence being more or less indicated by the order in which they are there stated. The outcome of the series is that growth of entropy corresponds to an increase in the number of complexions.

The mathematical concept stated under (6) covers more than molecular configurations; it covers configurations whose elements are those of energy as well, and has been successfully applied by PLANCK in problems dealing with the energy of radiation. Every such configuration has a number of complexions. [Pg 80]

SECTION E

PHYSICAL SIGNIFICANCE OF THE MORE SPECIFIC STATEMENTS OF THE SECOND LAW GIVEN ON PAGES 44-47

In making here the contemplated comparisons and interpretations we must keep in mind the three helpful propositions given on p. 44.

The conservative statement under (1) is confessedly based on the Calculus of Probabilities as applied to a mechanical system. We repeat here therefore what was said about (4) of the preceding series of equivalents, namely, that the number of complexions of the state is a necessary feature of this statement of the second law and therefore constitutes its physical significance.

The statement under (2) is a common one. As each of the exact definitions of the entropy for every natural event has been shown to depend solely on the number of complexions of a system (all the bodies participating in the event being considered a part of the system) we have here likewise in this number an adequate physical explanation of the second law.

Statements (3), (8) and (9) have already been derived and explained in this presentation (see pp. 45, 46) as the result of the growth of the number of complexions in every natural event, when all the bodies participating in the event are considered.

Statement (4) is only a slight variation of (3) and needs no special comment here.

The same may be said of the three forms under (5).

The statement in (6) is only a corollary resulting from the use of (3) or (4) or (5).

The statement in (7) of the second law may be objected to because the underlying definitions are not entirely free from ambiguity and because it lacks a scientifically general character. But it expresses compactly a matter of great consequence in technical circles. Moreover its explanation in our physical terms [Pg 81] is very simple and direct, viz., when waste is incurred there is a growth in the number of complexions; the complexity of the molecular motions has been increased. Less of the stored-up energy is available, less is capable of being directed into certain technical channels. Evidently the greater the complexity of the molecular motions, the less governable they are by any direct external force or influence we can bring to bear. This is because we are unable to act directly on the individual molecules and sway them to our special technical purpose. Our external forces and agencies can only operate on the aggregates comprising our system and must obey the one-sided law imposed on all such aggregates. [Pg 82]

PART IV

SUMMARY:

THE CONNECTION BETWEEN PROBABILITY, IRREVERSIBILITY, ENTROPY AND THE SECOND LAW

SECTION A

(1) Prerequisites and Conditions Necessary for the Application of the Theory of Probabilities

These may be briefly stated to be (a) atomic theory, (b) the likeness of particles (or elements), (c) very numerous particles, and (d) "elementary chaos."

The first prerequisite is that the body (here a gas) is made up of small, discrete particles. This atomic theory has long been the foundation stone of chemistry, and is again coming into deserved esteem in Physics pure and simple. (See the simple and clear article in Harper's Monthly, June, 1910.) But this minute subdivision must be accompanied by the particles being of the same kind, or at least belonging to comparatively few groups, each containing many particles of the same sort. This likeness is necessary; for only from this likeness does law and order in the whole result from disorder in the parts. If the constituents were of many different kinds, the results in the aggregate would not be so simple as we actually find them to be. There is an example of this sort of complexity in chemistry. We have already intimated that the particles of each kind must be very numerous, but special emphasis must be laid on this prerequisite. If we ask how numerous these [Pg 83] elements must be in order that the Theory of Probabilities may be applicable, the answer is, as many constituents as are necessary to determine the mean values which define the state in the macroscopic sense (i.e., in the aggregate condition). An idea of the extent to which Nature carries this subdivision is furnished by the fact that one grain (avoirdupois) of air, under standard conditions, contains, in round numbers, millions of billions of particles!

The last one of said prerequisites is "elementary chaos," and needs further elucidation and limitation; we will therefore go into this feature at greater length.

BOLTZMANN has used the term "molekular-ungeordnet" (molecularly disordered) to designate this chaotic condition of the particles, and PLANCK has introduced a more general term still, "elementar-ungeordnet" (elementary disorder or chaos) in order to make the method applicable to phenomena like radiation, in which the elements are not atoms or particles but partial oscillations of different periods. The essence of the matter seems to consist in excluding from consideration all such regularities in the conditions of the elements as would lead to results at variance with the well-known laws of physical phenomena, justifying this exclusion by the assumption that no such elementary regularities obtain in Nature. This only means that not all of the many molecular arrangements, which are conceivable from the purely mechanical standpoint, are actually realized. For instance, in an isolated gaseous system we could conceive of a succession of elementary states at variance with the principle of conservation of energy; such a set would obviously not be realized. This exclusion or limitation leaves room for various hypotheses as to said elementary disarrangement, but to be admissible they must all permit of the legitimate application of the Theory of Probabilities, the best one being ultimately determined by its [Pg 84] agreement in the whole with known facts or laws. Evidently by prearrangement and precomputation there could be obtained molecular arrangements which would establish long-continued regularities, which would furnish mean results in the aggregate, that would be at variance with the well-known behavior of Nature. All such cases are here excluded.

According to PLANCK the unregulated, confused and whirring intermingling of very many atoms (in the case of a monatomic gas) is the prerequisite for the validity of this hypothesis of "elementary chaos."



(2) Differences in the States of "Elementary Chaos"

When we consider the general state of a gas we need not think of the state of equilibrium, for this is still further characterized by the condition that its entropy is a maximum.[24]

Hence in the general or unsettled state of the gas an unequal distribution of density may prevail, any number of arbitrarily different streams (whirls and eddies) may be present, and we may in particular assume that there has taken place no sort of equalization between the different velocities of the molecules. To conceive of said differences we may assume beforehand, in perfectly arbitrary fashion, the velocities of the molecules as well as their co-ordinates of location. But there must exist (in order that we may know the state in the macroscopic sense), certain mean values of density and velocity, for it is through these very mean values that the state is characterized from the aggregate (macroscopic) standpoint. The differences that do exist in the successive stages of disorder of the unsettled state are mainly due to the molecular collisions that are constantly taking place, thus changing the velocity and locus of each molecule.

Let us for sake of brevity speak of the state of permanence [Pg 85] finally attained by this chaotic mass as the normal state, and all the preceding chaotic states as abnormal states.

[24]The rest of the paragraph is a repetition of what was stated at middle of p. 19.



(3) Number of complexions, or probability, of a chaotic state.

It was shown, in an earlier portion of this presentation, that each such chaotic state (abnormal or normal) is characterized by its number of complexions, which is determined by the Theory of Probabilities. This number is a variable one for the successive abnormal states and is a fixed and a maximum one (under given external conditions) for the normal state. Now BOLTZMANN (by the application of the Theory of Probabilities to this chaotic state) has shown that the means of these states vary in one direction only, in such a way that the probable number of complexions of the successive abnormal states continually grows till it attains its maximum in the normal and permanent state.
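This conclusion may be illustrated on the dice model of formula (16): among all distributions of $N$ like molecules over $P$ regions, the number of complexions $W$ is greatest for the most nearly uniform distribution, which here plays the part of the normal state. A brute-force sketch in Python, for an assumed small case:

    # The settled (normal) state maximizes the number of complexions W.
    from math import factorial
    from itertools import combinations_with_replacement

    def W(counts):
        w = factorial(sum(counts))
        for c in counts:
            w //= factorial(c)
        return w

    def states(n, p):
        # every way of distributing n like molecules over p regions
        for cut in combinations_with_replacement(range(n + 1), p - 1):
            yield tuple(b - a for a, b in zip((0,) + cut, cut + (n,)))

    best = max(states(12, 3), key=W)
    print(best, W(best))   # (4, 4, 4) 34650: the uniform distribution wins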

SECTION B

IRREVERSIBILITY

This one-sidedness of the average action or flux constitutes and sharply defines what is meant by irreversibility. It does not imply that the motion of any particular atom cannot be reversed, but that the order in which these averages (or the number of complexions) occur cannot be reversed. We have here a process, consisting of a number of separately reversible processes, which proves to be irreversible in the aggregate. This is not the only possible characterization of the property of irreversibility inherent in all natural events, but is perhaps as general and exact a one as can be enunciated. Superficially speaking, from the confused and irregular motions contemplated, it is quite evident that this succession of whirls and eddies cannot be worked directly backward to bring about, in reverse order, the finite physical state which initiated them; for the effecting of such an opposite change [Pg 86] would demand a co-operation and concert of action on the part of the elementary constituents which is felt to be quite impossible. It will not be so general and scientific, but perhaps more easily apprehended, if we put this result in terms of human effort, namely, "by asserting that any process is irreversible we assert that by no means within our present or future power can we reverse it, i.e., we cannot control the individual molecules."

SECTION C

ENTROPY

We have seen above that the inevitable growth in the number of complexions is the mark of irreversibility; the number of complexions at any stage can also in a certain sense be regarded as the measure, index or determinant of that stage or state of the system of elements under consideration. Any function of the number of complexions can be regarded as such measure, index or determinant. Now it has been shown by BOLTZMANN that the expression found thermodynamically for the quantity called entropy differs only by a physically insignificant constant from the logarithm of said number of complexions. But the latter may properly be regarded as a true measure of the probability of the system being in the state considered. BOLTZMANN has defined the entropy of a physical system as the logarithm of the probability of the mechanical condition of the system, and PLANCK has cast it into the numerical form $$S = 1.346\times10^{-16}\,\log W + \text{const.},$$ where $S$ is the entropy of any natural state of the body and the added term is an arbitrary constant; the numerical value of the first term of the second member is a quotient of the energy (expressed in ergs) [Pg 87] divided by the temperature (in centigrade degrees). In English units and the F.P.S. system this numerical value is the corresponding quotient in foot-pounds divided by degrees Fahrenheit.

From the whole development we see that entropy $S$ depends only on the number of complexions; it should not be considered, as is sometimes done, as of the same dimensions as energy, or as anything that may generally be called a factor of energy.

SECTION D

THE SECOND LAW

It is evident that all these results have for their original basis the Theory of Probabilities. Consequently, because these conclusions are thus based, they must be interpreted according to the general method underlying this Theory. This method essentially is the determination of average (mean) values and calling them the probable ones. We therefore conclude that each state is characterized by the mean number of complexions belonging to that state, that is, by this mean number which changes always in a one-sided way, ever in the same sense, inasmuch as it inevitably and invariably grows till the normal, settled condition is reached.

For the sake of clarity we must keep in mind that the motions of the individual atoms are reversible and that in this sense the irreversible processes are reduced to reversible ones. But the process as a whole is not reversible because, by the very act of complete reversal, we would suspend the general, chaotic character of the elementary motions and give them to this extent a special, prearranged feature which would be more or less hostile to the original definition of "elementary chaos." The irreversibility is not in the elementary events themselves, but solely in their irregular arrangement. It is this which guarantees the one-sided change of the mean value characteristic of each one of the successive states of the process.

Now remembering that the kernel of the Second Law is that all processes in Nature are irreversible, or, that all changes in [Pg 88] Nature vary in one direction only, we can, in the light of what has just preceded, repeat the following precise, scientific statement:

"The Second Law, in its objective-physical form (freed from all anthropomorphism) refers to certain mean values which are found from a great number of like and 'chaotic' elements."

If we now go back to what constitutes the kernel of the Second Law, we will see the relevance and force of PLANCK'S enunciation of this law:

"It is not possible to construct a periodically functioning motor which effects nothing more than the lifting of a load and the cooling of a heat reservoir."

The proof of this is purely experimental and cumulative, and the same may be said of the earlier statement of this law, "all changes in Nature vary in one direction only." The character of this proof is, moreover, exactly like that for the First Law, the Conservation of Energy, and has the same sort of validity.

When we compared and interpreted the current statements of the Second Law (pp. 44-47) we enunciated and made use of three helpful propositions that will now be repeated:

(a) All cases of irreversibility stand or fall together; if any one can be reversed all can be reversed.

(b) Any general consequence of any one correct statement of the Second Law may be regarded as itself a valid and complete statement of the Second Law.

(c) The summary of all the necessary prerequisites (or conditions) for determining Entropy may be regarded as a complete and valid statement of the Second Law.

In this connection it will also be helpful to remember PLANCK'S statement: "In order that a process may be truly reversible it will not suffice to declare that the mediating body is directly reversible, but that at the end, everywhere in the whole of Nature, the same state must be restored which existed at the beginning of said reversible process." [Pg 89]

As regards the use of helpful proposition (a):

We know that PLANCK'S motor statement of the Second Law was grounded on the well-known irreversible passage of heat from a cold to a hot body. But to show the mutual interdependence (a) of one irreversible change on every other, we will instance in illustration the case of a frictional event, or the conversion of mechanical work into heat.

If this frictional occurrence could by any simple or complex apparatus be made completely reversible so that everywhere, in the whole of Nature, the same state would be restored which existed at the beginning of the frictional occurrence, then such an apparatus would be the motor contemplated in PLANCK'S statement of the Second Law, for this periodically running apparatus would convert heat into work without any other change remaining. A similar line of argument, with a similar result, could be pursued with every other case of irreversibility that could be adduced. It is evident that, with the help of the above-given propositions (a), (b), and (c), the Second Law can be cast into many other valid forms.

We close this presentation of the meaning of the Second Law by the remark that this law has no independent significance, for its roots go down deep into the Theory of Probabilities. It is therefore conceivable that it is applicable to some purely human and animate events as well as to inanimate, natural events; provided, of course, that the former possess numerous like and uncontrolled constituents which may be properly characterized as "elementar-ungeordnet," in other words, provided the variable elements present constitute adequate haphazard for the Calculus of Probabilities. [Pg 90]

PART V

REACH AND SCOPE OF SECOND LAW

SECTION A

ITS EXTENSION TO ALL BODIES

CLAUSIUS extended the operation of the Second Law or, what is the same thing, the scope of entropy, to all bodies. See RÜHLMANN'S "Handbuch der mechanischen Wärmetheorie," Vol. I, pp. 395-405.

BOLTZMANN says in this connection: "As regards entropy, solid and liquid bodies do not differ qualitatively from perfect gases; the discussion of the entropy of the former, however, presents greater mathematical difficulties."

Certain features of the entropy of solid and liquid bodies have, however, been derived with the help of ideal gases as temporary auxiliaries. We consider this argument the simplest and therefore now give an outline of PLANCK'S presentation of the matter.[25]

[25]Thermodynamik, 2d Ed., pp. 87-100.



PLANCK'S PROOF THAT ALSO FOR ANY OTHER BODIES THAN GASES THERE REALLY EXISTS A FUNCTION WHICH POSSESSES THE CHARACTERISTICS OF ENTROPY; THE MAIN STEPS ARE NUMBERED

(1) Expression of entropy for an ideal gas and properties of entropy.

(2) For any reversible change of an ideal gas, $$dS = \frac{dQ}{T} = \frac{dU + p\,dV}{T},$$ [Pg 91] where the elastic forces do a work $p\,dV$; strictly speaking, $dQ$ is not the differential of a quantity $Q$, the heat supply, but denotes merely the small quantity of heat supplied.

(3) Two gases (1) and (2), thermally connected, are maintained at the same temperature but different pressures and change adiabatically while experiencing changes of volume; then it can be shown that for this finite change $S_1' + S_2' = S_1 + S_2$, that is, for the two gases the sum of the final entropies is equal to the sum of the initial entropies. No other change is effected in any other bodies but in these two gases; here emphasis is laid on the preposition in; for the work done may be the lifting or lowering of a load, and such change of location in rigid bodies involves no change of inner energy. Changes of density in external bodies can also be avoided by having the two gas tanks located in a vacuum.

(4) A similar proposition can be established for a system of any number of gases by successively treating the gases in pairs as above. The theorem then reads: "If the gas system as a whole possesses the same entropy in two different states then the system can be brought from one state to the other in a reversible manner without changes remaining in other bodies."

(5) We know that the expansion of an ideal gas, without doing external work and without receiving any heat supply, is an irreversible process. The consequence is that the entropy of this gas increases. It follows at once that "it is impossible to diminish the entropy of an ideal gas without changes remaining in other bodies."

(6) The same result obtains for a system of any number of ideal gases. Consequently "there exists in the whole of Nature no means (be they of the mechanical, thermal, chemical or electrical sort) of diminishing the entropy of a system of ideal gases, without changes remaining in other bodies."

(7) "If a system of ideal gases has changed to another state (possibly in an entirely unknown way) without changes remaining in other bodies, then the final entropy can certainly not be smaller, it can only be greater than or equal to the initial condition. In [Pg 92] the former case this process is an irreversible one, in the latter case a reversible one.

"Equality of entropy in the two states therefore constitutes a sufficient and at the same time a necessary condition for the complete reversibility of the passage from one state to the other, provided no changes are to remain behind in other bodies."

(8) "This proposition has a very considerable range of validity; for there was expressly no limiting assumption made concerning the way in which the gas system reached its final condition; the proposition is therefore valid not only for slowly and simply changing processes but also for any physical and chemical processes provided at the end no changes remained in any body outside of the gas system. Nor need we believe that entropy of a gas has significance only for states of equilibrium, provided we can suppose the gas mass (moving in any way) to consist of sufficiently small parts each so homogeneous that it possesses entropy."[26]

Then the summation must extend over all these gas parts. "The velocity has no influence on the entropy, just as little as the height of the heavy gas parts above a particular horizontal plane."

(9) "The laws thus far deduced for ideal gases can in the same way be transferred to any other bodies, the main difference in general being that the expression for the entropy of any body cannot be written in finite magnitudes because the equation of condition is not generally known. But it can always be shown—and this is the decisive point—that for any other body there really exists a function possessing the characteristic properties of entropy."

Now let us assume any physically "homogeneous body, by which is meant that the smallest visible space parts of the system are completely alike. Here it does not matter whether or no the substance is chemically homogeneous, i.e., whether it consists [Pg 93] of entirely like molecules, and consequently it also does not here matter whether in the course of the prospective changes of state it experiences chemical transformation.... When the substance is stationary the whole energy of the system will consist of the so-called 'inner' energy  , which depends only on the mass and inner constitution of the substance, which constitution is conditioned by the temperature and density."

(10) Let us suppose that with such a homogeneous body there is conducted a certain reversible or irreversible cycle process which therefore brings the body exactly back again to its initial condition. Let the external influences on the body consist in the performance of work and in heat supply or withdrawal, which heat exchange is to be effected by any number of suitable heat reservoirs. At the end of the process no changes remain in the body itself, only the heat reservoirs have altered their state. Now let us suppose the heat carriers in the reservoirs to be composed of purely ideal gases, which may be kept at constant volume or under constant pressure, at any rate only be subject to reversible changes of volume. According to the last proved proposition, the sum of the entropies of all the gases cannot have become smaller, for at the end of the process no changes remain in any other body, not even in the body which completed the cycle process.

(11) Let $Q$ be the heat gained by the body from some reservoir in an element of time and $T$ the temperature of the reservoir[27] at the same moment; then the change of entropy experienced by the reservoir at this instant will be $-\frac{Q}{T}$, and in the whole course of time all the reservoirs together will experience the entropy change $$-\sum \frac{Q}{T},$$ [Pg 94] and then we know that there must be satisfied the condition $$\sum \frac{Q}{T} \leq 0,$$ which is the form in which CLAUSIUS first enunciated the Second Law.

(12) Another condition for the process considered is furnished by the First Law. For each element of time $dU = Q + A$, where $U$ is the inner energy of the body and $A$ the work expended upon it in an element of time by external means.

Now let us consider a more special case in which the external pressure at each instant is equal to the pressure $p$ of the supposedly stationary body. Then the external work will be represented by $A = -p\,dV$, and then it follows that $$Q = dU + p\,dV.$$

(13) Furthermore let the temperature of each heat reservoir, at the instant when it comes into use, be equal to the simultaneous temperature of the body; then the cycle process becomes a reversible one and the inequality of the second law becomes an equality, $$\sum \frac{Q}{T} = 0,$$ and substitution of the above value for $Q$ gives $$\sum \frac{dU + p\,dV}{T} = 0.$$

In this equation there occur only quantities referring to the state of the body itself and therefore it can be interpreted without [Pg 95] any reference to the heat reservoirs. It contains the following proposition:

"If a homogeneous body by suitable treatment is allowed to pass through a series of continuous states of equilibrium and thus finally to come back to its initial condition, the summation of the differential,  for all the changes of state will be equal to zero. From this follows at once that if the change of state is not allowed to continue to the restoration of the initial condition (1), but is stopped at any state (2), the value of the sum  will depend solely on the final state (2) and on the initial state (1), and not on the course of the passage from 1 to 2."[28]

"The last expression is called by CLAUSIUS the entropy of the body in state 2, referred to state 1 as the zero state. The entropy of a body in a particular state is, therefore, like energy, completely determined down to an additive constant depending on the choice of the zero state."

(14) "Let us again designate the entropy by  , then  [Pg 96] or, what amounts to the same thing, by  which reduced to the unit of mass becomes

"This is evidently identical with the value found for an ideal gas. But it is equally applicable to every body when its energy  and volume  are known as functions, say, of  and  , for the expression for entropy can then be directly determined by integration. But since these functions are not completely known for any other substance we must in general rest content with the differential equation. For the present proof, however, and for many applications of the Second Law it suffices to know that this differential equation really contains a unique definition of entropy."

As with an ideal gas, we can now always speak of the entropy of any substance as a certain finite magnitude determined by the values of the temperature and volume at the instant, and can so speak even when the substance experiences any reversible or irreversible change. Moreover, the differential equation (43) is applicable to any change of state, even an irreversible one.

In thus applying the idea of entropy there is no conflict with its derivation. The entropy of a state is measured by a reversible process which conducts the body from its present state to the zero state, but this ideal process has nothing to do with the changes of state that the body has experienced or is going to experience.

"On the other hand, we must emphasize that differential equation (43) for  is valid only for changes of temperature and volume and is not so for changes of mass or of chemical composition. [Pg 97] For changes of the latter sort were never considered in defining entropy."

(15) "Finally, we may designate the sum of the entropies of several bodies as the entropy of the system of all the bodies, provided the system can be subdivided into infinitesimal elements for which uniform density and temperature can be assumed; but velocity and force of gravitation do not at all enter into the expression for entropy."

[26]If the motion of the gas is so turbulent that temperature and density cannot be defined, then we must have recourse to BOLTZMANN'S broader definition of entropy.

[27]It does not here matter what the temperature of the body is at this instant.

[28]This is evident from the fact that the quantities $U$, $p$, $V$, and $T$ under the integral are each a function of the state only and do not depend on its past history. This falls far short of being true for turbulent states, for which it is difficult to get $p$ and $T$. PLANCK does not make the preceding statement, but gives instead a rigorous proof based on cyclical considerations.

SECTION B

GENERAL CONCLUSION AS TO ENTROPY CHANGES

Now that the existence and value of entropy have been established for every state of any body, we can proceed to draw general conclusions in much the same way as that followed with the ideal gases. The general result is:

"IT IS IN NO WAY POSSIBLE TO DIMINISH THE ENTROPY OF A SYSTEM OF BODIES WITHOUT HAVING CHANGES REMAIN IN OTHER BODIES."

If we admit to the system all the bodies participating in the process, this theorem becomes:

"Every physical and chemical process occurring in Nature takes place in such a way that there is an increase in the sum of the entropies of the bodies in any way participating in the process."

We will close with BOLTZMANN'S statement:

"The driving motive (or impelling cause) in all natural events is the difference between the existing entropy and its maximum value."

[Pg 98]