Feynman Lectures on Physics

Notes from "Lectures on Physics" by Richard Feynman.  Mechanics, teory of gravitation, general relativity, optics, electromagnetism, quantum mechanics, thermodynamics. We refer the reader to "The complete lectures on Physics"   for the full text.

Contents

Volume I

 

The random walk

For a random walk of N steps, each of unit length, the net distance D_N from the starting point satisfies

    [ D_N^2= (D_{N-1}+1)^2 quad {rm or} quad (D_{N-1}-1)^2 ]

The expectation value is then langle D_N^2rangle =langle D_{N-1}^2rangle +1, and hence by induction langle D_N^2rangle =N

A probability distribution 
Suppose that in addition to a random choice of the direction (+ or -) of each step, the length of each step also varied in some unpredictable way, the only condition being that on the average the step length was one unit. This case is more representative of something like the thermal motion of a molecule in a gas. If we call the length of a step S, then S may have any value at all, but most often will be "near" 1. To be specific, we shall let langle S^2 rangle = 1 or, equivalently, the root-mean-square length S_{rm rms} = 1. Our derivation for langle D^2rangle would proceed as before except that Eq. (6.8) would be changed now to read

    [ langle D_N^2 rangle =langle D_{N-1}^2 rangle +langle S^2 rangle =langle D_{N-1}^2 rangle +1=N ]

as before. What would we expect now for the distribution of distances D? What is, for example, the probability that D = 0 after 30 steps? The answer is zero! The probability is zero that D will have exactly any particular value, since there is no chance at all that the sum of the backward steps (of varying lengths) would exactly equal the sum of the forward steps.
We expect that for a small interval ∆x the chance of landing between x and x+∆x is proportional to ∆x, the width of the interval. So we can write
    [ P(x, Delta x) = p(x), Delta x ]
The function p(x) is called the probability density.
For large N, p(x) is the same for all reasonable distributions of individual step lengths, and depends only on N. We plot p(x) for three values of N in Fig. 6-7. You will notice that the "half-widths" (typical spread from x = 0) of these curves is about sqrt{N}, as we have shown it should be.
You may notice also that the value of p(x) near zero is inversely proportional to sqrt{N}. This comes about because the curves are all of a similar shape and the areas under the curves must all be equal.
This limiting form is called the normal or gaussian probability density. It has the mathematical form

    [ p(x)={e^{-x^2over 2sigma^2}over sigma sqrt{2pi}}]

where σ is called the standard deviation and is given, in our case, by sigma=sqrt{N}.    
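
As a quick numerical check of these statements, here is a minimal sketch (assuming NumPy is available; the particular step-length distribution is an arbitrary choice with S_rms = 1) that simulates the walk and compares the spread of D with sqrt{N}:

```python
# Minimal sketch: random walk with random step directions and step lengths,
# checking that <D_N^2> is close to N and that the spread of D is about sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
N = 30                # number of steps
trials = 100_000      # number of independent walks

# Step lengths S with rms value 1 (here: |normal| rescaled so that <S^2> = 1),
# and random signs for the direction of each step.
lengths = np.abs(rng.normal(size=(trials, N)))
lengths /= np.sqrt(np.mean(lengths**2))            # enforce S_rms = 1
signs = rng.choice([-1.0, 1.0], size=(trials, N))
D = np.sum(signs * lengths, axis=1)                # net distance after N steps

print("<D_N^2> =", np.mean(D**2), "(expected about", N, ")")
print("spread of D =", np.std(D), "(expected about sqrt(N) =", np.sqrt(N), ")")
```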
We remarked earlier that the motion of a molecule, or of any particle, in a gas is like a random walk. Suppose we open a bottle of an organic compound and let some of its vapor escape into the air. If there are air currents, so that the air is circulating, the currents will also carry the vapor with them. But even in perfectly still air, the vapor will gradually spread out-will diffuse-until it has penetrated throughout the room. We might detect it by its color or odor. The individual molecules of the organic vapor spread out in still air because of the molecular motions caused by collisions with other molecules. If we know the average "step" size, and the number of steps taken per second, we can find the probability that one, or several, molecules will be found at some distance from their starting point after any particular passage of time. 

The uncertainty principle

We can give a probability density p_1(x) such that p_1(x), Delta x is the probability that the particle will be found between x and x+∆x. If the particle is reasonably well localized, say near x_0, the function p_1(x) might be given by the graph of Fig. 6-10(a). Similarly, we must specify the velocity of the particle by means of a probability density p_2(v), with p_2(v), Delta v the probability that the velocity will be found between v and v+∆v.
It is one of the fundamental results of quantum mechanics that the two functions p_1(x) and p_2(v) cannot be chosen independently and, in particular, cannot both be made arbitrarily narrow. If we call the typical "width" of the p_1(x) curve Delta x and that of the p_2(v) curve ∆v, nature demands that the product of the two widths be at least as big as hbar/2m, where m is the mass of the particle and hbar is a fundamental physical constant, Planck's constant divided by 2pi. We may write this basic relationship as

    [ Delta x , Delta v geq hbar /2m ]

This equation is a statement of the Heisenberg uncertainty principle.

This equation says that if we try to "pin down" a particle by forcing it to be at a particular place, it ends up by having a high speed. Or if we try to force it to go very slowly, or at a precise velocity, it "spreads out" so that we do not know very well just where it is. 
The necessary uncertainty in our specification of the position of a particle becomes most important when we wish to describe the structure of atoms. In the hydrogen atom, which has a nucleus of one proton with one electron outside of the nucleus, the uncertainty in the position of the electron is as large as the atom itself! We cannot, therefore, properly speak of the electron moving in some "orbit" around the proton. The most we can say is that there is a certain chance p(r), Delta V of observing the electron in an element of volume Delta V at the distance r from the proton. The probability density p(r) is given by quantum mechanics. For an undisturbed hydrogen atom p(r)=A, e^{-r^2/a^2}, which is a bell-shaped function like that in Fig. 6-8. The number a is the "typical" radius, beyond which the function decreases rapidly. Since there is only a small probability of finding the electron at distances from the nucleus much greater than a, we may think of a as "the radius of the atom," about 10^{-10} meter.
Our best "picture" of a hydrogen atom is a nucleus surrounded by an "electron cloud" (although we really mean a "probability cloud"). The electron is there somewhere, but nature per- mits us to know only the chance of finding it at any particular place.

We can define a quantity called the wave number, symbolized as k. This is defined as the rate of change of phase with distance (radians per meter).

The wavelength is the distance occupied by one complete cycle. It is easy to see, then, that the wavelength is 2pi/k.

Two dipole radiators

We are interested in combining the effects of two oscillators to find the net field at a given point. This is very easy in the few cases that we considered in the previous chapter. We shall first describe the effects qualitatively, and then more quantitatively. Let us take the simple case, where the oscillators are situated with their centers in the same horizontal plane as the detector, and the line of vibration is vertical.

We would like to know the intensity of the radiation in various directions. By the intensity we mean the amount of energy that the field carries past us per second, which is proportional to the square of the field, averaged in time. So the thing to look at, when we want to know how bright the light is, is the square of the electric field, not the electric field itself.

Suppose the oscillators are again one-half a wavelength apart, but the phase of one is set half a period behind the other in its oscillation. In the W direction the intensity is now zero, because one oscillator is "pushing" when the other one is "pulling." But in the N direction the signal from the near one comes at a certain time, and that of the other comes half a period later. But the latter was originally half a period behind in timing, and therefore it is now exactly in time with the first one, and so the intensity in this direction is 4 units.
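
The two situations just described follow from the standard two-source interference formula; a small sketch (taking the single-oscillator intensity as 1 unit, with theta measured from the direction perpendicular to the line joining the two oscillators) reproduces the 0 and 4 quoted above:

```python
# Sketch: relative intensity from two equal oscillators a distance d apart,
# with an intrinsic phase difference alpha, observed far away at angle theta
# from the direction perpendicular to the line joining them.
import numpy as np

def intensity(theta, d_over_lambda, alpha):
    phase = alpha + 2 * np.pi * d_over_lambda * np.sin(theta)
    return 4 * np.cos(phase / 2) ** 2      # in units of one oscillator alone

# half-wavelength spacing, half-period (pi) phase lag, as in the text:
print("W direction:", intensity(0.0, 0.5, np.pi))           # ~0
print("N direction:", intensity(np.pi / 2, 0.5, np.pi))      # ~4
```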

If we build an antenna system and want to send a radio signal, say, to Hawaii, we set the antennas up and broadcast with our two antennas in phase, because Hawaii is to the west of us. Then we decide that tomorrow we are going to broadcast toward Alberta, Canada. Since that is north, not west, all we have to do is to reverse the phase of one of our antennas, and we can broadcast to the north. So we can build antenna systems with various arrangements.

Diffraction

The diffraction grating

Suppose that we had a lot of parallel wires, equally spaced at a spacing d, and a radiofrequency source very far away, practically at infinity, which is generating an electric field which arrives at each one of the wires at the same phase.

Then the external electric field will drive the electrons up and down in each wire. That is, the field which is coming from the original source will shake the electrons up and down, and in moving, these represent new generators. This phenomenon is called scattering: a light wave from some source can induce a motion of the electrons in a piece of material, and these motions generate their own waves.

A diffraction grating consists of nothing but a plane glass sheet, transparent and colorless, with scratches on it. There are often several hundred scratches to the millimeter, very carefully arranged so as to be equally spaced. The effect of such a grating can be seen by arranging a projector so as to throw a narrow, vertical line of light (the image of a slit) onto a screen. When we put the grating into the beam, with its scratches vertical, we see that the line is still there but, in addition, on each side we have another strong patch of light which is colored. This, of course, is the slit image spread out over a wide angular range, because the angle theta in (30.6) depends upon lambda, and lights of different colors, as we know, correspond to different frequencies, and therefore different wavelengths. The longest visible wavelength is red, and since d, sintheta = lambda, that requires a larger theta. And we do, in fact, find that red is at a greater angle out from the central image!
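
A rough numerical illustration of why red ends up farther out than blue (the grating density is an assumed, typical value, not one from the text):

```python
# Sketch: first-order angles from d*sin(theta) = m*lambda for an assumed grating
# with 500 lines per millimeter, comparing blue and red light.
import math

lines_per_mm = 500
d = 1e-3 / lines_per_mm            # spacing in meters
for color, lam in [("blue", 450e-9), ("red", 650e-9)]:
    theta = math.degrees(math.asin(lam / d))    # m = 1
    print(f"{color}: first-order angle = {theta:.1f} degrees")
# red comes out at a larger angle than blue, as described above
```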

We begin to understand the basic machinery of reflection: the light that comes in generates motions of the atoms in the reflector, and the reflector then regenerates a new wave, and one of the solutions for the direction of scattering, the only solution if the spacing of the scatterers is small compared with one wavelength, is that the angle at which the light comes out is equal to the angle at which it comes in!

Resolving power of a grating

Supposing that there were two sources of slightly different frequency, or slightly different wavelength, how close together in wavelength could they be such that the grating would be unable to tell that there were really two different wavelengths there?

Rayleigh's criterion: two frequencies are resolved when the first minimum of one peak sits at the maximum of the other.

The ratio lambda/Deltalambda is called the resolving power of a grating.
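
As an illustration, using the standard result (not derived in these notes, so an assumption here) that the resolving power of a grating is lambda/Deltalambda = m N, with m the order and N the number of lines illuminated:

```python
# Sketch: minimum number of grating lines needed to separate two nearby wavelengths,
# assuming resolving power lambda/dlambda = m*N (m = order, N = lines illuminated).
lam1, lam2 = 589.0e-9, 589.6e-9     # the sodium D doublet, as an illustrative pair
m = 1
R = lam1 / (lam2 - lam1)            # required resolving power
print(f"required lambda/dlambda ~ {R:.0f}, i.e. about {R/m:.0f} lines in first order")
```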

The parabolic antenna

Now suppose that the radio source is at a slight angle theta from the vertical. Then the various antennas are receiving signals a little out of phase. The receiver adds all these out-of-phase signals together, and so we get nothing, if the angle theta is too big.

The smallest angle that can be resolved by an antenna array of length L is theta = lambda/L.  
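
For example (illustrative numbers, not from the text):

```python
# Sketch: the smallest resolvable angle theta ~ lambda/L for an antenna array,
# with an assumed 1-meter wavelength and a 100-meter array.
import math
lam, L = 1.0, 100.0
theta = lam / L                     # radians
print(f"theta ~ {theta:.4f} rad = {math.degrees(theta):.2f} degrees")
```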

Colored films; crystals

If we look at the reflection of a light source in a thin film, we see the sum of two waves; if the thicknesses are small enough, these two waves will produce an interference, either constructive or destructive, depending on the signs of the phases. It might be, for instance, that for red light, we get an enhanced reflection, but for blue light, which has a different wavelength, perhaps we get a destructively interfering reflection, so that we see a bright red reflection. If we change the thickness, i.e., if we look at another place where the film is thicker, it may be reversed, the red interfering and the blue not, so it is bright blue, or green, or yellow, or whatnot. So we see colors when we look at thin films and the colors change if we look at different angles, because we can appreciate that the timings are different at different angles.

We used a grating and we saw the diffracted image on the screen. If we had used monochromatic light, it would have been at a certain specific place. Then there were various higher-order images also. From the positions of the images, we could tell how far apart the lines on the grating were, if we knew the wavelength of the light.

This principle is used to discover the positions of the atoms in a crystal. The only complication is that a crystal is three-dimensional; it is a repeating three-dimensional array of atoms. We cannot use ordinary light, because we must use something whose wavelength is less than the space between the atoms or we get no effect; so we must use radiation of very short wavelength, i.e., x-rays. So, by shining x-rays into a crystal and by noticing how intense is the reflection in the various orders, we can determine the arrangement of the atoms inside without ever being able to see them with the eye!

The origin of the refractive index

The index of refraction

It is approximately true that light or any electrical wave does appear to travel at the speed c/n through a material whose index of refraction is n, but the fields are still produced by the motions of all the charges, including the charges moving in the material, and with these basic contributions of the field travelling at the ultimate velocity c.

We shall try to understand the effect in a very simple case. A source which we shall call "the external source" is placed a large distance away from a thin plate of transparent material, say glass. We inquire about the field at a large distance on the opposite side of the plate.

According to the principles we have stated earlier, an electric field anywhere that is far from all moving charges is the (vector) sum of the fields produced by the external source (at S) and the fields produced by each of the charges in the plate of glass, every one with its proper retardation at the velocity c.

When the electric field of the source acts on these atoms it drives the electrons up and down, because it exerts a force on the electrons. And moving electrons generate a field; they constitute new radiators. These new radiators are related to the source S, because they are driven by the field of the source. The total field is not just the field of the source S, but it is modified by the additional contribution from the other moving charges.

Before we proceed with our study of how the index of refraction comes about, we should understand that all that is required to understand refraction is to understand why the apparent wave velocity is different in different materials. The bending of light rays comes about just because the effective speed of the waves is different in the materials. To remind you how that comes about we have drawn in the left figure several successive crests of an electric wave which arrives from a vacuum onto the surface of a block of glass. The arrow perpendicular to the wave crests indicates the direction of travel of the wave. Now all oscillations in the wave must have the same frequency. (We have seen that driven oscillations have the same frequency as the driving source.) This means, also, that the wave crests for the waves on both sides of the surface must have the same spacing along the surface because they must travel together, so that a charge sitting at the boundary will feel only one frequency. The shortest distance between crests of the wave, however, is the wavelength, which is the velocity divided by the frequency. On the vacuum side it is lambda_0=2pi c/omega and on the other side it is lambda=2pi v/omega or lambda=2pi c/omega n, if v=c/n is the velocity of the wave. From the figure we can see that the only way for the waves to “fit” properly at the boundary is for the waves in the material to be travelling at a different angle with respect to the surface. From the geometry of the figure you can see that for a “fit” we must have lambda_0/sintheta_0 =lambda/sintheta or sintheta_0/sintheta=n, which is Snell’s law.

The index of refraction is given by

    [ n=1+{ N, q_e^2 over 2epsilon_0, m (w_0^2-w^2)} ]

with N the number of atoms per unit volume of the plate, q_e and m the charge and mass of an electron, w the angular frequency of the radiation, and w_0 the resonant frequency of an electron bound in the atom.
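
A small sketch evaluating this formula (the gas density and the ultraviolet resonance frequency are assumed, illustrative numbers, not values given in the notes):

```python
# Sketch: n = 1 + N q_e^2 / (2 eps0 m (w0^2 - w^2)) for a gas, with assumed numbers.
import math

q_e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12   # SI values
N  = 2.7e25                  # molecules per m^3 (roughly a gas at standard conditions)
w0 = 2 * math.pi * 3e15      # assumed ultraviolet resonance frequency, rad/s

def index(w):
    return 1 + N * q_e**2 / (2 * eps0 * m_e * (w0**2 - w**2))

for color, lam in [("red", 650e-9), ("blue", 450e-9)]:
    w = 2 * math.pi * 3e8 / lam
    print(color, " n - 1 =", index(w) - 1)
# n - 1 comes out a little larger for blue than for red
```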

Dispersion

For most ordinary gases (for instance, for air, most colorless gases, hydrogen, helium, and so on) the natural frequencies of the electron oscillators correspond to ultraviolet light. These frequencies are higher than the frequencies of visible light, that is, w_0 is much larger than w of visible light, and to a first approximation, we can disregard w^2 in comparison with w_0^2. Then we find that the index is nearly constant. So for a gas, the index is nearly constant. This is also true for most other transparent substances, like glass. If we look at our expression a little more closely, however, we notice that as w rises, taking a little bit more away from the denominator, the index also rises. So n rises slowly with frequency. The index is higher for blue light than for red light. That is the reason why a prism bends the light more in the blue than in the red.

The phenomenon that the index depends upon the frequency is called the phenomenon of dispersion, because it is the basis of the fact that light is "dispersed" by a prism into a spectrum.

At frequencies very close to the natural frequency the index can get enormously large, because the denominator can go to zero.

If we beam x-rays on matter, or radiowaves (or any electric waves) on free electrons the term w_0^2-w^2 becomes negative, and we obtain the result that n is less than one. That means that the effective speed of the waves in the substance is faster than c! Can that be correct?

It is correct. In spite of the fact that it is said that you cannot send signals any faster than the speed of light, it is nevertheless true that the index of refraction of materials at a particular frequency can be either greater or less than 1.

What the index tells us is the speed at which the nodes (or crests) of the wave travel. The node of a wave is not a signal by itself. In a perfect wave, which has no modulations of any kind, i.e., which is a steady oscillation, you cannot really say when it "starts," so you cannot use it for a timing signal. In order to send a signal you have to change the wave somehow, make a notch in it, make it a little bit fatter or thinner.

We should remark that our analysis of the refractive index gives a result that is somewhat simpler than you would actually find in nature. To be completely accurate we must add some refinements. First, we should expect that our model of the atomic oscillator should have some damping force (otherwise once started it would oscillate forever, and we do not expect that to happen). In the presence of damping with coefficient gamma, we replace w_0^2-w^2 in the denominator of the refractive-index formula by w_0^2-w^2+{rm i} , gamma, w.

Absorption

As the wave goes through the material, it is weakened. The material is "absorbing" part of the wave. The wave comes out the other side with less energy. We should not be surprised at this, because the damping we put in for the oscillators is indeed a friction force and must be expected to cause a loss of energy. We see that the imaginary part  of a complex index of refraction n  represents an absorption (or "attenuation") of the wave.

In a material such as glass, the absorption of light is very small. This is to be expected from our Eq. (31.20), because the imaginary part of the denominator, {rm i} , gamma, w, is much smaller than the term w_0^2-w^2. But if the light frequency w is very close to w_0, then the index becomes almost completely imaginary. The absorption of the light becomes the dominant effect. It is just this effect that gives the dark lines in the spectrum of light which we receive from the sun. The light from the solar surface has passed through the sun's atmosphere (as well as the earth's), and the light has been strongly absorbed at the resonant frequencies of the atoms in the solar atmosphere.

The observation of such spectral lines in the sunlight allows us to tell the resonant frequencies of the atoms and hence the chemical composition of the sun's atmosphere. The same kind of observations tell us about the materials in the stars. From such measurements we know that the chemical elements in the sun and in the stars are the same as those we find on the earth.

Polarization

The electric vector of light

In ideally monochromatic light, the electric field must oscillate at a definite frequency, but since the x-component and the y-component can oscillate independently at a definite frequency, we must first consider the resultant effect produced by superposing two independent oscillations at right angles to each other. When the x-vibration and the y-vibration are not in phase, the electric field vector moves around in an ellipse. The motion in a straight line is a particular case corresponding to a phase difference of zero (or an integral multiple of pi); motion in a circle corresponds to equal amplitudes with a phase difference of 90 degrees (or any odd integral multiple of pi/2).

Light is linearly polarized (sometimes called plane polarized) when the electric field oscillates on a straight line.
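
A tiny sketch of the three cases (the amplitudes and phases are arbitrary illustrative choices):

```python
# Sketch: the tip of the electric vector for E_x = a*cos(wt), E_y = b*cos(wt + phi).
# phi = 0 gives a straight line, equal amplitudes with phi = pi/2 give a circle,
# and other choices give an ellipse.
import numpy as np

wt = np.linspace(0, 2 * np.pi, 200)

def trace(a, b, phi):
    return a * np.cos(wt), b * np.cos(wt + phi)

Ex, Ey = trace(1, 1, 0)
print("line:    max |Ey - Ex| =", float(np.max(np.abs(Ey - Ex))))     # ~0
Ex, Ey = trace(1, 1, np.pi / 2)
print("circle:  |E| varies by  ", float(np.ptp(np.hypot(Ex, Ey))))    # ~0 (constant radius)
Ex, Ey = trace(1, 0.5, np.pi / 4)
print("ellipse: |E| varies by  ", float(np.ptp(np.hypot(Ex, Ey))))    # nonzero
```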

Relativistic effects in radiation

Moving sources

We recall that the fundamental laws of electrodynamics say that, at large distances from a moving charge, the electric field is given by the formula

    [ {bf E}=-{qover 4pi epsilon_0 c^2} {d^2 {bf e}_{R'}over dt^2 } ]

The second derivative of the unit vector {bf e}_{R'} which points in the apparent direction of the charge, is the determining feature of the electric field. 

Associated with the electric field is a magnetic field, always at right angles to the electric field and at right angles to the apparent direction of the source, given by the formula

    [ B=- {bf e}_{R'}times {bf E}/c ]

Let the coordinates of the charge be (x,y,z), with z measured along the direction of observation. Now the direction of the vector {bf e}_{R'} depends mainly on x and y, but hardly at all upon z. The transverse components of the unit vector are x/R and y/R, with R the distance from the source. One finds

    [ E_x =- { q over 4 pi epsilon_0 c^2 R}, {d^2 x over dt^2 } quadquad E_y =- { q over 4 pi epsilon_0 c^2 R}, {d^2 y over dt^2 } ]

If the time of observation is called t  then the time τ to which this corresponds is delayed by the total distance that the light has to go, divided by the speed of light. In the first approximation, this delay is R/c  but in the next approximation we must include the effects of the position in the z-direction at the time τ. Thus τ is determined by 

    [ t=tau+{R+z(tau)over c} quadquad x'(t)=x(tau) ]

The figure on the left displays the graph of x'(t) for a source moving on a circle in the (x,z) plane. For a source moving fast, the cusps get very sharp.
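
Since the figure itself is not reproduced here, the cusped curve can be generated numerically; a minimal sketch (units with c = 1, and an illustrative source speed) finds tau from t = tau + (R + z(tau))/c by simple iteration:

```python
# Sketch: apparent motion x'(t) = x(tau) for a charge on a circle, with tau found
# from t = tau + (R + z(tau))/c by fixed-point iteration. Units with c = 1.
import numpy as np

c, R = 1.0, 100.0          # speed of light and (large) distance to the observer
v, a = 0.94, 1.0           # source speed (close to c, to sharpen the cusps) and orbit radius
omega = v / a              # angular velocity of the circular motion

def x_of(tau): return a * np.cos(omega * tau)
def z_of(tau): return a * np.sin(omega * tau)

def apparent_x(t, iterations=200):
    tau = t - R / c                          # first guess: delay R/c only
    for _ in range(iterations):
        tau = t - (R + z_of(tau)) / c        # include the z(tau)/c correction
    return x_of(tau)

ts = np.linspace(R / c, R / c + 2 * np.pi / omega, 9)
print([round(float(apparent_x(t)), 3) for t in ts])    # samples of the cusped curve
```
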
Synchrotron radiation

In the synchrotron we have electrons which go around in circles in a uniform magnetic field; they are travelling at very nearly the speed c, and it is possible to see the above radiation as actual light! First, let us see why they go in circles. We know that the force on a particle in a magnetic field is given by

    [ {bf F}=q , {bf v} times {bf B} ]

and it is at right angles both to the field and to the velocity. As usual, the force is equal to the rate of change of momentum with time. Since the force is at right angles to the velocity, the kinetic energy, and therefore the speed, remains constant. All the magnetic field does is to change the direction of motion. In a short time Delta t, the momentum vector changes at right angles to itself by an amount Delta p=F, Delta t and therefore p turns through an angle Deltatheta={Delta pover p}. But in this same time the particle has gone a distance Delta s=v, Delta t=R, Deltatheta. Combining this with the previous expressions, we find that the particle must be moving in a circle of radius R, with momentum

    [ p=q , B , R ]

angular velocity 

    [ omega ={vover R}= {q, v, Bover p} ]

If q is expressed in terms of the electronic charge q_e, the quantity pc can be measured in units of the electron volt:

    [ pc (eV)=3times 10^8 {qover q_e} , B, R ]

The mks unit of magnetic field is called a weber per square meter. Today, electromagnets wound with superconducting wire are able to produce steady fields of over 10 mks units. The field of the earth is about 10^{-5} weber per square meter at the equator.

We could imagine the synchrotron running at a billion electron volts; then, if we had a B of, say, 1 mks unit (10^4 gauss), we see that R would have to be 3.3 meters. The actual radius of the Caltech synchrotron is 3.7 meters, the field is a little bigger, and the energy is 1.5 billion electron volts, but it is the same idea.

We know that the total energy, including the rest energy, is given by W=sqrt{p^2c^2+m^2 c^4}. For an electron the rest energy is m, c^2=0.5times 10^6 , eV, so when p, c=10^9 eV we can neglect the rest energy in W. If W=10^9 , eV, it is easy to show that the speed differs from the speed of light by but one part in eight million!
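
The numbers quoted here are easy to reproduce:

```python
# Sketch: a 10^9 eV electron in a 1 weber/m^2 field, using pc(eV) = 3e8 (q/q_e) B R
# and W = sqrt((pc)^2 + (mc^2)^2) from the text above.
import math

pc, B, mc2 = 1.0e9, 1.0, 0.5e6     # eV, weber/m^2, eV

R = pc / (3e8 * B)                 # orbit radius, meters (taking q = q_e)
W = math.sqrt(pc**2 + mc2**2)      # total energy, eV
print("R =", round(R, 2), "m")          # about 3.3 m
print("1 - v/c =", 1 - pc / W)          # about 1.25e-7, one part in eight million
```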

We turn now to the radiation emitted by such a particle. A particle moving on a circle of radius 3.3 meters, or 20 meters circumference, goes around once in roughly the time it takes light to go 20 meters. So the wavelength that should be emitted by such a particle would be 20 meters, in the shortwave radio region. The effective wavelength is instead much shorter, since the time scale is reduced by the factor of eight million found above, and the acceleration, which involves a second derivative with respect to time, brings in the square of that factor; the emitted wavelength comes out roughly 64times 10^{12} times smaller than 20 meters, which corresponds to the x-ray region. Thus, even though a slowly moving electron would have radiated 20-meter radiowaves, the relativistic effect cuts down the wavelength so much that we can see it!

To further appreciate what we would observe, suppose that we were to take such light (to simplify things, because these pulses are so far apart in time, we shall just take one pulse) and direct it onto a diffraction grating, which is a lot of scattering wires. The pulse strikes the grating head-on, and all the oscillators in the grating, together, are violently moved up and then back down again, just once. They then produce effects in various directions. The sum of the reflections from all the successive wires is an electric field which is a series of pulses, and it is very like a sine wave whose wavelength is the distance between the pulses, just as it would be for monochromatic light striking the grating! So, we get colored light all right. But, by the same argument, will we not get light from any kind of a “pulse”? No. Suppose that the curve were much smoother; then we would add all the scattered waves together, separated by a small time between them. Then we see that the field would not shake at all; it would be a very smooth curve, because each pulse does not vary much in the time interval between pulses.

Bremsstrahlung

When very energetic electrons move through matter they spit radiation in a forward direction. This is called bremsstrahlung.

The Doppler effect

Let us suppose that an atom oscillating at a natural frequency w_0 is moving in a direction toward the observer at velocity v. At what frequency would the oscillations be received by us? The first crest that arrives has a certain delay, but the next one is delayed less because in the meantime the atom has moved closer to the receiver. We find that the frequency is increased by the factor 1/(1-v/c). Taking into account the relativistic dilation in the rate of passage of time, the observed frequency w is

begin{align*} w={ w_0sqrt{1-{v^2over c^2} } over 1-{vover c}} end{align*}

The shift in frequency observed in the above situation is called the Doppler effect: if something moves toward us the light it emits appears more violet, and if it moves away it appears more red.
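
A quick evaluation of the formula for a few (arbitrary) approach speeds:

```python
# Sketch: relativistic Doppler shift w = w0*sqrt(1 - v^2/c^2)/(1 - v/c)
# for a source approaching the observer at speed v = beta*c.
import math

def doppler_factor(beta):
    return math.sqrt(1 - beta**2) / (1 - beta)

for beta in (0.01, 0.1, 0.5):
    print(f"v/c = {beta}:  w/w0 = {doppler_factor(beta):.4f}")
# the observed frequency is raised (shifted toward the violet) for an approaching source
```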

Quantum behaviour

An experiment with bullets

Fig. 37–1 Interference experiment with bullets.

To try to understand the quantum behavior of electrons, we shall compare and contrast their behavior, in a particular experimental setup, with the more familiar behavior of particles like bullets, and with the behavior of waves like water waves.

We consider first the behavior of bullets in the experimental setup shown diagrammatically in Fig. 37–1. We have a machine gun that shoots a stream of bullets. It is not a very good gun, in that it sprays the bullets (randomly) over a fairly large angular spread, as indicated in the figure. In front of the gun we have a wall (made of armor plate) that has in it two holes. Beyond the wall is a backstop (say a thick wall of wood) which will “absorb” the bullets when they hit it. In front of the backstop we have an object which we shall call a “detector” of bullets. It might be a box containing sand. The detector can be moved back and forth (in what we will call the x-direction). With this apparatus, we can find out experimentally the answer to the question: “What is the probability that a bullet which passes through the holes in the wall will arrive at the backstop at the distance x from the center?” By “probability” we mean the chance that the bullet will arrive at the detector, which we can measure by counting the number which arrive at the detector in a certain time and then taking the ratio of this number to the total number that hit the backstop during that time. Bullets arrive in lumps: when we find something in the detector, it is always one whole bullet. We shall say: “Bullets always arrive in identical lumps.”

The result of such measurements with this apparatus (we have not yet done the experiment, so we are really imagining the result) are plotted in the graph drawn in part (c) of Fig. 37–1. In the graph we plot the probability to the right and x vertically, so that the x-scale fits the diagram of the apparatus. We call the probability P_{12} because the bullets may have come either through hole 1 or through hole 2. You will not be surprised that P_{12} is large near the middle of the graph but gets small if x is very large. You may wonder, however, why P_{12} has its maximum value at x=0. We can understand this fact if we do our experiment again after covering up hole 2, and once more while covering up hole 1. When hole 2 is covered, bullets can pass only through hole 1, and we get the curve marked P_1 in part (b) of the figure. As you would expect, the maximum of P_1 occurs at the value of x which is on a straight line with the gun and hole 1. When hole 1 is closed, we get the symmetric curve P_2 drawn in the figure. Comparing parts (b) and (c) of Fig. 37–1, we find the important result that

    [ P_{12}=P_1+P_2 ]

The probabilities just add together. The effect with both holes open is the sum of the effects with each hole open alone. We shall call this result an observation of “no interference,” for a reason that you will see later. So much for bullets. They come in lumps, and their probability of arrival shows no interference.

An experiment with waves

 

Fig. 37–2. Interference experiment with water waves.

Now we wish to consider an experiment with water waves. The apparatus is shown diagrammatically in Fig. 37–2. We have a shallow trough of water. A small object labeled the “wave source” is jiggled up and down by a motor and makes circular waves. To the right of the source we have again a wall with two holes, and beyond that is a second wall, which, to keep things simple, is an “absorber,” so that there is no reflection of the waves that arrive there. This can be done by building a gradual sand “beach.” In front of the beach we place a detector which can be moved back and forth in the x-direction, as before. The detector is now a device which measures the “intensity” of the wave motion.

Now let us measure the wave intensity for various values of x (keeping the wave source operating always in the same way). We get the interesting-looking curve marked I_{12} in part (c) of the figure.

We have already worked out how such patterns can come about when we studied the interference of electric waves. In this case we would observe that the original wave is diffracted at the holes, and new circular waves spread out from each hole. If we cover one hole at a time and measure the intensity distribution at the absorber we find the rather simple intensity curves shown in part (b) of the figure. I_1 is the intensity of the wave from hole 1 (which we find by measuring when hole 2 is blocked off) and I_2 is the intensity of the wave from hole 2 (seen when hole 1 is blocked).

The intensity I_{12} observed when both holes are open is certainly not the sum of I_1 and I_2. We say that there is “interference” of the two waves. At some places (where the curve I_{12} has its maxima) the waves are “in phase” and the wave peaks add together to give a large amplitude and, therefore, a large intensity. We say that the two waves are “interfering constructively” at such places. There will be such constructive interference wherever the distance from the detector to one hole is a whole number of wavelengths larger (or shorter) than the distance from the detector to the other hole.

At those places where the two waves arrive at the detector with a phase difference of pi (where they are “out of phase”) the resulting wave motion at the detector will be the difference of the two amplitudes. The waves “interfere destructively,” and we get a low value for the wave intensity. We expect such low values wherever the distance between hole 1 and the detector is different from the distance between hole 2 and the detector by an odd number of half-wavelengths.

The quantitative relationship between I_1, I_2, and I_{12} is the following, with h_1 and h_2 complex numbers (the wave amplitudes):

    [ I_{12}=|h_1+h_2|^2=I_1+I_2+2, sqrt{I_1, I_2}, cosvarphi ]

with I_i=|h_i|^2 the modulus squared of the complex number h_i and varphi the relative phase between the two waves. You will notice that the result is quite different from that obtained with bullets (Eq. 37.1). The last term in (37.4) is the “interference term.” The intensity can have any value, and it shows interference.
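
A direct check of this relation with a pair of (arbitrarily chosen) complex amplitudes:

```python
# Sketch: I_1, I_2, I_12 built from two complex amplitudes h_1 and h_2 with relative
# phase phi, checking I_12 = I_1 + I_2 + 2*sqrt(I_1*I_2)*cos(phi).
import cmath, math

h1 = 1.0 + 0.0j
for phi in (0.0, math.pi / 2, math.pi):
    h2 = 0.8 * cmath.exp(1j * phi)
    I1, I2, I12 = abs(h1)**2, abs(h2)**2, abs(h1 + h2)**2
    formula = I1 + I2 + 2 * math.sqrt(I1 * I2) * math.cos(phi)
    print(f"phi = {phi:.2f}:  I12 = {I12:.3f}   formula = {formula:.3f}")
```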

An experiment with electrons 

Fig. 37–3. Interference experiment with electrons.

Now we imagine a similar experiment with electrons. It is shown diagrammatically in Fig. 37–3. We make an electron gun which consists of a tungsten wire heated by an electric current and surrounded by a metal box with a hole in it. If the wire is at a negative voltage with respect to the box, electrons emitted by the wire will be accelerated toward the walls and some will pass through the hole. All the electrons which come out of the gun will have (nearly) the same energy. In front of the gun is again a wall (just a thin metal plate) with two holes in it. Beyond the wall is another plate which will serve as a “backstop.” In front of the backstop we place a movable detector. The detector might be a geiger counter or, perhaps better, an electron multiplier, which is connected to a loudspeaker. The first thing we notice with our electron experiment is that we hear sharp “clicks” from the detector (that is, from the loudspeaker). And all “clicks” are the same. There are no “half-clicks.”

As we move the detector around, the rate at which the clicks appear is faster or slower, but the size (loudness) of each click is always the same. If we lower the temperature of the wire in the gun the rate of clicking slows down, but still each click sounds the same. We would notice also that if we put two separate detectors at the backstop, one or the other would click, but never both at once. We conclude, therefore, that whatever arrives at the backstop arrives in “lumps.” All the “lumps” are the same size: only whole “lumps” arrive, and they arrive one at a time at the backstop. We shall say: “Electrons always arrive in identical lumps.”

Just as for our experiment with bullets, we can now proceed to find experimentally the answer to the question: “What is the relative probability that an electron ‘lump’ will arrive at the backstop at various distances x from the center?” As before, we obtain the relative probability by observing the rate of clicks, holding the operation of the gun constant. The result of our experiment is the interesting curve marked P_{12} in part (c) of Fig. 37–3. The mathematics is the same as that we had for the water waves!
We conclude the following: The electrons arrive in lumps, like particles, and the probability of arrival of these lumps is distributed like the distribution of intensity of a wave. It is in this sense that an electron behaves “sometimes like a particle and sometimes like a wave.”

Watching the electrons

Fig. 37–4. A different electron experiment.

We shall now try the following experiment. To our electron apparatus we add a very strong light source, placed behind the wall and between the two holes, as shown in Fig. 37–4. We know that electric charges scatter light. So when an electron passes, however it does pass, on its way to the detector, it will scatter some light to our eye, and we can see where the electron goes.

Here is what we see: every time that we hear a “click” from our electron detector (at the backstop), we also see a flash of light either near hole 1 or near hole 2, but never both at once! And we observe the same result no matter where we put the detector. From this observation we conclude that when we look at the electrons we find that the electrons go either through one hole or the other. Still, when we succeed in watching which hole our electrons come through, we no longer get the old interference curve but a new one showing no interference! If we turn out the light, P_{12} is restored.
We must conclude that when we look at the electrons the distribution of them on the screen is different than when we do not look. By trying to “watch” the electrons we have changed their motions.

You may be thinking: “Don’t use such a bright source! The light waves will then be weaker and will not disturb the electrons so much.” But as we turn down the intensity of the light source we only change the rate at which photons are emitted, so some electrons get by without being seen.

We learned in an earlier chapter that the momentum carried by a “photon” is inversely proportional to its wavelength. If we want to disturb the electrons only slightly we should lower its frequency (the same as increasing its wavelength). Let us use light of a redder color. Let us try the experiment with longer waves. At first, nothing seems to change. The results are the same. Then a terrible thing happens. You remember that when we discussed the microscope we pointed out that, due to the wave nature of the light, there is a limitation on how close two spots can be and still be seen as two separate spots. This distance is of the order of the wavelength of light. So now, when we make the wavelength longer than the distance between our holes, we see a big fuzzy flash when the light is scattered by the electrons. We can no longer tell which hole the electron went through! We just know it went somewhere! And it is just with light of this color that we find that the jolts given to the electron are small enough so that we begin to get some interference effect.

It was suggested by Heisenberg that the then new laws of nature could only be consistent if there were some basic limitation on our experimental capabilities not previously recognized. He proposed, as a general principle, his uncertainty principle, which we can state in terms of our experiment as follows: “It is impossible to design an apparatus to determine which hole the electron passes through, that will not at the same time disturb the electrons enough to destroy the interference pattern.”

First principles of Quantum mechanics

The probability of an event in an ideal experiment is given by the square of the absolute value of a complex number phi which is called the probability amplitude: P=|phi|^2

When an event can occur in several alternative ways, the probability amplitude for the event is the sum of the probability amplitudes for each way considered separately. There is interference: P=|phi_1+phi_2|^2

If an experiment is performed which is capable of determining whether one or another alternative is actually taken, the probability of the event is the sum of the probabilities for each alternative. The interference is lost: P=P_1+P_2.

The uncertainty principle

This is the way Heisenberg stated the uncertainty principle originally: if you make a measurement on any object and determine the x-component of its momentum with an uncertainty Delta p, you cannot, at the same time, know its x-position more accurately than

    [ Delta xgeq {hbarover 2 Delta p} ]
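
As a rough order-of-magnitude application (the confinement length is an assumed, atom-sized value):

```python
# Sketch: the minimum velocity spread Delta v >= hbar/(2 m Delta x) for an electron
# confined to a region about the size of an atom.
hbar, m_e = 1.055e-34, 9.109e-31      # J*s, kg
dx = 1e-10                            # m, roughly the radius of an atom

dv = hbar / (2 * m_e * dx)
print(f"Delta v >= {dv:.2e} m/s")     # about 6e5 m/s
```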

The Kinetic Theory of gases

The pressure of a gas

We imagine that we have a volume of gas in a box, at one end of which is a piston which can be moved (Fig. 39–1). We would like to find out what force on the piston results from the fact that there are atoms in this box. One way of expressing the force is to talk about the force per unit area: if A is the area of the piston, then the force on the piston will be written as a number times the area. We define the pressure as the force per unit area

    [ P={Fover A} ]

The differential work dW done on the gas in compressing it by moving the piston is then

    [ dW=F , (-dx)=-P, dV ]

If v_x is the velocity of an atom in the direction of the piston, the momentum delivered to the piston by one collision is 2, m , v_x, because the atom is “reflected.” The number of collisions in a time dt is equal to half the number of atoms which are in the region within a distance v_x , dt of the piston (the other half are moving away from it). Let N be the total number of atoms in a volume V; then the number of collisions against the piston in time dt is N, A, v_x , dt/(2V), and the pressure satisfies

    [ P, V= N, m , langle v^2_x rangle =frac{2}{3} , U ]

with U=N, m, langle v^2 rangle /2 the total kinetic energy and langle v^2 rangle=3, langle v_x^2 rangle the average square velocity. For somewhat wider generality, we shall write

    [ P, V=(gamma-1), U ]

with gamma=5/3 for a monatomic gas. A compression in which there is no heat energy added or removed is called an adiabatic compression. For an adiabatic compression all the work done goes into changing the internal energy, and therefore P, dV=-dU, leading to

    [ P, V^gamma ={rm const} ]
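
To see how the adiabatic law follows, combine P, dV=-dU with U=P, V/(gamma -1) from the relation above:

    [ dU={P, dV+V, dPover gamma -1}=-P, dV quadRightarrowquad gamma, P, dV+V, dP=0 quadRightarrowquad gamma, {dVover V}+{dPover P}=0 ]

Integrating gives gamma, ln V+ln P={rm const}, which is P, V^gamma ={rm const}.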

For a photon gas v=c, and the kinetic energy m, v^2/2 of an atom is replaced by the photon energy p, c, so one finds

    [ P, V = {Uover 3} quad quad {rm photon~gas} ]

or equivalently gamma=4/3.

Temperature

The mean molecular kinetic energy is a property only of the “temperature.” Being a property of the “temperature,” and not of the gas, we can use it as a definition of the temperature. We say that the mean molecular kinetic energy is {3over 2} kT with T the absolute temperature and k=1.38times 10^{-23} Joule per degree Kelvin.

The ideal gas law

Now, of course, we can use our definition of temperature to find the law for the pressure of gases as a function of the temperature:

    [ P, V=N, k, T ]

Furthermore, at the same temperature, pressure, and volume, the number of molecules is determined; it is the same for any gas.
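
For instance (a minimal sketch with illustrative room conditions):

```python
# Sketch: number of molecules in one liter of any gas at about 1 atm and 300 K,
# from P V = N k T.
k, P, V, T = 1.38e-23, 1.013e5, 1.0e-3, 300.0    # SI units

N = P * V / (k * T)
print(f"N = {N:.2e} molecules")   # about 2.4e22, the same for any gas
```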

The principles of statistical mechanics

The Boltzmann law

Here we study how the molecules are distributed in space when there are forces acting on them, and how they are distributed in velocity. Let P be the pressure at height h. In the absence of gravity the pressure at h+dh would be the same; in the presence of gravity the pressure at h must exceed the pressure at h+dh by the weight of the gas in the section between h and h+dh. Now mg is the force of gravity on each molecule, where g is the acceleration due to gravity, and n=N/V is the number of molecules per unit volume. If the temperature is constant, one finds

    [ dP=dn , k , T=-m ,g ,n ,dh ]

Solving for the density n, one finds

    [n=n_0 , e^{-{Eover kT}} ]

with E=m, g, h the potential energy of an atom. One can show that this proposition is true for any conservative force. For example, the molecules may be charged electrically, and may be acted on by an electric field or another charge that attracts them. The equation above, known as Boltzmann’s law, states that the probability of finding molecules in a given spatial arrangement varies exponentially with the negative of the potential energy of that arrangement, divided by kT.

Similarly, the density n_v of molecules with a given kinetic energy E_v is

    [n_{v}=n_0 , e^{-{E_vover kT}} ]
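
A small sketch of the barometric form of Boltzmann's law (the gas, temperature, and heights are illustrative assumptions):

```python
# Sketch: n = n0 * exp(-m g h / kT) for nitrogen molecules at a uniform 300 K.
import math

k, g, T = 1.38e-23, 9.8, 300.0
m = 28 * 1.66e-27                 # mass of an N2 molecule, kg

for h in (0, 1000, 5000, 10000):  # heights in meters
    print(f"h = {h:5d} m:  n/n0 = {math.exp(-m * g * h / (k * T)):.3f}")
# the density falls by roughly 1/e every ~9 km at this temperature
```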

The specific heats of gases

We have seen that a monatomic gas like helium has gamma=5/3. But suppose the gas is made of more complicated molecules; what about a gas of diatomic molecules? We know that for each of the two atoms the average kinetic energy should be 3kT/2, making 3kT for the pair. In addition we have the vibration energy due to the fact that the two atoms are bound by an attractive force. To an excellent approximation, the molecule can be represented as two atoms connected by a spring. The average potential energy of a harmonic oscillator equals its average kinetic energy, and the kinetic energy of the vibration is kT/2, so the potential energy of vibration adds another kT/2. The grand total of energy per molecule is therefore U=7kT/2, leading to gamma=9/7.

The failure of classical physics

We might try some force law other than a spring, but it turns out that anything else will only make gamma higher. If we include more forms of energy, gamma approaches unity more closely, contradicting the facts. The fact is that there are electrons in each atom, and we know from their spectra that there are internal motions; each of the electrons should have at least kT/2 of kinetic energy, and something for the potential energy, so when these are added in, γ gets still smaller. It is wrong.

The first great paper on the dynamical theory of gases was by Maxwell in 1859. On the basis of ideas we have been discussing, he was able accurately to explain a great many known relations, such as Boyle’s law, the diffusion theory, the viscosity of gases, and things we shall talk about later. He listed all these great successes in a final summary, and at the end he said, “Finally, by establishing a necessary relation between the motions of translation and rotation (he is talking about the kT/2 theorem) of all particles not spherical, we proved that a system of such particles could not possibly satisfy the known relation between the two specific heats.”

Ten years later, in a lecture, he said, “I have now put before you what I consider to be the greatest difficulty yet encountered by the molecular theory.” These words represent the first discovery that the laws of classical physics were wrong.

Without proof, we may state the results of statistical mechanics in the quantum-mechanical theory. The simple result we have in classical mechanics, that n=n_0, e^{-{rm energy}/kT}, becomes the following very important theorem: if the energies of the set of molecular states are called, say, E_i, then in thermal equilibrium the number of molecules in the particular state of energy E_i is n_i=n_0, e^{-{E_iover kT}}. So a molecule is less likely to be in a higher energy state than in a lower one.

Now it turns out that for a harmonic oscillator the energy levels are evenly spaced: E_n=(n+1/2), hbar omega. Now suppose that kT is much less than hbar omega. Then the probability of the oscillator being in any state other than the ground state is extremely small. Practically all the atoms are in the ground state. All oscillators are in the bottom state, and their motion is effectively “frozen”; there is no contribution of it to the specific heat.
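
The freezing of the oscillators can be made quantitative with the standard quantum-statistical result for the average energy of an oscillator (quoted here without proof, measured from the ground state), langle E rangle =hbar omega/(e^{hbar omega/kT}-1):

```python
# Sketch: average thermal energy of a quantum oscillator, <E> = hbar*w/(exp(hbar*w/kT)-1),
# compared with the classical value kT. (Standard result, quoted without proof.)
import math

def mean_energy_over_kT(x):
    """x = hbar*w / kT; returns <E>/kT."""
    return x / math.expm1(x)

for x in (0.1, 1.0, 5.0, 10.0):
    print(f"hbar*w/kT = {x:4.1f}:  <E>/kT = {mean_energy_over_kT(x):.4f}")
# for kT >> hbar*w the classical value 1 is recovered; for kT << hbar*w the
# oscillator is frozen in its ground state and contributes almost nothing
```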

The laws of thermodynamics

We have been discussing the properties of matter from the atomic point of view, trying to understand roughly what will happen if we suppose that things are made of atoms obeying certain laws. However, there are a number of relationships among the properties of substances which can be worked out without consideration of the detailed structure of the materials. The determination of the relationships among the various properties of materials, without knowing their internal structure, is the subject of thermodynamics.

 We know from the kinetic theory that the pressure of a gas is caused by molecular bombardment, and we know that if we heat a gas, so that the bombardment increases, the pressure must increase. If we increase the temperature at a given volume, we increase the pressure and if we compress the gas, we will find that the temperature will rise. From the kinetic theory, one can derive a quantitative relationship between these two effects, but instinctively one might guess that they are related in some necessary fashion which is independent of the details of the collisions.

The science of thermodynamics began with an analysis, by the great engineer Sadi Carnot, of the problem of how to build the best and most efficient engine.

Now the way a steam engine ordinarily operates is that heat from a fire boils some water, and the steam so formed expands and pushes on a piston which makes a wheel go around; then the steam goes into another box, where it is condensed by cool water, and the water is pumped back into the boiler, so that it circulates continuously. Heat is thus supplied to the engine and converted into work. Now would it be better to use alcohol? What property should a substance have so that it makes the best possible engine? That was the question to which Carnot addressed himself.

The results of thermodynamics are all contained implicitly in certain apparently simple statements called the laws of thermodynamics.  The first law states the conservation of energy:  The heat Q put into the system, plus the work W done on the system, is the increase in the energy U of the system; the latter energy is sometimes called the internal energy:

    [ Delta U=Delta Q+Delta W ]

  We know that if we do work against friction, say, the work lost to us is equal to the heat produced. If we do work in a room at temperature T, and we do the work slowly enough, the room temperature does not change much, and we have converted work into heat at a given temperature. What about the reverse possibility? Is it possible to convert the heat back into work at a given temperature? The second law of thermodynamics asserts that it is not.  The heat cannot be taken in at a certain temperature and converted into work with no other change in the system or the surroundings. Heat cannot, of itself, flow from a cold to a hot object. 

Reversible engines


Suppose that we have a gas in a cylinder equipped with a frictionless piston. Also, suppose that we have two heat pads that have definite temperatures T_1 and T_2. We will suppose in this case that T_1>T_2. Let us first heat the gas, keeping it in contact with the pad at T_1, and at the same time expand it by pulling the piston out very slowly. If we then push the piston back slowly, keeping the temperature fixed, the heat pours back into the pad. We see that such an isothermal (constant-temperature) expansion, done slowly and gently enough, is a reversible process.

  For an ideal gas  P,V=N, k, T. During an isothermal expansion the pressure falls as the volume increases until we stop at the point b. At the same time, a certain heat Q_1 must flow into the gas from the reservoir.  Having completed the isothermal expansion, stopping at the point b, let us take the cylinder away from the reservoir and continue the expansion. This time we permit no heat to enter the cylinder. Again we perform the expansion slowly, so there is no reason why we cannot reverse it, and we again assume there is no friction. The gas continues to expand and the temperature falls, since there is no longer any heat entering the cylinder. We let the gas expand, following the curve marked (2), until the temperature falls to T_2. This kind of expansion, made without adding heat, is called an adiabatic expansion.

 For an ideal gas, we already know that curve (2) has the form P, V^{gamma}={rm const}  where γ is a constant greater than 1, so that the adiabatic curve has a more negative slope than the isothermal curve. The gas cylinder has now reached the temperature T_2, so that if we put it on the heat pad at temperature T_2  there will be no irreversible changes. Now we slowly compress the gas while it is in contact with the reservoir at T_2, following the curve marked (3). Because the cylinder is in contact with the reservoir, the temperature does not rise, but heat Q_2 flows from the cylinder into the reservoir at the temperature T_2. Having compressed the gas isothermally along curve (3) to the point d, we remove the cylinder from the heat pad at temperature T_2 and compress it still further, without letting any heat flow out. The temperature will rise, and the pressure will follow the curve marked (4). If we carry out each step properly, we can return to the point a at temperature T_1 where we started, and repeat the cycle.

  Now the point is that this cycle is reversible, so that we could represent all the steps the other way around. We could have gone backwards instead of forwards.
 If we go around the cycle in one direction, we must do work on the gas; if we go in the other direction, the gas does work on us.
Incidentally, it is easy to find out what the total amount of work is, because the work during any expansion is the integral of the pressure over the change in volume,
int P, dV, which, like any int y(x), dx, is the area under the curve. So the area under each of the numbered curves is a measure of the work done by or on the gas in the corresponding step. It is easy to see that the net work done is the shaded area of the picture.

   If an engine is reversible,   the amount of work one will obtain if the engine absorbs a given amount of heat at temperature T_1 and delivers heat at some other temperature T_2  does not depend on the design of the engine. It is a property of the world, not a property of a particular engine.

The efficiency of ideal gas engines

  Let us now compute the heats Q_1 and Q_2 exchanged along a Carnot cycle. We have
 

    [ Q_1=int_a^b p, dV=N, k, T_1 int_a^b {dVover V} =N, k, T_1 , ln{V_bover V_a} ]

  Similarly Q_2=N, k, T_2 , ln{V_c over V_d}. On the other hand, along the two adiabatic curves
 

    [ T_1, V_b^{gamma-1}=T_2, V_c^{gamma-1} quadquad T_1, V_a^{gamma-1}=T_2, V_d^{gamma-1} ]

 Dividing these two relations gives V_b/V_a=V_c/V_d, and so altogether one finds
 

    [ {Q_1over T_1}={Q_2over T_2} ]

 This is the relation we were seeking. Although proved for a perfect gas engine, we know it must be true for any reversible engine at all.
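
A numerical check with an ideal monatomic gas (the reservoir temperatures and the corner volumes V_a, V_b are arbitrary illustrative choices):

```python
# Sketch: a Carnot cycle for one mole of ideal monatomic gas (gamma = 5/3), checking
# numerically that Q1/T1 = Q2/T2 and that the efficiency equals 1 - T2/T1.
import math

Nk = 8.314                        # N*k for one mole (the gas constant), J/K
gamma = 5.0 / 3.0
T1, T2 = 400.0, 300.0             # reservoir temperatures, K
Va, Vb = 1.0e-3, 2.0e-3           # m^3, chosen arbitrarily

# Adiabatic relations T V^(gamma-1) = const fix the remaining corner volumes:
Vc = Vb * (T1 / T2) ** (1 / (gamma - 1))
Vd = Va * (T1 / T2) ** (1 / (gamma - 1))

Q1 = Nk * T1 * math.log(Vb / Va)  # heat absorbed at T1
Q2 = Nk * T2 * math.log(Vc / Vd)  # heat delivered at T2
print("Q1/T1 =", round(Q1 / T1, 6), "  Q2/T2 =", round(Q2 / T2, 6))
print("efficiency =", round((Q1 - Q2) / Q1, 4), "  1 - T2/T1 =", round(1 - T2 / T1, 4))
```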
 
 The efficiency of an engine is defined as
 

    [  {rm Efficiency}={Q_1-Q_2over Q_1}=1-{T_2over T_1} <1 ]

  The change in entropy Delta S of a system is defined as the amount of heat Delta Q reversibly added to it at temperature T, divided by that temperature: Delta S=Delta Q/T

Summary of the laws of thermodynamics

  The three laws of thermodynamics can then be stated as follows.

First law:       The energy of the universe is always constant.
    
Second law:       The entropy of the universe is always increasing. A process whose only net result is to take heat from a reservoir and convert it to work is impossible. No heat engine taking heat Q_1 from T_1 and delivering heat Q_2 at T_2 can do more work than a reversible engine, for which

    [ W=Q_1-Q_2= Q_1, left({T_1-T_2over T_1}right) ]

Third law: At T=0 , S=0.   

In a reversible change, the total entropy of all parts of the system (including reservoirs) does not change. In an irreversible change, the total entropy of the system always increases.