Lecture notes - TU Graz - Institut für Theoretische Physik

Einführung in die Theoretische Physik:
Quantenmechanik
teilweise Auszug aus dem Skript
von H.G. Evertz und W. von der Linden
überarbeitet von Enrico Arrigoni
Version vom 27. April 2016
Inhaltsverzeichnis

1 Einleitung ................................................. 5
2 Literatur .................................................. 7
3 Failures of classical physics .............................. 8
  3.1 Blackbody radiation .................................... 8
  3.2 Photoelectric effect ................................... 10
4 Wellen und Teilchen ........................................ 12
  4.1 Das Doppelspaltexperiment mit klassischen Teilchen ..... 12
      4.1.1 Mathematische Beschreibung ....................... 14
  4.2 Licht .................................................. 15
  4.3 Elektronen ............................................. 17
      4.3.1 de Broglie Wellenlänge ........................... 18
5 The wave function and Schrödinger equation ................. 20
  5.1 Probability density and the wave function .............. 20
  5.2 Wave equation for light ................................ 20
  5.3 Heuristic derivation of the wave function for massive particles ... 21
  5.4 Wave equations ......................................... 22
  5.5 Potential .............................................. 23
  5.6 Time-independent Schrödinger equation .................. 23
  5.7 Normalisation .......................................... 25
  5.8 Summary of important concepts .......................... 26
      5.8.1 (1) Wave-particle dualism ........................ 26
      5.8.2 (2) New description of physical quantities ....... 26
      5.8.3 (3) Wave equation for Ψ: Schrödinger equation .... 26
      5.8.4 (4) Time independent Schrödinger equation ........ 27
6 Einfache Potentialprobleme ................................. 28
  6.1 Randbedingungen für die Wellenfunktion ................. 28
  6.2 Konstantes Potential ................................... 30
  6.3 Gebundene Zustände im Potentialtopf .................... 31
      6.3.1 Potentialtopf mit unendlich hohen Wänden ......... 31
      6.3.2 Potentialtopf mit endlicher Tiefe ................ 34
      6.3.3 Zusammenfassung: gebundene Zustände .............. 40
  6.4 Streuung an einer Potentialbarriere .................... 41
      6.4.1 Tunneling ........................................ 42
      6.4.2 Resonanz ......................................... 46
  6.5 Klassischer Limes ...................................... 47
7 Functions as Vectors ....................................... 48
  7.1 The scalar product ..................................... 49
  7.2 Operators .............................................. 53
  7.3 Eigenvalue Problems .................................... 54
  7.4 Hermitian Operators .................................... 55
  7.5 Additional independent variables ....................... 57
8 Dirac notation ............................................. 57
  8.1 Vectors ................................................ 58
  8.2 Rules for operations ................................... 58
  8.3 Operators .............................................. 59
      8.3.1 Hermitian Operators .............................. 60
  8.4 Continuous vector spaces ............................... 61
  8.5 Real space basis ....................................... 62
  8.6 Change of basis and momentum representation ............ 63
  8.7 Identity operator ...................................... 63
9 Principles and Postulates of Quantum Mechanics ............. 65
  9.1 Postulate I: Wavefunction or state vector .............. 65
  9.2 Postulate II: Observables .............................. 65
  9.3 Postulate III: Measure of observables .................. 66
      9.3.1 Measure of observables, more concretely .......... 67
      9.3.2 Continuous observables ........................... 68
  9.4 Expectation values ..................................... 69
      9.4.1 Continuous observables ........................... 70
  9.5 Postulate IV: Time evolution ........................... 70
      9.5.1 Generic state .................................... 71
      9.5.2 Further examples ................................. 71
10 Examples and exercises .................................... 72
  10.1 Wavelength of an electron ............................. 72
  10.2 Photoelectric effect .................................. 72
  10.3 Some properties of a wavefunction ..................... 72
  10.4 Particle in a box: expectation values ................. 74
  10.5 Delta-potential ....................................... 75
  10.6 Expansion in a discrete (orthogonal) basis ............ 75
  10.7 Hermitian operators ................................... 76
  10.8 Standard deviation .................................... 77
  10.9 Heisenberg’s uncertainty .............................. 77
  10.10 Qubits and measure ................................... 79
  10.11 Qubits and time evolution ............................ 82
  10.12 Free-particle evolution .............................. 84
  10.13 Momentum representation of x̂ (NOT DONE) ............. 85
  10.14 Ground state of the hydrogen atom .................... 87
  10.15 Excited isotropic states of the hydrogen atom ........ 88
  10.16 Tight-binding model .................................. 90
11 Some details .............................................. 93
  11.1 Probability density ................................... 93
  11.2 Fourier representation of the Dirac delta ............. 93
  11.3 Transition from discrete to continuum ................. 94
1 Einleitung

....
Die Quantenmechanik ist von zentraler Bedeutung für unser Verständnis der
Natur. Schon einfache Experimente zeigen, wie wir sehen werden, dass das
klassische deterministische Weltbild mit seinen wohldefinierten Eigenschaften
der Materie inkorrekt ist. Am augenscheinlichsten tritt dies auf mikroskopischer Skala zutage, in der Welt der Atome und Elementarteilchen, die man
nur mit Hilfe der Quantenmechanik beschreiben kann. Aber natürlich ist
auch die makroskopische Welt quantenmechanisch, was z.B. bei Phänomenen
wie dem Laser, der LED, der Supraleitung, Ferromagnetismus, bei der Kernspinresonanz (MR in der Medizin), oder auch bei großen Objekten wie Neutronensternen wichtig wird.
Einer der zentralen Punkte der Quantenmechanik ist, dass nur Aussagen
über Wahrscheinlichkeiten gemacht werden können, anders als in der klassischen deterministischen Physik, in der man das Verhalten einzelner Teilchen mittels der Bewegungsgleichungen vorhersagen kann. Die entsprechende Bewegungsgleichung in der Quantenmechanik, die Schrödingergleichung,
beschreibt statt deterministischer Orte sogenannte Wahrscheinlichkeitsamplituden.
Wie alle Theorien kann man auch die Quantenmechanik nicht herleiten, genausowenig wie etwa die Newtonschen Gesetze. Die Entwicklung einer
Theorie erfolgt anhand experimenteller Beobachtungen, oft in einem langen
Prozess von Versuch und Irrtum. Dabei sind oft neue Begriffsbildungen nötig.
Wenn die Theorie nicht nur die bisherigen Beobachtungen beschreibt,
sondern eigene Aussagekraft hat, folgen aus ihr Vorhersagen für weitere Experimente, die überprüfbar sind. Wenn diese Vorhersagen eintreffen, ist
die Theorie insoweit bestätigt, aber nicht “bewiesen”, denn es könnte immer
andere Experimente geben, die nicht richtig vorhergesagt werden.
Trifft dagegen auch nur eine Vorhersage der Theorie nicht ein, so ist sie
falsifiziert. Die in vielen Aspekten zunächst sehr merkwürdige Quantenmechanik hat bisher alle experimentellen Überprüfungen bestens überstanden,
im Gegensatz zu einigen vorgeschlagenen Alternativen (z.B. mit ,,Hidden
Variables”).
In den letzten Jahren hat es eine stürmische Entwicklung in der Anwendung der experimentell immer besser beherrschbaren fundamentalen Quantenmechanik gegeben, z.B. für die Quanteninformationstheorie, mit zum Teil
spektakulären Experimenten (,,Quantenteleportation”), die zentral die nichtlokalen Eigenschaften der Quantenmechanik nutzen. Grundlegende quantenmechanische Phänomene werden auch für speziell konstruierte Anwendungen
immer interessanter, wie etwa die Quantenkryptographie oder Quantencomputer.
2 Literatur

....
• R. Shankar, Principles of Quantum Mechanics, 1994.
(Pages 107-112 and 157-164 for parts in german of lecture notes)
• C. Cohen-Tannoudji, B. Diu, F. Laloë, Quantum Mechanics, 1977.
(Pages 67-78 for parts in german of lecture notes)
• J.L. Basdevant, J. Dalibard, Quantum Mechanics, 2002.
• J.J. Sakurai, Modern Quantum Mechanics, 1994.
• J.S. Townsend, A Modern Approach to Quantum Mechanics, 1992.
• L.E. Ballentine, Quantum Mechanics: A Modern Development, 1998.
3 Failures of classical physics

....

3.1 Blackbody radiation
At high temperatures matter (for example metals) emits a continuous radiation spectrum. The color emitted at a given temperature is essentially the same, independent of the particular substance.
An idealized description is the so-called blackbody model, which describes
a perfect absorber and emitter of radiation.
In a blackbody, electromagnetic waves of all wavevectors k are present.
One can consider a wave with wavevector k as an independent oscillator
(mode).
Energy density u(ω) of blackbody radiation at different temperatures (2000 K, 3000 K, 4000 K):
• The energy distribution u(ω) vanishes at small and large ω,
there is a maximum in between.
• The maximum frequency ωmax (“color”) of the distribution obeys the
law (Wien’s law) ωmax = const. T
Classical understanding

For a given frequency ω (= 2πν), there are many oscillators (modes) k having that frequency. Since ω = c|k|, the number (density) n(ω) of oscillators with frequency ω is proportional to the surface of a sphere with radius ω/c, i. e.
n(ω) ∝ ω²   (3.1)
The energy equipartition law of statistical physics tells us that at temperature T each mode is excited to the same energy K_B T. Therefore, at temperature T the energy density u(ω, T) at a certain frequency ω would be given by
u(ω, T) ∝ K_B T ω²   (3.2)
(Rayleigh hypothesis).
Experimental observation: u(ω) for 2000 K, 3000 K and 4000 K, compared with the classical curve K_B T ω².
This agrees with experiments at small ω, but at large ω, u(ω, T) must decrease again and go to zero. It must, because otherwise the total energy
U = ∫₀^∞ u(ω, T) dω   (3.3)
would diverge!
Planck’s hypothesis:
The “oscillators” (electromagnetic waves) cannot have a continuum of energies. Their energies come in “packets” (quanta) of size h ν = ħω.
h ≈ 6.6 × 10⁻³⁴ Joule sec is Planck’s constant (ħ = h/(2π)).
At small frequencies, as long as K_B T ≫ ħω, this effect is irrelevant. It will start to appear at K_B T ∼ ħω: here u(ω, T) will start to decrease. And in fact, Wien’s empirical observation is that at ħω ∝ K_B T, u(ω, T) displays a maximum. Eventually, for K_B T ≪ ħω the oscillators are not excited at all; their energy is vanishingly small. A more elaborate theoretical treatment gives the correct functional form.
Average energy of “oscillators”

Sketch: for ħω ≪ K_B T the average oscillator energy is ≈ K_B T and u(ω) ∝ K_B T ω²; for ħω > K_B T one has u(ω) ∼ ħω e^(−ħω/K_B T); the maximum of u(ω) lies in between.
(A) Classical behavior: average energy of an oscillator ⟨E⟩ = K_B T
⇒ distribution u(ω) ∝ K_B T ω² at all frequencies!

(B) Quantum behavior (energy quantisation):
Small ω: like the classical case, the oscillator is excited up to ⟨E⟩ ≈ K_B T ⇒ u(ω) ∝ K_B T ω².
Large ω: the first excited state (E = 1·ħω) is occupied with probability e^(−ħω/K_B T) (Boltzmann factor) ⇒ ⟨E⟩ ≈ ħω e^(−ħω/K_B T) ⇒ u(ω) ∼ ħω e^(−ħω/K_B T).
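As a rough numerical illustration of the two regimes (a sketch, not part of the original script; the temperature, the frequency grid and the use of the full Planck average ħω/(e^(ħω/K_B T) − 1), which interpolates between the limits (A) and (B), are choices made here):

import numpy as np

kB   = 1.380649e-23        # J/K
hbar = 1.054571817e-34     # J s
T    = 3000.0              # K

omega = np.logspace(12, 16, 9)                  # angular frequencies in 1/s
E_classical = np.full_like(omega, kB * T)       # equipartition: <E> = K_B T
E_quantum   = hbar * omega / np.expm1(hbar * omega / (kB * T))   # Planck average

for w, Ec, Eq in zip(omega, E_classical, E_quantum):
    print(f"omega = {w:9.2e} 1/s   <E>_classical = {Ec:.2e} J   <E>_quantum = {Eq:.2e} J")

# For hbar*omega << K_B T both values agree; for hbar*omega >> K_B T the quantum
# value is exponentially suppressed, so u(omega) ~ omega^2 <E> no longer diverges.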
3.2
Photoelectric effect
Electrons in a metal are confined by an energy barrier (work function) φ.
One way to extract them is to shine light onto a metallic plate.
Light transfers an energy Elight to the electrons.
The rest of the energy Elight − φ goes into the kinetic energy of the electron
Ekin = 12 m v 2 .
By measuring Ekin , one can get Elight .
Schematic setup to measure E_kin (metal cathode, anode, applied voltage V = E_kin,min/e, incident light of frequency ν, work function φ): a current will be measured as soon as
E_kin ≥ eV.
examples: Sec. 10.2
Classically, we would expect the total energy transferred to an electron, E_light = φ + E_kin, to be proportional to the radiation intensity. The experimental results give a different picture: while the current (i. e. the number of electrons per second expelled from the metal) is proportional to the radiation intensity, E_light is proportional to the frequency of light:
E_light = h ν   (3.4)
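A small numerical sketch of Eq. (3.4) (not part of the script; the work function φ ≈ 2.3 eV is a typical textbook value assumed here for illustration):

h   = 6.62607015e-34    # J s
c   = 3.0e8             # m/s
e   = 1.602176634e-19   # J per eV
phi = 2.3 * e           # assumed work function of the metal

for lam_nm in (700.0, 500.0, 300.0):        # wavelength of the incident light
    nu = c / (lam_nm * 1e-9)                # frequency
    E_light = h * nu                        # photon energy E_light = h*nu
    E_kin = E_light - phi                   # kinetic energy of the emitted electron
    if E_kin > 0:
        print(f"{lam_nm:5.0f} nm: E_light = {E_light/e:.2f} eV, E_kin = {E_kin/e:.2f} eV")
    else:
        print(f"{lam_nm:5.0f} nm: E_light = {E_light/e:.2f} eV < phi, no electrons emitted")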
Sketch: E_light and E_kin as functions of the light frequency ν; electrons are emitted only above a threshold frequency ν_min.
...
Summary: Planck’s energy quantum

Blackbody radiation and the photoelectric effect are explained by Planck’s idea that light carries energy only in “quanta” of size
E = hν.   (3.5)
This means that light is not a continuous object; rather, its constituents are discrete: the photons.
4 Wellen und Teilchen

....
Wir folgen in dieser Vorlesung nicht der historischen Entwicklung der Quantenmechanik mit ihren Umwegen, sondern behandeln einige Schlüsselexperimente,
an denen das Versagen der klassischen Physik besonders klar wird, und die zu
den Begriffen der Quantenmechanik führen. Dabei kann, im Sinne des oben
gesagten, die Quantenmechanik nicht ,,hergeleitet”, sondern nur plausibel
gemacht werden.
Die drastischste Beobachtung, die zum Verlassen des klassischen Weltbildes führt, ist, dass alle Materie und alle Strahlung gleichzeitig Teilchencharakter und Wellencharakter hat. Besonders klar wird dies im sogenannten
Doppelspaltexperiment. Dabei laufen Teilchen oder Strahlen auf eine Wand
mit zwei Spalten zu. Dahinter werden sie auf einem Schirm detektiert.
4.1
Das Doppelspaltexperiment mit klassischen Teilchen
Klassische Teilchen (z.B. Kugeln)
Wir untersuchen, welches Verhalten wir bei Kugeln erwarten, die durch
die klassische Newtonsche Mechanik beschrieben werden. (Tatsächliche Kugeln verhalten sich natürlich quantenmechanisch. Die Effekte sind aber wegen
ihrer hohen Masse nicht erkennbar).
Wir machen ein Gedankenexperiment mit dem Aufbau, der in Abbildung
(1) skizziert ist.
– Eine Quelle schießt Kugeln zufällig in den Raumwinkel ∆Ω. An den
Spalten werden die Kugeln gestreut.
– Auf dem Schirm S werden die Kugeln registriert. Die Koordinate entlang des Schirms sei x. Aus der Häufigkeit des Auftreffens von Kugeln in
einem Intervall (x, x + ∆x) ergibt sich die Wahrscheinlichkeit P (x)∆x,
dass eine Kugel in diesem Intervall ankommt.
– Die Quelle wird mit so geringer Intensität betrieben, dass die Kugeln
einzeln ankommen.
Das Experiment wird nun auf verschiedene Weisen durchgeführt:
1) Nur Spalt 1 ist offen: dies liefert die Verteilung P1 (x)
Abbildung 1: {ExKugeln} Doppelspaltexperiment mit Kugeln: Quelle, Raumwinkel ∆Ω, Wand mit Spalten 1 und 2, Detektor D und Schirmkoordinate x mit den Verteilungen P1(x), P2(x), P12(x). H beschreibt die beobachteten Auftreffhäufigkeiten.
2) Nur Spalt 2 ist offen: dies liefert die Verteilung P2 (x)
3) Beide Spalte sind offen: dies liefert P12 (x), nämlich einfach die Summe
P12 (x) = P1 (x) + P2 (x) der vorherigen Verteilungen.
Wasserwellen
Wir wiederholen den Versuch mit Wasserwellen. Die Versuchsanordnung
ist in Abbildung (2) dargestellt.
– Die Quelle erzeugt kreisförmige Wellen.
– Die Wand hat wieder zwei Spalten.
– Der Schirm S sei ein Absorber, so dass keine Wellen reflektiert werden.
– Wir finden, dass die Auslenkung beim Schirm mit der Zeit oszilliert,
mit einer ortsabhängigen Amplitude.
– Der Detektor D messe die Zeitgemittelte Intensität I = |Amplitude|2 .
Abbildung 2: {ExWasserwellen} Doppelspaltexperiment mit Wasserwellen: Spaltabstand ∆, Intensitäten I1 und I2 der einzelnen Spalte, gemessene Intensität I ≠ I1 + I2.
Man beobachtet:
1. Die Intensität I kann alle positiven Werte annehmen (abhängig von der
Quelle). Es tritt keine Quantelung auf.
2. Wir lassen nur Spalt 1 oder 2 offen und finden:
Die Intensitäten I1 bzw. I2 gleichen den entsprechenden Häufigkeiten
beim Experiment mit Kugeln.
3. Wir lassen beide Spalte offen:
I12(x) zeigt ein Interferenzbild; I12 ≠ I1 + I2.
Es hängt vom Abstand ∆ der Spalte ab.
Die Interferenz zwischen beiden (Teil)Wellen ist an manchen Stellen
konstruktiv und an anderen destruktiv. Konstruktive Interferenz tritt
auf, wenn
Abstand (Detektor zu Spalt 1) - Abstand (Detektor zu Spalt 2) = n ·λ,
wobei λ die Wellenlänge ist, und n ∈ N.
4.1.1
Mathematische Beschreibung
Es ist bequem, zur Beschreibung der zeitlichen Oszillation die Funktion e^(iφ) = cos φ + i sin φ zu verwenden, von der hier nur der Realteil benutzt wird.
Die momentane Amplitude am Ort des Detektors D ist
A1 = Re (a1 e^(iα1) e^(iωt))   (nur Spalt 1 offen)
A2 = Re (a2 e^(iα2) e^(iωt))   (nur Spalt 2 offen)
A12 = Re (a1 e^(iωt+iα1) + a2 e^(iωt+iα2))   (beide Spalte offen)
Der Bezug zu den gemessenen, zeitgemittelten Intensitäten ist
I1 = ⟨(Re a1 e^(iωt+iα1))²⟩ = ½ |a1|²
I2 = ⟨(Re a2 e^(iωt+iα2))²⟩ = ½ |a2|²
I12 = ⟨(Re (a1 e^(iωt+iα1) + a2 e^(iωt+iα2)))²⟩ = (|a1|² + |a2|²)/2 + |a1||a2| cos(α1 − α2) = I1 + I2 + |a1||a2| cos(α1 − α2).
Der Term mit dem cos ist der Interferenzterm, der von der Phasendifferenz α1 − α2 abhängt, die sich aus dem Gangunterschied ergibt.
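Eine kleine numerische Skizze der Interferenzformel (nicht Teil des Skripts; Amplituden und Phasendifferenzen sind frei gewählte Beispielwerte):

import numpy as np

a1 = a2 = 1.0                               # Amplituden der beiden Teilwellen
I1 = 0.5 * a1**2
I2 = 0.5 * a2**2

for dalpha in np.linspace(0.0, 2 * np.pi, 9):     # Phasendifferenz alpha1 - alpha2
    I12 = I1 + I2 + a1 * a2 * np.cos(dalpha)      # Interferenzterm
    print(f"alpha1 - alpha2 = {dalpha:5.2f}:  I12 = {I12:4.2f}  (I1 + I2 = {I1 + I2:4.2f})")
# I12 schwankt zwischen 0 und 2*(I1 + I2); ohne Interferenzterm waere stets I12 = I1 + I2.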
4.2
Licht
Die übliche sehr erfolgreiche Beschreibung von Licht in der makroskopischen
Welt ist die einer Welle, mit elektrischen und magnetischen Feldern. Die
durch Experimente notwendig gewordene teilchenartige Beschreibung über
Photonen war eine Revolution.
Licht besteht aus Photonen. Vor der Besprechung des Doppelspaltexperiments seien kurz einige frühe Experimente erwähnt, welche die Teilchennatur von Licht zeigen.
Details in Sec. 3
• Das temperaturabhängige Spektrum eines sog. schwarzen Körpers, lässt
sich klassisch nicht verstehen. Bei einer klassischen Wellennatur des
Lichts würde die Intensität des Spektrums zu hohen Frequenzen hin
divergieren. Die Energiedichte des elektromagnetischen Feldes wäre unendlich !
Die Erklärung für das tatsächlich beobachtete Spektrum fand Planck
im Jahr 1900 (am selben Tag, an dem er die genauen experimentellen
Ergebnisse erfuhr !), indem er postulierte, dass Licht nur in festen
Einheiten der Energie E = hν abgestrahlt wird. Die später Photonen
genannten ,,Quanten” gaben der Quantentheorie ihren Namen. Dieses Postulat war eine ,,Verzweiflungstat” Plancks und stieß auf grosse
Skepsis. Einstein nannte es ,,verrückt”.
• Beim Photoeffekt schlägt ein Photon der Frequenz ν aus einem Metall ein Elektron heraus, das mit der kinetischen Energie hν − Φ austritt, wobei Φ eine Austrittsarbeit ist. Es gibt daher eine Schwelle für
die Frequenz des Photons, unterhalb derer keine Elektronen austreten.
Klassisch hätte man erwartet, dass bei jeder Photonfrequenz mit zunehmender Lichtintensität mehr und mehr Elektronen ,,losgeschüttelt”
würden. Stattdessen bestimmt die Intensität des Lichtes nur die Anzahl
der austretenden Elektronen und auch nicht ihre kinetische Energie.
Mit der Lichtquantenhypothese konnte Einstein 1905 den Photoeffekt
erklären. Es war diese Arbeit, für die er 1921 den Nobelpreis erhielt.
• Auch der Comptoneffekt, mit Streuung von Licht an Elektronen, lässt
sich nur über Photonen erklären.
• Noch direkter bemerkt man die Partikelstruktur von Licht mit Geigerzählern, Photomultipliern oder mit CCDs (Digitalkameras!). Interessanterweise kann man sogar mit bloßem Auge bei einer schwach beleuchteten Wand fleckige Helligkeitsschwankungen erkennen, die sich
schnell ändern. Sie beruhen auf der Schwankung der Anzahl auftreffender Photonen, die man ab etwa 10 pro 100msec wahrnehmen kann.
Licht hat Wellennatur. Die Wellennatur des Lichtes erkennt man klar am Doppelspaltexperiment: Aufbau und Ergebnis bezüglich der Intensitäten verhalten sich genau wie beim Experiment mit Wasserwellen. In der Maxwelltheorie ist die Intensität des Lichts proportional zum Quadrat der Amplitude des elektrischen Feldes, I ∼ |E⃗|², also von derselben Struktur wie bei den Wasserwellen, nur dass jetzt das elektrische und das magnetische Feld die Rolle der Amplitude spielen.
Licht: Teilchen oder Wellen ?
Ganz anders als bei Wasserwellen ist aber das Auftreffen des Lichtes auf
den Schirm: die Photonen treffen einzeln auf, jeweils mit der Energie hν, und
erzeugen trotzdem ein Interferenzbild, wenn 2 Spalten geöffnet sind ! Der
Auftreffpunkt eines einzelnen Photons lässt sich dabei nicht vorhersagen,
sondern nur die zugehörige Wahrscheinlichkeitsverteilung !
4.3
Elektronen
Noch etwas deutlicher wird die Problematik von Teilchen- und Wellennatur
im Fall von Materie, wie Elektronen oder Atomen. Die ,,Teilchennatur” ist
hier sehr klar. Zum Beispiel kann man für ein einzelnes Elektron Ladung und
Masse bestimmen.
Interferenz von Elektronen Das Verhalten am Doppelspalt zeigt aber
wieder Wellennatur (siehe Abbildung 3) !
Experimentelle Beobachtungen:
1. Die Elektronen kommen (wie klassische Teilchen) als Einheiten am Detektor an.
2. In Abhängigkeit vom Ort des Detektors variiert die Zählrate.
3. Gemessen wird eine Häufigkeitsverteilung, d.h. die Auftreffwahrscheinlichkeit.
4. Öffnet man nur Spalt 1 oder nur Spalt 2, so tritt eine Häufigkeitsverteilung
wie bei Kugeln (oder bei Wellen mit nur 1 offenen Spalt) auf.
Abbildung 3: {ExElektronen} Doppelspaltexperiment mit Elektronen.
5. Öffnet man dagegen beide Spalte, so beobachtet man ein Interferenzmuster, also wieder eine Wahrscheinlichkeitsverteilung P12 6= P1 + P2 .
Insbesondere sinkt durch das Öffnen des 2. Spaltes an manchen Stellen
sogar die Auftreffwahrscheinlichkeit auf Null.
Dasselbe Verhalten hat man auch mit Neutronen, Atomen und sogar
Fulleren-Molekülen beobachtet !
4.3.1
de Broglie Wellenlänge
Das Interferenzergebnis zeigt uns, dass sowohl Photonen als auch Elektronen (als auch jedes mikroskopische Teilchen) einen Wellencharakter haben. Für gegebenen Impuls p ist die räumliche Periodizität dieser Wellen gegeben durch die de-Broglie-Wellenlänge
λ = 2π/k = h/p.   (4.1)
Diese Längenskala λ erscheint sowohl im Doppelspaltexperiment, als auch z.B. bei der Streuung von Teilchen mit Impuls p an einem Kristall. Man erhält dort ein Interferenzbild (Davisson-Germer-Experiment), und zwar für Elektronen mit Impuls p dasselbe wie für Photonen mit demselben Impuls!

....
Wie wir in den nächsten Kapiteln sehen werden, werden freie Elektronen quantenmechanisch durch eine Wahrscheinlichkeitsamplitude in der Form einer ebenen Welle e^(ipx) beschrieben.
examples: Sec. 10.1
Quantenmechanische Effekte werden unterhalb einer Längenskala der Größenordnung der de-Broglie-Wellenlänge λ wichtig. Sie beträgt zum Beispiel bei
Protonen:   λ ≃ 0.28 Å / √(E_kin/eV)
Elektronen: λ ≃ 12 Å / √(E_kin/eV)
Photonen:   λ ≃ 380 Å / √(E_kin/eV)   (4.2)
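Zur Kontrolle der Größenordnungen (Skizze, nicht Teil des Skripts): die de-Broglie-Wellenlänge λ = h/p mit p = √(2m E_kin) für nichtrelativistische Elektronen und Protonen:

import numpy as np

h  = 6.62607015e-34     # J s
eV = 1.602176634e-19    # J
m_e, m_p = 9.1093837e-31, 1.6726219e-27    # Massen von Elektron und Proton in kg

def lambda_de_broglie(m, E_kin_eV):
    """de-Broglie-Wellenlaenge in Angstroem fuer kinetische Energie in eV."""
    p = np.sqrt(2.0 * m * E_kin_eV * eV)
    return h / p * 1e10

for E in (1.0, 100.0):
    print(f"E_kin = {E:6.1f} eV:  Elektron {lambda_de_broglie(m_e, E):6.2f} A,"
          f"  Proton {lambda_de_broglie(m_p, E):6.3f} A")
# Bei 1 eV: ca. 12 A fuer Elektronen und ca. 0.28 A fuer Protonen, wie in (4.2).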
5 The wave function and Schrödinger equation

....

5.1 Probability density and the wave function
In Sec. 4 we have seen that the trajectory of a particle is not deterministic,
but described by a probability distribution amplitude. In other words, for
each time t and point in space r there will be a certain probability W to
find the particle within an (infinitesimal) volume ∆V around the point r.
This probability (which depends of course on ∆V ) is given in terms of the
probability density Pt (r), as W = Pt (r)∆V
Obviously, the total probability of finding the particle within a volume V is given by
∫_V P_t(r) d³r.
...
As discussed in Sec. 4, the relevant (i. e. additive) quantity for a given particle is its probability amplitude Ψ(t, r). This is a complex function, and the probability density P is given by
P_t(r) = |Ψ(t, r)|².   (5.1)
Ψ is also called the wavefunction of the particle. As discussed in Sec. 4, it
is possible to predict the time evolution of Ψ, which is what we are going to
do in the present section.
To do this, we will start from the wave function of light, whose properties
we already know from classical electrodynamics, and try to extend it to
matter waves.
5.2
Wave equation for light
...
The wave function describing a free electromagnetic wave can be taken, for example, as the amplitude of one of its two constituent fields,¹ E or B, i. e. it has the form
Ψ = E0 e^(i k·r − i ω_k t).   (5.2)
¹ This is correct because the intensity of light, which corresponds to the probability density of finding a photon, is proportional to |E|².
...
Planck’s quantisation hypothesis was that light of a certain frequency ω comes in quanta of energy
E = ħω   (5.3)
(or, taking ν = ω/(2π), E = h ν), with Planck’s constant
h = 2π ħ ≈ 6.6 × 10⁻³⁴ Joule sec.   (5.4)
...
From the energy we can derive the momentum. Here we use the relation between energy E and momentum p for photons, which are particles of zero rest mass and move with the velocity of light c:²
E = c |p|.   (5.5)
From (5.3) we thus obtain
p = ħω/c = ħk = h/λ,   (5.6)
which is precisely the de Broglie relation between momentum and wavelength of a particle discussed in Sec. 4. Here, we have used the dispersion relation
ω = c|k|   (5.7)
for electromagnetic waves.
5.3 Heuristic derivation of the wave function for massive particles

...

With the assumption that matter particles (i. e. particles with a nonzero rest mass, such as electrons, protons, etc.) with a given momentum and energy behave as waves, their wave function will be described by a form identical to (5.2), however with a different dispersion relation. The latter can be derived by starting from the energy-momentum relation, which instead of (5.5) reads (in the nonrelativistic case)
E = p²/(2m).   (5.8)
² This can be obtained by starting from Einstein’s formula E = mc² and by using the fact that p = m v = m c.
Applying Planck’s (5.3) and de Broglie’s (5.6) relations, we readily obtain the dispersion relation for nonrelativistic massive particles
ħω = ħ²k²/(2m).   (5.9)

5.4 Wave equations

...
One property of electromagnetic waves is the superposition principle:
If Ψ1 and Ψ2 are two (valid) wave functions, any linear combination a1 Ψ1 +
a2 Ψ2 is a valid wave function. Due to this linearity property, any valid wave
function must satisfy a (linear) wave equation.
We already know this equation for free electromagnetic waves. It is given by
(∇² − (1/c²) ∂²/∂t²) Ψ = 0.   (5.10)
In order to write down the corresponding equation for massive particles, we first notice that Ψ (5.2) (valid both for matter as well as light) is an eigenfunction of the differential operators ∇ and ∂/∂t, i. e.
−i∇Ψ = kΨ,   i ∂Ψ/∂t = ωΨ.   (5.11)
Replacing (5.11) in (5.10), we see that the latter is equivalent to the dispersion relation
ω² = c²k²   (5.12)
...
(which is, of course, equivalent to (5.7)).
This immediately suggests using the dispersion relation (5.9) combined with the relation (5.11) to write down the analogue of (5.10) for massive particles:
[iħ ∂/∂t − (1/2m)(−iħ∇)²] Ψ = 0,   (5.13)
which is the (time-dependent) Schrödinger equation for massive (nonrelativistic) free particles.
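As a quick consistency check (a sketch using sympy; not part of the script), one can verify symbolically that a one-dimensional plane wave of the form (5.2) solves (5.13) exactly when the dispersion relation (5.9) holds:

import sympy as sp

x, t = sp.symbols('x t', real=True)
k, m, hbar = sp.symbols('k m hbar', positive=True)

omega = hbar * k**2 / (2 * m)                 # dispersion relation (5.9)
Psi = sp.exp(sp.I * (k * x - omega * t))      # plane wave, one spatial dimension

# left-hand side of (5.13): [ i*hbar d/dt - (1/2m) (-i*hbar d/dx)^2 ] Psi
lhs = sp.I * hbar * sp.diff(Psi, t) - (-sp.I * hbar)**2 * sp.diff(Psi, x, 2) / (2 * m)
print(sp.simplify(lhs))                       # prints 0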
5.5
Potential
...
For a particle moving in a constant potential (i.e. t- and r-independent) V, the dispersion relation (5.8) is replaced with E = p²/(2m) + V, and (5.13) acquires a corresponding term V Ψ on the l.h.s. The guess by Schrödinger was to formally do the same also for a t- and r-dependent potential³ V(t, r), yielding the complete time-dependent Schrödinger equation
iħ ∂Ψ/∂t = [−(ħ²/2m)∇² + V(t, r)] Ψ.   (5.14)
Of course, for a nonconstant potential (5.2) is no longer a solution of (5.14). The differential operator on the r.h.s. of (5.14) is termed the Hamilton operator Ĥ. Symbolically, thus, the Schrödinger equation is written as
iħ ∂Ψ/∂t = Ĥ Ψ.   (5.15)
...
In general, Ψ can belong to a larger vector space (such as a function in 3N
variables for N particles or contain further degrees of freedom, such as spin).
Combining the relations (5.3) and (5.6), we see that energy and momentum, which were variables in classical physics, now become differential operators
E → iħ ∂/∂t,   p → −iħ∇.   (5.16)
This is one important aspect of quantum mechanics, which we will discuss
further below, namely the fact that physical quantities become linear operators.
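To make this statement concrete, here is a small numerical sketch (not from the script; the grid, the wavenumber and the units ħ = 1 are arbitrary choices): the operator p̂ = −iħ ∂/∂x is represented as a finite-difference matrix, and a sampled plane wave is (approximately) an eigenvector of it with eigenvalue ħk:

import numpy as np

hbar = 1.0
N, L = 400, 20.0
x  = np.linspace(0.0, L, N, endpoint=False)
dx = x[1] - x[0]
k  = 2 * np.pi * 3 / L                # wavenumber compatible with the periodic grid

psi = np.exp(1j * k * x)              # sampled plane wave

# central-difference matrix for d/dx with periodic boundaries:
# (D psi)_i = (psi_{i+1} - psi_{i-1}) / (2 dx)
eye = np.eye(N)
D = (np.roll(eye, 1, axis=1) - np.roll(eye, -1, axis=1)) / (2 * dx)
p_hat = -1j * hbar * D                # momentum as a linear operator (matrix)

p_psi = p_hat @ psi
eigenvalue = (p_psi / psi)[0]
print(np.allclose(p_psi, eigenvalue * psi))   # True: psi is an eigenvector of p_hat
print(eigenvalue.real, "close to", hbar * k)  # eigenvalue ~ hbar*k (up to O(dx^2))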
5.6 Time-independent Schrödinger equation

...

We consider in the following a stationary (i. e. time-independent) potential V(r) and look for solutions of (5.14) of the form (separation of variables)
Ψ(t, r) = f(t) ψ(r).   (5.17)
Dividing by f(t)ψ(r), (5.14) becomes
iħ (1/f(t)) df(t)/dt = (1/ψ(r)) [−(ħ²/2m)∇² + V(r)] ψ(r),   (5.18)
where the left-hand side is independent of r and the right-hand side is independent of t. Therefore both sides must be equal to a constant. By comparing with (5.14) we can recognise this constant as the energy E. (5.18) thus splits into two equations, the first being easy to solve:
iħ df(t)/dt = E f(t)  ⇒  f(t) = f0 exp(−iEt/ħ);   (5.19)
the second one is the time-independent Schrödinger equation
[−(ħ²/2m)∇² + V(r)] ψ(r) = E ψ(r),   (5.20)
where the operator in brackets is Ĥ.
This is the equation for a wave function of a particle with a fixed value of the energy. It is one of the most important equations in quantum mechanics and is used, e.g., to find atomic orbitals.

³ This can be understood if one assumes piecewise constant potentials V_i and requires that locally the wave equation should only depend on the local V_i. In the end, one takes the limit of a continuous V(t, r).
Schrödinger equation: ideas

These results suggest some ideas that we are going to meet again later:
• Physical quantities (observables) are replaced by differential operators. Here we had the case of energy E and momentum p:
E → iħ ∂/∂t = Ĥ = −(ħ²/2m)∇² + V(r),   p → p̂ = −iħ∇.   (5.21)
The “hat” ˆ is used to distinguish an operator from its corresponding value.
• (5.20) has the form of an eigenvalue equation similar to the one we encounter in linear algebra. The solutions of (5.20) are, thus, eigenfunctions⁴ of Ĥ.
⁴ also called eigenstates
• Solutions of (5.20) are called stationary states, since their time evolution is given by (5.18) with (5.19), so that the probability density
|Ψ(t, r)|2 is time-independent.
Ways to solve the time-dependent Schrödinger equation
Not every wave function will have the separable form (5.17). However, any wave function can be written as a linear combination of such terms. One can then evaluate the time evolution for each separate term using (5.19) and (5.20). This is the most common approach used to solve the time-dependent Schrödinger equation. We will discuss it again later.
5.7
Normalisation
...
Due to the linearity of (5.20), its solution can always be multiplied by a constant. An important point in quantum mechanics is that two wave functions differing by a constant describe the same physical state. The value of the constant can be (partly) restricted by the condition that the wavefunction is normalized. This is obtained by normalizing the probability density (5.1), i. e. by requiring that the total probability is 1. This gives the normalisation condition
⟨ψ|ψ⟩ ≡ ∫ |ψ(r)|² d³r = 1.   (5.22)
It is not strictly necessary, but useful, to normalize the wave function. If the wave function is not normalized, however, one has to remember that the probability density ρ(r) for finding a particle near r is not merely |ψ(r)|² but
ρ(r) = |ψ(r)|² / ⟨ψ|ψ⟩.   (5.23)
....
Notice that even after normalisation the constant is not completely determined, as one can always multiply by a number of modulus 1, i. e. a phase eiα .
Finally, notice that not all wave functions are normalizable. In some cases
the integral (5.22) may diverge. This is for example the case for free particles
(5.2). We will discuss this issue later.
examples: Sec. 10.3
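A minimal numerical sketch of the normalisation procedure (5.22)/(5.23) (not from the script; the Gaussian trial function and the grid are arbitrary choices):

import numpy as np

x  = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2.0)                      # an unnormalised trial wavefunction

norm2 = np.sum(np.abs(psi)**2) * dx            # <psi|psi> = integral |psi|^2 dx
psi_n = psi / np.sqrt(norm2)                   # normalised wavefunction
rho   = np.abs(psi)**2 / norm2                 # probability density (5.23) of the unnormalised psi

print(np.sum(np.abs(psi_n)**2) * dx)           # -> 1.0
print(np.sum(rho) * dx)                        # -> 1.0 as well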
5.8 Summary of important concepts

5.8.1 (1) Wave-particle dualism
Objects (electrons, electromagnetic waves) are both Waves and Particles:
Waves: Delocalized, produce interference
Particles: localized, quantized
Reconciling both aspects:
complex wave function Ψ(t, r) → interference
probability density ρ(r) ∝ |Ψ(t, r)|2
5.8.2 (2) New description of physical quantities

Physical quantities become differential operators on Ψ(t, r):
E → iħ ∂/∂t,   p → −iħ∇.   (5.24)
This comes by combining
(a) electromagnetic waves:  ω → i ∂/∂t,   k → −i∇;
(b) de Broglie, Planck:  ħω = E,   ħk = p.
5.8.3 (3) Wave equation for Ψ: Schrödinger equation

Combining (5.24) with the classical energy relation E = p²/(2m) + V(r) yields the Schrödinger equation
iħ ∂Ψ/∂t = [−ħ²∇²/(2m) + V(r)] Ψ ≡ ĤΨ.   (5.25)
Same idea as for electromagnetic waves: E² = c²p²  →  ∂²Ψ/∂t² = c²∇²Ψ.
5.8.4 (4) Time independent Schrödinger equation

Separable solution of (5.25): Ψ(t, r) = e^(−iEt/ħ) ψ(r), an eigenfunction of the energy operator. Requires solution of the eigenvalue equation
Ĥψ(r) = Eψ(r),
which determines energy levels and wavefunctions.
6 Einfache Potentialprobleme

....

Wir werden nun die zeitunabhängige Schrödingergleichung (5.20) für einige einfache Potentialprobleme lösen. In den Beispielen beschränken wir uns dabei auf eindimensionale Probleme.
6.1 Randbedingungen für die Wellenfunktion

....

Zuerst leiten wir Randbedingungen der Ortsraum-Wellenfunktion für die Eigenzustände von Ĥ her. Wir behandeln zunächst den eindimensionalen Fall.

1) Die Wellenfunktion ψ(x) ist immer stetig. Beweis: Wir betrachten
∫_{x0−ε}^{x0+ε} (d/dx) ψ(x) dx = ψ(x0 + ε) − ψ(x0 − ε).
Wäre ψ unstetig bei x0, so würde die rechte Seite für ε → 0 nicht verschwinden. Das bedeutet aber, dass (d/dx)ψ(x) ∝ δ(x − x0), und somit würde die kinetische Energie divergieren:
E_kin = −(ħ²/2m) ∫_{−∞}^{∞} ψ*(x) (d²/dx²) ψ(x) dx = +(ħ²/2m) ∫_{−∞}^{∞} (d/dx)ψ*(x) · (d/dx)ψ(x) dx = (ħ²/2m) ∫_{−∞}^{∞} |(d/dx)ψ(x)|² dx ∝ ∫_{−∞}^{∞} δ(x − x0) · δ(x − x0) dx = δ(0) = ∞.
Da die kinetische Energie endlich ist, muss also ψ(x) überall stetig sein.

2) Die Ableitung dψ/dx ist bei endlichen Potentialen stetig. Wir integrieren die Schrödingergleichung von x0 − ε bis x0 + ε:
−(ħ²/2m) ∫_{x0−ε}^{x0+ε} (d²/dx²) ψ(x) dx = −∫_{x0−ε}^{x0+ε} V(x)ψ(x) dx + E ∫_{x0−ε}^{x0+ε} ψ(x) dx.   (6.1)
Für ein endliches Potential V(x) verschwindet die rechte Seite der Gleichung (6.1) im Limes ε → 0, da ψ(x) keine δ-Beiträge besitzt, denn sonst würde die kinetische Energie erst recht divergieren. Wir haben daher
lim_{ε→0} −(ħ²/2m) [ψ′(x0 + ε) − ψ′(x0 − ε)] = 0.   (6.2)
Also ist (∂/∂x)ψ(x) stetig.
3) Sprung der Ableitung von ψ bei Potentialen mit δ-Anteil. Wenn V(x) einen δ-Funktionsbeitrag V(x) = C · δ(x − x0) + (endliche Anteile) enthält, dann gilt
∫_{x0−ε}^{x0+ε} V(x) ψ(x) dx = ∫_{x0−ε}^{x0+ε} C · δ(x − x0) ψ(x) dx = C ψ(x0).
Ein solches Potential wird z.B. verwendet, um Potentialbarrieren zu beschreiben. Damit wird aus (6.1) ein Sprung in der Ableitung von ψ(x):
lim_{ε→0} [ψ′(x0 + ε) − ψ′(x0 − ε)] = (2m/ħ²) C ψ(x0).   (6.3)
...
4) Die Wellenfunktion verschwindet bei unendlichem Potential
Wenn V (x) = ∞ in einem Intervall x ∈ (xa , xb ), dann verschwindet die
Wellenfunktion in diesem Intervall, da sonst die potentielle Energie unendlich
wäre.
5) Unstetigkeit von dψ/dx am Rand eines unendlichen Potentials. Wenn V(x) = ∞ in einem Intervall x ∈ (xa, xb), dann ist zwar die Wellenfunktion Null im Intervall und überall stetig, aber die Ableitung wird in der Regel an den Grenzen des Intervalls unstetig sein.
Randbedingungen dreidimensionaler Probleme Aus ähnlichen Überlegungen
folgt ebenso in drei Dimensionen, dass die Wellenfunktion und deren partielle
Ableitungen überall stetig sein müssen, wenn das Potential überall endlich
ist. Weitere allgemeine Eigenschaften der Wellenfunktion werden wir später
besprechen.
6.2 Konstantes Potential

....
Besonders wichtig bei Potentialproblemen ist der Fall, dass das Potential in
einem Intervall konstant ist. Wir behandeln das eindimensionale Problem. Es
sei also
V (x) = V = konst. für a < x < b .
In diesem Intervall wird dann die Schrödingergleichung (5.20) zu
−(ħ²/2m) ψ″(x) = (E − V) ψ(x)   (6.4)
(Schwingungsgleichung), mit der allgemeinen Lösung

Lösung der Schrödingergleichung für konstantes Potential
ψ(x) = a1 e^(qx) + b1 e^(−qx)   (6.5a)
     = a2 e^(ikx) + b2 e^(−ikx)   (6.5b)
     = a3 cos(kx) + b3 sin(kx),   (6.5c)
mit k² = −q² = (2m/ħ²)(E − V).   (6.5d)
Diese drei Lösungen sind äquivalent !
Wenn E < V , dann ist q reell, und die Formulierung der ersten Zeile ist
bequem. Die Wellenfunktion ψ(x) hat dann im Intervall [a, b] i.a. exponentiell
ansteigende und abfallende Anteile !
Wenn E > V, dann ist k reell, und die zweite oder dritte Zeile sind, je nach Randbedingungen, bequeme Formulierungen. Die Wellenfunktion zeigt
dann im Intervall [a, b] oszillierendes Verhalten.
6.3 Gebundene Zustände im Potentialtopf

6.3.1 Potentialtopf mit unendlich hohen Wänden

....

Abbildung 4: {qm30} Potentialtopf mit unendlich hohen Wänden (V = ∞ in den Bereichen I und III, V = 0 im Bereich II zwischen x = 0 und x = L).
Der Potentialtopf mit unendlich hohen Wänden, der in Abbildung (4) skizziert ist, kann als stark idealisierter Festkörper betrachtet werden. Die Elektronen verspüren ein konstantes Potential im Festkörper und werden durch
unendlich hohe Wände daran gehindert, den Festkörper zu verlassen.
Das Potential lautet
V(x) = { V0 := 0 für 0 < x < L;  ∞ sonst }.
Es gibt also die drei skizzierten, qualitativ verschiedenen Teilgebiete. Eine
oft sinnvolle Strategie bei solchen Potentialproblemen ist, zuerst allgemeine
Lösungen für die Wellenfunktion in den Teilgebieten zu finden, und diese
dann mit den Randbedingungen geeignet zusammenzusetzen.
Die Energie E, also der Eigenwert von Ĥ, ist nicht ortsabhängig ! (Sonst
wäre z.B. die Stetigkeit von ψ(x, t) zu anderen Zeiten verletzt).
Für den unendlich hohen Potentialtopf finden wir:
Gebiete I & III: Hier ist V(x) = ∞ und daher ψ(x) ≡ 0, da sonst
E_pot = ∫ V(x) |ψ(x)|² dx = ∞.
Gebiet II: Hier ist das Potential konstant.
1. Versuch: Wir setzen E < V0 = 0 an und verwenden (6.5a):
ψ(x) = a e^(qx) + b e^(−qx)   mit reellem q = √((2m/ħ²)(V0 − E)).
Die Stetigkeit der Wellenfunktion bei x = 0 verlangt ψ(0) = 0, also
a = −b. Die Stetigkeit bei x = L verlangt ψ(L) = 0, also eqL − e−qL = 0.
Daraus folgt q = 0 und damit ψ(x) = a(e0 − e0 ) ≡ 0. Wir finden also keine
Lösung mit E < V0 ! Man kann allgemein sehen, dass die Energie E immer
größer als das Minimum des Potentials sein muss.
2. Versuch: Wir setzen E > V0 an und verwenden (wegen der Randbedingungen) (6.5c):
ψ(x) = a sin(kx) + b cos(kx)   mit k = √(2m(E − V0)/ħ²),   a, b ∈ C.   (6.6)
Die Wellenfunktion muss mehrere Bedingungen erfüllen:
• Die Stetigkeit der Wellenfunktion ergibt hier die Randbedingungen ψ(0) = 0 und ψ(L) = 0, und daher
b = 0,   a sin(kL) = 0.
Die zweite Bedingung zusammen mit der Normierung kann nur mit sin(kL) = 0 erfüllt werden, da mit a = 0 die Wellenfunktion wieder identisch verschwinden würde. Also muss k = nπ/L mit einer ganzzahligen Quantenzahl n gelten, die den gebundenen Zustand charakterisiert. Der Wert n = 0 ist ausgeschlossen, da dann wieder ψ ≡ 0 wäre. Wir können uns auf positive n beschränken, denn negative n ergeben mit sin(−nkx) = − sin(nkx) bis auf die Phase (−1) dieselbe Wellenfunktion.
• Die Ableitung der Wellenfunktion darf bei x = 0 und x = L beliebig
unstetig sein, da dort das Potential unendlich ist. Hieraus erhalten wir
im vorliegenden Fall keine weiteren Bedingungen an ψ.
• Normierung der Wellenfunktion: Zum einen muss ψ(x) überhaupt normierbar sein, was in dem endlichen Intervall [0, L] kein Problem ist. Zum anderen können wir die Normierungskonstante a in Abhängigkeit von der Quantenzahl n berechnen:
1 = ⟨ψ|ψ⟩ = ∫_{−∞}^{∞} dx |ψ(x)|² = |a|² ∫_0^L dx sin²(nπx/L) = |a|² (L/(nπ)) ∫_0^{nπ} dy sin²(y)   (mit y = (nπ/L) x) = |a|² (L/(nπ)) (nπ/2) = |a|² L/2.
Also |a|² = 2/L, mit beliebiger Phase für a, welches wir reell wählen.
Insgesamt erhalten wir die Lösung für ein Teilchen im unendlich hohen Potentialtopf:
ψn(x) = √(2/L) sin(kn x),   0 < x < L   (ψ(x) = 0 sonst)   (6.7)
kn = nπ/L,   n = 1, 2, . . .   (6.8)
En = (ħ²π²/(2mL²)) n² + V0   (6.9)
....
Es sind also hier Energie und Wellenzahl quantisiert, mit nur diskreten möglichen Werten, in Abhängigkeit von der Quantenzahl n. Die Energie
nimmt mit n2 zu und mit 1/L2 ab.
In Abbildung 5 sind Wellenfunktionen zu den drei tiefsten Eigenwerten
dargestellt. Man erkennt, dass die Wellenfunktion des Grundzustands nur
am Rand Null wird. Bei jeder Anregung kommt ein weiterer Nulldurchgang
(,,Knoten“) hinzu.
examples: Sec. 10.4
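Als numerische Illustration (Skizze, nicht Teil des Skripts; die Einheiten ħ = m = L = 1 und die Gittergröße sind willkürlich gewählt) kann man Ĥ = −(ħ²/2m) d²/dx² mit ψ(0) = ψ(L) = 0 auf einem Gitter diagonalisieren und mit (6.9) vergleichen:

import numpy as np

hbar = m = L = 1.0
N  = 800                            # Zahl der inneren Gitterpunkte
dx = L / (N + 1)

# Hamiltonoperator -(hbar^2/2m) d^2/dx^2 als Matrix (Finite Differenzen, Dirichlet-Raender)
d2 = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx**2
H = -hbar**2 / (2 * m) * d2

E_num   = np.linalg.eigvalsh(H)[:3]
E_exakt = hbar**2 * np.pi**2 * np.arange(1, 4)**2 / (2 * m * L**2)
for n, (En, Ee) in enumerate(zip(E_num, E_exakt), start=1):
    print(f"n = {n}:  E_numerisch = {En:.5f}   E_exakt = {Ee:.5f}")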
Abbildung 5: {qm31} Wellenfunktionen ψ1(x), ψ2(x), ψ3(x) zu den drei niedrigsten Energie-Eigenwerten E1, E2, E3 (im Verhältnis 1 : 4 : 9). Hier sind auch die Eigenenergien gezeichnet.
Kraftübertragung auf die Wände: Die Kraft errechnet sich aus der Energie:
F = −dE/dL = (ħ²π²n²/(2m)) · (2/L³) = ħ²π²n²/(mL³).
Die Energie eines Zustands ist in einem breiteren Topf geringer. Deswegen wirkt eine Kraft auf die Wände, die versucht, sie auseinanderzuschieben!

6.3.2 Potentialtopf mit endlicher Tiefe

....

Wir betrachten nun einen Potentialtopf mit endlicher Tiefe
V(x) = { V0 für |x| ≤ L/2;  0 sonst },   V0 < 0,   (6.10)
wie er in Abbildung (6) skizziert ist.
Wir haben hier den Koordinatenursprung im Vergleich zum vorherigen Beispiel um −L/2 verschoben, da sich hierdurch die Rechnungen vereinfachen. Wir interessieren uns hier zunächst ausschließlich für gebundene Zustände,
Abbildung 6: {pot_mulde} Potentialtopf endlicher Tiefe (V = V0 < 0 im Bereich II für −L/2 ≤ x ≤ L/2, V = 0 in den Bereichen I und III; eingezeichnet ist eine Energie E mit V0 < E < 0).
das heißt im Fall des vorliegenden Potentials E < 0. Gleichzeitig muss die
Energie aber, wie bereits besprochen, größer als das Potential-Minimum sein,
d.h. V0 < E < 0.
Wir unterscheiden die drei in Abbildung (6) gekennzeichneten Bereiche.
Für die Bereiche I und III lautet die zeitunabhängige Schrödingergleichung
ψ″(x) = (2m|E|/ħ²) ψ(x).
Die allgemeine Lösung dieser Differentialgleichung hat jeweils die Form
ψ(x) = A1 e^(−qx) + A2 e^(+qx)   (6.11)
mit
q = √(2m|E|/ħ²),   (6.12)
mit unterschiedlichen Koeffizienten A1,2 in den Bereichen I und III. Im Bereich I muss A1^I = 0 sein, da die Wellenfunktion ansonsten für x → −∞ exponentiell anwachsen würde und somit nicht normierbar wäre. Aus demselben Grund ist A2^III = 0 im Bereich III.
Im Zwischenbereich II lautet die zeitunabhängige Schrödingergleichung
ψ″(x) = −(2m(|V0| − |E|)/ħ²) ψ(x).
Hier hat die allgemeine Lösung der Differentialgleichung die Form
ψ(x) = B1 e^(ikx) + B2 e^(−ikx)   (6.13)
mit
k = √(2m(|V0| − |E|)/ħ²).   (6.14)
Die gesamte Wellenfunktion ist somit
ψ(x) = { A1 e^(qx), x < −L/2;  B1 e^(ikx) + B2 e^(−ikx), −L/2 ≤ x ≤ L/2;  A2 e^(−qx), x > L/2 }.   (6.15)
Die Koeffizienten werden aus den Stetigkeitsbedingungen der Funktion und deren Ableitung bestimmt. Alle Wellenfunktionen sind entweder symmetrisch oder antisymmetrisch unter der Transformation x → −x.⁵ Das vereinfacht die Lösung des Problems. Die Wellenfunktionen lauten dann

symmetrisch:
ψ_s(x) = { A_s e^(qx), x ≤ −L/2;  B_s cos(kx), −L/2 ≤ x ≤ L/2;  A_s e^(−qx), x ≥ L/2 }   (6.16a)

anti-symmetrisch:
ψ_a(x) = { A_a e^(qx), x ≤ −L/2;  B_a sin(kx), −L/2 ≤ x ≤ L/2;  −A_a e^(−qx), x ≥ L/2 }   (6.16b)

⁵ I. e. gerade oder ungerade. Dass alle Lösungen entweder gerade oder ungerade sein müssen, folgt aus der Tatsache, dass das Potential symmetrisch ist. Wir werden dieses später diskutieren.

Wir starten vom symmetrischen Fall. Nun werten wir die Randbedingungen zur Bestimmung der Konstanten aus den Stetigkeitsbedingungen aus:
ψ_s(L/2):   A_s e^(−qL/2) = B_s cos(kL/2)
ψ_s′(L/2):  −A_s e^(−qL/2) = −(k/q) B_s sin(kL/2)
⇒ tan(kL/2) = q/k.   (6.17)
Die linke Seite stellt ein homogenes System linearer Gleichungen für die Koeffizienten A_s und B_s dar. Das hat bekanntlich nur dann eine nichttriviale Lösung, wenn die Determinante verschwindet. Die Bedingung dafür steht auf der rechten Seite von (6.17). Da q und k von E abhängig sind (siehe (6.12) und (6.14)), stellt diese eine Bedingung für die Energie dar. Wie wir sehen werden, wird (6.17) von einem diskreten Satz von E erfüllt. Diese Gleichung liefert die Quantisierungsbedingung für die erlaubten Energieeigenwerte (im symmetrischen Fall).
Für die weitere Analyse ist es sinnvoll, eine neue Variable η = kL/2 einzuführen und E über (6.13) hierdurch auszudrücken:
η² = (L/2)² k² = (L/2)² (2m/ħ²)(|V0| − |E|) = Ṽ0 − (L/2)² q².
Hier haben wir definiert: Ṽ0 ≡ (L/2)² (2m/ħ²) |V0|. Das gibt eine Beziehung zwischen k und q:
q/k = √(Ṽ0 − η²)/η.   (6.18)
Die Bedingungsgleichung (6.17) gibt also für den symmetrischen Fall
tan(η) = q/k = √(Ṽ0 − η²)/η.   (6.19)
Der antisymmetrische Fall verläuft identisch. Die Rechenschritte werden hier ohne Beschreibung präsentiert:
ψ_a(L/2):   −A_a e^(−qL/2) = B_a sin(kL/2)
ψ_a′(L/2):  A_a e^(−qL/2) = (k/q) B_a cos(kL/2)
⇒ tan(kL/2) = −k/q.   (6.20)
(6.18) gilt auch im antisymmetrischen Fall, so daß
tan(η) = −k/q = −η/√(Ṽ0 − η²).   (6.21)
Die graphische Lösung der impliziten Gleichungen für η (6.19) und (6.21) erhält man aus den Schnittpunkten der in Abbildung (7) dargestellten Kurve tan(η) mit den Kurven q/k bzw. −k/q (beide definiert im Bereich 0 < η < √Ṽ0).

Abbildung 7: {schacht} Graphische Bestimmung der Energie-Eigenwerte im Potentialtopf. Aufgetragen ist tan(η) über η und außerdem die Funktionen −k/q und q/k. Es wurde Ṽ0 = 49 gewählt.

Man erkennt, dass unabhängig von Ṽ0 immer ein Schnittpunkt mit der
q/k-Kurve auftritt. Es existiert somit mindestens immer ein symmetrischer,
gebundener Zustand.
Wir können auch leicht die Zahl der gebundenen Zustände bei gegebenem Potentialparameter Ṽ0 bestimmen. Der Tangens hat Nullstellen bei η = nπ. Die Zahl der Schnittpunkte der Kurve zu q/k mit tan(η) nimmt immer um eins zu, wenn √Ṽ0 die Werte nπ überschreitet. Die Zahl der symmetrischen Eigenwerte ist also N+ = int(√Ṽ0/π + 1). Die Zahl der anti-symmetrischen Eigenwerte ist dagegen int(√Ṽ0/π + 1/2) (ÜBUNG).
Zur endgültigen Festlegung der Wellenfunktion nutzen wir die Stetigkeitsbedingungen (6.17) und (6.20)
A_s = B_s e^(qL/2) cos(kL/2),   A_a = −B_a e^(qL/2) sin(kL/2)
aus und erhalten daraus mit der dimensionslosen Länge ξ = x/(L/2)
Ψ_s(ξ) = B_s { cos(η) e^(q(ξ+1)), ξ < −1;  cos(ηξ), −1 ≤ ξ ≤ +1;  cos(η) e^(−q(ξ−1)), ξ > +1 }   (6.22a)
Ψ_a(ξ) = B_a { −sin(η) e^(q(ξ+1)), ξ < −1;  sin(ηξ), −1 ≤ ξ ≤ +1;  sin(η) e^(−q(ξ−1)), ξ > +1 }.   (6.22b)
Die Parameter B_{a/s} ergeben sich aus der Normierung. Es ist nicht zwingend notwendig, die Wellenfunktion zu normieren. Man muss dann aber bei der Berechnung von Mittelwerten, Wahrscheinlichkeiten etc. aufpassen.
Die möglichen gebundenen Zustände der Potential-Mulde mit Ṽ0 = 13
sind in der Abbildung (8) dargestellt.
Abbildung 8: {psi_pot} Wellenfunktionen Ψn (ξ) zu den drei Eigenwerten
En des Potentialtopfes mit einer Potentialhöhe Ṽ0 = 13.
Wie im unendlich tiefen Potentialtopf hat die Wellenfunktion zur tiefsten Energie keine Nullstelle. Die Zahl der Nullstellen ist generell n, wobei
die Quantenzahl n = 0, 1, · · · die erlaubten Energien En durchnumeriert.
....
In Abbildung (8) gibt die Null-Linie der Wellenfunktionen gleichzeitig
auf der rechten Achse die zugehörige Eigenenergie En an. In den verwendeten Einheiten befinden sich die Potentialwände bei ±1. Man erkennt, dass
die Wellenfunktion mit steigender Quantenzahl n zunehmend aus dem Potentialbereich hinausragt.
Weitere Beispiele: Sec. 161
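Die graphisch bestimmten Eigenwerte aus Abbildung (7) bzw. (8) lassen sich auch numerisch finden (Skizze, nicht Teil des Skripts; verwendet werden polfreie Umformungen von (6.19) und (6.21) sowie scipy):

import numpy as np
from scipy.optimize import brentq

V0t = 13.0                       # dimensionsloser Potentialparameter wie in Abbildung (8)
eta_max = np.sqrt(V0t)

# Polfreie Formen: (6.19) <=> eta*sin(eta) - sqrt(V0t - eta^2)*cos(eta) = 0
#                  (6.21) <=> sqrt(V0t - eta^2)*sin(eta) + eta*cos(eta) = 0
def f_sym(eta):
    return eta * np.sin(eta) - np.sqrt(V0t - eta**2) * np.cos(eta)

def f_anti(eta):
    return np.sqrt(V0t - eta**2) * np.sin(eta) + eta * np.cos(eta)

def nullstellen(f):
    gitter = np.linspace(1e-9, eta_max - 1e-9, 2000)
    werte = f(gitter)
    return [brentq(f, a, b) for a, b, fa, fb
            in zip(gitter[:-1], gitter[1:], werte[:-1], werte[1:]) if fa * fb < 0]

for name, f in (("symmetrisch", f_sym), ("antisymmetrisch", f_anti)):
    for eta in nullstellen(f):
        print(f"{name}: eta = {eta:.4f},  |E|/|V0| = {1 - eta**2 / V0t:.4f}")
# liefert drei gebundene Zustaende (zwei symmetrische, einen antisymmetrischen)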
6.3.3
Zusammenfassung: gebundene Zustände
Aus den obigen Ergebnissen für spezielle (kastenförmige) Potentiale versuchen wir, allgemeinere Ergebnisse für gebundene Zustände in einer Dimension ohne Beweis herzuleiten. Dabei betrachten wir stetige Potentiale V(x), bei denen lim_{x→±∞} V(x) = V∞ < ∞, wobei wir ohne Einschränkung V∞ = 0 wählen können. Weiterhin soll V(x) ≤ 0 sein.
Zusammenfassung: gebundene Zustände
• Gebundene Zustände haben negative Energieeigenwerte
• ihre Energieeigenwerte En sind diskret. Das kommt aus der Bedingung,
dass die Wellenfunktion für x → ±∞ nicht divergieren darf.
• ihre Wellenfunktion ψn(x) ist normierbar: ∫_{−∞}^{∞} |ψn(x)|² dx < ∞
• Sie kann reell gewählt werden (das haben wir nicht gezeigt).
• Eigenzustände unterschiedlicher Energien sind orthogonal:
∫_{−∞}^{∞} ψn(x)* ψm(x) dx ∝ δ_{n,m}   (6.23)
• die Wellenfunktionen haben Knoten (ausser dem Grundzustand, i. e. dem Zustand mit niedrigster Energie), deren Anzahl mit n zunimmt. Das kommt aus der Orthogonalitätsbedingung.
• in einer Dimension gibt es immer mindestens einen gebundenen Zustand (in höheren Dimensionen ist das nicht immer der Fall).
6.4 Streuung an einer Potentialbarriere

....

Abbildung 9: {qm32} Streuung an einer rechteckigen Potential-Barriere (Höhe V0 zwischen x = −L und x = 0; Bereiche I, II, III; reflektierter Anteil R, transmittierter Anteil T).
Wir untersuchen nun quantenmechanisch die Streuung von Teilchen an einem Potential. Gebundene Zustände haben wir bereits im letzten Abschnitt behandelt; wir konzentrieren uns hier auf ungebundene Zustände. Wir betrachten hier nur den Fall einer rechteckigen Potential-Barriere, so wie er in Abbildung (9) dargestellt ist (V0 > 0). Wie im vorherigen Abschnitt hat diese Barrierenform den Vorteil, dass das Problem exakt lösbar ist (allerdings aufwendig). Die qualitativen Schlussfolgerungen gelten auch, unter bestimmten Bedingungen, für allgemeinere Barrierenformen.
Von links treffen Teilchen auf das Potential. Wir werden die Intensität R der rückgestreuten und die Intensität T der transmittierten Teilchen berechnen. R und T bezeichnet man auch als Reflexions- bzw. Transmissionskoeffizienten. Sie sind definiert als
R = (Zahl der reflektierten Teilchen)/(Zahl der einfallenden Teilchen),   T = (Zahl der transmittierten Teilchen)/(Zahl der einfallenden Teilchen).
Im klassischen Fall ist die Situation einfach: Wenn E > V0, kommt das Teilchen über die Potentialbarriere durch (es wird lediglich während des Übergangs verlangsamt), also T = 1, R = 0, während es für E < V0 komplett reflektiert wird, also T = 0, R = 1.
Wir werden hier nur einige für die Quantenmechanik interessantere Eigenschaften dieses Problems studieren.
6.4.1 Tunneling
Zunächst interessieren wir uns für den Fall V0 > 0 und für Energien 0 < E < V0. In diesem Fall wäre ein klassisches Teilchen total reflektiert, R = 1. Das ist in der Quantenmechanik nicht der Fall.
In den Bereichen I und III lautet die allgemeine Lösung
ψ(x) = A1 e^(ikx) + A2 e^(−ikx)
mit der Wellenzahl k = √(2mE/ħ²) und E > 0.   (6.24)
Diese Wellenfunktion ist, wie bereits besprochen, nicht normierbar.⁶ Im Bereich I beschreibt diese einen Strom von nach rechts einlaufenden (Impuls p̂ = −iħ ∂/∂x = ħk > 0) und einen Strom von nach links (p̂ = −ħk) reflektierenden Teilchen.
Hinter der Barriere (Bereich III) kann es, wegen der vorgegebenen Situation mit von links einlaufenden Teilchen, nur nach rechts auslaufende Teilchen (Wellen) geben. Daher muss dort A2 = 0 sein.
Im Bereich II gilt
ψ(x) = B1 e^(qx) + B2 e^(−qx)
mit q = √(2m(V0 − E)/ħ²), reell und > 0.   (6.25)
Die gesamte Wellenfunktion lautet somit
ψ(x) = { A1 e^(ikx) + A2 e^(−ikx), x ≤ −L;  B1 e^(qx) + B2 e^(−qx), −L ≤ x ≤ 0;  C e^(ikx), x ≥ 0 }.
Die Stetigkeitsbedingungen von ψ(x) und ψ′(x) liefern 4 Randbedingungen zur Festlegung der 5 unbekannten Koeffizienten A1, A2, B1, B2, C.⁷ Da die Wellenfunktion mit einer Konstante multipliziert werden kann, kann einer der Koeffizienten (solange er nicht verschwindet) auf 1 festgelegt werden. Physikalisch können wir A1 = 1 wählen, da dieser die Dichte der einfallenden Teilchen beschreibt.

⁶ Diese ,,nicht Normierbarkeit“ ist noch zulässig. Der Fall, wenn die Wellenfunktion im Unendlichen divergiert, ist dagegen nicht erlaubt.
⁷ Wir sehen schon hier einen Unterschied zu dem Fall gebundener Zustände, wo die Zahl der Unbekannten gleich der Zahl der Bedingungen war.
Uns interessieren der Reflexions- und der Transmissionskoeffizient
R = n_r/n_e = |A2|²/|A1|² = |A2|²,   T = n_t/n_e = |C|²/|A1|² = |C|²,
wo n_e, n_r, n_t die (Wahrscheinlichkeits-)Dichten der einfallenden, reflektierten und transmittierten Teilchen sind. Der Strom ist eigentlich gegeben durch Dichte mal Geschwindigkeit, aber letztere ist die gleiche rechts und links der Barriere, und proportional zum Impuls ħk.
Es bleibt als Lösung
ψ(x) = { e^(ikx) + A e^(−ikx), x ≤ −L;  B1 e^(qx) + B2 e^(−qx), −L ≤ x ≤ 0;  C e^(ikx), x ≥ 0 }.   (6.26)
Die restlichen 4 Koeffizienten kann man nun über die Randbedingungen bestimmen. Diese ergeben:
(a) ψ(−L):   e^(ik(−L)) + A e^(−ik(−L)) = B̄1 + B̄2
(b) ψ(0):    C = B1 + B2
(c) ψ′(−L):  e^(−ikL) − A e^(ikL) = −iρ (B̄1 − B̄2)   (6.27)
(d) ψ′(0):   C = −iρ (B1 − B2)
mit B̄1 ≡ B1 e^(−qL),  B̄2 ≡ B2 e^(qL),  ρ ≡ q/k.

Erste Konsequenzen
Energie von ungebundenen Zuständen: sie bilden ein Kontinuum
Es handelt sich hier um ein inhomogenes lineares System von vier Gleichungen mit vier Unbekannten. Dadurch, dass das System inhomogen ist, muss die Determinante nicht verschwinden. Im Gegensatz zum Fall gebundener Zustände gibt es also hier keine Bedingung für die Energien: Alle (positiven) Energien sind erlaubt. Das kommt aus der Tatsache, dass bei gebundenen Zuständen ein Koeffizient verlorengeht, aufgrund der Bedingung, dass die Wellenfunktion im Unendlichen verschwinden muss.
Eine wichtige Schlussfolgerung ist also:
Für ungebundene Zustände sind die Energien nicht quantisiert, sie bilden ein Kontinuum.
Stromerhaltung: Ein weiteres wichtiges Ergebnis ist, dass, wie physikalisch erwartet,
T + R = 1,   (6.28)
was der Stromerhaltung entspricht. Letzteres kann man auf folgende Weise sehen: Aus (6.27) berechnen wir durch Multiplikation von (b) mit (d)* (d.h. komplex konjugiert):
|C|² = Re C C* = Re[(−iρ)(B1 B1* − B2 B2* − B1 B2* + B2 B1*)] = 2ρ Re(i B1 B2*).   (6.29)
Mit (a)·(c)*:
Re[(e^(−ikL) + A e^(ikL))(e^(−ikL) − A e^(ikL))*] = 2ρ Re(i B̄1 B̄2*) = 2ρ Re(i B1 B2*) = |C|².
Die linke Seite gibt aber
Re[1 − |A|² + A e^(2ikL) − A* e^(−2ikL)] = 1 − |A|².
Also
1 − |A|² = |C|²,   d.h. 1 − R = T,   (6.30)
was (6.28) entspricht.
Tunnel-Effekt

Die Lösung des Gleichungssystems (6.27) ist ziemlich aufwendig. Wir wollen hier den interessanten Fall qL ≫ 1 betrachten. Das ist das Tunnel-Regime. Es beschreibt das Tunneln von einem Teilchen durch eine Barriere, die höher als die Teilchenenergie ist. Anwendungen: Raster-Tunnel-Mikroskop, Alpha-Zerfall.
Ergebnis: Für qL ≫ 1 können wir B̄1 in (6.27) vernachlässigen. Die Summe von (a) und (c) in (6.27) gibt dann
2 e^(−ikL) = B̄2 (1 + iρ)  ⇒  B2 = (2/(1 + iρ)) e^(−qL) e^(−ikL),
(b) minus (d) geben
B1 = −B2 (1 − iρ)/(1 + iρ),
also
B1 B2* = −B2 B2* (1 − iρ)/(1 + iρ) = −4 e^(−2qL)/(1 + iρ)².
Aus (6.29) erhalten wir also
T = |C|² = (16ρ²/(1 + ρ²)²) e^(−2qL).   (6.31)
....
Der wichtige Term in (6.31) ist das Exponential. (6.31) zeigt, dass der Transmissionskoeffizient exponentiell mit der Barrierenlänge L abfällt. Der Koeffizient von L im Exponenten, 2q (inverse Eindringstiefe), nimmt mit der
Teilchenmasse und mit dem Abstand zwischen Energie und Barrierenhöhe
(vgl. (6.25)) zu.
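Eine kurze numerische Abschätzung von (6.31) (Skizze, nicht Teil des Skripts; die Zahlenwerte — ein Elektron mit 1 eV Energie, Barrierenhöhe 5 eV — sind als typische STM-Größenordnung angenommen):

import numpy as np

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837e-31     # kg
eV   = 1.602176634e-19   # J

E, V0 = 1.0 * eV, 5.0 * eV
q   = np.sqrt(2 * m_e * (V0 - E)) / hbar     # inverse Eindringstiefe (2q im Exponenten)
k   = np.sqrt(2 * m_e * E) / hbar
rho = q / k

for L_A in (1.0, 2.0, 3.0, 5.0):             # Barrierenbreite in Angstroem
    L = L_A * 1e-10
    T = 16 * rho**2 / (1 + rho**2)**2 * np.exp(-2 * q * L)   # Gl. (6.31), gueltig fuer q*L >> 1
    print(f"L = {L_A:3.1f} A:  q*L = {q*L:5.2f},  T = {T:.3e}")
# Jedes zusaetzliche Angstroem Barrierenbreite unterdrueckt T um etwa einen Faktor 8.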
Anwendung: Raster-Tunnel-Mikroskop (Nobelpreis 1986 H.Rohrer, G.Binnig
(IBM-Rüschlikon))
Beim Scanning Tunneling Microscope (STM) wird eine Metallspitze über eine Probenoberfläche mittels ,,Piezoantrieb” geführt, siehe Abbildung (10). Die leitende (oder leitend gemachte) Probe wird zeilenweise abgetastet. Zwischen der Spitze und der Probe wird ein Potential angelegt, wodurch ein ,,Tunnel-Strom” fließt, der vom Abstand der Spitze zur lokalen Probenoberfläche abhängt. Mit Hilfe einer Piezo-Mechanik kann die Spitze auch senkrecht zur Probenoberfläche bewegt werden.

Abbildung 10: {qm35} Raster-Tunnel-Mikroskop.

Es gibt verschiedene Arten, das Tunnel-Mikroskop zu betreiben. In einer Betriebsart wird die Spitze immer
so nachjustiert, dass der Tunnel-Strom konstant ist. Die hierfür notwendige
Verschiebung ist ein Maß für die Höhe der Probenoberfläche.
Ein STM hat atomare Auflösung. Das erscheint zunächst unglaubwürdig,
da die Spitze makroskopische Dimensionen hat. Der Grund ist, dass wegen
der exponentiellen Abhängigkeit des Tunnel-Stromes vom Abstand das ,,unterste Atom” der Spitze den dominanten Beitrag zum Strom liefert (siehe
Abbildung (11)).
Abbildung 11: {qm36} Spitze des Raster-Tunnel-Mikroskops.
6.4.2
Resonanz
Ein weiterer wichtiger Quanteneffekt ist die Streuresonanz. Diese kommt bei Energien E > V0 vor, wenn die Barrierenbreite ein Vielfaches der halben Wellenlänge in der Barriere ist, so daß die Welle in die Potentialbarriere ,,hineinpasst”. Für E > V0 können wir die Ergebnisse aus dem vorherigen Abschnitt (Eq. (6.27)) übernehmen, mit
q = i q̄,   q̄ reell.   (6.32)
Die Wellenlänge in der Barriere ist dann 2π/q̄.
Im allgemeinen Fall wird (im Gegensatz zu klassischen Teilchen) das Teilchen zum Teil reflektiert: R > 0, T < 1.
Im Resonanzfall q̄L = n π (n ganzzahlig) gibt es allerdings perfekte Transmission (R = 0, T = 1):
Das sieht man aus (6.27), wo B̄1 = ±B1 und B̄2 = ±B2 , 8 also
e−ikL + AeikL = ±C = e−ikL − AeikL ⇒ A = 0 ⇒ R = 0, T = 1 .
6.5
Klassischer Limes
Ergebnisse aus der Quantenmechanik müssen sinnvollerweise im Limes ħ → 0 in den klassischen Limes übergehen, oder besser gesagt, da ħ die Dimension Energie · Zeit = Impuls · Länge hat, wenn ħ viel kleiner als die entsprechenden charakteristischen Größen ist.
Zum Beispiel werden für gebundene Zustände im Limes ħ → 0 die Energieabstände Null (siehe z.B. (6.7)).
Aus Sec. 6.4 haben wir für E < V0 und ħ → 0: q → ∞ (siehe Eq. (6.25)), also T → 0 (vgl. Eq. (6.31)). Für E > V0 wird −iρ = q̄/k → 1 (vgl. (6.24), (6.25), (6.27), (6.32)), so daß B2 = 0, A = 0, B1 = C = 1, also T = 1.
⁸ ± je nachdem, ob n gerade oder ungerade ist
7 Functions as Vectors

....
Wave functions of quantum mechanics belong to an infinite-dimensional vector space, the Hilbert space. In this section we want to present a heuristic treatment of functions in terms of vectors, skipping rigorous mathematical definitions.

There are certain continuity and convergence restrictions, which we are not going to discuss here. For a more rigorous treatment, please refer to the original QM script by Evertz and von der Linden.

The main point here is that most results about vectors, scalar products, and matrices can be extended to linear vector spaces of functions.
This part is based largely on the Lecture Notes of L. van Dommelen.
There are, however, some modifications.
A vector f (which might be velocity v, linear momentum p = mv, force
F, or whatever) is usually shown in physics in the form of an arrow:
Abbildung 12: The classical picture of a vector.
However, the same vector may instead be represented as a spike diagram,
by plotting the value of the components versus the component index:
In the same way as in two dimensions, a vector in three dimensions, or,
for that matter, in thirty dimensions, can be represented by a spike diagram:
For a large number of dimensions, and in particular in the limit of infinitely many dimensions, the large values of i can be rescaled into a continuous
coordinate, call it x. For example, x might be defined as i divided by the
number of dimensions. In any case, the spike diagram becomes a function
f (x):
The spikes are usually not shown:
Abbildung 13: Spike diagram of a vector.
Abbildung 14: More dimensions.
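Concretely, the spike-diagram idea can be illustrated with a minimal sketch: sample a function on a grid and treat the samples as components of an ordinary vector (the choice of function and grid here is arbitrary; the grid spacing will matter once the scalar product is defined below).

```python
import numpy as np

# Discretize the interval [0, 1] into N points: the function f(x) = x*(1-x)
# becomes an ordinary vector with N components ("spike diagram").
N = 8
x = np.linspace(0.0, 1.0, N)
f = x * (1.0 - x)            # vector of components f_i = f(x_i)

for i, fi in enumerate(f):
    print(f"component i={i}:  f_i = {fi:.3f}")
```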
In this way, a function is just a vector in infinitely many dimensions.
Key Points
Functions can be thought of as vectors with infinitely many components.
This allows quantum mechanics to do the same things with functions as you can do with vectors.

7.1 The scalar product
The scalar product 9 of vectors is an important tool. It makes it possible
to find the length of a vector, by multiplying the vector by itself and taking
the square root. It is also used to check if two vectors are orthogonal: if
their scalar product is zero, they are. In this subsection, the scalar product
is defined for complex vectors and functions.
9
also called inner or dot product, sometimes with slight differences
Abbildung 15: Infinite dimensions.
Abbildung 16: The classical picture of a function.
The usual scalar product of two vectors f and g can be found by multiplying components with the same index i together and summing that:
f · g ≡ f1 g1 + f2 g2 + f3 g3
Figure (17) shows multiplied components using equal colors.
Abbildung 17: Forming the scalar product of two vectors {fig:dota}
Note the use of numeric subscripts, f1 , f2 , and f3 rather than fx , fy , and
fz ; it means the same thing. Numeric subscripts allow the three term sum
above to be written more compactly as:

    f · g ≡ Σ_{all i} f_i g_i

The length of a vector f, indicated by |f| or simply by f, is normally computed as

    |f| = √(f · f) = √( Σ_{all i} f_i² )

However, this does not work correctly for complex vectors. The difficulty is that terms of the form f_i² are no longer necessarily positive numbers. Therefore, it is necessary to use a generalized "scalar product" for complex vectors, which puts a complex conjugate on the first vector:

    ⟨f|g⟩ ≡ Σ_{all i} f_i* g_i   (7.1)
If the vector f is real, the complex conjugate does nothing, and ⟨f|g⟩ is the same as f · g. Otherwise, in the scalar product f and g are no longer interchangeable; the conjugates are only on the first factor, f. Interchanging f and g changes the scalar product value into its complex conjugate.

The length of a nonzero vector is now always a positive number:

    |f| = √⟨f|f⟩ = √( Σ_{all i} |f_i|² )   (7.2)
In (7.1) we have introduced the Dirac notation, which is quite common in quantum mechanics. Here, one takes the scalar product "bracket" verbally apart as

    ⟨f|   |g⟩
    bra (c) ket

and refers to the vectors as "bras" and "kets". This is useful in many respects: it identifies which vector is taken as complex conjugate, and it will provide an elegant way to write operators. More details are given in Sec. 8.
The scalar product of functions is defined in exactly the same way as for
vectors, by multiplying values at the same x position together and summing.
But since there are infinitely many (a continuum of) x-values, one multiplies by the distance Δx between these values:

    ⟨f|g⟩ ≈ Σ_i f*(x_i) g(x_i) Δx

which in the continuum limit Δx → 0 becomes an integral:

    ⟨f|g⟩ = ∫_{all x} f*(x) g(x) dx   (7.3)

as illustrated in figure 18.
Abbildung 18: Forming the scalar product of two functions. {fig:dotb}
The equivalent of the length of a vector is, in the case of a function, called its "norm":

    ||f|| ≡ √⟨f|f⟩ = √( ∫ |f(x)|² dx )   (7.4)

The double bars are used to avoid confusion with the absolute value of the function.

A vector or function is called "normalized" if its length or norm is one:

    ⟨f|f⟩ = 1 iff f is normalized.   (7.5)

("iff" should really be read as "if and only if.")

Two vectors, or two functions, f and g are by definition orthogonal if their scalar product is zero:

    ⟨f|g⟩ = 0 iff f and g are orthogonal.   (7.6)

Sets of vectors or functions that are all
• mutually orthogonal, and
• normalized
occur a lot in quantum mechanics. Such sets are called "orthonormal".

So, a set of functions or vectors f_1, f_2, f_3, . . . is orthonormal if

    0 = ⟨f_1|f_2⟩ = ⟨f_2|f_1⟩ = ⟨f_1|f_3⟩ = ⟨f_3|f_1⟩ = ⟨f_2|f_3⟩ = ⟨f_3|f_2⟩ = . . .

and

    1 = ⟨f_1|f_1⟩ = ⟨f_2|f_2⟩ = ⟨f_3|f_3⟩ = . . .
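The following short sketch illustrates (7.3)-(7.6) numerically: it approximates the scalar product by the Riemann sum Σ f*(x_i) g(x_i) Δx and checks that the well-known functions √(2/a)·sin(nπx/a) on [0, a] form an orthonormal set; the choice of these particular functions is just an example.

```python
import numpy as np

a  = 1.0
x  = np.linspace(0.0, a, 2001)
dx = x[1] - x[0]

def braket(f, g):
    """Discretized scalar product <f|g> ~ sum_i conj(f(x_i)) g(x_i) dx, cf. eq. (7.3)."""
    return np.sum(np.conj(f) * g) * dx

def basis(n):
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

for n in (1, 2, 3):
    for m in (1, 2, 3):
        print(f"<f{n}|f{m}> = {braket(basis(n), basis(m)).real:+.4f}")
# prints ~1 on the diagonal and ~0 otherwise: an orthonormal set
```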
Key Points
To take the scalar product of vectors, (1) take complex conjugates of the
components of the first vector; (2) multiply corresponding components of
the two vectors together; and (3) sum these products.
To take a scalar product of functions, (1) take the complex conjugate
of the first function; (2) multiply the two functions; and (3) integrate the
product function. The real difference from vectors is integration instead of
summation.
To find the length of a vector, take the scalar product of the vector with
itself, and then a square root.
To find the norm of a function, take the scalar product of the function
with itself, and then a square root.
A pair of functions, or a pair of vectors, are orthogonal if their scalar
product is zero.
A set of functions, or a set of vectors, forms an orthonormal set if every one is orthogonal to all the rest, and every one is of unit norm or length.

7.2 Operators
This section defines linear operators (or, more simply operators), which are a
generalization of matrices. Operators are the principal components of quantum mechanics.
In a finite number of dimensions, a matrix Â can transform any arbitrary vector v into a different vector w = Âv:

    v  --(matrix Â)-->  w = Âv

Similarly, an operator transforms a function into another function:

    f(x)  --(operator Â)-->  g(x) = Âf(x)

Some simple examples of operators:

    f(x)  --( x̂ )-->     g(x) = x f(x)
    f(x)  --( d/dx )-->  g(x) = f′(x)
Note that a hat (ˆ) is often used to indicate operators, and to distinguish them from numbers; for example, x̂ is the symbol for the operator that corresponds to multiplying by x. If it is clear that something is an operator, such as d/dx, no hat will be used.
It should really be noted that the operators we are interested in in quantum mechanics are “linear” operators, i. e. such that for two functions f and
g and two numbers a and b:
 (a f + b g) = a  f + b  g
(7.7)
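As a sketch of this idea (using the same discretization on a grid as above), an "operator" can be represented by any rule that maps an array of function values to another array; the two toy operators below mirror the examples x̂ and d/dx, the derivative being approximated by central finite differences, and the linearity property (7.7) is checked numerically.

```python
import numpy as np

x  = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def op_x(f):
    """Multiplication operator: (x f)(x_i) = x_i * f(x_i)."""
    return x * f

def op_ddx(f):
    """Derivative operator d/dx, approximated by central differences."""
    return np.gradient(f, dx)

f, g = np.sin(2 * np.pi * x), x**2
a, b = 2.0, -0.5

# Linearity check, eq. (7.7):  A(a f + b g) = a A f + b A g
print("linear (d/dx):", np.allclose(op_ddx(a * f + b * g), a * op_ddx(f) + b * op_ddx(g)))
print("linear (x):   ", np.allclose(op_x(a * f + b * g), a * op_x(f) + b * op_x(g)))
```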
Key Points

Matrices turn vectors into other vectors. Operators turn functions into other functions.

7.3 Eigenvalue Problems
To analyze quantum mechanical systems, it is normally necessary to find
so-called eigenvalues and eigenvectors or eigenfunctions. This section defines
what they are.
A nonzero vector v is called an eigenvector of a matrix  if Âv is a
multiple of the same vector:
Âv = av iff v is an eigenvector of Â
(7.8)
The multiple a is called the eigenvalue. It is just a number.
A nonzero function f is called an eigenfunction of an operator  if Âf is
a multiple of the same function:
Âf = af iff f is an eigenfunction of Â.
(7.9)
For example, e^x is an eigenfunction of the operator d/dx with eigenvalue 1, since d e^x/dx = 1 · e^x.
However, eigenfunctions like e^x are not very common in quantum mechanics, since they become very large at large x, and that typically does not describe physical situations. The eigenfunctions of d/dx that do appear a lot are of the form e^(ikx), where i = √(−1) and k is an arbitrary real number. The eigenvalue is ik:

    d/dx e^(ikx) = ik e^(ikx)

The function e^(ikx) does not blow up at large x; in particular, the Euler identity says:

    e^(ikx) = cos(kx) + i sin(kx)

The constant k is called the wave number.
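To connect this with the matrix picture, the sketch below (a toy construction, not taken from the script) builds d/dx as a finite-difference matrix with periodic boundary conditions and checks that e^(ikx) is an eigenvector with eigenvalue close to ik.

```python
import numpy as np

N  = 200
L  = 2 * np.pi
x  = np.arange(N) * (L / N)
dx = L / N

# Central-difference matrix for d/dx with periodic boundary conditions
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
D[0, -1], D[-1, 0] = -1 / (2 * dx), 1 / (2 * dx)

k = 3                      # wave number compatible with the periodic box
f = np.exp(1j * k * x)     # candidate eigenfunction e^{ikx}

ratio = (D @ f) / f        # constant if f is an eigenvector
print("eigenvalue estimate:", ratio.mean())   # ~2.996j; tends to the exact 3j as N grows
```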
Key Points

If a matrix turns a nonzero vector into a multiple of that vector, that vector is an eigenvector of the matrix, and the multiple is the eigenvalue.

If an operator turns a nonzero function into a multiple of that function, that function is an eigenfunction of the operator, and the multiple is the eigenvalue.

7.4 Hermitian Operators

Most operators in quantum mechanics are of a special kind called "Hermitian". This section lists their most important properties.

The Hermitian conjugate Â† of an operator Â corresponds, for finite-dimensional spaces, to the transposed, complex-conjugated matrix Â:

    Â† = (Âᵀ)* .   (7.10)

In general, for a given Â it is defined as the operator for which

    ⟨f|Âg⟩ = ⟨Â†f|g⟩   (7.11)

for any vectors |f⟩ and |g⟩.

An operator for which Â = Â† is called Hermitian. In other words, a Hermitian operator can always be flipped over to the other side if it appears in a scalar product:

    ⟨f|Âg⟩ = ⟨Âf|g⟩ always iff Â is Hermitian.   (7.12)
That is the definition, but Hermitian operators have the following additional special properties, which, again, are very similar to the corresponding properties of Hermitian matrices.
• They always have real eigenvalues. (But the eigenfunctions, or eigenvectors if the operator is a matrix, might be complex.) Physical values
such as position, momentum, and energy are ordinary real numbers since they are eigenvalues of Hermitian operators (we will see this later).
• Their eigenfunctions can always be chosen so that they are normalized
and mutually orthogonal, in other words, an orthonormal set.
• Their eigenfunctions form a “complete” set. This means that any function can be written as some linear combination of the eigenfunctions.
In practical terms, it means that you only need to look at the eigenfunctions to completely understand what the operator does.
• In summary, the set {f_n(x)} of eigenfunctions of a Hermitian operator can be chosen as an orthonormal basis set for the infinite-dimensional space. This is a very important property. It means that once we have found the infinite set of eigenfunctions f_n(x) of a Hermitian operator, we can write any f(x) as

      f(x) = Σ_n a_n f_n(x) .   (7.13)

  We will not discuss here issues of convergence.

  An important issue, however, which is peculiar to function spaces, is the fact that the set of eigenvalues of an operator is not always discrete, but sometimes continuous. We will discuss this issue later.
• From now on, unless otherwise specified, when we refer to a basis we
mean an orthonormal (and of course complete) basis.
The following properties of scalar products involving Hermitian operators are often needed, so they are listed here. If Â is Hermitian:

    ⟨g|Âf⟩ = ⟨f|Âg⟩* ,   ⟨f|Âf⟩ is real.   (7.14)
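These properties are easy to check numerically in the finite-dimensional (matrix) case; the sketch below uses a random Hermitian matrix as a stand-in for Â and verifies real eigenvalues, orthonormal eigenvectors, and the expansion of an arbitrary vector in the eigenbasis (cf. (7.13)).

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = (M + M.conj().T) / 2                   # Hermitian: A equals its conjugate transpose

evals = np.linalg.eigvals(A)               # generic (non-symmetric) eigenvalue routine
print("eigenvalues real:      ", np.allclose(evals.imag, 0.0, atol=1e-10))

w, V = np.linalg.eigh(A)                   # Hermitian routine: orthonormal eigenvectors
print("eigenvectors orthonorm:", np.allclose(V.conj().T @ V, np.eye(5)))

# Expansion of an arbitrary vector f in the eigenbasis, a_n = <f_n|f>
f = rng.normal(size=5) + 1j * rng.normal(size=5)
a = V.conj().T @ f
print("expansion reproduces f:", np.allclose(V @ a, f))
```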
Key Points

Hermitian operators can be flipped over to the other side in scalar products.

Hermitian operators have only real eigenvalues.

Hermitian operators have a complete set of orthonormal eigenfunctions (or eigenvectors) that can be used as a basis.

7.5 Additional independent variables
In many cases, the functions involved in a scalar product may depend on more than a single variable x. For example, they might depend on the position (x, y, z) in three-dimensional space.
The rule to deal with that is to ensure that the scalar product integrations are over all independent variables. For example, in three spatial dimensions:

    ⟨f|g⟩ = ∫_{all x} ∫_{all y} ∫_{all z} f*(x, y, z) g(x, y, z) dx dy dz
Note that the time t is a somewhat different variable from the rest, and
time is not included in the scalar product integrations.
8 Dirac notation

The advantage of the Dirac notation is that it provides a natural way to carry out the operations between vectors and operators discussed in Sec. 7. As much as possible, it also provides an unambiguous way to distinguish whether an object is a vector (notation as in Sec. 8.1), an operator (with a "hat", or as in Sec. 8.3), or a scalar (all the rest).
8.1 Vectors

As explained in Sec. 7.1, in the Dirac notation vectors are represented as "bras" and "kets". Here we illustrate some useful operations that can be carried out with this formalism. We can write, for example, two vectors as linear combinations of other ones:

    |f⟩ = Σ_n f_n |e_n⟩ ,    |g⟩ = Σ_n g_n |e_n⟩ ,
where g_n, f_n are coefficients (numbers). We now evaluate their scalar product:

    ⟨g|f⟩ = ( Σ_m g_m |e_m⟩ )† ( Σ_n f_n |e_n⟩ )

The † operation changes a "ket" into a "bra" and complex-conjugates the coefficients; we thus get

    ⟨g|f⟩ = Σ_{m,n} g_m* f_n ⟨e_m|e_n⟩ ,   (8.1)

where we have exploited the fact that "numbers" (g_m, f_n) commute with everything. Notice that two vertical lines (||) collapse into a single one (|). If, for example, the set of the |e_n⟩ are orthonormal, then

    ⟨e_m|e_n⟩ = δ_{n,m} ,   (8.2)

and we finally obtain the known result

    ⟨g|f⟩ = Σ_m g_m* f_m .   (8.3)
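In components, (8.3) is exactly what numpy's `vdot` computes (it conjugates its first argument), which can serve as a quick sanity check; the vectors below are arbitrary examples.

```python
import numpy as np

g = np.array([1.0 + 2.0j, 0.5 - 1.0j, 3.0 + 0.0j])
f = np.array([2.0 - 1.0j, 1.5 + 0.5j, -1.0 + 1.0j])

# <g|f> = sum_m conj(g_m) f_m , cf. eq. (8.3)
print(np.vdot(g, f))              # conjugates the first argument
print(np.sum(np.conj(g) * f))     # identical by construction
```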
8.2 Rules for operations

In expressions such as (8.1), and also in the following, we adopt the following rules for expressions containing "bras" (⟨·|) and "kets" (|·⟩):

• The distributive property applies.

• Coefficients (so-called c-numbers, i.e. complex numbers) commute with each other and with "bras" and "kets".

• "Bras" and "kets" do not mutually commute.

• A product like ⟨x||y⟩ "contracts" into ⟨x|y⟩, which is a c-number.

• Therefore, such a term (as a whole) commutes with other "bras" or "kets". For example:

      ⟨x|y⟩ |z⟩ = |z⟩ ⟨x|y⟩

• Complex conjugation turns a "bra" into a "ket" and vice versa.

• For operators in the form (8.5) or (8.7) (see below), Hermitian conjugation is obtained by complex conjugation of the coefficients and by "flipping" the bra and the ket:

      ( Σ_i a_i |v_i⟩⟨u_i| )† = Σ_i a_i* |u_i⟩⟨v_i|   (8.4)
8.3 Operators

Operators transform vectors into other vectors. As for matrices, an operator Â is completely specified by specifying its action on every vector of the vector space, i.e., if we know the result of Â|v⟩ for all |v⟩ we know Â.

Alternatively, it is sufficient to know the action of Â on all elements of a basis.

As a further alternative, it is sufficient to know all "matrix elements" of the operator between two elements of a basis set, i.e. we need to know ⟨e_n|Â|e_m⟩ for all n, m.

An operator can be written as a sum of terms of the form

    Â = Σ_i a_i |v_i⟩⟨u_i| ,   (8.5)

(notice, this is different from the scalar product (8.3)), where the a_i are numbers, and |v_i⟩, ⟨u_i| are ket and bra vectors. The application of Â to a vector |f⟩ gives (see the rules in Sec. 8.2):

    Â|f⟩ = Σ_i a_i ⟨u_i|f⟩ |v_i⟩ .   (8.6)

In particular, in terms of its matrix elements A_{n,m} in a complete basis, an operator can be written as

    Â = Σ_{n,m} A_{n,m} |e_n⟩⟨e_m| .   (8.7)
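In a finite-dimensional toy example the correspondence (8.5)-(8.7) is just the outer product of arrays; the sketch below (an illustration, with arbitrarily chosen coefficients and basis vectors) builds Â = Σ_i a_i |v_i⟩⟨u_i| with numpy and applies it to a vector as in (8.6).

```python
import numpy as np

e = np.eye(3)                              # basis vectors |e_0>, |e_1>, |e_2>
a = [2.0, -1.0j]                           # coefficients a_i
v = [e[0], e[1]]                           # kets |v_i>
u = [e[1], e[2]]                           # bras <u_i|

# A = sum_i a_i |v_i><u_i|   (eq. 8.5); np.outer builds the matrix |v><u|
A = sum(ai * np.outer(vi, ui.conj()) for ai, vi, ui in zip(a, v, u))

f = np.array([1.0, 2.0, 3.0j])
lhs = A @ f                                                            # direct application
rhs = sum(ai * np.vdot(ui, f) * vi for ai, vi, ui in zip(a, v, u))     # eq. (8.6)
print(np.allclose(lhs, rhs))               # True
```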
8.3.1 Hermitian Operators

We have already seen in (7.11) the definition of the Hermitian conjugate Â† of an operator Â (see also (8.4); the proof is left as an exercise). An operator Â for which Â = Â† is called Hermitian.

It is thus straightforward to see that an operator Â is Hermitian iff (cf. (7.12)), for any |f⟩, |g⟩,

    ⟨g|Âf⟩ = ⟨Âg|f⟩ = (⟨f|Âg⟩)* ,   (8.8)

where we have used one of the rules of Sec. 8.2. Using the expansion (8.7), we have

    Σ_{n,m} A_{n,m} ⟨g|e_n⟩⟨e_m|f⟩ = ( Σ_{n,m} A_{m,n} ⟨f|e_m⟩⟨e_n|g⟩ )* = Σ_{n,m} A*_{m,n} ⟨g|e_n⟩⟨e_m|f⟩ .

Since this is valid for arbitrary |g⟩, |f⟩, we have

    A*_{m,n} = A_{n,m} ,   (8.9)

which for a finite-dimensional space corresponds to the relation (cf. (7.10)) for a Hermitian matrix, A = (Aᵀ)*. Examples: Sec. 10.7.

If Â is Hermitian, it has a complete set of (orthonormal) eigenvectors |a_n⟩ with eigenvalues a_n. In terms of this basis set,

    Â = Σ_n a_n |a_n⟩⟨a_n| ,   (8.10)

which is called the spectral decomposition. This can be easily verified by applying Â to one of its eigenvectors:

    Â|a_m⟩ = Σ_n a_n |a_n⟩ ⟨a_n|a_m⟩ = a_m |a_m⟩ ,   since ⟨a_n|a_m⟩ = δ_{n,m} .

An important operator is the projection operator onto a (normalized) vector |v⟩:

    P̂_v ≡ |v⟩⟨v| .   (8.11)
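A small numerical check of the spectral decomposition (8.10) and of the projector (8.11), again with a finite-dimensional stand-in: the matrix is rebuilt from its eigenvalues and eigenprojectors, and the projector is verified to be idempotent.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                       # Hermitian operator (as a matrix)

a, V = np.linalg.eigh(A)                       # eigenvalues a_n, eigenvectors |a_n>

# Spectral decomposition: A = sum_n a_n |a_n><a_n|   (eq. 8.10)
A_rebuilt = sum(a[n] * np.outer(V[:, n], V[:, n].conj()) for n in range(4))
print("spectral decomposition ok:", np.allclose(A, A_rebuilt))

# Projector onto |a_0>: P = |a_0><a_0| satisfies P @ P = P   (eq. 8.11)
P = np.outer(V[:, 0], V[:, 0].conj())
print("projector idempotent:     ", np.allclose(P @ P, P))
```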
8.4 Continuous vector spaces

Quite generally, the Hilbert space can also contain non-normalizable vectors. An example is given by the wave functions of free particles (we discuss here for simplicity the one-dimensional case):

    φ_k(x) = e^(ikx) / √(2π) ,   (8.12)

where the √(2π) is taken for convenience. We denote by |k̃⟩ the corresponding vector.

These functions are eigenfunctions of the momentum operator p̂ ≡ −iħ ∂/∂x with eigenvalue ħk:

    −iħ ∂/∂x φ_k(x) = ħk φ_k(x)   ⇐⇒   p̂|k̃⟩ = ħk|k̃⟩   (8.13)

The scalar product between two of these functions is (see Sec. 11.2)

    ⟨k̃′|k̃⟩ = (1/2π) ∫ e^(i(k−k′)x) dx = δ(k − k′) .   (8.14)
So for k = k′ it is virtually "infinite". Physically this is because a wave
function like (8.12) is homogeneously distributed in the whole space, so its
normalized probability density should be zero everywhere!
Of course these vectors (also called states) never exist in reality (one has
wave packets). However, it is mathematically useful to use them.
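The δ-normalization (8.14) can be made plausible numerically: truncating the integral to a finite interval [−X, X] gives (1/2π)∫ e^(i(k−k′)x) dx = sin((k−k′)X)/(π(k−k′)), a peak whose height grows with X while the function stays bounded away from k = k′. The sketch below is just this illustration, not part of the script's proof (which is in Sec. 11.2).

```python
import numpy as np

def approx_delta(dk, X):
    """(1/2pi) * integral_{-X}^{X} exp(i*dk*x) dx = sin(dk*X) / (pi*dk)."""
    return np.sinc(dk * X / np.pi) * X / np.pi

for X in (10.0, 100.0, 1000.0):
    print(f"X={X:6.0f}:  value at dk~0: {approx_delta(1e-12, X):8.1f},"
          f"  at dk=0.5: {approx_delta(0.5, X):+.3f}")
# the peak at dk = 0 grows like X/pi, while elsewhere the function stays bounded:
# in the limit X -> infinity this becomes the Dirac delta of eq. (8.14)
```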
For practical purposes, one can extend the discussion of the previous part to such "non-normalizable vectors" by using the following modifications (we will prove this later):

• Discrete sums (such as in (8.3), (8.10), (8.7)) are replaced with integrals.

• The Kronecker delta (such as in (8.2)) is replaced by the Dirac delta.

In this way, one can introduce a "normalization condition" for "non-normalizable" vectors as in (8.14):

    ⟨k̃′|k̃⟩ = δ(k − k′) .   (8.15)

Of course, we must now find another name for these "non-normalizable" vectors. Since their index (here k̃) must be continuous, we will call them "continuum vectors", as opposed to "discrete vectors" (the ones formerly called "normalizable").
Notice that in general a basis of the Hilbert space can contain both discrete and continuum vectors. Therefore, the expansion of a generic vector of the Hilbert space can contain contributions from both discrete and continuum basis vectors:

    |f⟩ = Σ_n f_n |e_n⟩ + ∫ f(q) |e(q)⟩ dq   (8.16)

with the normalisation conditions

    ⟨e_m|e_n⟩ = δ_{n,m} ,   ⟨e(q)|e(q′)⟩ = δ(q − q′) .   (8.17)

8.5 Real space basis
An important continuum basis set is provided by the eigenfunctions of the position operator x̂, defined as

    x̂ f(x) = x f(x) .   (8.18)

The eigenfunction f_{x0}(x) of x̂ with eigenvalue x₀ is¹⁰ δ(x − x₀), since this is the only function that, multiplied by x, is proportional to itself:

    x̂ f_{x0}(x) = x δ(x − x₀) = x₀ δ(x − x₀) .

These eigenfunctions are normalized according to the r.h.s. of (8.17):

    ⟨x₀|x₁⟩ = ∫ δ(x − x₀)* δ(x − x₁) dx = δ(x₀ − x₁) ;   (8.19)

here, |x₀⟩ is the Dirac notation for the vector associated with δ(x − x₀).

This orthonormal and complete basis is also called the real space basis. Notice that, given a vector |f⟩, its scalar product with |x₀⟩ is

    ⟨x₀|f⟩ = ∫ δ(x − x₀) f(x) dx = f(x₀) ,   (8.20)

i.e., the function associated with the vector |f⟩.
¹⁰ This is actually not a function but a distribution; however, we will improperly call it a function.
8.6 Change of basis and momentum representation

We now expand an arbitrary vector |f⟩ in the real-space basis {|x⟩} (cf. (8.16)). Of course, this basis has only continuum states, so that we have:

    |f⟩ = ∫ c_x |x⟩ dx .   (8.21)

From linear algebra we know how to obtain the expansion coefficients c_x: we have to multiply from the left by each element of the basis:

    ⟨x₁|f⟩ = ∫ c_{x0} ⟨x₁|x₀⟩ dx₀ = c_{x1} ,   using ⟨x₁|x₀⟩ = δ(x₁ − x₀) .

Comparing with (8.20), we see that the expansion coefficients in the real-space basis are nothing else than the function associated with the vector |f⟩ itself: c_x = f(x).

By the way, the fact that each vector of the Hilbert space can be expanded as in (8.21) proves that the set of the |x₀⟩ is indeed complete.

The above result suggests expanding the same vector |f⟩ in another useful basis, namely the basis of the eigenfunctions of momentum (8.12) (which is again complete):

    |f⟩ = ∫ f_k |k̃⟩ dk .   (8.22)
The coefficients f_k of the expansion are obtained as usual by multiplying "from the left", and using the continuum version of (8.3) as well as (8.12):

    f_k = ⟨k̃|f⟩ = (1/√(2π)) ∫ e^(−ikx) f(x) dx ≡ f̃(k) ,   (8.23)

i.e. the coefficients f_k are the Fourier transform f̃(k) of f(x).

The function f̃(k), which represents the vector |f⟩ in the "momentum" basis, is called the momentum representation of the vector |f⟩.
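As a quick numerical illustration of (8.23): for a Gaussian f(x) = e^(−x²/2) the Fourier transform with the 1/√(2π) convention is again e^(−k²/2) (a standard result); the sketch below discretizes the integral and compares with this analytic form.

```python
import numpy as np

x  = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
f  = np.exp(-x**2 / 2)

def f_tilde(k):
    """f~(k) = (1/sqrt(2 pi)) * sum_i exp(-i k x_i) f(x_i) dx, eq. (8.23) discretized."""
    return np.sum(np.exp(-1j * k * x) * f) * dx / np.sqrt(2 * np.pi)

for k in (0.0, 1.0, 2.0):
    print(f"k={k}:  numeric {f_tilde(k).real:.6f}   analytic {np.exp(-k**2 / 2):.6f}")
```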
8.7 Identity operator

The step from the second to the third term in (8.23) is a special case of an elegant method to change basis, which uses a useful expression for the identity operator Î in terms of a complete, orthonormal basis {|e_n⟩}:

    Î = Σ_n |e_n⟩⟨e_n| .   (8.24)

This expression can be easily understood if one recognizes that it is the sum of the projectors (cf. (8.11)) onto all elements of the basis. More concretely, it can be shown by observing that the operator relation (8.24) holds whenever applied to an arbitrary element |e_m⟩ of the basis:

    Î|e_m⟩ = Σ_n |e_n⟩ ⟨e_n|e_m⟩ = |e_m⟩ ,   since ⟨e_n|e_m⟩ = δ_{n,m} ,

and, thus, it is true.

Obviously, (8.24) must be suitably modified, following the rules above, for the case in which all or part of the |e_n⟩ are continuous.
We now use the same expression as (8.24), but for the real space basis,

    Î = ∫ |x⟩⟨x| dx ,   (8.25)

in order to reobtain (8.23) in an elegant way:

    ⟨k̃|f⟩ = ∫ ⟨k̃|x⟩⟨x|f⟩ dx = ∫ ⟨x|k̃⟩* f(x) dx ,

which, using ⟨x|k̃⟩ = (1/√(2π)) e^(ikx) (cf. (8.12)), gives the last term in (8.23).

Proof of the rules for continuum vectors: Sec. 11.3. Some applications: Sec. 10.6.
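A one-line numerical check of (8.24) in a finite-dimensional stand-in: summing the projectors onto all eigenvectors of any Hermitian matrix gives the identity.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
A = (M + M.conj().T) / 2
_, V = np.linalg.eigh(A)                       # columns: an orthonormal basis |e_n>

identity = sum(np.outer(V[:, n], V[:, n].conj()) for n in range(6))
print(np.allclose(identity, np.eye(6)))        # True: sum_n |e_n><e_n| = 1
```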
9 Principles and Postulates of Quantum Mechanics
The "postulates" of quantum mechanics consist in part of a summary and a formal generalisation of the ideas which we have met up to now; over the course of the years they have been put together in order to understand the meaning of, and to provide a description for, the puzzling physical results that had been observed.

These postulates have so far been confirmed by all experiments built in order to verify (or falsify) their validity.
Here, we will present these postulates together with practical examples. In these examples you will find again most of the concepts introduced in the previous chapters.
9.1
Postulate I: Wavefunction or state vector
The state of a system is completely defined by a (time-dependent) vector |ψi
(state vector) of a Hilbert space.
For example, for a particle in one dimension,11 this can be represented in
real space by the wave function ψ(x) = hx|ψi , which contains all information
about the state.
A consequence of the linearity of the Hilbert space is that any linear
combination of physical states is a physical (i. e. accepted) state12 .
The state vector is not directly observable. In other words, not all its
information can be extracted in an experiment. One can, however, choose
which information he or she wants to extract.
For a given state |ψi and an arbitrary c-number c, c|ψi describes the same
state as |ψi.
9.2
Postulate II: Observables
Dynamical variables, so-called observables, i.e. properties that can be observed and measured, are represented by Hermitian operators.
Important examples of observables are:

• Coordinates: r̂ = (x̂, ŷ, ẑ)

• Momentum: p̂_x = −iħ ∂/∂x, p̂_y = · · ·, p̂_z (p̂ = −iħ∇)

• Spin

Further observables are obtained from compositions (products and sums) of these:

• Energy (Hamiltonian or Hamilton operator): Ĥ = p̂²/(2m) + V(x̂)

• Angular momentum: L̂ = r̂ × p̂

¹¹ For simplicity, we will only illustrate results in one dimension here. The generalisation to the realistic three-dimensional case is straightforward.
¹² This is very interesting for quantum computers!
9.3
Postulate III: Measure of observables
The measure postulate is certainly the most striking and still the most discussed in quantum mechanics.
When trying to extract information from a state, one can only measure observables (the wave function cannot be measured). So far, nothing special. In general, observables in classical physics have their counterpart in quantum
In general, observables in classical physics have their counterpart in quantum
mechanics.
A new concept is that when measuring an observable, the only possible
values that one can obtain are the eigenvalues of the operator corresponding
to the observable.
This means that not all classically allowed values of a physical quantity
are allowed in quantum mechanics.
The most striking example is the energy: as we have seen, for bound states
only discrete values of the energy are allowed.
Having specified what the possible outcome of a measure is, we should
also specify which outcome we expect to have for a given state |ψi.
Here comes the big problem: Even if one knows |ψi with exact accuracy it
is not possible (in general) to predict the outcome of the measure. Possible
results are statistically distributed, with a probability (density) that depends
on |ψi. Details are given below.
The last important result (which again will be specified more in detail
below) is:
A measure modifies the state vector: After the measure of an observable, the
particle falls into the eigenstate corresponding to the measured eigenvalue.
9.3.1 Measure of observables, more concretely

Let us illustrate the meaning of the measure postulate by using as an observable the energy, associated with the Hermitian operator Ĥ (Hamiltonian). The discussion below can be extended straightforwardly to any observable with discrete eigenvalues. The extension to continuous eigenvalues is also discussed.

We have learned in Sec. 6 how to find solutions of the Schrödinger equation (5.20), i.e. its eigenvalues E_n and eigenfunctions e_n(x). Instead of wave functions we want to use the formal (vector) notation introduced in Sec. 7. We thus denote by |e_n⟩ the corresponding eigenvectors, i.e. e_n(x) = ⟨x|e_n⟩. The eigenvalue condition is

    Ĥ|e_m⟩ = E_m |e_m⟩ ,   (9.1)

which tells us that a particle in the state |e_n⟩ has the energy E_n.
However, an arbitrary physical state |ψ⟩ can in general consist of a linear combination of the |e_n⟩:

    |ψ⟩ = Σ_n a_n |e_n⟩ .   (9.2)

Notice, first of all, that, once the basis is fixed, the state |ψ⟩ is completely specified by the coefficients a_n. In vector notation these are nothing else than the coordinates of the vector.
The question is now, what the energy of this state is.
The way to answer this question is to measure the energy! The crucial
point, introduced above, is that the outcome of the experiment cannot be
foreseen, even if one knows |ψi exactly, and even if one could carry out
the experiment with arbitrary precision. This unpredictability is intrinsic of
quantum mechanics.
As discussed above, the measured energy will be one of the En . The outcome is distributed statistically and the distribution is determined by |ψi.
More precisely:¹³

The measure of the energy will give E_n with probability W(E_n) ∝ |a_n|². The proportionality constant is given by the normalisation condition Σ_n W(E_n) = 1. This gives

    W(E_n) = |a_n|² / Σ_m |a_m|² = |a_n|² / ⟨ψ|ψ⟩ .   (9.3)

¹³ Here and in the following we neglect the case of degeneracy, i.e. when there are two orthogonal states with the same energy.
Notice that for normalized states, the denominator in (9.3) is equal to 1 and
can be dropped.
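A small simulation of this postulate (illustrative only, with arbitrarily chosen coefficients and eigenvalues): given expansion coefficients a_n, draw measurement outcomes with the probabilities (9.3) and compare the observed frequencies with |a_n|².

```python
import numpy as np

rng = np.random.default_rng(3)

a = np.array([1.0, 0.5 - 0.5j, 2.0j])      # expansion coefficients a_n of |psi>
W = np.abs(a)**2 / np.sum(np.abs(a)**2)    # probabilities, eq. (9.3)

E = np.array([1.0, 2.0, 3.0])              # eigenvalues E_n (example values)
outcomes = rng.choice(E, size=100_000, p=W)  # repeated measurements on an ensemble

for En, Wn in zip(E, W):
    freq = np.mean(outcomes == En)
    print(f"E = {En}:  W = {Wn:.3f},  measured frequency = {freq:.3f}")
```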
A further important aspect introduced above is that a measure modifies the physical state. More specifically, suppose the measure yields the value E_{n0} for the energy; just after the measure the state will be transformed into the corresponding eigenvector |e_{n0}⟩.

This is the so-called collapse of the wavefunction. After the measure, the large amount of potential information contained in (9.2) (i.e. in its coefficients a_n) is lost, and only a_{n0} = 1 remains!
The striking aspect is that a measure always disturbs the system. This
is in contrast to classical physics, where one could always think, at least in
principle, to carry out a measure as little disturbing as possible, so that the
state of the system is essentially not disturbed.
9.3.2
Continuous observables
....
The discussion above was carried out for the measure of the energy of a
bound state, which has discrete eigenvalues.
As we know, there are in general observables, such as the position x̂, or
the momentum p̂ operators, that admit continuous eigenvalues.
In addition to the replacements in Sec. 8.4, the discussion carried out for observables with discrete eigenvalues can be extended to ones with continuous eigenvalues upon replacing probabilities with probability densities (for a proof see Sec. 11.3; reminder on probability densities: Sec. 11.1).

For example, if we measure x̂ on the vector

    |ψ⟩ = ∫ ψ(x) |x⟩ dx ,

then the measure of the position will give one of the possible values x (in one dimension any value between −∞ and +∞) with probability density (cf. (9.3))

    P(x) = |ψ(x)|² / ⟨ψ|ψ⟩ .   (9.4)
This is what we already learned in Sec. 5.
After the measure, the state vector will collapse into the state |x0 i, where
x0 is the value of x obtained in the measure.
9.4 Expectation values
A practical situation in physics is that one has many (a so-called ensemble
of) particles all in the same state (9.2). One can then repeat the measure of
the energy for all these particles. The outcome being statistically distributed
means that it will be, in general, different for each particle.
One interesting quantity that can be (and in general is) determined in this
case is the average value < E > of the energy. This is technically called the
expectation value of the energy in the state |ψi. In contrast to the measure
on a single particle, this will not necessary be given by one of the eigenvalues
En , as the average can lie in between the En .
If one knows the state, one can predict < E >. As we show below, for the state (9.2) this is given by:

    < E > = Σ_n E_n |a_n|² / Σ_n |a_n|² = ⟨ψ|Ĥψ⟩ / ⟨ψ|ψ⟩ ,   (9.5)
where Ĥ is the Hamilton operator, i. e. the operator associated with the
energy observable.
The first equality in (9.5) is easily obtained from probability theory: the average value of E is given by the sum over all its possible values E_n weighted with their probability (9.3). To show the second equality, let us evaluate the numerator of the last term in (9.5):

    ⟨ψ|Ĥψ⟩ = Σ_{n,m} a_n* a_m ⟨e_n|Ĥ|e_m⟩ = Σ_{n,m} a_n* a_m E_m ⟨e_n|e_m⟩ = Σ_n |a_n|² E_n ,   (9.6)

where we used Ĥ|e_m⟩ = E_m|e_m⟩ and ⟨e_n|e_m⟩ = δ_{n,m}. This corresponds to the numerator of the second term in (9.5). Notice that we have used the eigenvalue relation (9.1).
Again, the above discussion holds for an arbitrary observable taken instead of the energy, provided one expands the quantum states in eigenstates
of this observable, instead of the energy.
9.4.1 Continuous observables

For continuous observables, i.e. observables with a continuum of eigenvalues, such as x̂, we adopt the usual rules and obtain, similarly to (9.5):

    < x̂ > = ⟨ψ|x̂ψ⟩ / ⟨ψ|ψ⟩ = ∫ x |ψ(x)|² dx / ∫ |ψ(x)|² dx .   (9.7)

Extension: standard deviation: Sec. 10.8. Example: Heisenberg uncertainty: Sec. 10.9. Example: qubits: Sec. 10.10.
9.5 Postulate IV: Time evolution

The last postulate tells us how a state vector evolves in time. In principle, the relevant equation was already obtained in (5.14). We write this equation symbolically, and more compactly, in terms of state vectors:¹⁴

    iħ ∂/∂t |ψ(t)⟩ = Ĥ |ψ(t)⟩   (9.8)
The solution is simple in the case in which |ψ⟩ is proportional to an eigenstate |e_n⟩ of Ĥ, i.e.

    |ψ(t)⟩ = a_n(t) |e_n⟩   (9.9)

with a possibly time-dependent coefficient a_n(t). In this case we have

    iħ ∂/∂t a_n(t) |e_n⟩ = a_n(t) E_n |e_n⟩ .   (9.10)

We see that this reduces to a (well-known) differential equation for a_n(t). The solution was already found in (5.19):

    a_n(t) = a_{n0} exp(−i E_n t / ħ)   (9.11)
On the other hand, |e_n⟩ is a solution of the time-independent Schrödinger equation (9.1), which corresponds to (5.20). These results are thus well known from Sec. 5.

Eigenstates of Ĥ are called stationary states, because their time dependence is completely contained in the time dependence of a multiplicative coefficient a_n, which, as we know, does not modify the state.
14
We consider here only the case in which the Hamilton operator Ĥ is time-independent.
9.5.1 Generic state

The linearity of (9.8), as well as the above solution for an eigenstate of Ĥ, immediately tells us the time dependence of an arbitrary state (9.2) written as a superposition of eigenstates of Ĥ. As for (9.9), the time dependence appears in the coefficients:

    |ψ(t)⟩ = Σ_n a_{n0} e^(−i E_n t / ħ) |e_n⟩   (9.12)
....
Notice that this state is not stationary, as the different exponential terms
cannot be collected into a global multiplicative term.
Again, the above result also holds when the eigenvalues of the Hamilton operator are continuous.
Example: free-particle evolution: Sec. 10.12.

Further examples:
Example: Qubits: Sec. 10.11.
Example: Tight-binding model: Sec. 10.16.
Hydrogen atom: Sec. 10.14.
Hydrogen atom, excited states: Sec. 10.15.
Momentum representation: Sec. 10.13.
10 Examples and exercises

10.1 Wavelength of an electron

back to pag. 19
Determine the kinetic energy in eV for an electron with a wavelength of 0.5 nm (X-rays).

Solution:

    E = p²/(2m) = h²/(2λ²m)
      = (6.6 × 10⁻³⁴ J s)² / (2 · (5 × 10⁻¹⁰ m)² · 9.1 × 10⁻³¹ kg)
      ≈ 9.6 × 10⁻¹⁹ J = (9.6/1.6) eV = 6 eV
10.2 Photoelectric effect

back to pag. 11

The work function of a particular metal is 2.6 eV (1 eV = 1.6 × 10⁻¹² erg). What maximum wavelength of light will be required to eject an electron from that metal?

Solution:

    φ = h ν = h c/λ   ⇒   λ = h c/φ = (6.6 × 10⁻³⁴ J s · 3 × 10⁸ m/s) / (2.6 · 1.6 × 10⁻¹⁹ J) ≈ 4.8 × 10⁻⁷ m = 480 nm
10.3 Some properties of a wavefunction

back to pag. 25

The ground-state wavefunction of the hydrogen atom has the form

    e^(−a r / 2) ,   (10.1)

where r = |r| and r = (x, y, z).

Normalize the wavefunction.
Find the expectation value of the radius < r >.
Find the probability W(r₀ < r < r₀ + Δr₀) that r is found between r₀ and r₀ + Δr₀.
In the limit of small Δr₀, the probability density P(r₀) for r (not for r!) is given by

    P(r₀) Δr₀ = W(r₀ < r < r₀ + Δr₀) .   (10.2)

Determine P(r₀) and plot it.
Determine the most probable value of r (i.e. the maximum of P(r₀)).
Normalisation:

    1 = N² ∫ (e^(−a r/2))² dV = N² ∫ e^(−a r) dV   (10.3)

The volume element in spherical coordinates (r, θ, φ) is given by dV = r² dr sin θ dθ dφ. The integral over the solid angle gives 4π. We thus have:

    1 = N² 4π ∫₀^∞ e^(−a r) r² dr = N² 4π · 2/a³   ⇒   N = √( a³ / (8π) )   (10.4)

    < r > = N² 4π ∫₀^∞ e^(−a r) r² · r dr = 3/a   (10.5)

W(r₀ < r < r₀ + Δr₀) is given by the integral (10.4) between these limits:

    W(r₀ < r < r₀ + Δr₀) = N² 4π ∫_{r₀}^{r₀+Δr₀} e^(−a r) r² dr   (10.6)

For small Δr₀ this is obviously given by the integrand times Δr₀, so that

    W(r₀ < r < r₀ + Δr₀) = P(r₀) Δr₀ = N² 4π e^(−a r₀) r₀² Δr₀ .   (10.7)

The most probable value is given by the maximum of P(r₀); this is easily found to be r_max = 2/a.
[Plot of P(r) versus r·a, showing the maximum at r_max = 2/a and the mean value < r > = 3/a.]
Notice that the probability density P(r) for the coordinates r = (x, y, z) is instead given by P(r) = N² e^(−a r) and has its maximum at the centre, r = 0.

10.4 Particle in a box: expectation values
back to pag. 33

Evaluate the expectation value < x > (average value) of the coordinate x for the ground state of the particle in a box. Evaluate its standard deviation Δx ≡ √(< (x − < x >)² >).

Solution:

Ground state:

    ψ(x) = N sin(πx/a)   (10.8)

Normalisation:

    1 = N² ∫₀^a (sin(πx/a))² dx = N² a/2   ⇒   N = √(2/a)   (10.9)

    < x > = (2/a) ∫₀^a x (sin(πx/a))² dx = a/2   (10.10)

    (Δx)² ≡ < (x − < x >)² > = < x² > − 2 < x >< x > + < x >² = < x² > − < x >²

    < x² > = (2/a) ∫₀^a x² (sin(πx/a))² dx = a² (1/3 − 1/(2π²))   (10.11)

    Δx² = < x² > − < x >² = a² (1/12 − 1/(2π²))   (10.12)
10.5 Delta-potential

back to pag. 40
Find the bound states and the corresponding energies for an attractive δ-potential

    V(x) = −u δ(x) ,   u > 0 .

Solution:

In a bound state E < 0, so that the wave function is

    ψ(x) = A e^(−qx)   for x > 0 ,
    ψ(x) = B e^(qx)    for x < 0 ,       q = √(−2mE/ħ²) .

The continuity of the wavefunction at x = 0 requires A = B. The condition (6.3) requires

    −2qA = −u (2m/ħ²) A   ⇒   q = u m / ħ² .

There is always one and only one bound state, with energy E = −u² m / (2ħ²).
Further exercises:

• Determine A so that the wave function is normalized.

• Determine the unbound states and the transmission and reflection coefficients.

10.6 Expansion in a discrete (orthogonal) basis

back to pag. 64

The Hilbert space admits continuous, mixed (as in (8.16)), and also purely discrete (although infinite) basis sets. One example of the latter is provided by the eigenstates of the harmonic oscillator, as we shall see.

• Write the expansion of two vectors |f⟩ and |g⟩ in a discrete basis {|b_n⟩}, and express the scalar product ⟨g|f⟩ in terms of the expansion coefficients f_n and g_n. (Of course, as expected, the result is formally the same as in the finite-dimensional case (7.1).)

• Write an expansion of the function f(x) ≡ ⟨x|f⟩ in terms of the orthogonal functions b_n(x) ≡ ⟨x|b_n⟩. Write an expression for the expansion coefficients.
Solution:

Expansion:

    |f⟩ = Σ_n f_n |b_n⟩ ,   (10.13)

where as usual the coefficients are f_n = ⟨b_n|f⟩. We do the same directly for the "bra" vector ⟨g|:

    ⟨g| = Σ_m g_m* ⟨b_m| .

Notice the convenience of using a different index m. The scalar product:

    ⟨g|f⟩ = Σ_{m,n} g_m* f_n ⟨b_m||b_n⟩ = Σ_n g_n* f_n ,   using ⟨b_m|b_n⟩ = δ_{m,n} ,

cf. (7.1).

Expansion of f(x): we multiply (10.13) from the left with ⟨x|:

    f(x) ≡ ⟨x|f⟩ = Σ_n f_n ⟨x|b_n⟩ = Σ_n f_n b_n(x) .   (10.14)

The expansion coefficients:

    f_n = ⟨b_n|f⟩ = ∫ ⟨b_n|x⟩ ⟨x|f⟩ dx = ∫ b_n(x)* f(x) dx   (10.15)

10.7 Hermitian operators

back to pag. 60
Show that the momentum operator (ħ = 1)

    p̂ ≡ −i d/dx

is Hermitian. Use the relation (7.12) and partial integration.

Proof:

    ⟨g|p̂f⟩ = ∫ g(x)* (−i d/dx f(x)) dx = ∫ (−i d/dx g(x))* f(x) dx = ⟨p̂g|f⟩

Show that if an operator Â is Hermitian, then Â² ≡ ÂÂ is Hermitian.

Note: the product of two operators Â and B̂ is defined as for matrices:

    (ÂB̂)|f⟩ ≡ Â(B̂|f⟩)   (10.16)

Proof:

    ⟨g|ÂÂf⟩ = ⟨Âg|Âf⟩ = ⟨ÂÂg|f⟩

Show that if two operators Â and B̂ are Hermitian, then ÂB̂ + B̂Â is Hermitian.

Proof:

    ⟨g|(ÂB̂ + B̂Â)f⟩ = ⟨g|ÂB̂f⟩ + ⟨g|B̂Âf⟩ = ⟨Âg|B̂f⟩ + ⟨B̂g|Âf⟩ = ⟨B̂Âg|f⟩ + ⟨ÂB̂g|f⟩ = ⟨(ÂB̂ + B̂Â)g|f⟩
10.8 Standard deviation

back to pag. 70

The value of < E > tells us the average value of the energy, but contains no information on how strongly the energies deviate from this average value. One would like to know the typical deviation from < E >. This information is provided by the standard deviation ΔE. The square ΔE² is given by the average value of the squared deviation, i.e.

    ΔE² = < (E − < E >)² > .

This can be simplified to

    ΔE² = < E² > − 2 < E >< E > + < E >² = < E² > − < E >² = ⟨ψ|Ĥ²ψ⟩/⟨ψ|ψ⟩ − ( ⟨ψ|Ĥψ⟩/⟨ψ|ψ⟩ )²   (10.17)
It is easy to see that this expression is valid for any other observable, including continuous ones.

10.9 Heisenberg's uncertainty

back to pag. 70

Consider the wave function

    ψ(x) = e^(−a x²/2) .

For large a the wave function is strongly peaked around x = 0, i.e. the uncertainty (standard deviation) Δx in the coordinate x is small.

Here we want to illustrate the Heisenberg uncertainty principle, according to which a small uncertainty in x corresponds to a large uncertainty in the momentum p, and vice versa. To do this:

• Normalize ψ
• Evaluate < x̂ >
• Evaluate Δx̂²
• Evaluate < p̂ >
• Evaluate Δp̂²
• Evaluate √(Δp̂²) · √(Δx̂²) and verify that it is independent of a
Solution

    ⟨ψ|ψ⟩ = ∫ e^(−a x²) dx = √(π/a)   ⇒   ψ_N(x) ≡ (a/π)^(1/4) ψ(x) is normalized

    < x̂ > = (a/π)^(1/2) ∫ x e^(−a x²) dx = 0

    Δx̂² = < x̂² > − < x̂ >² = < x̂² > = (a/π)^(1/2) ∫ x² e^(−a x²) dx = 1/(2a)

    < p̂ > = ⟨ψ_N|p̂ψ_N⟩ = ∫ ψ_N(x) (p̂ ψ_N(x)) dx
          = (a/π)^(1/2) ∫ e^(−a x²/2) (−iħ d/dx) e^(−a x²/2) dx
          = −iħ (a/π)^(1/2) ∫ e^(−a x²/2) (−a x) e^(−a x²/2) dx = 0

    Δp² = < p̂² > = ∫ ψ_N(x) (p̂² ψ_N(x)) dx
        = (a/π)^(1/2) ∫ e^(−a x²/2) (−ħ² d²/dx²) e^(−a x²/2) dx
        = −ħ² (a/π)^(1/2) ∫ e^(−a x²) (a² x² − a) dx = −ħ² (−a/2) = a ħ²/2

    Δx² Δp² = ħ²/4
10.10 Qubits and measure

back to pag. 70

A qubit is described by a particle that can be in two discrete states, denoted by¹⁵ |0⟩ and |1⟩. We consider the two observables B and N described by the operators

    B̂ ≡ |1⟩⟨1|   and   N̂ ≡ |1⟩⟨0| + |0⟩⟨1| ,   (10.18)

respectively.

• Verify that these operators are Hermitian.

• What are the possible outcomes of a measure of B and of N?

Let us consider an ensemble of qubits, all prepared in the state |0⟩. We first measure B on these qubits. After that we measure N.

• Determine the outcomes of the measures of B and N and their probabilities.

• Starting again with the ensemble prepared in |0⟩, invert the order of the measures of N and B and determine outcomes and probabilities.
Proof of hermiticity:

One way to show that the operators are Hermitian is to write them in matrix form (they are obviously 2 × 2 matrices) in the basis {|0⟩, |1⟩}:

    B̂ = ( 0  0 )        N̂ = ( 0  1 )
        ( 0  1 ) ,           ( 1  0 ) .   (10.19)

One can readily verify that the matrices are symmetric and real and, thus, Hermitian.

The other way is to use the rule (8.4). For B̂ it is straightforward; for N̂:

    N̂† = (|1⟩⟨0| + |0⟩⟨1|)† = (|0⟩⟨1| + |1⟩⟨0|) = N̂

Possible outcomes

The possible outcomes are given by the eigenvalues of the operators:

B̂ has eigenvalues (and thus possible outcomes) B = 0 and B = 1 (it is the observable telling in which of the two states the particle is). The eigenvectors are easily shown to be |0⟩ and |1⟩.

For N̂ one can directly compute the eigenvalues from the matrix (10.19) with the known methods. However, for such a simple operator it may be easier to solve the eigenvalue equation directly:¹⁶

    N̂ (a|0⟩ + b|1⟩) = a|1⟩ + b|0⟩ = N (a|0⟩ + b|1⟩)

¹⁵ These can be, for example, two electronic levels in an atom, or the two possible values of the spin of an electron (see below).
By equating the coefficients of the two basis vectors we have

    a = N b ,   b = N a   ⇒   N² = 1   ⇒   N = ±1 .   (10.20)

For the second part of the problem it is useful to evaluate the eigenvectors, which we denote as |N = −1⟩ and |N = +1⟩ with obvious notation. From (10.20) we have a = −b for N = −1 and a = b for N = +1. We can choose the overall constant so that the vectors are normalized:

    |N = +1⟩ = (|0⟩ + |1⟩)/√2 ,   |N = −1⟩ = (|0⟩ − |1⟩)/√2 .   (10.21)
Measure of (1) B̂ and (2) N̂

Since the starting state |0⟩ is already an eigenstate of B̂, the expansion (9.2) has only one term, and (9.3) tells us that the measure will give B = 0 with probability W(B = 0) = 1. After the measure, the state collapses into the corresponding eigenstate |0⟩, so it is unchanged.

To measure N̂ we have to expand |0⟩ into the eigenstates (10.21) of the operator:

    |0⟩ = (1/√2) (|N = −1⟩ + |N = +1⟩) .   (10.22)

From (9.3), we thus have

    W(N = −1) = 1/2 ,   W(N = +1) = 1/2 .   (10.23)
Measure of (1) N̂ and (2) B̂

We now start from the ensemble of |0⟩ and measure N̂. This is the same situation as the last measure of the previous case, since we had the same starting state. This measure thus gives the probabilities (10.23).

The important point is that now, after the measure, the state will collapse into one of the eigenstates (10.21), depending upon the outcome. Since we have an ensemble of (many) such particles, (10.23) tells us that half of them will collapse to |N = −1⟩ and half to |N = +1⟩.

We now measure B̂ in each one of these states. (10.21) is already the expansion of these states in eigenstates of B̂. We thus have, for the particles in |N = −1⟩,

    W(B = 0) = 1/2 ,   W(B = 1) = 1/2 .   (10.24)

The same result holds for the particles in |N = +1⟩, so that (10.24) describes the global outcome of the second measure.

¹⁶ It may be convenient to use, for the eigenvalue, the same letter N as for the operator, however without the hat.
[Schematic diagram of the processes for the last measure sequence: starting from |0⟩, the measure of N̂ gives N = +1 (50%, state |N = +1⟩) or N = −1 (50%, state |N = −1⟩); the subsequent measure of B̂ on either branch gives B = 0 (25%, state |0⟩) or B = 1 (25%, state |1⟩), i.e. in total B = 0 and B = 1 each with 50%.]
Discussion

This result shows (again) some important features of quantum mechanics:

• Measuring an observable changes the state of a quantum system (no matter how "delicate" one tries to be: the above results are intrinsic).

• When measuring different observables, the possible outcomes depend, in general, upon the order of the measures.
• The example above applies, for instance, to the (z-component of the) spin s_z (internal angular momentum) of an electron, which can only take the two values s_z = ±ħ/2, corresponding to the two states |0⟩ and |1⟩. The observables B and N are then related to the z-component (ŝ_z = ħ(B̂ − 1/2)) and the x-component (ŝ_x = (ħ/2) N̂) of the spin.

10.11 Qubits and time evolution

back to pag. 71

Consider the example on qubits above, and take ħ = 1. At t = 0 a qubit is prepared in the state |ψ(t = 0)⟩ = |0⟩. The Hamiltonian of the system describes a particle hopping from one site to the other and is given by (see (10.18))

    Ĥ = a N̂ .   (10.25)
• Determine the qubit state |ψ(t)⟩ at time t.

• Determine the probability P(N = +1) to obtain N = +1 when measuring N at time t.

• Determine the expectation value < N̂ > and the square of the standard deviation < (ΔN̂)² > versus t.

• Suppose one instead measures B at time t. Determine P(B = 0), as well as < B̂ > and < (ΔB̂)² > versus t.

Solution

According to (9.12), we must expand the initial state in eigenstates of the Hamiltonian. We have already done this in (10.22). From (9.12), the time evolution is given by (ħ = 1)

    |ψ(t)⟩ = (1/√2) (|N = −1⟩ e^(iat) + |N = +1⟩ e^(−iat)) .   (10.26)
From (9.3), and since the state is normalized, we have

    P(N = +1) = 1/2

independently of t. Since, obviously, P(N = −1) = 1 − P(N = +1), we have, again independently of t:

    < N̂ > = 1 · P(N = +1) + (−1) · P(N = −1) = 0

To evaluate < (ΔN̂)² > we need < N̂² > (see (10.17)). There are several ways to do that. One is to first evaluate the operator

    N̂² = (|1⟩⟨0| + |0⟩⟨1|)(|1⟩⟨0| + |0⟩⟨1|)
        = |1⟩⟨0||1⟩⟨0| + |1⟩⟨0||0⟩⟨1| + |0⟩⟨1||1⟩⟨0| + |0⟩⟨1||0⟩⟨1|
        = |1⟩⟨1| + |0⟩⟨0| = Î

(using ⟨0|1⟩ = ⟨1|0⟩ = 0 and ⟨0|0⟩ = ⟨1|1⟩ = 1), i.e. the identity. Therefore < N² > = 1, and we have (see (10.17)):

    < (ΔN̂)² > = < N² > − < N >² = 1   (10.27)
Measure of B

For B the situation is more complicated, because we have to expand (10.26) back into eigenstates of B̂. For this we simply use the expressions (10.21), collect the coefficients of the basis vectors, and obtain:

    |ψ(t)⟩ = (1/2)(e^(iat) + e^(−iat)) |0⟩ + (1/2)(e^(−iat) − e^(iat)) |1⟩
           = cos(at) |0⟩ − i sin(at) |1⟩ .   (10.28)

We thus have

    P(B = 0) = (cos at)² ,   P(B = 1) = (sin at)² ,

and thus

    < B̂ > = (sin at)² .

We determine < B² > in a different way, namely by observing that eigenstates of B̂ are also eigenstates of B̂² (this actually holds for any observable), and in this case B² = B, so that < B̂² > = < B̂ >, and

    < (ΔB̂)² > = < B² > − < B >² = (sin at)² (1 − (sin at)²) = (sin at)² (cos at)²   (10.29)
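A small numerical cross-check of this time evolution (illustrative value a = 0.7, ħ = 1): propagate |0⟩ with the matrix exponential of the Hamiltonian (10.25) and compare P(B = 0) with the analytic (cos at)².

```python
import numpy as np
from scipy.linalg import expm

a = 0.7
N = np.array([[0.0, 1.0], [1.0, 0.0]])    # N-hat in the {|0>, |1>} basis, eq. (10.19)
H = a * N                                  # Hamiltonian (10.25), hbar = 1
psi0 = np.array([1.0, 0.0])                # |psi(0)> = |0>

for t in (0.0, 0.5, 1.0, 2.0):
    psi_t = expm(-1j * H * t) @ psi0       # |psi(t)> = exp(-i H t) |psi(0)>
    P_B0 = abs(psi_t[0])**2
    print(f"t={t}:  P(B=0) numeric = {P_B0:.6f},  (cos at)^2 = {np.cos(a*t)**2:.6f}")
```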
10.12 Free-particle evolution

back to pag. 71
The wave function of a free particle (V = 0) at t = 0 is given by

    ψ(x, t = 0) = e^(−a x²/2) ;

determine the wave function ψ(x, t) at time t.

Solution

According to (9.12) we have to expand |ψ⟩ in eigenstates of the Hamiltonian Ĥ = p̂²/(2m). These are also eigenstates of the momentum p̂, and are given in (8.12):

    φ_k(x) = e^(ikx)/√(2π) .

The expansion is given by (cf. (8.23))

    |ψ⟩ = ∫ ⟨k̃|ψ⟩ |k̃⟩ dk   (10.30)

with

    ⟨k̃|ψ⟩ = (1/√(2π)) ∫ e^(−ikx) ψ(x) dx = e^(−k²/(2a)) / √a .   (10.31)

Following (9.12), the time evolution of (10.30) is given by

    |ψ(t)⟩ = ∫ ⟨k̃|ψ⟩ e^(−i E_k t/ħ) |k̃⟩ dk ,

where the energy of the state |k̃⟩ is

    E_k = ħ²k²/(2m) .

The wave function in real space is obtained by multiplying from the left with ⟨x|:

    ψ(x, t) = ⟨x|ψ(t)⟩ = ∫ ⟨k̃|ψ⟩ e^(−i E_k t/ħ) ⟨x|k̃⟩ dk .

Inserting (10.31) and again (8.12), we obtain

    ψ(x, t) = ∫ (e^(−k²/(2a)) / √a) e^(−i ħ k² t/(2m)) (1/√(2π)) e^(ikx) dk .

Substituting k = √a q, and introducing the variable

    T ≡ ħ a t / m ,

we have

    ψ(x, t) = (1/√(2π)) ∫ e^(−q²(1+iT)/2 + i q √a x) dq .

The (complicated) Gaussian integral finally gives

    ψ(x, t) = e^(−a x²/(2(1+iT))) / √(1 + iT) .

The main effect is a broadening of the Gaussian function with time:

    Δx² ∝ |1 + iT|² / a .
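A quick numerical check of this result (with illustrative values a = 1, ħ = m = 1): evaluate the analytic ψ(x, t) above and compute Δx² directly from |ψ|²; it should follow (1 + T²)/(2a), i.e. grow proportionally to |1 + iT|².

```python
import numpy as np

a, hbar, m = 1.0, 1.0, 1.0
x  = np.linspace(-40.0, 40.0, 8001)
dx = x[1] - x[0]

def width_squared(t):
    T   = hbar * a * t / m
    psi = np.exp(-a * x**2 / (2 * (1 + 1j * T))) / np.sqrt(1 + 1j * T)
    rho = np.abs(psi)**2
    rho /= np.sum(rho) * dx                   # normalize the probability density
    return np.sum(x**2 * rho) * dx            # <x^2> (here <x> = 0 by symmetry)

for t in (0.0, 1.0, 2.0, 4.0):
    T = hbar * a * t / m
    print(f"t={t}:  numeric {width_squared(t):.4f}   analytic {(1 + T**2) / (2 * a):.4f}")
```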
10.13 Momentum representation of x̂ (NOT DONE)

back to pag. 71

Instead of the usual real-space representation

    |ψ⟩ = ∫ ψ(x) |x⟩ dx ,   ψ(x) = ⟨x|ψ⟩ ,   (10.32)

a state vector can be written in the momentum representation (see (8.22)), i.e. expanded in eigenstates of momentum:

    |ψ⟩ = ∫ ψ̃(k) |k̃⟩ dk ,   ψ̃(k) = ⟨k̃|ψ⟩ .   (10.33)
In the real-space representation we have learned that the action of the operators x̂ and p̂ corresponds to the application of the operators x and −i d/dx on the wave function ψ(x) (we use ħ = 1), respectively. Show that in the momentum representation

    p̂ ψ̃(k) → k ψ̃(k)   (10.34)

and

    x̂ ψ̃(k) → i d/dk ψ̃(k) .   (10.35)
To help you do that, let us recall how the corresponding results in the real-space representation are obtained. We start from

    ψ(x) = ⟨x|ψ⟩ ;

now we compute the wavefunction of the vector x̂|ψ⟩:

    ⟨x|x̂ψ⟩ = ⟨x̂x|ψ⟩ = x⟨x|ψ⟩ = x ψ(x) .

The action of p̂ is somewhat more complicated. We compute the wavefunction of the vector p̂|ψ⟩. To obtain this we insert the identity ∫ |k̃⟩⟨k̃| dk:

    ⟨x|p̂ψ⟩ = ∫ ⟨x|k̃⟩⟨k̃|p̂ψ⟩ dk = ∫ (1/√(2π)) e^(ikx) ⟨p̂k̃|ψ⟩ dk
            = ∫ k (1/√(2π)) e^(ikx) ψ̃(k) dk
            = −i d/dx ∫ (1/√(2π)) e^(ikx) ψ̃(k) dk = −i d/dx ψ(x) ,

where ψ̃(k) is the Fourier transform of ψ(x), see (8.23).

Use a similar procedure to prove (10.34) and (10.35).
Solution:

Start from

    ψ̃(k) = ⟨k̃|ψ⟩ ;

now compute the wavefunction in the momentum representation of the vector p̂|ψ⟩:

    ⟨k̃|p̂ψ⟩ = ⟨p̂k̃|ψ⟩ = k ⟨k̃|ψ⟩ = k ψ̃(k) .

This proves the first result (10.34).

Now compute the wavefunction in the momentum representation of the vector x̂|ψ⟩. To obtain this, insert the identity ∫ |x⟩⟨x| dx:

    ⟨k̃|x̂ψ⟩ = ∫ ⟨k̃|x⟩⟨x|x̂ψ⟩ dx = ∫ x ⟨k̃|x⟩⟨x|ψ⟩ dx = ∫ x (1/√(2π)) e^(−ikx) ψ(x) dx
             = i d/dk ∫ (1/√(2π)) e^(−ikx) ψ(x) dx = i d/dk ψ̃(k) ,
which proves the second result (10.35).

10.14 Ground state of the hydrogen atom

back to pag. 71

The Hamiltonian for the hydrogen atom is given by

    Ĥ = −ħ²∇²/(2m) + V(r)   (10.36)
with the Coulomb potential

    V(r) = −α/r ,   α = e²/(4πε₀) .   (10.38)

Remember that in polar coordinates

    ∇²Ψ = (1/r) ∂²/∂r² (rΨ) + (1/r²) ∇²_{θ,φ} Ψ = ∂²Ψ/∂r² + (2/r) ∂Ψ/∂r + (1/r²) ∇²_{θ,φ} Ψ ,   (10.37)

where ∇_{θ,φ} is a differential operator in θ and φ, i.e. for a spherically symmetric function ∇²_{θ,φ} Ψ(r) = 0.

Look for a spherically symmetric solution of the form

    ψ(r) = e^(−qr) .   (10.39)

Find q and the corresponding eigenvalue.
Solution

    Ĥψ(r) = −(ħ²/2m) (1/r)(r ψ″(r))″... more explicitly,
    Ĥψ(r) = −(ħ²/2m) (ψ″(r) + (2/r) ψ′(r)) − (α/r) ψ(r)
           = −e^(−qr) [ (ħ²/2m)(q² − 2q/r) + α/r ] = E e^(−qr) .

In order to cancel the 1/r terms, we need to set

    q = α m / ħ² ,   (10.40)

which gives the energy

    E = −E_Ry ≡ −(ħ²/2m) q² = −α q / 2 .   (10.41)
Here, the characteristic decay length a₀ ≡ q⁻¹ is the Bohr radius, and E_Ry ≈ 13.6 eV is the Rydberg energy, i.e. the ionisation energy of the hydrogen atom.

10.15 Excited isotropic states of the hydrogen atom

back to pag. 71

Use the same Hamiltonian as in the previous example, (10.36). Look for solutions of the form

    ψ(r) = p(r) e^(−qr) ,   (10.42)

with p(r) a polynomial in r, and q > 0.
To do this follow these steps:

• For given q, fix the value of the energy E that solves the Schrödinger equation in the large-r limit. (Hint: in the Schrödinger equation keep just the leading terms for large r. Remember that if p is a polynomial in r, then p′/p = O(1/r).)

• Write an equation for the coefficients A_n of the polynomial.

• Find the set of values of q for which p(r) is really a polynomial, i.e. for which the coefficients A_n vanish for n larger than some n₀.

• Find the corresponding values of the energies.

• What happens when q does not belong to this set: what can one say about the large-r behavior of ψ(r) in this case?
Solution

The Schrödinger equation (10.36) with (10.37) and (10.38) gives (ħ = 1):

    −2m Ĥψ(r) = ψ″(r) + (2/r) ψ′(r) + (2mα/r) ψ(r) = −2mE ψ(r) .

Using (10.42):

    [p″(r) − 2q p′(r) + q² p(r)] e^(−qr) + (2/r)[p′(r) − q p(r)] e^(−qr)
      + (2mα/r) p(r) e^(−qr) + 2mE p(r) e^(−qr) = 0 .   (10.43)

The leading terms for large r are the ones with no derivatives of p(r) and no r in the denominator:

    (q² p(r) + 2mE p(r)) e^(−qr) → 0   for r → ∞ ,   (10.44)

which gives

    E = −q²/(2m) .   (10.45)

This is the same as (10.41) in the previous example; however, we still have to fix q. To this end we expand p(r) explicitly,

    p(r) = Σ_{n=0}^{N} A_n rⁿ ,

and insert it into (10.43) along with the value of E above:

    e^(−qr) Σ_{n=0}^{N} A_n [ n(n−1) r^(n−2) − 2qn r^(n−1) + 2n r^(n−2) − 2q r^(n−1) + 2mα r^(n−1) ] = 0 .

Collecting terms with the same power of r, we get

    Σ_{n=0}^{N} rⁿ [ A_{n+2} ((n+2)(n+1) + 2(n+2)) − 2 A_{n+1} (q(n+1) + q − mα) ] = 0 ,

which gives the following recursion equation for the coefficients:

    A_{n+2} = 2 A_{n+1} (q(n+2) − mα) / ((n+2)(n+3)) .

In order for p(r) to be a polynomial, the left-hand side must vanish at some finite, integer n. This gives

    q = q_n = mα/n̄ ,   n̄ = n + 2 .

In principle n = 0, 1, 2, · · ·, but in fact one can check that also n = −1 gives a solution (which corresponds to the one of the previous example, (10.40)). Therefore n̄ = 1, 2, 3, · · ·.

The (binding) energies (10.45) are given by

    E = −q²/(2m) = −E_Ry / n̄² ,   n̄ = 1, 2, 3, · · · ,   (10.46)

where E_Ry is given below (10.41). The 1/n̄² behavior is characteristic of the binding energies of the hydrogen atom.

For q different from any q_n, we have for n → ∞

    A_{n+2}/A_{n+1} ≈ 2q/n ,   i.e.   A_n ≈ const. (2q)ⁿ/n! ,

and therefore

    p(r) e^(−qr) ≈ const. e^(2qr) e^(−qr) ≈ const. e^(qr) ,

i.e. the wave function diverges exponentially for r → ∞. This, by the way, corresponds to the negative-q solution of the asymptotic equation (10.44).
10.16 Tight-binding model

back to pag. 71

This is a very simple model for the dynamics of an electron in a solid. We consider a circular chain consisting of L lattice points (L even) labeled by n = 0, …, L − 1. On each lattice point there is a single "orbital". A particle on this orbital is described by the vector |n⟩. Orbitals belonging to different sites are orthogonal:

    ⟨n|m⟩ = δ_{n,m} .

The Hamiltonian of the system is given by

    Ĥ = V Σ_{n=0}^{L−1} (|n⟩⟨n+1| + |n+1⟩⟨n|)   (10.47)

with the real hopping parameter V > 0. We consider periodic boundary conditions, i.e. we always identify

    |n + L⟩ ≡ |n⟩ .   (10.48)

We want to find the eigenvalues and eigenvectors of Ĥ. Proceed as follows:
• Due to the periodicity (10.48), indices can always be translated within a sum over lattice sites, i.e.

      Σ_{n=0}^{L−1} f(n) = Σ_{n=0}^{L−1} f(n + m) .   (10.49)

• Show that the Hamiltonian is Hermitian.

• Consider the translation operator T̂(m) with the property (remember (10.48))

      T̂(m)|n⟩ = |n + m⟩ .   (10.50)

  Show that

      Ĥ = V (T̂(1) + T̂(−1)) .   (10.51)

• Consider the states

      |ξ(k)⟩ = Σ_{n=0}^{L−1} e^(ikn) |n⟩ .   (10.52)

  Determine the allowed values of k for which (10.48) holds and which give different states |ξ(k)⟩.

• Show that the |ξ(k)⟩ are eigenvectors of the T̂(m). Determine their eigenvalues.

• As a consequence, the |ξ(k)⟩ are eigenvectors of Ĥ; determine the eigenvalues.

• Find the ground state (state with lowest energy) and its energy.

• If the particle is prepared in the ground state, find the probability to find it in |0⟩.
Solution:

Hermiticity:

    Ĥ† = V* Σ_{n=0}^{L−1} (|n+1⟩⟨n| + |n⟩⟨n+1|) = Ĥ .

Due to (10.50), T̂(m) is given by

    T̂(m) = Σ_{n=0}^{L−1} |n+m⟩⟨n| ,   (10.53)

as is easy to show by applying it to an arbitrary |n₀⟩. Using (10.49), we can shift one of the terms of Ĥ by one, obtaining

    Ĥ = V Σ_{n=0}^{L−1} (|n−1⟩⟨n| + |n+1⟩⟨n|) = V (T̂(−1) + T̂(1)) .
In order for the |ξ(k)⟩ to have the correct periodic boundary conditions we must have (see (10.52)) e^(ik(n+L)) = e^(ikn), i.e.

    k = 2πj/L ,   j = 0, 1, · · ·, L − 1 ;

j must be an integer and can be taken between 0 and L − 1. For j = L we have k = 2π, which is equivalent to k = 0. We have, thus, L eigenvectors, as many as the dimension of the space.

Applying T̂(m) to |ξ(k)⟩:

    T̂(m)|ξ(k)⟩ = Σ_{n=0}^{L−1} e^(ikn) |n+m⟩ = Σ_{n=0}^{L−1} e^(ik(n−m)) |n⟩ = e^(−ikm) |ξ(k)⟩ ,

which shows that |ξ(k)⟩ is an eigenvector of T̂(m) with eigenvalue e^(−ikm). Using (10.51), we have

    Ĥ|ξ(k)⟩ = V (T̂(1) + T̂(−1))|ξ(k)⟩ = V (e^(−ik) + e^(ik))|ξ(k)⟩ ,   (10.54)

which shows that |ξ(k)⟩ is an eigenvector of Ĥ with eigenvalue

    E(k) ≡ V (e^(−ik) + e^(ik)) = 2V cos k .   (10.55)

The minimum of the energy is obtained for k = π, where E(π) = −2V. The corresponding eigenstate is |ξ(π)⟩.

Using (9.3) with (10.52), we obtain that the probability P(0) to find the particle in |0⟩ is given by:

    P(0) = 1/⟨ξ(π)|ξ(π)⟩
with

    ⟨ξ(π)|ξ(π)⟩ = Σ_{n,m} e^(−iπn) e^(iπm) ⟨n|m⟩ = Σ_{n,m} e^(−iπn) e^(iπm) δ_{n,m} = L ,

so that P(0) = 1/L.
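The sketch below cross-checks these results numerically for a small chain (L = 8, V = 1, an arbitrary choice): it builds the matrix of (10.47), diagonalizes it, and compares the spectrum with 2V cos k; it also checks the ground-state energy −2V and the probability 1/L of finding the particle on site |0⟩.

```python
import numpy as np

L, V = 8, 1.0
H = np.zeros((L, L))
for n in range(L):
    H[n, (n + 1) % L] = V          # V |n><n+1|  (periodic boundary conditions)
    H[(n + 1) % L, n] = V          # V |n+1><n|

evals, evecs = np.linalg.eigh(H)
ks = 2 * np.pi * np.arange(L) / L
print("spectrum matches 2V cos k:",
      np.allclose(np.sort(evals), np.sort(2 * V * np.cos(ks))))

ground = evecs[:, 0]               # eigh returns the lowest eigenvalue first
print("E_ground =", evals[0], " (expected -2V =", -2 * V, ")")
print("P(0) =", abs(ground[0])**2, " (expected 1/L =", 1 / L, ")")
```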
11 Some details

11.1 Probability density
back to pag. 68

In terms of the probability density P(x), the probability of finding x between x₁ and x₂ is given by

    W(x₁ ≤ x ≤ x₂) = ∫_{x₁}^{x₂} P(x) dx .

If x₂ = x₁ + Δx, with Δx infinitesimal,

    W(x₁ ≤ x ≤ x₁ + Δx) = P(x₁) Δx ,   (11.1)

which defines P(x).
The extension to three dimensions is straightforward: in terms of the probability density P(r), the probability of finding r in a volume V is given by

    W(V) = ∫_V P(r) d³r ;

if V = V(r, ΔV) is an infinitesimal volume of size ΔV around r,

    W(V(r, ΔV)) = P(r) ΔV ,

which defines P(r).

11.2 Fourier representation of the Dirac delta
back to pag. 61

We prove the relation

    (1/2π) ∫ e^(−ikx) dk = δ(x) .   (11.2)

The δ-distribution is defined by

    ∫ f(x) δ(x − x₀) dx = f(x₀)

for any function f.
It is thus sufficient to show that (11.2) satisfies this property:

    ∫ f(x) dx (1/2π) ∫ e^(−i(x−x₀)k) dk .

Permuting the order of the integrals,

    (1/√(2π)) ∫ e^(ix₀k) dk (1/√(2π)) ∫ f(x) e^(−ixk) dx = (1/√(2π)) ∫ e^(ix₀k) f̃(k) dk = f(x₀) ,
where f̃(k) is the (symmetric) Fourier transform of f(x).

11.3 Transition from discrete to continuum

back to pag. 64

Here we want to give some further justification of the "rules" presented in Sec. 8.4 for the transition to the continuum. We take as an example the real-space basis {|x⟩}, but it is easily extended to any continuum basis. We start by discretizing the domain of the possible values of x with a discretisation Δ that we will eventually let go to zero, i.e. we write

    x_n = n Δ ,   n = 0, ±1, ±2, · · · , ±∞ .   (11.3)
We define corresponding discrete vectors |x̄_n⟩, which are (ortho-)normalized as usual discrete vectors:

    ⟨x̄_n|x̄_m⟩ = δ_{n,m} = δ_{n−m} .

We now define new vectors, which differ just by a constant:

    |x_n⟩ ≡ (1/√Δ) |x̄_n⟩ .

For Δ → 0 these are "continuum" normalized (8.15), since

    ⟨x_n|x_m⟩ = (1/Δ) δ_{n−m}  →  δ(x_n − x_m)   for Δ → 0 .   (11.4)
The last step is valid in the continuum limit Δ → 0 (see (11.8) below).

We now consider the identity operator (see (8.24)):

    Î = Σ_n |x̄_n⟩⟨x̄_n| = Σ_n Δ |x_n⟩⟨x_n|  →  ∫ |x_n⟩⟨x_n| dx_n   for Δ → 0 .   (11.5)

Again, the last step is valid in the continuum limit Δ → 0 (see (11.7) below).

By inserting (11.5) we can obtain, for example, the continuum expression for the scalar product:

    ⟨g|f⟩ = Σ_n ⟨g|x̄_n⟩⟨x̄_n|f⟩ = ∫ ⟨g|x⟩⟨x|f⟩ dx ,

where we have dropped the n indices, since the x are now continuous.
back to pag. ?? Probability vs. probability density

We consider the expansion of a normalized state

    |ψ⟩ = Σ_n ⟨x̄_n|ψ⟩ |x̄_n⟩ .

From (9.3), we know that the probability of obtaining x_n from a measure of x̂ is given by (the state is normalized)

    W(x_n) = |⟨x̄_n|ψ⟩|² = Δ |⟨x_n|ψ⟩|² .   (11.6)

Looking at (11.1), we can identify, in the Δ → 0 limit, P(x) = |⟨x_n|ψ⟩|² as the probability density.
Some proofs for the continuum limit

We now prove the continuum limits carried out in (11.4) and (11.5). We have a smooth function f(x), and we discretize x as in (11.3). We have

    Δ Σ_n f(x_n)  →  Δ ∫ f(x_n) dn = ∫ f(x_n) dx_n   for Δ → 0 ,   (11.7)

where we have used (11.3) in the last step.

Using this, we can write

    f(x_m) = Σ_n f(x_n) δ_{n−m}  →  ∫ f(x_n) (δ_{n−m}/Δ) dx_n   for Δ → 0 ,

which lets us identify

    δ_{n−m}/Δ  →  δ(x_n − x_m)   for Δ → 0 .   (11.8)