Efficient Algorithms and Data Structures

Harald Räcke
Fakultät für Informatik
TU München
http://www14.in.tum.de/lehre/2017WS/ea/
Winter Term 2017/18
Part I
Organizational Matters

- Modul: IN2003
- Name: "Efficient Algorithms and Data Structures"
  "Effiziente Algorithmen und Datenstrukturen"
- ECTS: 8 Credit points
- Lectures: 4 SWS
  - Mon 10:00–12:00 (Room Interim2)
  - Fri 10:00–12:00 (Room Interim2)
- Webpage: http://www14.in.tum.de/lehre/2017WS/ea/
- Required knowledge:
  - IN0001, IN0003 "Introduction to Informatics 1/2"
    "Einführung in die Informatik 1/2"
  - IN0007 "Fundamentals of Algorithms and Data Structures"
    "Grundlagen: Algorithmen und Datenstrukturen" (GAD)
  - IN0011 "Basic Theoretic Informatics"
    "Einführung in die Theoretische Informatik" (THEO)
  - IN0015 "Discrete Structures"
    "Diskrete Strukturen" (DS)
  - IN0018 "Discrete Probability Theory"
    "Diskrete Wahrscheinlichkeitstheorie" (DWT)

The Lecturer
- Harald Räcke
- Email: [email protected]
- Room: 03.09.044
- Office hours: (by appointment)

Tutorials
A01 Monday, 12:00–14:00, 00.08.038 (Schmid)
A02 Monday, 12:00–14:00, 00.09.038 (Stotz)
A03 Monday, 14:00–16:00, 02.09.023 (Liebl)
B04 Tuesday, 10:00–12:00, 00.08.053 (Schmid)
B05 Tuesday, 12:00–14:00, 03.11.018 (Kraft)
B06 Tuesday, 14:00–16:00, 00.08.038 (Somogyi)
D07 Thursday, 10:00–12:00, 03.11.018 (Liebl)
E08 Friday, 12:00–14:00, 00.13.009 (Stotz)
E09 Friday, 14:00–16:00, 00.13.009 (Kraft)
Assignment sheets
In order to pass the module you need to pass an exam.

Assessment

Assignment Sheets:
- An assignment sheet is usually made available on Monday on the module webpage.
- Solutions have to be handed in in the following week before the lecture on Monday.
- You can hand in your solutions by putting them in the mailbox "Efficient Algorithms" on the basement floor in the MI-building.
- Solutions have to be given in English.
- Solutions will be discussed in the tutorial of the week in which the sheet has been handed in, i.e., the sheet may not have been corrected by this time.
- You can submit solutions in groups of up to 2 people.
- Submissions must be handwritten by a member of the group. Please indicate who wrote the submission.
- Don't forget name and student ID number for each group member.

Assessment

Assignments can be used to improve your grade:
- If you obtain a bonus, your grade will improve according to the following function:

      f(x) = (1/10) · round(10 · (round(3x) − 1) / 3)   if 1 < x ≤ 4
      f(x) = x                                          otherwise

- It will improve by 0.3 or 0.4, respectively.

Examples:
- 3.3 → 3.0
- 2.0 → 1.7
- 3.7 → 3.3
- 1.0 → 1.0
- > 4.0: no improvement
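To see how the bonus function behaves on the grade scale, here is a minimal Python sketch (an illustration added here, not part of the official course material) that evaluates f and reproduces the examples above:

    def improved_grade(x: float) -> float:
        """Grade after applying the bonus, following f as given above."""
        if 1 < x <= 4:
            # one step down on the grade scale ..., 1.0, 1.3, 1.7, 2.0, ...
            return round(10 * (round(3 * x) - 1) / 3) / 10
        return x  # a 1.0 cannot improve; grades above 4.0 stay failed

    for g in (3.3, 2.0, 3.7, 1.0):
        print(g, "->", improved_grade(g))   # 3.0, 1.7, 3.3, 1.0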
Assessment

Requirements for Bonus:
- 50% of the points are achieved on submissions 2–8,
- 50% of the points are achieved on submissions 9–14,
- each group member has written at least 4 solutions.
1 Contents

- Foundations
  - Machine models
  - Efficiency measures
  - Asymptotic notation
  - Recursion
- Higher Data Structures
  - Search trees
  - Hashing
  - Priority queues
  - Union/Find data structures
- Cuts/Flows
- Matchings
2 Literature

Alfred V. Aho, John E. Hopcroft, Jeffrey D. Ullman:
The Design and Analysis of Computer Algorithms,
Addison-Wesley Publishing Company: Reading (MA), 1974

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein:
Introduction to Algorithms,
McGraw-Hill, 1990

Michael T. Goodrich, Roberto Tamassia:
Algorithm Design: Foundations, Analysis, and Internet Examples,
John Wiley & Sons, 2002
Ronald L. Graham, Donald E. Knuth, Oren Patashnik:
Concrete Mathematics,
2nd edition, Addison-Wesley, 1994

Volker Heun:
Grundlegende Algorithmen: Einführung in den Entwurf und die Analyse effizienter Algorithmen,
2nd edition, Vieweg, 2003

Jon Kleinberg, Éva Tardos:
Algorithm Design,
Addison-Wesley, 2005

Donald E. Knuth:
The Art of Computer Programming. Vol. 1: Fundamental Algorithms,
3rd edition, Addison-Wesley, 1997
Donald E. Knuth:
The Art of Computer Programming. Vol. 3: Sorting and Searching,
3rd edition, Addison-Wesley, 1997

Christos H. Papadimitriou, Kenneth Steiglitz:
Combinatorial Optimization: Algorithms and Complexity,
Prentice Hall, 1982

Uwe Schöning:
Algorithmik,
Spektrum Akademischer Verlag, 2001

Steven S. Skiena:
The Algorithm Design Manual,
Springer, 1998
Part II
Foundations

3 Goals

- Gain knowledge about efficient algorithms for important problems, i.e., learn how to solve certain types of problems efficiently.
- Learn how to analyze and judge the efficiency of algorithms.
- Learn how to design efficient algorithms.
4 Modelling Issues

What do you measure?
- Memory requirement
- Running time
- Number of comparisons
- Number of multiplications
- Number of hard-disc accesses
- Program size
- Power consumption
- ...

How do you measure?
- Implementing and testing on representative inputs
  - How do you choose your inputs?
  - May be very time-consuming.
  - Very reliable results if done correctly.
  - Results only hold for a specific machine and for a specific set of inputs.
- Theoretical analysis in a specific model of computation.
  - Gives asymptotic bounds like "this algorithm always runs in time O(n^2)".
  - Typically focuses on the worst case.
  - Can give lower bounds like "any comparison-based sorting algorithm needs at least Ω(n log n) comparisons in the worst case".

Input length
The theoretical bounds are usually given by a function f : N → N that maps the input length to the running time (or storage space, comparisons, multiplications, program size, etc.).

The input length may e.g. be
- the size of the input (number of bits)
- the number of arguments

Example 1
Suppose n numbers from the interval {1, . . . , N} have to be sorted. In this case we usually say that the input length is n instead of e.g. n log N, which would be the number of bits required to encode the input.
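The two notions can differ substantially. The following small Python sketch (an illustration with made-up values n = 1000 and N = 10^9, not from the lecture) contrasts the number of arguments with the number of bits needed to encode the same input:

    import math

    n, N = 1000, 10**9
    arguments = n                              # input length counted as number of arguments
    bits = n * math.ceil(math.log2(N))         # roughly the bits to encode n numbers from {1, ..., N}
    print(arguments, bits)                     # 1000 vs. 30000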
Model of Computation

How to measure performance
1. Calculate running time and storage space etc. on a simplified, idealized model of computation, e.g. Random Access Machine (RAM), Turing Machine (TM), . . .
2. Calculate the number of certain basic operations: comparisons, multiplications, hard-disc accesses, . . .

Version 2 is often easier, but focusing on one type of operation makes it more difficult to obtain meaningful results.
Turing Machine
- Very simple model of computation.
- Only the "current" memory location can be altered.
- Very good model for discussing computability, or polynomial vs. exponential time.
- Some simple problems, like recognizing whether the input is of the form xx where x is a string, have a quadratic lower bound.
⇒ Not a good model for developing efficient algorithms.

[Figure: a tape of 0/1 cells scanned by a control unit; the state holds the program and can act as constant-size memory.]
Random Access Machine (RAM)
- Input tape and output tape (sequences of zeros and ones; unbounded length).
- Memory unit: infinite but countable number of registers R[0], R[1], R[2], . . .
- Registers hold integers.
- Indirect addressing.

[Figure: control unit with an input tape, an output tape, and memory registers R[0], R[1], R[2], . . .]

Note that the tapes are one-directional, and that a READ- or WRITE-operation always advances its tape.

Operations
- input operations (input tape → R[i])
  - READ i
- output operations (R[i] → output tape)
  - WRITE i
- register-register transfers
  - R[j] := R[i]
  - R[j] := 4
- indirect addressing
  - R[j] := R[R[i]]
    loads the content of the R[i]-th register into the j-th register
  - R[R[i]] := R[j]
    loads the content of the j-th register into the R[i]-th register
Operations (continued)
- branching (including loops) based on comparisons
  - jump x
    jumps to position x in the program;
    sets the instruction counter to x;
    reads the next operation to perform from register R[x]
  - jumpz x R[i]
    jump to x if R[i] = 0;
    if not, the instruction counter is increased by 1
  - jumpi i
    jump to R[i] (indirect jump)
- arithmetic instructions: +, −, ×, /
  - R[i] := R[j] + R[k];
    R[i] := -R[k];

The jump-directives are very close to the jump-instructions contained in the assembler language of real machines.
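To make the register machine concrete, here is a small Python sketch of a RAM interpreter. It is only an illustration: the tuple-based instruction encoding and the instruction names are assumptions made for this sketch, not notation from the lecture, and the input/output tapes are replaced by preloaded registers and the returned register contents.

    from collections import defaultdict

    def run(program, registers=None):
        """Execute a program given as a list of instruction tuples."""
        R = defaultdict(int, registers or {})   # infinitely many registers, initially 0
        ic = 0                                  # instruction counter
        while True:
            op, *args = program[ic]
            if op == "halt":
                return R
            elif op == "set":                   # R[j] := c
                R[args[0]] = args[1]
            elif op == "mov":                   # R[j] := R[i]
                R[args[0]] = R[args[1]]
            elif op == "load":                  # R[j] := R[R[i]]  (indirect addressing)
                R[args[0]] = R[R[args[1]]]
            elif op == "store":                 # R[R[i]] := R[j]
                R[R[args[0]]] = R[args[1]]
            elif op == "add":                   # R[i] := R[j] + R[k]
                R[args[0]] = R[args[1]] + R[args[2]]
            elif op == "jumpz":                 # jump to x if R[i] = 0
                if R[args[1]] == 0:
                    ic = args[0]
                    continue
            elif op == "jump":                  # unconditional jump to x
                ic = args[0]
                continue
            ic += 1

    # Example: R[0] holds n; compute 2*n in R[1] by repeated addition.
    prog = [
        ("set", 1, 0),      # 0: R[1] := 0
        ("set", 2, -1),     # 1: R[2] := -1
        ("set", 3, 2),      # 2: R[3] := 2
        ("jumpz", 7, 0),    # 3: if R[0] = 0 jump to halt
        ("add", 1, 1, 3),   # 4: R[1] := R[1] + 2
        ("add", 0, 0, 2),   # 5: R[0] := R[0] - 1
        ("jump", 3),        # 6: repeat
        ("halt",),          # 7
    ]
    print(run(prog, {0: 5})[1])   # prints 10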
Model of Computation
- uniform cost model
  Every operation takes time 1.
- logarithmic cost model
  The cost depends on the content of memory cells:
  - The time for a step is equal to the largest operand involved;
  - The storage space of a register is equal to the length (in bits) of the largest value ever stored in it.

Bounded word RAM model: cost is uniform but the largest value stored in a register may not exceed 2^w, where usually w = log₂ n.

The latter model is quite realistic, as the word size of a standard computer that handles a problem of size n must be at least log₂ n; otherwise the computer could either not store the problem instance or not address all its memory.

Example 2

Algorithm 1 RepeatedSquaring(n)
1: r ← 2;
2: for i = 1 → n do
3:     r ← r^2
4: return r

- running time:
  - uniform model: n steps
  - logarithmic model: 1 + 2 + 4 + · · · + 2^n = 2^(n+1) − 1 = Θ(2^n)
- space requirement:
  - uniform model: O(1)
  - logarithmic model: O(2^n)
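The following small Python sketch (an illustration, not part of the lecture) contrasts the two cost models on RepeatedSquaring: the uniform model charges one unit per squaring, while the logarithmic model is approximated here by charging the bit length of the operand that is squared.

    def repeated_squaring_costs(n):
        r = 2
        uniform_cost = 0
        logarithmic_cost = 0
        for _ in range(n):
            logarithmic_cost += r.bit_length()  # operand length in bits before squaring
            r = r * r                           # after i iterations, r = 2^(2^i)
            uniform_cost += 1
        return uniform_cost, logarithmic_cost

    # For n = 10: uniform cost 10, logarithmic cost 1033, which already exceeds
    # 2^10 = 1024 and grows like Theta(2^n), matching the bound above.
    print(repeated_squaring_costs(10))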
There are different types of complexity bounds:

- best-case complexity:
      C_bc(n) := min{ C(x) | |x| = n }
  Usually easy to analyze, but not very meaningful.
- worst-case complexity:
      C_wc(n) := max{ C(x) | |x| = n }
  Usually moderately easy to analyze; sometimes too pessimistic.
- average-case complexity:
      C_avg(n) := (1/|I_n|) · Σ_{|x|=n} C(x)
  More generally, with a probability measure µ:
      C_avg(n) := Σ_{x∈I_n} µ(x) · C(x)
- amortized complexity:
  The average cost of data structure operations over a worst-case sequence of operations.
- randomized complexity:
  The algorithm may use random bits. Expected running time (over all possible choices of random bits) for a fixed input x. Then take the worst case over all x with |x| = n.

Here C(x) denotes the cost of instance x, |x| the input length of instance x, and I_n the set of instances of length n.
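As a small illustration of amortized complexity (a sketch added here, not taken from the lecture), consider appending n elements to a dynamic array that doubles its capacity whenever it is full: a single append can cost Θ(n) element copies, but the total number of copies over the whole sequence stays below 2n, so the amortized cost per append is O(1).

    def total_copy_cost(n: int) -> int:
        """Total number of element copies caused by n appends to a doubling array."""
        capacity, size, copies = 1, 0, 0
        for _ in range(n):
            if size == capacity:    # array full: allocate double the space and copy
                copies += size
                capacity *= 2
            size += 1
        return copies

    n = 10**6
    print(total_copy_cost(n), "<", 2 * n)   # 1048575 < 2000000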
5 Asymptotic Notation

We are usually not interested in exact running times, but only in an asymptotic classification of the running time that ignores constant factors and constant additive offsets.
- We are usually interested in the running times for large values of n. Then constant additive terms do not play an important role.
- An exact analysis (e.g. exactly counting the number of operations in a RAM) may be hard, but wouldn't lead to more precise results, as the computational model is already quite a distance from reality.
- A linear speed-up (i.e., by a constant factor) is always possible, e.g. by implementing the algorithm on a faster machine.
- Running time should be expressed by simple functions.
Asymptotic Notation

Formal Definition
Let f denote a function from N to R+.
- O(f) = {g | ∃c > 0 ∃n_0 ∈ N_0 ∀n ≥ n_0 : [g(n) ≤ c · f(n)]}
  (set of functions that asymptotically grow not faster than f)
- Ω(f) = {g | ∃c > 0 ∃n_0 ∈ N_0 ∀n ≥ n_0 : [g(n) ≥ c · f(n)]}
  (set of functions that asymptotically grow not slower than f)
- Θ(f) = Ω(f) ∩ O(f)
  (functions that asymptotically have the same growth as f)
- o(f) = {g | ∀c > 0 ∃n_0 ∈ N_0 ∀n ≥ n_0 : [g(n) ≤ c · f(n)]}
  (set of functions that asymptotically grow slower than f)
- ω(f) = {g | ∀c > 0 ∃n_0 ∈ N_0 ∀n ≥ n_0 : [g(n) ≥ c · f(n)]}
  (set of functions that asymptotically grow faster than f)
Asymptotic Notation

There is an equivalent definition using limit notation (assuming that the
respective limit exists). Here f and g are functions from N0 to R+_0.
• g ∈ O(f):  0 ≤ lim_{n→∞} g(n)/f(n) < ∞
• g ∈ Ω(f):  0 < lim_{n→∞} g(n)/f(n) ≤ ∞
• g ∈ Θ(f):  0 < lim_{n→∞} g(n)/f(n) < ∞
• g ∈ o(f):  lim_{n→∞} g(n)/f(n) = 0
• g ∈ ω(f):  lim_{n→∞} g(n)/f(n) = ∞

• Note that for the version of the Landau notation defined here, we assume
  that f and g are positive functions.
• There also exist versions for arbitrary functions, and for the case that
  the limit is not taken at infinity.
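The limit characterization can be explored numerically. The following small
Python sketch (illustrative only, with arbitrarily chosen sample points)
tabulates g(n)/f(n) for growing n:

```python
import math

def ratio_table(g, f, ns=(10, 100, 1000, 10**4, 10**5, 10**6)):
    """Print g(n)/f(n) for a few growing values of n.

    A ratio tending to 0 suggests g ∈ o(f), a ratio staying between positive
    constants suggests g ∈ Θ(f), and a ratio blowing up suggests g ∈ ω(f).
    This only illustrates the limit definition; it proves nothing.
    """
    for n in ns:
        print(f"n = {n:>8}:  g(n)/f(n) = {g(n) / f(n):.6g}")

# 3n^2 + 5n versus n^2: the ratio tends to 3, so 3n^2 + 5n ∈ Θ(n^2).
ratio_table(lambda n: 3 * n**2 + 5 * n, lambda n: n**2)

# n log n versus n^2: the ratio tends to 0, so n log n ∈ o(n^2).
ratio_table(lambda n: n * math.log(n), lambda n: n**2)
```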
Asymptotic Notation

Abuse of notation
1. People write f = O(g), when they mean f ∈ O(g). This is not an equality
   (how could a function be equal to a set of functions).
2. People write f(n) = O(g(n)), when they mean f ∈ O(g), with
   f : N → R+, n ↦ f(n), and g : N → R+, n ↦ g(n).
3. People write e.g. h(n) = f(n) + o(g(n)) when they mean that there exists
   a function z : N → R+, n ↦ z(n), z ∈ o(g), such that h(n) = f(n) + z(n).
4. People write O(f(n)) = O(g(n)), when they mean O(f(n)) ⊆ O(g(n)).
   Again this is not an equality.

Notes:
2. In this context f(n) does not mean the function f evaluated at n, but
   instead it is a shorthand for the function itself (leaving out domain and
   codomain and only giving the rule of correspondence of the function).
3. This is particularly useful if you do not want to ignore constant
   factors. For example the median of n elements can be determined using
   3/2 · n + o(n) comparisons.
Asymptotic Notation in Equations

How do we interpret an expression like:

    2n² + 3n + 1 = 2n² + Θ(n)

Here, Θ(n) stands for an anonymous function in the set Θ(n) that makes the
expression true.

Note that Θ(n) is on the right hand side; otherwise this interpretation
is wrong.
Asymptotic Notation in Equations

How do we interpret an expression like:

    2n² + O(n) = Θ(n²)

Regardless of how we choose the anonymous function f(n) ∈ O(n) there is an
anonymous function g(n) ∈ Θ(n²) that makes the expression true.
Asymptotic Notation in Equations

How do we interpret an expression like:

    ∑_{i=1}^{n} Θ(i) = Θ(n²)

Careful!
"It is understood" that every occurrence of an O-symbol (or Θ, Ω, o, ω) on
the left represents one anonymous function. Hence, the left side is not
equal to Θ(1) + Θ(2) + · · · + Θ(n − 1) + Θ(n).

Notes:
• The Θ(i)-symbol on the left represents one anonymous function
  f : N → R+, and then ∑_i f(i) is computed.
• Θ(1) + Θ(2) + · · · + Θ(n − 1) + Θ(n) does not really have a reasonable
  interpretation.
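Why the equation holds: if f ∈ Θ(i), then there are constants c1, c2 > 0 and
n0 with c1 · i ≤ f(i) ≤ c2 · i for all i ≥ n0. Summing from 1 to n (and
absorbing the finitely many terms with i < n0 into a constant) gives

    c1 · n(n + 1)/2 − O(1) ≤ ∑_{i=1}^{n} f(i) ≤ c2 · n(n + 1)/2 + O(1),

so the sum indeed lies in Θ(n²).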
Asymptotic Notation in Equations

We can view an expression containing asymptotic notation as generating a
set:

    n² · O(n) + O(log n)

represents

    { f : N → R+ | f(n) = n² · g(n) + h(n)
                   with g(n) ∈ O(n) and h(n) ∈ O(log n) }

Note: Recall that according to the previous slide e.g. the expressions
∑_{i=1}^{n} O(i) and ∑_{i=1}^{n/2} O(i) + ∑_{i=n/2+1}^{n} O(i) generate
different sets.
Asymptotic Notation in Equations

Then an asymptotic equation can be interpreted as containment between two
sets:

    n² · O(n) + O(log n) = Θ(n²)

represents

    n² · O(n) + O(log n) ⊆ Θ(n²)

Note that the equation does not hold.
Asymptotic Notation

Lemma 3
Let f, g be functions with the property ∃n0 > 0 ∀n ≥ n0 : f(n) > 0
(the same for g). Then
• c · f(n) ∈ Θ(f(n)) for any constant c
• O(f(n)) + O(g(n)) = O(f(n) + g(n))
• O(f(n)) · O(g(n)) = O(f(n) · g(n))
• O(f(n)) + O(g(n)) = O(max{f(n), g(n)})

The expressions also hold for Ω. Note that this means that
f(n) + g(n) ∈ Θ(max{f(n), g(n)}).
Asymptotic Notation

Comments
• Do not use asymptotic notation within induction proofs.
• For any constants a, b we have log_a n = Θ(log_b n), since
  log_a n = log_b n / log_b a. Therefore, we will usually ignore the base
  of a logarithm within asymptotic notation.
• In general log n = log₂ n, i.e., we use 2 as the default base for the
  logarithm.
Asymptotic Notation

In general asymptotic classification of running times is a good measure for
comparing algorithms:
• If the running time analysis is tight and actually occurs in practice
  (i.e., the asymptotic bound is not a purely theoretical worst-case bound),
  then the algorithm that has the better asymptotic running time will always
  outperform a weaker algorithm for large enough values of n.
• However, suppose that I have two algorithms:
  – Algorithm A. Running time f(n) = 1000 log n = O(log n).
  – Algorithm B. Running time g(n) = log² n.
  Clearly f = o(g). However, as long as log n ≤ 1000 (i.e., for all
  n ≤ 2^1000) Algorithm B will be more efficient.
6 Recurrences

Algorithm 2 mergesort(list L)
 1: n ← size(L)
 2: if n ≤ 1 return L
 3: L1 ← L[1 · · · ⌊n/2⌋]
 4: L2 ← L[⌊n/2⌋ + 1 · · · n]
 5: mergesort(L1)
 6: mergesort(L2)
 7: L ← merge(L1, L2)
 8: return L

This algorithm requires

    T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + O(n) ≤ 2T(⌈n/2⌉) + O(n)

comparisons when n > 1 and 0 comparisons when n ≤ 1.
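For illustration, a minimal Python version of the pseudocode above; the
counter argument is only there to make the recurrence for the number of
comparisons tangible and is not part of the algorithm:

```python
def mergesort(L, counter):
    """Sort list L, counting element comparisons in counter[0]."""
    n = len(L)
    if n <= 1:
        return L
    L1 = mergesort(L[: n // 2], counter)
    L2 = mergesort(L[n // 2:], counter)
    merged, i, j = [], 0, 0
    while i < len(L1) and j < len(L2):   # merge: at most n - 1 comparisons
        counter[0] += 1
        if L1[i] <= L2[j]:
            merged.append(L1[i]); i += 1
        else:
            merged.append(L2[j]); j += 1
    return merged + L1[i:] + L2[j:]

comparisons = [0]
assert mergesort(list(range(1024, 0, -1)), comparisons) == list(range(1, 1025))
print(comparisons[0])   # at most n log n = 10240 for n = 1024
```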
Recurrences
How do we bring the expression for the number of comparisons
(≈ running time) into a closed form?
For this we need to solve the recurrence.
Methods for Solving Recurrences

1. Guessing+Induction
   Guess the right solution and prove that it is correct via induction.
   It needs experience to make the right guess.
2. Master Theorem
   For a lot of recurrences that appear in the analysis of algorithms this
   theorem can be used to obtain tight asymptotic bounds. It does not
   provide exact solutions.
3. Characteristic Polynomial
   Linear homogeneous recurrences can be solved via this method.
Methods for Solving Recurrences

4. Generating Functions
   A more general technique that allows one to solve certain types of linear
   inhomogeneous relations and also sometimes non-linear recurrence
   relations.
5. Transformation of the Recurrence
   Sometimes one can transform the given recurrence relation so that it e.g.
   becomes linear and can therefore be solved with one of the other
   techniques.
6.1 Guessing+Induction

First we need to get rid of the O-notation in our recurrence:

    T(n) ≤ 2T(⌈n/2⌉) + cn   for n ≥ 2,        T(n) ≤ 0   otherwise

Informal way:
Assume that instead we have

    T(n) ≤ 2T(n/2) + cn   for n ≥ 2,          T(n) ≤ 0   otherwise

One way of solving such a recurrence is to guess a solution, and check that
it is correct by plugging it in.
6.1 Guessing+Induction

Suppose we guess T(n) ≤ dn log n for a constant d. Then

    T(n) ≤ 2T(n/2) + cn
         ≤ 2(d(n/2) log(n/2)) + cn
         = dn(log n − 1) + cn
         = dn log n + (c − d)n
         ≤ dn log n

if we choose d ≥ c.

Formally, this is not correct if n is not a power of 2. Also even in this
case one would need to do an induction proof.
6.1 Guessing+Induction

How do we get a result for all values of n?

We consider the following recurrence instead of the original one:

    T(n) ≤ 2T(⌈n/2⌉) + cn   for n ≥ 16,        T(n) ≤ b   otherwise

Note that we can do this as for constant-sized inputs the running time is
always some constant (b in the above case).
6.1 Guessing+Induction

We also make a guess of T(n) ≤ dn log n and get

    T(n) ≤ 2T(⌈n/2⌉) + cn
         ≤ 2(d ⌈n/2⌉ log ⌈n/2⌉) + cn
         ≤ 2(d(n/2 + 1) log(n/2 + 1)) + cn           (using ⌈n/2⌉ ≤ n/2 + 1)
         ≤ dn log((9/16)n) + 2d log n + cn            (using n/2 + 1 ≤ (9/16)n)
         = dn log n + (log 9 − 4)dn + 2d log n + cn   (using log((9/16)n) = log n + (log 9 − 4))
         ≤ dn log n + (log 9 − 3.5)dn + cn            (using log n ≤ n/4)
         ≤ dn log n − 0.33dn + cn
         ≤ dn log n

for a suitable choice of d.
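The guess can also be sanity-checked numerically. The following Python
sketch (illustrative only; c = b = 1 and d = 4 are example values, not
derived constants) evaluates the recurrence directly and compares it with
d·n·log n:

```python
from functools import lru_cache
import math

c, b, d = 1, 1, 4   # recurrence constants c, b and guessed constant d (example values)

@lru_cache(maxsize=None)
def T(n):
    """Value of T(n) = 2·T(⌈n/2⌉) + c·n for n ≥ 16, and T(n) = b otherwise."""
    if n < 16:
        return b
    return 2 * T((n + 1) // 2) + c * n   # (n + 1) // 2 == ⌈n/2⌉

# check the guess T(n) ≤ d·n·log n over a range of n (a sanity check, not a proof)
print(all(T(n) <= d * n * math.log2(n) for n in range(2, 20001)))   # prints True
```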
6.2 Master Theorem

Lemma 4
Let a ≥ 1, b > 1 and ε > 0 denote constants. Consider the recurrence

    T(n) = a T(n/b) + f(n) .

Case 1.
If f(n) = O(n^(log_b a − ε)) then T(n) = Θ(n^(log_b a)).

Case 2.
If f(n) = Θ(n^(log_b a) log^k n) then T(n) = Θ(n^(log_b a) log^(k+1) n),
for k ≥ 0.

Case 3.
If f(n) = Ω(n^(log_b a + ε)) and for sufficiently large n
a f(n/b) ≤ c f(n) for some constant c < 1, then T(n) = Θ(f(n)).

Note that the cases do not cover all possibilities.
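As an illustration of how the three cases are applied, here is a small
Python helper (a sketch, not part of the lecture material; it only handles
f(n) = Θ(n^d · log^k n) and simply mirrors the case distinction of
Lemma 4):

```python
import math

def master_theorem(a, b, d, k=0):
    """Classify T(n) = a·T(n/b) + Θ(n^d · log^k n) according to Lemma 4.

    Assumes a ≥ 1, b > 1, d ≥ 0, k ≥ 0; returns a string describing Θ(T(n)).
    """
    crit = math.log(a, b)                      # critical exponent log_b(a)
    if math.isclose(d, crit):                  # Case 2
        return f"Θ(n^{crit:.3g} · log^{k + 1} n)"
    if d < crit:                               # Case 1
        return f"Θ(n^{crit:.3g})"
    # Case 3: for f(n) = n^d · log^k n with d > log_b(a) the regularity
    # condition a·f(n/b) ≤ c·f(n), c < 1, holds for sufficiently large n.
    return f"Θ(n^{d} · log^{k} n)" if k else f"Θ(n^{d})"

print(master_theorem(2, 2, 1))   # mergesort, T(n) = 2T(n/2) + Θ(n): Θ(n^1 · log^1 n)
print(master_theorem(4, 2, 1))   # the recursive multiplication further below: Θ(n^2)
```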
6.2 Master Theorem

We prove the Master Theorem for the case that n is of the form b^ℓ, and we
assume that the non-recursive case occurs for problem size 1 and incurs
cost 1.
The Recursion Tree

The running time of a recursive algorithm can be visualized by a recursion
tree:

[Figure: recursion tree for T(n) = a T(n/b) + f(n). The root corresponds to
a problem of size n and has cost f(n); its a children have size n/b and
their level contributes a f(n/b); the next level contributes a² f(n/b²),
and so on. After log_b n levels the subproblems have size 1, so the tree
has a^(log_b n) = n^(log_b a) leaves, each of cost 1.]
6.2 Master Theorem

This gives

    T(n) = n^(log_b a) + ∑_{i=0}^{log_b n − 1} a^i f(n/b^i) .
Case 1. Now suppose that f(n) ≤ c n^(log_b a − ε).

    T(n) − n^(log_b a)
        = ∑_{i=0}^{log_b n − 1} a^i f(n/b^i)
        ≤ c ∑_{i=0}^{log_b n − 1} a^i (n/b^i)^(log_b a − ε)
        = c n^(log_b a − ε) ∑_{i=0}^{log_b n − 1} (b^ε)^i
        = c n^(log_b a − ε) (b^(ε·log_b n) − 1)/(b^ε − 1)
        = c n^(log_b a − ε) (n^ε − 1)/(b^ε − 1)
        = (c/(b^ε − 1)) n^(log_b a) (n^ε − 1)/n^ε

    (using b^(−i(log_b a − ε)) = b^(εi) (b^(log_b a))^(−i) = b^(εi) a^(−i),
     and ∑_{i=0}^{k} q^i = (q^(k+1) − 1)/(q − 1))

Hence,

    T(n) ≤ (c/(b^ε − 1) + 1) n^(log_b a)

⇒ T(n) = O(n^(log_b a)).
Case 2. Now suppose that f(n) ≤ c n^(log_b a).

    T(n) − n^(log_b a)
        = ∑_{i=0}^{log_b n − 1} a^i f(n/b^i)
        ≤ c ∑_{i=0}^{log_b n − 1} a^i (n/b^i)^(log_b a)
        = c n^(log_b a) ∑_{i=0}^{log_b n − 1} 1
        = c n^(log_b a) log_b n

Hence,

    T(n) = O(n^(log_b a) log_b n)   ⇒   T(n) = O(n^(log_b a) log n).
Case 2. Now suppose that f(n) ≥ c n^(log_b a).

    T(n) − n^(log_b a)
        = ∑_{i=0}^{log_b n − 1} a^i f(n/b^i)
        ≥ c ∑_{i=0}^{log_b n − 1} a^i (n/b^i)^(log_b a)
        = c n^(log_b a) ∑_{i=0}^{log_b n − 1} 1
        = c n^(log_b a) log_b n

Hence,

    T(n) = Ω(n^(log_b a) log_b n)   ⇒   T(n) = Ω(n^(log_b a) log n).
Case 2. Now suppose that f(n) ≤ c n^(log_b a) (log_b(n))^k.

    T(n) − n^(log_b a)
        = ∑_{i=0}^{log_b n − 1} a^i f(n/b^i)
        ≤ c ∑_{i=0}^{log_b n − 1} a^i (n/b^i)^(log_b a) · (log_b(n/b^i))^k
        = c n^(log_b a) ∑_{i=0}^{ℓ−1} (log_b(b^ℓ/b^i))^k      (with n = b^ℓ, i.e. ℓ = log_b n)
        = c n^(log_b a) ∑_{i=0}^{ℓ−1} (ℓ − i)^k
        = c n^(log_b a) ∑_{i=1}^{ℓ} i^k
        ≈ (c/k) n^(log_b a) ℓ^(k+1)

    (using ∑_{i=1}^{ℓ} i^k ≈ (1/k) ℓ^(k+1))

⇒ T(n) = O(n^(log_b a) log^(k+1) n).
Case 3. Now suppose that f(n) ≥ d n^(log_b a + ε), and that for sufficiently
large n:  a f(n/b) ≤ c f(n), for c < 1.

From this we get a^i f(n/b^i) ≤ c^i f(n), where we assume that
n/b^(i−1) ≥ n0 is still sufficiently large.

    T(n) − n^(log_b a)
        = ∑_{i=0}^{log_b n − 1} a^i f(n/b^i)
        ≤ ∑_{i=0}^{log_b n − 1} c^i f(n) + O(n^(log_b a))
        ≤ (1/(1 − c)) f(n) + O(n^(log_b a))

    (using, for q < 1:  ∑_{i=0}^{n} q^i = (1 − q^(n+1))/(1 − q) ≤ 1/(1 − q))

Hence,

    T(n) ≤ O(f(n))

⇒ T(n) = Θ(f(n)).

Where did we use f(n) ≥ Ω(n^(log_b a + ε))?
Example: Multiplying Two Integers

Suppose we want to multiply two n-bit integers, but our registers can only
perform operations on integers of constant size.

For this we first need to be able to add two integers A and B:

        1 1 0 1 1 0 1 0 1   A
      + 1 0 0 0 1 0 0 1 1   B
      ---------------------
      1 0 1 1 0 0 1 0 0 0

This gives that two n-bit integers can be added in time O(n).
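A direct transcription into Python (a small sketch; bits are stored
least-significant first, i.e. in the opposite order of the picture above):

```python
def add_bits(A, B):
    """Add two non-negative integers given as bit lists (least significant bit first).

    Runs in time O(n) for n-bit inputs: one pass, carrying a single bit.
    """
    result, carry = [], 0
    for i in range(max(len(A), len(B))):
        a = A[i] if i < len(A) else 0
        b = B[i] if i < len(B) else 0
        s = a + b + carry
        result.append(s % 2)
        carry = s // 2
    if carry:
        result.append(carry)
    return result

# the example from the slide: A = 110110101, B = 100010011 (msb first)
A = [int(x) for x in reversed("110110101")]
B = [int(x) for x in reversed("100010011")]
print("".join(map(str, reversed(add_bits(A, B)))))   # prints 1011001000
```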
Example: Multiplying Two Integers

Suppose that we want to multiply an n-bit integer A and an m-bit integer B
(m ≤ n).

      1 0 0 0 1  ×  1 0 1 1
    -----------------------
            1 0 0 0 1
          1 0 0 0 1 0
        0 0 0 0 0 0 0
      1 0 0 0 1 0 0 0
    -----------------------
      1 0 1 1 1 0 1 1

• This is also known as the "school method" for multiplying integers.
• Note that the intermediate numbers that are generated can have at most
  m + n ≤ 2n bits.

Time requirement:
• Computing intermediate results: O(nm).
• Adding m numbers of length ≤ 2n: O((m + n)m) = O(nm).
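The same method in code (a sketch on bit lists stored least-significant
first; the ripple-carry addition from the previous sketch is repeated here
so that the example is self-contained):

```python
def add_bits(A, B):
    """O(n) ripple-carry addition of bit lists (lsb first)."""
    out, carry = [], 0
    for i in range(max(len(A), len(B))):
        s = (A[i] if i < len(A) else 0) + (B[i] if i < len(B) else 0) + carry
        out.append(s % 2)
        carry = s // 2
    return out + ([carry] if carry else [])

def school_multiply(A, B):
    """Multiply two bit lists (lsb first) with the school method: O(n*m) bit operations."""
    result = [0]
    for i, bit in enumerate(B):                       # one partial product per bit of B
        if bit:
            result = add_bits(result, [0] * i + A)    # A shifted left by i positions
    return result

A = [int(x) for x in reversed("10001")]   # 17, as on the slide
B = [int(x) for x in reversed("1011")]    # 11
print("".join(map(str, reversed(school_multiply(A, B)))))   # prints 10111011
```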
Example: Multiplying Two Integers

A recursive approach:
Suppose that integers A and B are of length n = 2^k, for some k.

[Figure: A is split into a high half A1 and a low half A0 of n/2 bits each;
B is split into B1 and B0 in the same way.]

Then it holds that

    A = A1 · 2^(n/2) + A0   and   B = B1 · 2^(n/2) + B0

Hence,

    A · B = A1·B1 · 2^n + (A1·B0 + A0·B1) · 2^(n/2) + A0·B0
Example: Multiplying Two Integers
Algorithm 3 mult(A, B)
1: if |A| = |B| = 1 then
2:
return a0 · b0
3: split A into A0 and A1
4: split B into B0 and B1
5: Z2 ← mult(A1 , B1 )
6: Z1 ← mult(A1 , B0 ) + mult(A0 , B1 )
7: Z0 ← mult(A0 , B0 )
n
8: return Z2 · 2n + Z1 · 2 2 + Z0
O(1)
O(1)
O(n)
O(n)
T(n
2)
2T ( n
2 ) + O(n)
n
T( 2 )
O(n)
We get the following recurrence:
n
T (n) = 4T
+ O(n) .
2
6.2 Master Theorem
Ernst Mayr, Harald Räcke
61/117
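As an illustration, a direct Python transcription of Algorithm 3 (a sketch; the function name mult, the bit-twiddling splits, and the assumption that n is a power of two and A, B are non-negative n-bit integers are my own):

    def mult(A, B, n):
        # Algorithm 3 as Python: four recursive half-size multiplications.
        if n == 1:
            return A * B                              # single-bit base case
        half = n // 2
        A1, A0 = A >> half, A & ((1 << half) - 1)     # A = A1 * 2^(n/2) + A0
        B1, B0 = B >> half, B & ((1 << half) - 1)     # B = B1 * 2^(n/2) + B0
        Z2 = mult(A1, B1, half)
        Z1 = mult(A1, B0, half) + mult(A0, B1, half)
        Z0 = mult(A0, B0, half)
        return (Z2 << n) + (Z1 << half) + Z0

    assert mult(0b10001, 0b1011, 8) == 0b10001 * 0b1011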
Example: Multiplying Two Integers

Master Theorem: Recurrence: T[n] = aT(n/b) + f(n).
• Case 1: f(n) = O(n^{log_b a − ε})          T(n) = Θ(n^{log_b a})
• Case 2: f(n) = Θ(n^{log_b a} log^k n)      T(n) = Θ(n^{log_b a} log^{k+1} n)
• Case 3: f(n) = Ω(n^{log_b a + ε})          T(n) = Θ(f(n))

In our case a = 4, b = 2, and f(n) = Θ(n). Hence, we are in
Case 1, since n = O(n^{2−ε}) = O(n^{log_b a − ε}).

We get a running time of O(n^2) for our algorithm.

⇒ Not better than the “school method”.
Example: Multiplying Two Integers

We can use the following identity to compute Z1:

    Z1 = A1 B0 + A0 B1
       = (A0 + A1) · (B0 + B1) − A1 B1 − A0 B0

where A1 B1 = Z2 and A0 B0 = Z0.

Hence,

Algorithm 4 mult(A, B)
1: if |A| = |B| = 1 then                                   O(1)
2:    return a0 · b0                                       O(1)
3: split A into A0 and A1                                  O(n)
4: split B into B0 and B1                                  O(n)
5: Z2 ← mult(A1, B1)                                       T(n/2)
6: Z0 ← mult(A0, B0)                                       T(n/2)
7: Z1 ← mult(A0 + A1, B0 + B1) − Z2 − Z0                   T(n/2) + O(n)
8: return Z2 · 2^n + Z1 · 2^{n/2} + Z0                     O(n)

Note: a more precise (correct) analysis would say that computing
Z1 needs time T(n/2 + 1) + O(n).
Example: Multiplying Two Integers

We get the following recurrence:

    T(n) = 3T(n/2) + O(n) .

Master Theorem: Recurrence: T[n] = aT(n/b) + f(n).
• Case 1: f(n) = O(n^{log_b a − ε})          T(n) = Θ(n^{log_b a})
• Case 2: f(n) = Θ(n^{log_b a} log^k n)      T(n) = Θ(n^{log_b a} log^{k+1} n)
• Case 3: f(n) = Ω(n^{log_b a + ε})          T(n) = Θ(f(n))

Again we are in Case 1. We get a running time of
Θ(n^{log_2 3}) ≈ Θ(n^{1.59}).

A huge improvement over the “school method”.
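A Python sketch of this 3-multiplication scheme (Karatsuba). The function name karatsuba, the use of Python's arbitrary-precision integers, and splitting at half the maximum bit length are my own illustration; in particular it does not insist on n being a power of two:

    def karatsuba(A, B):
        # Multiplication with three recursive calls instead of four.
        # Illustration only; A and B are non-negative Python integers.
        if A < 2 or B < 2:                             # at most one bit: multiply directly
            return A * B
        half = max(A.bit_length(), B.bit_length()) // 2
        A1, A0 = A >> half, A & ((1 << half) - 1)      # A = A1 * 2^half + A0
        B1, B0 = B >> half, B & ((1 << half) - 1)
        Z2 = karatsuba(A1, B1)
        Z0 = karatsuba(A0, B0)
        Z1 = karatsuba(A0 + A1, B0 + B1) - Z2 - Z0     # = A1*B0 + A0*B1
        return (Z2 << (2 * half)) + (Z1 << half) + Z0

    assert karatsuba(12345678901234567890, 98765432109876543210) == \
           12345678901234567890 * 98765432109876543210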
6.3 The Characteristic Polynomial

Consider the recurrence relation:

    c0 T(n) + c1 T(n − 1) + c2 T(n − 2) + · · · + ck T(n − k) = f(n)

This is the general form of a linear recurrence relation of order k
with constant coefficients (c0, ck ≠ 0).

• T(n) only depends on the k preceding values. This means
  the recurrence relation is of order k.
• The recurrence is linear as there are no products of T[n]’s.
• If f(n) = 0 then the recurrence relation becomes a linear,
  homogeneous recurrence relation of order k.

Note that we ignore boundary conditions for the moment.
6.3 The Characteristic Polynomial

Observations:
• The solution T[1], T[2], T[3], . . . is completely determined
  by a set of boundary conditions that specify values for
  T[1], . . . , T[k].
• In fact, any k consecutive values completely determine the
  solution.
• k non-consecutive values might not be an appropriate set of
  boundary conditions (this depends on the problem).

Approach:
• First determine all solutions that satisfy the recurrence relation.
• Then pick the right one by analyzing the boundary conditions.
• First consider the homogeneous case.
The Homogeneous Case

The solution space

    S = { T = (T[1], T[2], T[3], . . .) | T fulfills the recurrence relation }

is a vector space. This means that if T1, T2 ∈ S, then also
αT1 + βT2 ∈ S, for arbitrary constants α, β.

How do we find a non-trivial solution?

We guess that the solution is of the form λ^n, λ ≠ 0, and see what
happens. In order for this guess to fulfill the recurrence we need

    c0 λ^n + c1 λ^{n−1} + c2 λ^{n−2} + · · · + ck λ^{n−k} = 0

for all n ≥ k.
The Homogeneous Case

Dividing by λ^{n−k} gives that all these constraints are identical to

    c0 λ^k + c1 λ^{k−1} + c2 λ^{k−2} + · · · + ck = 0

The left-hand side is the characteristic polynomial P[λ].

This means that if λ_i is a root (Nullstelle) of P[λ] then T[n] = λ_i^n
is a solution to the recurrence relation.

Let λ1, . . . , λk be the k (complex) roots of P[λ]. Then, because of
the vector space property,

    α1 λ_1^n + α2 λ_2^n + · · · + αk λ_k^n

is a solution for arbitrary values αi.
The Homogeneous Case

Lemma 5
Assume that the characteristic polynomial has k distinct roots
λ1, . . . , λk. Then all solutions to the recurrence relation are of
the form

    α1 λ_1^n + α2 λ_2^n + · · · + αk λ_k^n .

Proof.
There is one solution for every possible choice of boundary
conditions for T[1], . . . , T[k].

We show that the above set of solutions contains one solution
for every choice of boundary conditions.
The Homogeneous Case

Proof (cont.)
Suppose I am given boundary conditions T[i] and I want to see
whether I can choose the αi’s such that these conditions are met:

    α1 · λ1     +   α2 · λ2     +   · · ·   +   αk · λk     =   T[1]
    α1 · λ_1^2  +   α2 · λ_2^2  +   · · ·   +   αk · λ_k^2  =   T[2]
                                    ...
    α1 · λ_1^k  +   α2 · λ_2^k  +   · · ·   +   αk · λ_k^k  =   T[k]

In matrix form:

    [ λ1      λ2      · · ·   λk     ]   [ α1 ]     [ T[1] ]
    [ λ_1^2   λ_2^2   · · ·   λ_k^2  ]   [ α2 ]     [ T[2] ]
    [   .       .               .    ] · [  . ]  =  [   .  ]
    [ λ_1^k   λ_2^k   · · ·   λ_k^k  ]   [ αk ]     [ T[k] ]

We show that the column vectors are linearly independent. Then
the above equation has a solution.
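To make the existence claim concrete, here is a small numerical sketch in Python/NumPy (the example recurrence T[n] = 5T[n−1] − 6T[n−2] with roots 2 and 3 and the chosen boundary values are my own, not from the lecture): the system above is set up and solved for the αi, and the resulting closed form is checked against the recurrence.

    import numpy as np

    # Example: T[n] = 5·T[n-1] - 6·T[n-2], characteristic roots 2 and 3.
    roots = np.array([2.0, 3.0])
    T_bc = np.array([1.0, 5.0])            # boundary conditions T[1], T[2] (arbitrary example)
    k = len(roots)

    # System matrix with entries lambda_j^i for i = 1..k (rows), j = 1..k (columns).
    M = np.array([[lam ** i for lam in roots] for i in range(1, k + 1)])
    alpha = np.linalg.solve(M, T_bc)       # solvable because the roots are distinct

    def T_closed(n):
        return float(sum(a * lam ** n for a, lam in zip(alpha, roots)))

    # The closed form meets the boundary conditions and satisfies the recurrence.
    assert abs(T_closed(1) - 1.0) < 1e-9 and abs(T_closed(2) - 5.0) < 1e-9
    for n in range(3, 10):
        assert abs(T_closed(n) - (5 * T_closed(n - 1) - 6 * T_closed(n - 2))) < 1e-6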
Computing the Determinant

Pulling the common factor λi out of the i-th column gives

    | λ1      λ2      · · ·  λk     |        k        | 1           1           · · ·  1           |
    | λ_1^2   λ_2^2   · · ·  λ_k^2  |    =   ∏ λi  ·  | λ1          λ2          · · ·  λk          |
    |   .       .              .    |       i=1       |  .           .                  .          |
    | λ_1^k   λ_2^k   · · ·  λ_k^k  |                 | λ_1^{k−1}   λ_2^{k−1}   · · ·  λ_k^{k−1}   |

The remaining determinant is a Vandermonde determinant. Subtracting λ1
times row j − 1 from row j, for j = k, . . . , 2, gives

    | 1    1                       · · ·  1                       |
    | 0    (λ2 − λ1) · 1           · · ·  (λk − λ1) · 1           |
    | .    (λ2 − λ1) · λ2          · · ·  (λk − λ1) · λk          |
    | 0    (λ2 − λ1) · λ_2^{k−2}   · · ·  (λk − λ1) · λ_k^{k−2}   |

Expanding along the first column and pulling the factor (λi − λ1) out of
the column belonging to λi gives

     k                 | 1           1           · · ·  1           |
     ∏ (λi − λ1)  ·    | λ2          λ3          · · ·  λk          |
    i=2                |  .           .                  .          |
                       | λ_2^{k−2}   λ_3^{k−2}   · · ·  λ_k^{k−2}   |

which is the same kind of determinant, one dimension smaller.

Repeating the above steps gives:

    | λ1      λ2      · · ·  λk     |        k
    | λ_1^2   λ_2^2   · · ·  λ_k^2  |    =   ∏ λi  ·  ∏ (λi − λℓ)
    |   .       .              .    |       i=1      i>ℓ
    | λ_1^k   λ_2^k   · · ·  λ_k^k  |

Hence, if all λi’s are different, then the determinant is non-zero.
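A quick numerical sanity check of this determinant formula (sketch; the concrete λ values, the use of NumPy, and the tolerance are my own choices):

    import numpy as np
    from itertools import combinations

    lam = [1.5, -2.0, 3.0, 0.5]                      # pairwise distinct roots (example values)
    k = len(lam)
    M = np.array([[x ** i for x in lam] for i in range(1, k + 1)])

    lhs = np.linalg.det(M)
    # formula: prod(lambda_i) * prod_{i > l} (lambda_i - lambda_l)
    rhs = np.prod(lam) * np.prod([lam[i] - lam[l] for l, i in combinations(range(k), 2)])
    assert abs(lhs - rhs) < 1e-6 * abs(rhs)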
The Homogeneous Case

What happens if the roots are not all distinct?

Suppose we have a root λ_i with multiplicity (Vielfachheit) at least 2.
Then not only is λ_i^n a solution to the recurrence but also n λ_i^n.

To see this consider the polynomial

    P[λ] · λ^{n−k} = c0 λ^n + c1 λ^{n−1} + c2 λ^{n−2} + · · · + ck λ^{n−k}

Since λ_i is a root we can write this as Q[λ] · (λ − λ_i)^2.
Calculating the derivative gives a polynomial that still has the root λ_i.
This means

    c0 n λ_i^{n−1} + c1 (n − 1) λ_i^{n−2} + · · · + ck (n − k) λ_i^{n−k−1} = 0

Hence, after multiplying with λ_i,

    c0 n λ_i^n + c1 (n − 1) λ_i^{n−1} + · · · + ck (n − k) λ_i^{n−k} = 0

Reading n λ_i^n as T[n], (n − 1) λ_i^{n−1} as T[n − 1], . . . , and
(n − k) λ_i^{n−k} as T[n − k], this says that T[n] = n λ_i^n fulfills
the recurrence.
The Homogeneous Case

Suppose λ_i has multiplicity j. We know that

    c0 n λ_i^n + c1 (n − 1) λ_i^{n−1} + · · · + ck (n − k) λ_i^{n−k} = 0

(after taking the derivative; multiplying with λ; plugging in λ_i).

Doing this again gives

    c0 n^2 λ_i^n + c1 (n − 1)^2 λ_i^{n−1} + · · · + ck (n − k)^2 λ_i^{n−k} = 0

We can continue j − 1 times.

Hence, n^ℓ λ_i^n is a solution for ℓ ∈ {0, . . . , j − 1}.
The Homogeneous Case

Lemma 6
Let P[λ] denote the characteristic polynomial of the recurrence

    c0 T[n] + c1 T[n − 1] + · · · + ck T[n − k] = 0

Let λ_i, i = 1, . . . , m, be the (complex) roots of P[λ] with
multiplicities ℓ_i. Then the general solution to the recurrence is
given by

              m    ℓ_i − 1
    T[n]  =   Σ      Σ     α_ij · (n^j λ_i^n) .
             i=1    j=0

The full proof is omitted. We have only shown that any choice of
α_ij’s is a solution to the recurrence.
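A tiny numerical check of the multiple-root case (sketch; the example recurrence T[n] = 4T[n−1] − 4T[n−2], whose characteristic polynomial (λ − 2)^2 has the double root 2, is my own choice): by Lemma 6 both 2^n and n · 2^n must solve it.

    # Characteristic polynomial: lambda^2 - 4*lambda + 4 = (lambda - 2)^2, double root 2.
    for T in (lambda n: 2 ** n, lambda n: n * 2 ** n):
        assert all(T(n) == 4 * T(n - 1) - 4 * T(n - 2) for n in range(2, 15))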
Example: Fibonacci Sequence

    T[0] = 0
    T[1] = 1
    T[n] = T[n − 1] + T[n − 2]    for n ≥ 2

The characteristic polynomial is

    λ^2 − λ − 1

Finding the roots gives

    λ_{1/2} = 1/2 ± √(1/4 + 1) = (1 ± √5)/2
Example: Fibonacci Sequence

Hence, the solution is of the form

    α ((1 + √5)/2)^n + β ((1 − √5)/2)^n

T[0] = 0 gives α + β = 0.

T[1] = 1 gives

    α (1 + √5)/2 + β (1 − √5)/2 = 1   ⇒   α − β = 2/√5
Example: Fibonacci Sequence

Hence, the solution is

    (1/√5) · [ ((1 + √5)/2)^n − ((1 − √5)/2)^n ]
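A quick floating-point check of this closed form against the recurrence (sketch; the function name fib_closed and the rounding are my own; for large n one would use exact arithmetic instead):

    from math import sqrt

    def fib_closed(n):
        # Closed form derived above, evaluated in floating point.
        phi, psi = (1 + sqrt(5)) / 2, (1 - sqrt(5)) / 2
        return (phi ** n - psi ** n) / sqrt(5)

    a, b = 0, 1                                  # T[0], T[1]
    for n in range(2, 20):
        a, b = b, a + b                          # b = T[n] via the recurrence
        assert round(fib_closed(n)) == b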
The Inhomogeneous Case

Consider the recurrence relation:

    c0 T(n) + c1 T(n − 1) + c2 T(n − 2) + · · · + ck T(n − k) = f(n)

with f(n) ≠ 0.

While we have a fairly general technique for solving homogeneous,
linear recurrence relations, the inhomogeneous case is different.
The Inhomogeneous Case

The general solution of the recurrence relation is

    T(n) = Th(n) + Tp(n) ,

where Th is any solution to the homogeneous equation, and Tp
is one particular solution to the inhomogeneous equation.

There is no general method to find a particular solution.
The Inhomogeneous Case

Example:

    T[n] = T[n − 1] + 1        T[0] = 1

Then,

    T[n − 1] = T[n − 2] + 1        (n ≥ 2)

Subtracting the second equation from the first gives

    T[n] − T[n − 1] = T[n − 1] − T[n − 2]        (n ≥ 2)

or

    T[n] = 2T[n − 1] − T[n − 2]        (n ≥ 2)

I get a completely determined recurrence if I add T[0] = 1 and
T[1] = 2.
The Inhomogeneous Case

Example: Characteristic polynomial:

    λ^2 − 2λ + 1 = (λ − 1)^2 = 0

Then the solution is of the form

    T[n] = α · 1^n + β n · 1^n = α + βn

T[0] = 1 gives α = 1.

T[1] = 2 gives 1 + β = 2 ⇒ β = 1.

Hence, T[n] = n + 1.
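A one-line check (sketch) that T[n] = n + 1 indeed satisfies both the original inhomogeneous recurrence and the derived homogeneous one:

    T = lambda n: n + 1
    assert T(0) == 1
    assert all(T(n) == T(n - 1) + 1 for n in range(1, 10))              # original recurrence
    assert all(T(n) == 2 * T(n - 1) - T(n - 2) for n in range(2, 10))   # derived recurrence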
The Inhomogeneous Case

If f(n) is a polynomial of degree r this method can be applied
r + 1 times to obtain a homogeneous equation:

    T[n] = T[n − 1] + n^2

Shift:

    T[n − 1] = T[n − 2] + (n − 1)^2 = T[n − 2] + n^2 − 2n + 1

Difference:

    T[n] − T[n − 1] = T[n − 1] − T[n − 2] + 2n − 1

    T[n] = 2T[n − 1] − T[n − 2] + 2n − 1
    T[n] = 2T[n − 1] − T[n − 2] + 2n − 1

Shift:

    T[n − 1] = 2T[n − 2] − T[n − 3] + 2(n − 1) − 1
             = 2T[n − 2] − T[n − 3] + 2n − 3

Difference:

    T[n] − T[n − 1] = 2T[n − 1] − T[n − 2] + 2n − 1
                      − 2T[n − 2] + T[n − 3] − 2n + 3

    T[n] = 3T[n − 1] − 3T[n − 2] + T[n − 3] + 2

and so on...
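A short numerical check (sketch; the boundary value T[0] = 1 is an arbitrary choice of mine) that any solution of T[n] = T[n−1] + n^2 also satisfies the derived recurrence of order 3:

    T = [1]                                   # arbitrary boundary value T[0]
    for n in range(1, 12):
        T.append(T[n - 1] + n * n)            # original inhomogeneous recurrence
    assert all(T[n] == 3 * T[n - 1] - 3 * T[n - 2] + T[n - 3] + 2 for n in range(3, 12))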
6.4 Generating Functions

Definition 7 (Generating Function)
Let (a_n)_{n≥0} be a sequence. The corresponding

• generating function (Erzeugendenfunktion) is

      F(z) := Σ_{n≥0} a_n z^n ;

• exponential generating function (exponentielle
  Erzeugendenfunktion) is

      F(z) := Σ_{n≥0} (a_n / n!) z^n .
6.4 Generating Functions

Example 8
1. The generating function of the sequence (1, 0, 0, . . .) is

       F(z) = 1 .

2. The generating function of the sequence (1, 1, 1, . . .) is

       F(z) = 1/(1 − z) .
6.4 Generating Functions

There are two different views:

A generating function is a formal power series (formale
Potenzreihe). Then the generating function is an algebraic object.

Let f = Σ_{n≥0} a_n z^n and g = Σ_{n≥0} b_n z^n.

• Equality: f and g are equal if a_n = b_n for all n.
• Addition: f + g := Σ_{n≥0} (a_n + b_n) z^n.
• Multiplication: f · g := Σ_{n≥0} c_n z^n with c_n = Σ_{p=0}^{n} a_p b_{n−p}.

There are no convergence issues here.
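These operations are easy to mirror on truncated coefficient lists, which is handy for checking identities. A Python sketch (the function names ps_add/ps_mul and the truncation length are my own; index n holds the coefficient of z^n):

    def ps_add(f, g):
        # coefficient-wise addition of two (equally truncated) power series
        return [a + b for a, b in zip(f, g)]

    def ps_mul(f, g):
        # Cauchy product: c_n = sum_{p=0}^{n} a_p * b_{n-p}
        N = min(len(f), len(g))
        return [sum(f[p] * g[n - p] for p in range(n + 1)) for n in range(N)]

    N = 8
    one_minus_z = [1, -1] + [0] * (N - 2)        # 1 - z
    geometric = [1] * N                          # 1 + z + z^2 + ...
    assert ps_mul(one_minus_z, geometric) == [1] + [0] * (N - 1)   # (1 - z) * sum z^n = 1

The last assertion already verifies the inverse relation discussed below.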
6.4 Generating Functions
The arithmetic view:
We view a power series as a function f : C → C.
Then, it is important to think about convergence/convergence
radius etc.
6.4 Generating Functions

What does Σ_{n≥0} z^n = 1/(1 − z) mean in the algebraic view?

It means that the power series 1 − z and the power series
Σ_{n≥0} z^n are inverse to each other, i.e.,

    (1 − z) · Σ_{n≥0} z^n = 1 .

This is well-defined.
6.4 Generating Functions

Suppose we are given the generating function

    Σ_{n≥0} z^n = 1/(1 − z) .

We can compute the derivative:

    Σ_{n≥1} n z^{n−1} = 1/(1 − z)^2

where Σ_{n≥1} n z^{n−1} = Σ_{n≥0} (n + 1) z^n.

Hence, the generating function of the sequence a_n = n + 1
is 1/(1 − z)^2.

Note: Formally, the derivative of a formal power series Σ_{n≥0} a_n z^n
is defined as Σ_{n≥0} n a_n z^{n−1}. The known rules for differentiation
work for this definition. In particular, e.g. the derivative of 1/(1 − z)
is 1/(1 − z)^2. Note that this requires a proof if we consider power
series as algebraic objects. However, we did not prove this in the
lecture.
6.4 Generating Functions

We can repeat this:

    Σ_{n≥0} (n + 1) z^n = 1/(1 − z)^2 .

Derivative:

    Σ_{n≥1} n(n + 1) z^{n−1} = 2/(1 − z)^3

where Σ_{n≥1} n(n + 1) z^{n−1} = Σ_{n≥0} (n + 1)(n + 2) z^n.

Hence, the generating function of the sequence
a_n = (n + 1)(n + 2) is 2/(1 − z)^3.
6.4 Generating Functions

Computing the k-th derivative of ∑ z^n:

    ∑_{n≥k} n(n − 1) · … · (n − k + 1) z^{n−k} = ∑_{n≥0} (n + k) · … · (n + 1) z^n
                                               = k!/(1 − z)^{k+1} .

Hence:

    ∑_{n≥0} C(n + k, k) z^n = 1/(1 − z)^{k+1} .

The generating function of the sequence a_n = C(n + k, k) is 1/(1 − z)^{k+1}.
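A quick numerical cross-check (our own Python sketch, not from the lecture): expanding 1/(1 − z)^{k+1} by repeated convolution with the all-ones series reproduces the binomial coefficients C(n + k, k):

from math import comb

# Convolution of two length-N coefficient lists, truncated to N terms.
def multiply(f, g, N):
    h = [0] * N
    for i in range(N):
        for j in range(N - i):
            h[i + j] += f[i] * g[j]
    return h

N, k = 10, 3
series = [1] + [0] * (N - 1)                # the constant series 1
for _ in range(k + 1):
    series = multiply(series, [1] * N, N)   # multiply by 1/(1 - z)

print(series)                               # coefficients of 1/(1 - z)^(k+1)
print([comb(n + k, k) for n in range(N)])   # identical list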
6.4 Generating Functions

    ∑_{n≥0} n z^n = ∑_{n≥0} (n + 1) z^n − ∑_{n≥0} z^n
                  = 1/(1 − z)^2 − 1/(1 − z)
                  = z/(1 − z)^2

The generating function of the sequence a_n = n is z/(1 − z)^2.
6.4 Generating Functions

We know

    ∑_{n≥0} y^n = 1/(1 − y) .

Hence, substituting y = az,

    ∑_{n≥0} a^n z^n = 1/(1 − az) .

The generating function of the sequence f_n = a^n is 1/(1 − az).
Example: a_n = a_{n−1} + 1, a_0 = 1

Suppose we have the recurrence a_n = a_{n−1} + 1 for n ≥ 1 and a_0 = 1.

    A(z) = ∑_{n≥0} a_n z^n
         = a_0 + ∑_{n≥1} (a_{n−1} + 1) z^n
         = 1 + z ∑_{n≥1} a_{n−1} z^{n−1} + ∑_{n≥1} z^n
         = z ∑_{n≥0} a_n z^n + ∑_{n≥0} z^n
         = z A(z) + ∑_{n≥0} z^n
         = z A(z) + 1/(1 − z)
Example: a_n = a_{n−1} + 1, a_0 = 1

Solving for A(z) gives

    ∑_{n≥0} a_n z^n = A(z) = 1/(1 − z)^2 = ∑_{n≥0} (n + 1) z^n

Hence, a_n = n + 1.
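A direct check (our own Python sketch, not part of the lecture): iterating the recurrence and comparing against the closed form n + 1 read off from 1/(1 − z)^2:

# Iterate a_n = a_{n-1} + 1 with a_0 = 1 and compare to the closed form.
def recurrence(N):
    a = [1]
    for n in range(1, N):
        a.append(a[n - 1] + 1)
    return a

N = 10
assert recurrence(N) == [n + 1 for n in range(N)]
print(recurrence(N))   # [1, 2, 3, ..., 10]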
Some Generating Functions

    n-th sequence element        generating function
    1                            1/(1 − z)
    n + 1                        1/(1 − z)^2
    C(n + k, k)                  1/(1 − z)^{k+1}
    n                            z/(1 − z)^2
    a^n                          1/(1 − az)
    n^2                          z(1 + z)/(1 − z)^3
    1/n!                         e^z
Some Generating Functions

    n-th sequence element              generating function
    c f_n                              c F
    f_n + g_n                          F + G
    ∑_{i=0}^{n} f_i g_{n−i}            F · G
    f_{n−k} (n ≥ k); 0 otherwise       z^k F
    ∑_{i=0}^{n} f_i                    F(z)/(1 − z)
    n f_n                              z dF(z)/dz
    c^n f_n                            F(cz)
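The product rule in this table is just the convolution of coefficient sequences. A small Python sketch (our own illustration, not from the lecture): with f_n = g_n = 1, the product 1/(1 − z) · 1/(1 − z) = 1/(1 − z)^2 generates n + 1:

# Coefficient-wise convolution: the n-th output is sum_{i=0}^{n} f_i * g_{n-i}.
def convolve(f, g):
    N = len(f)
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N)]

N = 8
f = [1] * N     # coefficients of 1/(1 - z)
g = [1] * N     # coefficients of 1/(1 - z)
print(convolve(f, g))   # [1, 2, 3, ..., 8], i.e. n + 1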
Solving Recursions with Generating Functions

1. Set A(z) = ∑_{n≥0} a_n z^n.
2. Transform the right hand side so that boundary condition and recurrence relation can be plugged in.
3. Do further transformations so that the infinite sums on the right hand side can be replaced by A(z).
4. Solving for A(z) gives an equation of the form A(z) = f(z), where hopefully f(z) is a simple function.
5. Write f(z) as a formal power series. Techniques:
   - partial fraction decomposition (Partialbruchzerlegung)
   - lookup in tables
6. The coefficients of the resulting power series are the a_n.
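Steps 4–6 can also be delegated to a computer algebra system. A sketch of ours in Python with sympy (sympy is our choice here, not part of the lecture), using the A(z) that will be derived in the a_n = 3a_{n−1} + n example below:

import sympy as sp

z = sp.symbols('z')
# Closed form of A(z) taken from the example further below.
A = (z**2 - z + 1) / ((1 - 3*z) * (1 - z)**2)

print(sp.apart(A, z))            # partial fraction decomposition (step 5)
print(sp.series(A, z, 0, 6))     # first power series coefficients (step 6)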
Example: a_n = 2a_{n−1}, a_0 = 1

1. Set up generating function:

    A(z) = ∑_{n≥0} a_n z^n

2. Transform right hand side so that recurrence can be plugged in:

    A(z) = a_0 + ∑_{n≥1} a_n z^n

   Plug in:

    A(z) = 1 + ∑_{n≥1} (2a_{n−1}) z^n
Example: a_n = 2a_{n−1}, a_0 = 1

3. Transform right hand side so that infinite sums can be replaced by A(z) or by a simple function:

    A(z) = 1 + ∑_{n≥1} (2a_{n−1}) z^n
         = 1 + 2z ∑_{n≥1} a_{n−1} z^{n−1}
         = 1 + 2z ∑_{n≥0} a_n z^n
         = 1 + 2z · A(z)

4. Solve for A(z):

    A(z) = 1/(1 − 2z)
Example: a_n = 2a_{n−1}, a_0 = 1

5. Rewrite f(z) as a power series:

    ∑_{n≥0} a_n z^n = A(z) = 1/(1 − 2z) = ∑_{n≥0} 2^n z^n

Hence, a_n = 2^n.
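A short sanity check (our own Python sketch, not from the lecture) that the recurrence indeed produces the coefficients 2^n of 1/(1 − 2z):

# Iterate a_n = 2 a_{n-1} with a_0 = 1 and compare to the closed form 2^n.
def recurrence(N):
    a = [1]
    for n in range(1, N):
        a.append(2 * a[n - 1])
    return a

N = 10
assert recurrence(N) == [2**n for n in range(N)]
print(recurrence(N))   # [1, 2, 4, 8, ..., 512]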
Example: a_n = 3a_{n−1} + n, a_0 = 1

1. Set up generating function:

    A(z) = ∑_{n≥0} a_n z^n
Example: a_n = 3a_{n−1} + n, a_0 = 1

2./3. Transform right hand side:

    A(z) = ∑_{n≥0} a_n z^n
         = a_0 + ∑_{n≥1} a_n z^n
         = 1 + ∑_{n≥1} (3a_{n−1} + n) z^n
         = 1 + 3z ∑_{n≥1} a_{n−1} z^{n−1} + ∑_{n≥1} n z^n
         = 1 + 3z ∑_{n≥0} a_n z^n + ∑_{n≥0} n z^n
         = 1 + 3z A(z) + z/(1 − z)^2
Example: a_n = 3a_{n−1} + n, a_0 = 1

4. Solve for A(z):

    A(z) = 1 + 3z A(z) + z/(1 − z)^2

gives

    A(z) = ((1 − z)^2 + z) / ((1 − 3z)(1 − z)^2) = (z^2 − z + 1) / ((1 − 3z)(1 − z)^2)
Example: a_n = 3a_{n−1} + n, a_0 = 1

5. Write f(z) as a formal power series:

We use partial fraction decomposition:

    (z^2 − z + 1) / ((1 − 3z)(1 − z)^2) = A/(1 − 3z) + B/(1 − z) + C/(1 − z)^2

This gives

    z^2 − z + 1 = A(1 − z)^2 + B(1 − 3z)(1 − z) + C(1 − 3z)
                = A(1 − 2z + z^2) + B(1 − 4z + 3z^2) + C(1 − 3z)
                = (A + 3B)z^2 + (−2A − 4B − 3C)z + (A + B + C)
Example: a_n = 3a_{n−1} + n, a_0 = 1

5. Write f(z) as a formal power series:

This leads to the following conditions:

    A + B + C = 1
    2A + 4B + 3C = 1
    A + 3B = 1

which gives

    A = 7/4,  B = −1/4,  C = −1/2
Example: a_n = 3a_{n−1} + n, a_0 = 1

5. Write f(z) as a formal power series:

    A(z) = 7/4 · 1/(1 − 3z) − 1/4 · 1/(1 − z) − 1/2 · 1/(1 − z)^2
         = 7/4 · ∑_{n≥0} 3^n z^n − 1/4 · ∑_{n≥0} z^n − 1/2 · ∑_{n≥0} (n + 1) z^n
         = ∑_{n≥0} ( 7/4 · 3^n − 1/4 − 1/2 (n + 1) ) z^n
         = ∑_{n≥0} ( 7/4 · 3^n − 1/2 n − 3/4 ) z^n

6. This means a_n = 7/4 · 3^n − 1/2 n − 3/4.
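A quick verification of step 6 (our own Python sketch, not from the lecture), comparing the recurrence against the closed form with exact rational arithmetic:

from fractions import Fraction

# Iterate a_n = 3 a_{n-1} + n with a_0 = 1.
def recurrence(N):
    a = [Fraction(1)]
    for n in range(1, N):
        a.append(3 * a[n - 1] + n)
    return a

# Closed form 7/4 * 3^n - n/2 - 3/4 from step 6.
def closed_form(n):
    return Fraction(7, 4) * 3**n - Fraction(n, 2) - Fraction(3, 4)

N = 10
assert recurrence(N) == [closed_form(n) for n in range(N)]
print([int(x) for x in recurrence(N)])   # 1, 4, 14, 45, 139, ...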
6.5 Transformation of the Recurrence

Example 9

    f_0 = 1
    f_1 = 2
    f_n = f_{n−1} · f_{n−2} for n ≥ 2 .

Define

    g_n := log f_n .

Then

    g_n = g_{n−1} + g_{n−2} for n ≥ 2
    g_1 = log 2 = 1 (for log = log_2), g_0 = 0
    g_n = F_n (n-th Fibonacci number)
    f_n = 2^{F_n}
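A small check of Example 9 (our own Python sketch, not part of the lecture): the multiplicative recurrence indeed produces f_n = 2^{F_n} with F_0 = 0, F_1 = 1:

# Iterate the multiplicative recurrence f_n = f_{n-1} * f_{n-2}.
def f_values(N):
    f = [1, 2]
    for n in range(2, N):
        f.append(f[n - 1] * f[n - 2])
    return f

# Fibonacci numbers with F_0 = 0, F_1 = 1.
def fib(N):
    F = [0, 1]
    for n in range(2, N):
        F.append(F[n - 1] + F[n - 2])
    return F

N = 12
assert f_values(N) == [2**Fn for Fn in fib(N)]
print(f_values(N)[:8])   # 1, 2, 2, 4, 8, 32, 256, 8192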
6.5 Transformation of the Recurrence

Example 10

    f_1 = 1
    f_n = 3 f_{n/2} + n   for n = 2^k, k ≥ 1 .

Define

    g_k := f_{2^k} .

Then:

    g_0 = 1
    g_k = 3 g_{k−1} + 2^k, k ≥ 1
6 Recurrences

We get

    g_k = 3 g_{k−1} + 2^k
        = 3 [3 g_{k−2} + 2^{k−1}] + 2^k
        = 3^2 g_{k−2} + 3 · 2^{k−1} + 2^k
        = 3^2 [3 g_{k−3} + 2^{k−2}] + 3 · 2^{k−1} + 2^k
        = 3^3 g_{k−3} + 3^2 · 2^{k−2} + 3 · 2^{k−1} + 2^k
        = 2^k · ∑_{i=0}^{k} (3/2)^i
        = 2^k · ((3/2)^{k+1} − 1) / (1/2)
        = 3^{k+1} − 2^{k+1}
6 Recurrences

Let n = 2^k:

    g_k = 3^{k+1} − 2^{k+1}, hence

    f_n = 3 · 3^k − 2 · 2^k
        = 3 (2^{log 3})^k − 2 · 2^k
        = 3 (2^k)^{log 3} − 2 · 2^k
        = 3 n^{log 3} − 2n .
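A final consistency check for Example 10 (our own Python sketch, not part of the lecture): iterating g_k = 3 g_{k−1} + 2^k and comparing to f_n = 3 n^{log 3} − 2n for n = 2^k, written as 3 · 3^k − 2 · 2^k to stay in integer arithmetic:

# g[k] stores f_{2^k}; iterate g_k = 3 g_{k-1} + 2^k with g_0 = 1.
def g_values(K):
    g = [1]
    for k in range(1, K):
        g.append(3 * g[k - 1] + 2**k)
    return g

# Closed form 3 * n^{log2 3} - 2n with n = 2^k, i.e. 3 * 3^k - 2 * 2^k.
def closed_form(k):
    return 3 * 3**k - 2 * 2**k

K = 12
assert g_values(K) == [closed_form(k) for k in range(K)]
print(g_values(K)[:6])   # 1, 5, 19, 65, 211, 665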