*(Guest post by Steven J Hsieh)*

Today in lecture, we finished the introduction to Chapter 2 that we started last lecture.

We defined an efficient algorithm as one whose running time scales well with input size: if the input size increases by a constant factor, the running time should also increase by only a constant factor. Polynomial running times are a good example of this.

We also defined the worst-case runtime of an algorithm: T(n), the worst-case runtime for inputs of size n, is the maximum number of steps the algorithm takes over all inputs of size n.

So the main points that we covered in lecture 10 include:

1. Definitions of asymptotic notations

2. Properties of asymptotic notations

We learned 3 notations: Big O, Ω, and θ. Defined colloquially and somewhat vaguely, O acts like a <= (at most), Ω acts like a >= (at least), and θ acts like an = (roughly equal).

More precise definitions are as follows:

O -> f(n) is O(g(n)) if there exist n0 >= 1 and c > 0 such that for all n >= n0, f(n) <= c*g(n)

(f grows no faster than g)

Ω -> f(n) is Ω(g(n)) if there exist n0 >= 1 (possibly different from above) and e > 0 such that for all n >= n0, f(n) >= e*g(n)

(f grows no slower than g)

θ -> f(n) is θ(g(n)) if it is both O(g(n)) and Ω(g(n))

(f and g grow at approximately the same rate)
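One way to build intuition for these definitions is to spot-check a claimed pair of witnesses (c, n0) numerically. The helper below is a hypothetical sketch, not anything from lecture, and checking a finite range is of course not a proof — it can only catch witnesses that are wrong:

```python
def satisfies_big_o(f, g, c, n0, n_max=10_000):
    """Spot-check the Big-O definition: f(n) <= c*g(n) for all n0 <= n <= n_max.

    A True result over a finite range is evidence, not a proof; a False
    result shows the chosen witnesses c, n0 definitely do not work.
    """
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# f(n) = 3n + 10 is O(n) with witnesses c = 4, n0 = 10, since 3n + 10 <= 4n for n >= 10.
print(satisfies_big_o(lambda n: 3 * n + 10, lambda n: n, c=4, n0=10))  # True
```

The same helper with the inequality flipped would check the Ω definition.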

An example would be:

100n^2 + 51n + 2 <= 100n^2 + 51n^2 + 2n^2 = 153n^2 for n >= 1, so it is O(n^2) with n0 = 1 and c = 153

An example for Ω would be:

n(n+2)/2 = (n^2)/2 + n/2 >= (n^2)/2, so it is Ω(n^2) with n0 = 1 and e = 1/2

Now, it is also the case that n(n+2)/2 is Ω(n), but Ω(n^2) is a tighter bound.
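The witnesses from both worked examples can be sanity-checked over a finite range (again, a spot check rather than a proof):

```python
# The O(n^2) example: 100n^2 + 51n + 2 <= 153*n^2 with n0 = 1, c = 153.
f = lambda n: 100 * n**2 + 51 * n + 2
g = lambda n: n**2
assert all(f(n) <= 153 * g(n) for n in range(1, 10_001))

# The Ω(n^2) example: n(n+2)/2 >= (1/2)*n^2 with n0 = 1, e = 1/2.
h = lambda n: n * (n + 2) / 2
assert all(h(n) >= 0.5 * g(n) for n in range(1, 10_001))
```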

We also defined some properties for this notation:

Property 1: If f(n) is θ(g(n)), then g(n) is θ(f(n))

Property 2: If f(n) is O(g(n)) then h(n)f(n) is O(h(n)g(n))

Property 3: If f1(n), …, ft(n) (for a constant t) are all O(h(n)), then f1(n) + … + ft(n) is O(h(n))
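Property 3 works because the sum can reuse a single shared witness: if each f_i(n) <= c_i * h(n) beyond some n0, their sum is at most (c_1 + … + c_t) * h(n), and that is still a constant when t is constant. A small numerical sketch (with made-up functions and constants) of that argument:

```python
# Three functions, each O(n) with its own constant, valid for n >= 100.
fs = [lambda n: 2 * n, lambda n: 3 * n + 5, lambda n: n + 100]
cs = [2, 4, 2]  # per-function witnesses: 2n <= 2n, 3n+5 <= 4n, n+100 <= 2n
h = lambda n: n
n0 = 100

# The sum is bounded by the sum of the constants times h(n) — one shared witness.
for n in range(n0, 1000):
    assert sum(f(n) for f in fs) <= sum(cs) * h(n)
```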

To demonstrate these properties, we found the runtime of a maximum search algorithm on an unsorted list.

Input: A[1]….A[n] unsorted

Output: max

Temp <- A[1] ————> O(1)

for i = 2…n ————> O(n-1) -> O(n)

if A[i] > Temp ————> O(1)

Temp <- A[i] ————> O(1)

i++ ————> O(1)

Output Temp ————> O(1)
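The pseudocode above translates directly into a runnable linear scan; here is one way to write it in Python (using 0-based indexing instead of the 1-based indexing in the notes):

```python
def find_max(a):
    """Linear scan for the maximum of a non-empty unsorted list."""
    temp = a[0]                 # Temp <- A[1]           O(1)
    for i in range(1, len(a)):  # for i = 2…n            body runs n-1 times
        if a[i] > temp:         # if A[i] > Temp         O(1)
            temp = a[i]         # Temp <- A[i]           O(1)
    return temp                 # Output Temp            O(1)

print(find_max([3, 7, 1, 9, 4]))  # 9
```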

Each instruction in the loop body is O(1), so by property 3 the whole body is O(1). Since the loop is executed n-1 times,

it contributes O(n-1) * O(1), which is O(n) * O(1), which is O(n). Setting Temp to A[1] is O(1), and outputting Temp is O(1), so the total is:

Overall: O(n) + O(1) + O(1) -> O(n) + O(n) + O(n) = O(n)

Now, by the definition of Ω, each O(1) step in the algorithm also takes at least constant time, so it is Ω(1). And since you must check every single item in the list, the loop must execute n-1 times, so it is Ω(n-1), or Ω(n). The total is Ω(n) + Ω(1) + Ω(1), so the algorithm is also Ω(n).

This shows that the algorithm is θ(n).
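One way to make the θ(n) conclusion concrete is to count the comparisons the scan actually performs: this instrumented variant (a sketch, not from lecture) makes exactly n-1 comparisons on every input of size n, matching both the O(n) upper bound and the Ω(n) lower bound:

```python
def find_max_counted(a):
    """Max scan that also counts comparisons, to make the Θ(n) bound concrete."""
    temp, comparisons = a[0], 0
    for i in range(1, len(a)):
        comparisons += 1   # one comparison per loop iteration
        if a[i] > temp:
            temp = a[i]
    return temp, comparisons

for n in (10, 100, 1000):
    _, steps = find_max_counted(list(range(n)))
    assert steps == n - 1  # exactly n-1 comparisons, regardless of input order
```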
