Asymptotics

Algorithms and Data Structures


Growth of functions

The more logs a term contains, the slower it grows. Large constants in front of terms push the crossover point out to much larger input sizes.

Example: Growth Rate

  • Substituting in values allows you to find the crossover point (see the sketch below)
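As a quick illustration, here is a minimal Python sketch of this substitution approach; the particular functions `f` and `g` and the constant 1000 are illustrative choices, not taken from the notes:

```python
# Illustrative pair: a linear function with a large constant
# versus a plain quadratic. f dominates g for small n, but the
# quadratic overtakes it once n is large enough.
def f(n):
    return 1000 * n   # large constant, slow growth

def g(n):
    return n * n      # small constant, fast growth

# Substitute increasing values until g(n) exceeds f(n).
n = 1
while g(n) <= f(n):
    n += 1
print(f"crossover at n = {n}")   # crossover at n = 1001
```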

Time complexity

Definition

The time complexity of an algorithm can be expressed in terms of the number of basic operations used by the algorithm when the input has a particular size.

Examples of basic operations are:

  • additions

  • multiplications

  • comparisons of two numbers (tends to be the slowest of the basic operations)

  • swaps

  • assignments of values to variables

The space complexity of an algorithm is expressed in terms of the memory required by the algorithm for an input of a particular size. We will mainly be concerned with time complexity.

Example: Evaluation of Polynomials
Example: Sorting
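The first of these is a classic basic-operation count. A minimal Python sketch (illustrative, not the notes' original example) comparing the naive method with Horner's method, counting additions and multiplications:

```python
def naive_eval(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n term by term, counting operations."""
    adds = mults = 0
    total = 0
    for i, a in enumerate(coeffs):
        term = a
        for _ in range(i):        # build a_i * x^i by repeated multiplication
            term *= x
            mults += 1
        total += term
        adds += 1
    return total, adds, mults     # about n additions, n(n+1)/2 multiplications

def horner_eval(coeffs, x):
    """Same polynomial via Horner's rule: one add and one mult per step."""
    adds = mults = 0
    total = 0
    for a in reversed(coeffs):
        total = total * x + a
        adds += 1
        mults += 1
    return total, adds, mults     # about n additions and n multiplications
```

Counting multiplications shows the naive method is quadratic in the degree while Horner's method is linear.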

Worst case time complexity

Definition

The worst case time complexity of an algorithm can be expressed in terms of the largest number of basic operations used by the algorithm when the input has a particular size

More on worst case time complexity

  • Worst case analysis tells us how many operations an algorithm requires to guarantee that it will produce a solution (a concrete count for linear search is sketched after this list)

  • The worst-case time analysis is a standard way to estimate the efficiency of algorithms. Usually time complexity means worst case time complexity

  • Another important type of complexity analysis is called average case analysis. Here we are interested in the average number of operations over all inputs of a given size

  • It is difficult to compute the exact number of operations

  • Usually we don’t need it. It is sufficient to estimate this number, i.e. to give bounds

  • We are more interested in upper bounds for the worst case analysis

  • These bounds should give us the possibility to estimate growth of the number of operations when the input size increases

  • It is important to estimate the number of operations when the input size is large
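To make the worst-case count concrete, here is a minimal sketch (illustrative, not from the notes) that counts comparisons in linear search; the worst case of n comparisons occurs when the target is absent or in the last position:

```python
def linear_search(items, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1          # one comparison of two values per step
        if item == target:
            return i, comparisons
    return -1, comparisons

print(linear_search([4, 8, 15, 16], 4))    # best case:  (0, 1)
print(linear_search([4, 8, 15, 16], 99))   # worst case: (-1, 4)
```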

Big O

Definition

Let f(x) and g(x) be functions from the set of integers or the set of real numbers to the set of real numbers. We say that $f(x)$ is $\mathcal{O}(g(x))$ if there are constants C and k such that

$$| f ( x ) | \leq C \cdot | g ( x ) |$$

whenever $x\geqslant k$

More on Big O

  • The definition is introduced for general functions. If we consider time complexity functions all functions will have positive values, and we do not have to be concerned about the absolute value signs.

  • The definition says that after a certain point, namely after k, the absolute value of $f(x)$ is bounded by C times the absolute value of $g(x)$

  • In terms of time complexities $f(x)$ is no worse than $C\cdot g(x)$ for all relatively large input sizes x

  • C is a fixed constant usually depending on the choice of k. We are not allowed to increase C as x increases

  • The constants C and k in the definition of big-$\mathcal{O}$ are called the witnesses to the relationship $f(x)$ is $\mathcal{O}(g(x))$

  • If there is a pair of witnesses to the relationship $f(x)$ is $\mathcal{O}(g(x))$, then there are infinitely many pairs of witnesses to that relationship

  • Indeed, if C and k are one pair of witnesses, then any pair C’ and k’, where $C\leqslant C'$ and $k\leqslant k'$, is also a pair of witnesses

  • To establish that $f(x)$ is $\mathcal{O}(g(x))$ we need only one pair of witnesses to this relationship. (We can be “generous”, i.e., we do not have to look for the best values of C and k)

Examples

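The worked examples are not reproduced here; the following is a representative one showing how witnesses are found. To show that $f(x)=x^2+2x+1$ is $\mathcal{O}(x^2)$, note that for $x\geqslant 1$ we have $2x\leqslant 2x^2$ and $1\leqslant x^2$, so

$$x^2+2x+1\leqslant x^2+2x^2+x^2=4x^2$$

whenever $x\geqslant 1$. Hence $C=4$ and $k=1$ are witnesses.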

Summary

| Big-$\mathcal{O}$ form | Name |
|---|---|
| $\mathcal{O}(1)$ | constant |
| $\mathcal{O}(\log n)$ | logarithmic |
| $\mathcal{O}(n)$ | linear |
| $\mathcal{O}(n\log n)$ | n log n |
| $\mathcal{O}(n^2)$ | quadratic |
| $\mathcal{O}(n^3)$ | cubic |
| $\mathcal{O}(n^k)$ | polynomial |
| $\mathcal{O}(c^n)$ | exponential |

More examples

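Again the originals are omitted; a representative negative example: $x^2$ is not $\mathcal{O}(x)$. Suppose it were, with witnesses C and k. Then $x^2\leqslant C\cdot x$, i.e. $x\leqslant C$, for all $x>k$, which is impossible because x can be arbitrarily large. So no pair of witnesses exists.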

Sum and Product rules

The sum rule

If $f_1(x)$ is $\mathcal{O}(g_1(x))$ and $f_2(x)$ is $\mathcal{O}(g_2(x))$, then $f_1(x)+f_2(x)$ is $\mathcal{O}(\max \{|g_1(x)|,|g_2(x)|\})$.

In other words, given two functions $f_1(x)$ and $f_2(x)$ whose $\mathcal{O}$ estimates we know, the big-$\mathcal{O}$ of their sum is the larger of the two estimates.

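A representative example: let $f(x)=3x\log x+5x^2+7$. The first term is $\mathcal{O}(x\log x)$ and the remaining terms are $\mathcal{O}(x^2)$ and $\mathcal{O}(1)$; since $x^2$ has the fastest growth, the sum rule gives that $f(x)$ is $\mathcal{O}(x^2)$.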

The product rule

If $f_1(x)$ is $\mathcal{O}(g_1(x))$ and $f_2(x)$ is $\mathcal{O}(g_2(x))$, then $f_1(x)\cdot f_2(x)$ is $\mathcal{O}(g_1(x)\cdot g_2(x))$

When multiplying two functions $f_1$ and $f_2$, the $\mathcal{O}$ of the product is the product of the $\mathcal{O}$s of the two functions.

Let $C_i$ and $k_i$ be witness pairs for $f_i(x)$ is $\mathcal{O}(g_i(x))$, for i=1,2

Let $k=\max\{k_1,k_2\}$ and $C=C_1\cdot C_2$. Then for $x>k$ we have

$$|f_1(x)\cdot f_2(x)| = |f_1(x)| \cdot |f_2(x)| \leq C_1 \cdot |g_1(x)| \cdot C_2 \cdot |g_2(x)| = C \cdot |g_1(x)\cdot g_2(x)|$$
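A representative example: if $f_1(x)=3x+1$ is $\mathcal{O}(x)$ and $f_2(x)=2\log x+5$ is $\mathcal{O}(\log x)$, then the product rule gives that $f_1(x)\cdot f_2(x)$ is $\mathcal{O}(x\log x)$. This is the typical shape of a loop that runs $x$ times with a body costing $\mathcal{O}(\log x)$.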

Big Omega

The Big-O notation is very useful for finding reasonable upper bounds on growth rates, but it does not help much if we are interested in the best function that matches the growth rate. As a first step in this direction, we introduce a similar definition for lower bounds, called Big-Omega notation.

Definition

Let $f(x)$ and $g(x)$ be functions from the set of real numbers to the set of real numbers. We say that $f(x)$ is $\Omega(g(x))$ if there are positive constants C and k such that

$$|f(x)|\geqslant C\cdot |g(x)|$$

whenever $x>k$. Note that this implies that $f(x)$ is $\Omega(g(x))$ if and only if $g(x)$ is $\mathcal{O}(f(x))$
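A representative example: $f(x)=3x^2+2x+1$ is $\Omega(x^2)$, since $3x^2+2x+1\geqslant x^2$ for all $x>1$; the witnesses $C=1$ and $k=1$ suffice.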

Theta

Definition

Theta provides a tight bound on the growth rate. Let $f(x)$ and $g(x)$ be functions from the set of real numbers to the set of real numbers. We say that $f(x)$ is $\Theta(g(x))$ if $f(x)$ is $\mathcal{O}(g(x))$ and $f(x)$ is $\Omega(g(x))$

This is equivalent to saying that $f(x)$ is $\Theta(g(x))$ if $f(x)$ is $\mathcal{O}(g(x))$ and $g(x)$ is $\mathcal{O}(f(x))$

And this is equivalent to saying that there are constants $C_1,C_2$ and k such that $|f(x)|\leqslant C_1\cdot |g(x)|$ and $|g(x)|\leqslant C_2\cdot |f(x)|$ whenever $x\geqslant k$

Proving something is $\Theta$

To show that $f(x)=3x^2+2x+1$ is $\Theta(x^2)$, note that $1\leqslant x\leqslant x^2$ whenever $x\geqslant 1$, so

$$3x^2+2x+1\leqslant 3x^2+2x^2+x^2=6x^2$$

and clearly $3x^2\leqslant 3x^2+2x+1$. Combining the two bounds,

$$3x^2\leqslant 3x^2+2x+1\leqslant 6x^2 \quad \text{for } x\geqslant 1$$

so the witnesses $k=1$, $C_1=3$, $C_2=6$ work with $g(x)=x^2$.

Little-o notation

We would like to have a tool for disregarding or neglecting “smaller order” terms. Little-o notation gives us such a tool.

It is based on the concept of limits.

Saying that $f(x)$ is $o(g(x))$ means that $g$ grows strictly faster than $f$, whereas big-$\mathcal{O}$ also allows growth at the same rate.

Definition

Let $f(x)$ and $g(x)$ be functions from the set of real numbers to the set of real numbers. We say that $f(x)$ is $o(g(x))$ when:

$$\lim_{x\to\infty} \dfrac{f(x)}{g(x)}=0$$

Without the limit, this can be shown as:

$$o(g) = \{ f : \mathbb{N} \rightarrow \mathbb{N} \mid \forall C > 0 \ \exists k > 0 : C \cdot f(n) < g(n) \ \forall n \geq k \}$$

This says that no matter how large a constant C we choose, eventually

$$C\cdot f(n)<g(n)$$

that is, for every $C>0$ there is a $k>0$ beyond which the inequality holds for all $n\geqslant k$.

Showing that something is little-o is usually easiest with the limit formula.

The non-limit formula is best for proving that something isn’t little-o.
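A representative computation: $x^2$ is $o(x^3)$ because

$$\lim_{x\to\infty}\dfrac{x^2}{x^3}=\lim_{x\to\infty}\dfrac{1}{x}=0$$

whereas $3x^2+1$ is not $o(x^2)$, since the ratio tends to 3 rather than 0.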

More on little-o notation

The limit definition clearly shows that if $f(x)$ is $o(g(x))$, then $f(x)$ is $\mathcal{O}(g(x))$.

Indeed, suppose $f(x)$ is not $\mathcal{O}(g(x))$. Then for all positive constants C and k there exists a value of $x>k$ such that $|f(x)|>C\cdot |g(x)|$, so $\lim_{x\to\infty} \dfrac{f(x)}{g(x)}$ either does not exist or is not 0. Hence $f(x)$ is not $o(g(x))$.

Special Case - Sublinear Functions

A function is called sublinear if it grows slower than a linear function. With little o notation we can make this very precise.

Definition

A function $f(x)$ is called sublinear if $f(x)$ is $o(x)$, so if

$$\lim_{x\to\infty}\dfrac{f(x)}{x}=0$$

Examples

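Representative examples: $\sqrt{x}$ is sublinear, since $\lim_{x\to\infty}\dfrac{\sqrt{x}}{x}=\lim_{x\to\infty}\dfrac{1}{\sqrt{x}}=0$, and $\log x$ is sublinear, since $\lim_{x\to\infty}\dfrac{\log x}{x}=0$. A linear function such as $5x$ is not sublinear, because the ratio tends to 5.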

Little omega

$\omega$ is to $o$ what $\Omega$ is to $\mathcal{O}$

$$f=\omega(g) \qquad \Leftrightarrow \qquad g=o(f)$$

Definition

$$\omega(g) = \{ f : \mathbb{N} \rightarrow \mathbb{N} \mid \forall C > 0 \ \exists k > 0 : f(n) > C \cdot g(n) \ \forall n \geq k \}$$
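For instance, $n^2$ is $\omega(n)$: given any $C>0$, take $k=C+1$; then $n^2=n\cdot n>C\cdot n$ for all $n\geq k$. Equivalently, $n$ is $o(n^2)$.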

General Rules

The following results show you how to apply asymptotic notation more generally. You can use the rules without proving them

Theorem

If $f_1(x)$ is $o(g(x))$ and $f_2(x)$ is $o(g(x))$, then $f_1(x)+f_2(x)$ is $o(g(x))$

Theorem

If $f_1(x)$ is $\mathcal{O}(g(x))$ and $f_2(x)$ is $o(g(x))$, then $f_1(x)+f_2(x)$ is $\mathcal{O}(g(x))$

Theorem

If $f_1(x)$ is $\Theta(g(x))$ and $f_2(x)$ is $o(g(x))$, then $f_1(x)+f_2(x)$ is $\Theta(g(x))$
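For instance, $3x^2$ is $\Theta(x^2)$ and $5x\log x$ is $o(x^2)$, so by the last theorem $3x^2+5x\log x$ is $\Theta(x^2)$: lower-order terms can be disregarded.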

Summary for $g:\mathbb{ N }\rightarrow \mathbb{ N }$

Equivalent to $\leqslant$

$$\mathcal{O}(g) = \{ f : \mathbb{N} \rightarrow \mathbb{N} \mid \textcolor{blue}{\exists}\, C, k > 0 : \mathbf{f(n) \leq C \cdot g(n)} \ \forall n \geq k \}$$

Equivalent to $\geqslant$

$$\Omega(g) = \{ f : \mathbb{N} \rightarrow \mathbb{N} \mid \textcolor{blue}{\exists}\, C, k > 0 : \mathbf{f(n) \geq C \cdot g(n)} \ \forall n \geq k \}$$

Equivalent to =

$$\Theta(g) = \{ f : \mathbb{N} \rightarrow \mathbb{N} \mid \textcolor{blue}{\exists}\, C_1, C_2, k > 0 : \mathbf{C_1 \cdot g(n) \leq f(n) \leq C_2 \cdot g(n)} \ \forall n \geq k \}$$

Equivalent to $<$

$$o(g) = \{ f : \mathbb{N} \rightarrow \mathbb{N} \mid \textcolor{red}{\forall} C > 0 \ \exists k > 0 : \mathbf{C \cdot f(n) < g(n)} \ \forall n \geq k \}$$

Equivalent to $>$

$$\omega(g) = \{ f : \mathbb{N} \rightarrow \mathbb{N} \mid \textcolor{red}{\forall} C > 0 \ \exists k > 0 : \mathbf{f(n) > C \cdot g(n)} \ \forall n \geq k \}$$