If you choose the right data structure, the algorithm will be self-evident. The data structure dictates which algorithms you can apply.
A function from a set X to a set Y assigns each element of X exactly one element of Y. Functions (in the mathematical sense) have no side effects and depend only on their input.
Common growth functions:
f(x) = 42 (constant)
f(x) = 2x (linear)
f(x) = x^2 (quadratic)
f(x) = x^3 (cubic)
f(x) = log2(x) (logarithmic)
f(x) = 2^x (exponential)
f(x) = x! (factorial)
Factorial is defined recursively: N! = (N - 1)! * N for all N > 0, with base case 0! = 1.
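The recursive definition translates directly into code; a minimal sketch:

```python
def factorial(n: int) -> int:
    """Recursive factorial: N! = (N - 1)! * N, with base case 0! = 1."""
    if n == 0:
        return 1
    return factorial(n - 1) * n

print(factorial(5))  # 120
```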
Constant < log(n) < √n < n < n^2 < 2^n
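The ordering above can be seen numerically by evaluating each function at the same n; a small sketch (function names are labels chosen here for illustration):

```python
import math

# Growth functions from the ordering above, slowest to fastest.
growth = [
    ("constant", lambda n: 1),
    ("log n",    lambda n: math.log2(n)),
    ("sqrt n",   lambda n: math.sqrt(n)),
    ("n",        lambda n: n),
    ("n^2",      lambda n: n ** 2),
    ("2^n",      lambda n: 2 ** n),
]

# For a modestly large n the ordering is strict.
n = 64
values = [f(n) for _, f in growth]
print(values)  # each entry is larger than the previous one
```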
Logarithm: the inverse of exponentiation. If Y = B^P, then P = logB(Y).
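A quick numeric check of the inverse relationship (base and exponent values picked arbitrarily):

```python
import math

# Logarithm inverts exponentiation: if Y = B**P, then P = log base B of Y.
B, P = 2, 10
Y = B ** P               # 1024
print(math.log2(Y))      # recovers P
```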
Big O notation expresses time or space complexity (asymptotic complexity). It is a rough approximation of the number of operations needed to execute, relative to the number of elements. It describes which function best describes an algorithm's growth. Big O is always a function (or combination of functions) of n, where n is the number of elements.
f(n) = O(g(n)) means there are positive constants c and k such that 0 <= f(n) <= c*g(n) for all n >= k.
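The definition can be checked numerically for a concrete pair of functions; here f, g, c, and k are all chosen for illustration:

```python
# For f(n) = 3n + 5 and g(n) = n, the constants c = 4 and k = 5 work:
# 3n + 5 <= 4n exactly when n >= 5, so f(n) = O(n).
def f(n): return 3 * n + 5
def g(n): return n

c, k = 4, 5
ok = all(0 <= f(n) <= c * g(n) for n in range(k, 1000))
print(ok)
```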
Constant time, O(1): does not depend on the number of elements. Examples:
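Typical constant-time operations, as a sketch (the data here is made up for illustration):

```python
# Indexing into a list and looking up a key in a dict take the same
# time regardless of how many elements are stored.
items = list(range(1_000_000))
ages = {"ada": 36, "alan": 41}

x = items[500_000]   # array indexing: O(1)
a = ages["ada"]      # hash-table lookup: O(1) on average
print(x, a)
```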
Logarithmic time, O(log n): not much worse than constant time. Achieved by repeatedly dividing the problem (divide and conquer). The base of the logarithm is not relevant (especially as n approaches infinity), but is usually 2. Examples:
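Binary search is the classic logarithmic example: each step halves the search range, so a sorted list of n elements needs at most about log2(n) comparisons. A minimal sketch:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the range each iteration
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```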
Linear time, O(n): grows proportionally with the number of elements. Examples:
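A linear scan touches every element once; a sketch of O(n) work, assuming this section covers linear time:

```python
def contains(items, target):
    """Worst case: one comparison per element, i.e. n comparisons."""
    for item in items:
        if item == target:
            return True
    return False

print(contains([4, 8, 15, 16, 23, 42], 23))  # True
```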
Faster-growing classes (quadratic, exponential, factorial): avoid for large data sets, because the number of operations grows quickly.
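Naive recursive Fibonacci is a standard illustration of exponential growth: the call tree roughly doubles with each increase in n, so it is fine for small inputs and hopeless for large ones.

```python
def fib(n):
    """Naive recursion: roughly 2^n calls -- exponential time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```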