Know Thy Complexities!
In computer science, much of the theory turns out to have quite practical applications. In this article, we will come across terms such as 'Big O notation' and algorithm complexity analysis, concepts which many developers and learners find hard to understand, fear, or dismiss altogether as useless.
Essentially, algorithm complexity is just a way to formally measure how fast a program or algorithm runs, so it really is a pragmatic approach to matters of programming. In data structures and algorithms, we define complexity as a numerical function T(n): time versus the input size n.
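To make T(n) concrete, here is a minimal sketch that counts the basic operations a linear search performs as the input size n grows. The function and variable names are illustrative, not taken from any particular library:

```python
def linear_search(items, target):
    """Return (index, comparisons) for target in items, or (-1, comparisons)."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1  # count each element comparison as one unit of work
        if value == target:
            return i, comparisons
    return -1, comparisons

# When the target is absent, every element is inspected, so the
# operation count grows in direct proportion to n: T(n) = n.
for n in (10, 100, 1000):
    _, count = linear_search(list(range(n)), -1)
    print(n, count)  # count equals n in this case
```

Counting abstract operations like this, rather than wall-clock seconds, is what lets T(n) describe the algorithm itself rather than the machine it happens to run on.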
Here is where the circus begins:
T(n) refers to the maximum amount of time taken on any input of size n. A given algorithm will take different amounts of time on the same input depending on factors such as processor speed, instruction set, disk speed, and the compiler used. The time complexity of an algorithm is most commonly expressed using big O notation. Since an algorithm's performance may vary with different types of input data, we usually use the worst-case time complexity, because that is the maximum time taken over any input of size n.
For now, that's all you need to familiarize yourself with as far as algorithm terminology is concerned. Further emphasis will be put on calculating time complexity using mathematical approaches, a somewhat confusing concept which will form the content of my next blog post.