Time Complexity: A Guide To Building Better Software

3 minute read

written by Christopher Clemmons

What is time complexity?

Time complexity is a mathematical way of describing the running time of an algorithm inside of a software function. It is used to indicate how the runtime of an algorithm increases as the input size increases. This blog post will discuss Big O notation and how it affects software applications. We will also provide examples to help you understand this concept better.

This approach is an asymptotic way of understanding how a given algorithm will affect an application as inputs grow. It won't tell you exactly how long an application will take to complete a task, but it will give you a rough idea.

It’s similar to using your vehicle’s miles-per-gallon rating to estimate how much you’ll spend on fuel on a given trip. Not all trips or inputs are the same size, and Big O notation helps us understand time and memory costs within our application.

Below are four common time complexities in computer science.

Constant – O(1)

Constant time complexity means the operation takes the same amount of time no matter how large the input is. Examples include basic math operations, accessing an index of an array, pushing and popping a stack, and returning a value from a function.

Example:

// Constant time: a single addition, no matter how large the input value is
function addNum(num) {
  return num + 2
}

const result = addNum(19) // 21

Linear – O(n)

Linear time complexity is considered very efficient and is often the goal of algorithm designers, though it can be difficult to achieve for large and complex problems. When designing an algorithm, it is important to consider both the worst-case and best-case scenarios: the worst case is the input that takes the longest to process, while the best case is the input that can be processed most quickly. For linear-time algorithms that must visit every element, such as summing an array, the best and worst cases take roughly the same amount of time. This makes them consistent and predictable, which is valuable in many applications.

Some examples of operations with linear time complexity are:

  • For-Loops
  • .forEach() (JavaScript)
  • .map() (JavaScript)
  • .filter() (JavaScript)

I find linear-time algorithms most useful when you need to go through each value inside a data structure, as in the sketch below.
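
As a rough sketch (the sumArray function is illustrative, not from the original post), a single loop that visits every element once runs in linear time:

// Linear time: the loop body runs once per element, so the work grows
// in direct proportion to the array's length
function sumArray(numbers) {
  let total = 0
  for (const n of numbers) {
    total += n
  }
  return total
}

const sum = sumArray([1, 2, 3, 4]) // 10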

Quadratic – O(n²)

A quadratic time algorithm takes an input of size n and requires on the order of n² operations to complete. This means that as the input size grows, the number of operations grows in proportion to the square of the input size: doubling the input roughly quadruples the work. Quadratic-time algorithms typically arise from nested loops and are often used for tasks such as simple sorting and searching, where the input is relatively small and the number of operations stays manageable. For larger inputs, however, quadratic-time algorithms can become prohibitively slow, making them unusable for many practical applications.
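
As a rough sketch (hasDuplicate is an illustrative name, not from the original post), nested loops that compare every element with every other element show the quadratic pattern:

// Quadratic time: for each of the n elements, the inner loop can run
// up to n more times, so the total work grows with n * n
function hasDuplicate(items) {
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true
    }
  }
  return false
}

const duplicated = hasDuplicate([3, 1, 4, 1, 5]) // true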

Logarithmic – O(log n)

Logarithmic time complexity means that as the input size increases, the running time increases only by a constant amount: if the input size doubles, the runtime grows by a single additional step rather than doubling. Logarithmic time complexity is therefore very efficient and is often preferred to other types of time complexity.

However, it should be noted that logarithmic time complexity can still depend on the specific input values. The classic example, binary search, only works when the input is already sorted, so it is important to consider both the input size and how the input is organized when determining the time complexity of an algorithm.
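
As a minimal sketch (the binarySearch function below is illustrative, not taken from a library), here is a binary search over a sorted array; each step discards half of the remaining range, which is what gives the logarithmic runtime:

// Logarithmic time: each comparison discards half of the remaining
// search range, so doubling the array adds only one extra step
function binarySearch(sortedNumbers, target) {
  let low = 0
  let high = sortedNumbers.length - 1
  while (low <= high) {
    const mid = Math.floor((low + high) / 2)
    if (sortedNumbers[mid] === target) return mid
    if (sortedNumbers[mid] < target) low = mid + 1
    else high = mid - 1
  }
  return -1 // target not found
}

const index = binarySearch([2, 5, 8, 12, 21], 12) // 3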

Conclusion

You can certainly write code without grasping Big O. Keep in mind that it’s only an estimate: actual run times vary with real-world conditions like processing speed and the language used. That shouldn’t stop you from learning more about Big O notation, though, since it gives you helpful context for understanding what does and doesn’t matter when designing algorithms.
