Runtime to Algo Cheat Sheet - Getting Started / Overview

https://algo.monster/problems/runtime_summary

The Python code for amortized complexity does not update the capacity (and there is also a typo in the same variable name).
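Here is a minimal sketch of what I think the capacity-doubling append is supposed to look like (hypothetical class, not the site's actual code), with the missing capacity update included:

```python
class DynamicArray:
    """Minimal dynamic array: append is amortized O(1)."""

    def __init__(self):
        self.capacity = 1          # allocated slots
        self.size = 0              # used slots
        self.data = [None] * self.capacity

    def append(self, value):
        if self.size == self.capacity:
            # Double the capacity and copy everything over: O(n) for this
            # one call, but it happens rarely enough that the cost averages
            # out to O(1) per append.
            self.capacity *= 2     # <-- the update the site's code omits
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
        self.data[self.size] = value
        self.size += 1

arr = DynamicArray()
for x in range(10):
    arr.append(x)
print(arr.size, arr.capacity)  # 10 16
```

Ten appends trigger doublings at sizes 1, 2, 4, and 8, ending with capacity 16.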

Are you sure the answer to #5 is correct?
Time complexity is a function of input problem size, and I think N is NOT the size of the input here.
I think the right answer is O(2^len), and N can be ignored: log2(N) is just a single math operation on N, and that operation is not affected by the size of the input.

Hi, the question asks for the time complexity for the whole code block which has N as input.

Please add C# as well! It is one of the most used languages…

Working on it. Stay tuned 🙂

Looking at O(K log(N)), I reckon that's supposed to be O(N * log(K)): you have N values and your heap will have size K. You will eventually push every single element, which is O(N), and for every push you pay O(log K).

Let me know if I’m missing something.

Cheers!

Can anyone explain more on question 5? I thought the answer was O(2^log(N)) and cannot understand why it is O(N). Thanks


For #5, string append is O(n) in basically every language I'm aware of, since the entire string plus one extra spot needs to be copied to a new location in memory. This comes out to a total of O(n^2).
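To illustrate the point above (my own toy functions, not the cheat sheet's code): each immutable-string concatenation copies the whole string so far, so n appends cost 1 + 2 + … + n = O(n^2) in general, while the list-and-join idiom stays O(n). (CPython sometimes optimizes concatenation in place, but you can't rely on that across languages.)

```python
def build_quadratic(chars):
    # Each `s = s + c` copies the whole string built so far: O(len(s)).
    # Summed over n appends this is O(n^2).
    s = ""
    for c in chars:
        s = s + c
    return s

def build_linear(chars):
    # Appending to a list is amortized O(1); one final join is O(n).
    parts = []
    for c in chars:
        parts.append(c)
    return "".join(parts)

print(build_quadratic("abc") == build_linear("abc"))  # True
```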

Can someone explain why Question #3 is O(n^2 log(n)) and not just O(n^2)? I plotted the two graphs, and O(n^2) grows faster than O(n^2 log(n)). Since O(n^2) grows faster, doesn’t that mean, if given O(n^2) + O(n^2 log(n)) , we should reduce it to O(n^2)?

Nevermind actually, I think I understand. This website helped me out: https://www.bigocheatsheet.com/

I agree with you.

You are right, but O(2^log N) is O(N): since log and exp are inverse functions, exponentiating a logarithm cancels it.
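A quick numeric check of that identity, 2^(log2 N) = N:

```python
import math

for n in [8, 100, 1024]:
    # 2 ** log2(n) recovers n (up to floating-point rounding),
    # which is why O(2^(log N)) simplifies to O(N).
    print(n, 2 ** math.log2(n))
```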

“Pushing and poping elements from a stack”

Small typo on bullet point 3 of the O(1) list.

This is how I understood it. Someone please correct me if I am wrong. The two outer loops are O(n). Comparing the two inner loops: the first one has O(n) complexity, and the second one has O(n log(n)) complexity because of the sort. Since O(n log(n)) grows faster than O(n), the function gets an O(n^2 log(n)) complexity. I got it wrong initially as well.

^this

N = int(input())
ar = [[0] * N for _ in range(N)]
for i in range(N):               # N iterations
    for j in range(N):           # N iterations each -> reading input is O(N^2)
        ar[i][j] = int(input())

for i in range(N):               # N sorts...
    ar[i].sort()                 # ...each O(N log N) -> O(N^2 log N) dominates

Maybe?

I’m very new to this. When it says above ‘Typically for n ≤ 12’ does that mean that we would want to stay in that range for an algo with that time complexity? In other words if it is larger than that it is going to be very slow? Thanks

Yes, it means that if the constraint given by the problem is greater than 12, an algorithm with that time complexity will likely be too slow.

May I ask why, in question 4, the inner for loop is O(n) instead of O(log(n))? Since the for loop takes half the time to run.
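I can't quote question 4's exact code here, but the usual distinction is this (illustrative functions of my own): a loop whose index is *divided* by 2 each step takes O(log n) steps, while a loop that merely runs over *half* the range still takes O(n) steps, because constant factors like 1/2 drop out.

```python
def halving_steps(n):
    """Index divides by 2 each iteration -> O(log n) steps."""
    steps = 0
    i = n
    while i > 1:
        i //= 2
        steps += 1
    return steps

def half_range_steps(n):
    """Runs over n/2 elements -> still O(n); the 1/2 constant drops out."""
    steps = 0
    for _ in range(n // 2):
        steps += 1
    return steps

print(halving_steps(1024))     # 10
print(half_range_steps(1024))  # 512
```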