Runtime to Algo Cheat Sheet - Getting Started / Overview

Earl, your comment is mostly right, but there are languages where strings are mutable and have constant-time append (like arrays). Also, N is the original input size, which is reduced to log2(N) before the calls start. I did an analysis above where I explained why I think this is actually N*log2(N).

Hi! What's the meaning of "Typically for n > 10^x"? It's applied to every section, but I do not understand how it can be used.

Bump

I had the same question. That sentence is really unhelpful, and it's unclear how important the concept is to remember.

My understanding is it's referring to the input size.

So it's the size of the input (such as an array) up to which an algorithm with that runtime complexity is still "fast" or "good".

N <= 10^9 = 1,000,000,000

In other words, we could still expect an O(1) algorithm to be fast even with a billion inputs.
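
To make that concrete, here's a rough back-of-the-envelope check (my own sketch; the ~10^8 simple-operations-per-second budget is just a ballpark, not something from the article):

import math

N = 10**9
budget = 10**8  # rough guess at simple operations a machine does per second

# estimated operation counts for a few common runtimes at this input size
print("O(1):      ", 1)
print("O(log N):  ", int(math.log2(N)))
print("O(N):      ", N)
print("O(N log N):", int(N * math.log2(N)))
# anything at or below the budget is plausibly "fast enough";
# O(N^2) at this N would be ~10^18 operations, which clearly is not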

Hi, it means that if the problem gives constraints like this, then you can use them to deduce the algo. https://i.imgur.com/Kabbnci.png

There's a typo in the sixth practice question, C++ variant.

On line 3, it should read vector<int> bin; (not bool):

int N;
cin >> N;
vector<int> bin;
while (N) {
  bin.emplace_back(N % 2);
  N /= 2;
}

For example, if we had N O(1) tasks but only a single O(N) task, we could still consider the total O(N) instead of O(N^2) if N is large enough.

Is there a typo or am I misunderstanding? Why would the time ever be O(N^2)?

There are N tasks and a few of them are O(N); the rest are O(1). If we don't amortize and only consider the worst case, we may consider it to be N*N.
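
One way to see it with concrete numbers (just my own tally, not from the article): if N - 1 of the tasks cost 1 step each and a single task costs N steps, the total is (N - 1) + N = 2N - 1 steps.

# toy tally with made-up numbers
N = 1_000_000
cheap = (N - 1) * 1       # N - 1 tasks at O(1) each
expensive = N             # one task at O(N)
total = cheap + expensive
print(total, total / N)   # ~2N total, ~2 steps per task on average -> O(1) amortized, O(N) overall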

Can anyone explain what the "* Typically for n > 10⁸" etc. at the end of each section means?

What does this mean in Q5: "O(N) — O(2^log(N)) = O(N)"?

O(n^2) is still in a different family of functions. A good rule of thumb for the family of functions involves dropping the constant, like O(5n) => O(n), or O(n + 6) => O(n), but you keep any non-constant multipliers around. So O(2n * log n) => O(n * log n).

Even if O(n^2) grows faster, it's not the more accurate answer. Imagine plotting O(n^3) on a graph: you wouldn't assume O(n^3) is the answer when you actually got O(n^2), just because the former grows faster.
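
A quick way to convince yourself of the "family of functions" idea (just a toy check I wrote, not from the article): divide each candidate by n and see whether the ratio settles to a constant or keeps growing.

import math

for n in [10, 1000, 100000]:
    print(n,
          (5 * n) / n,                               # stays at 5 -> same family as n
          (n + 6) / n,                               # approaches 1 -> same family as n
          (n * n) / n,                               # keeps growing -> different family
          (2 * n * math.log2(n)) / (n * math.log2(n)))  # stays at 2 -> same family as n log n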

For #5, this syntax is a bit confusing: they're saying the solution is O(N), because O(2^log(N)) = O(N). The dash might be misleading; it's just a separator.

Here's what I'm understanding of #5 so far.

The number of nodes in a tree, based on the Permutations chapter, is 2^n, where n = height of the tree(?). Because we know the height of the tree is log(N) based on the input parameter, we get the second term, O(2^logN).

As someone said previously, the log2(N) and the exponential base 2 cancel each other out, because log2 is the inverse of 2^x; therefore we end up with O(N).
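
A quick numeric sanity check of that cancellation (toy example, not from the chapter):

import math

N = 1024
print(2 ** math.log2(N))   # 1024.0 -> 2^(log2 N) gives back N, since log2 is the inverse of 2^x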

If someone could confirm, that would help.

Ohhh I think I understand #5 now after much staring.

The chapter already explains that recursive algos are O(2^N); however, here the input is log2(N). The math then means 2^(log2(N)) cancels out to N.

If this is correct, I find the comment misleading… Admin pls confirm!

# assume the log2(N) function takes O(1) time

Also, is the amortization Python example meant to look like the below? I'm no expert on the syntax of the other languages, but they imply the vars should all be ints/arrays of ints. Plus they need to be declared somewhere persistent, either outside the function or passed in.

size = 0       # number of elements currently in the array
capacity = 1   # maximum number of elements the array can store
arr = [0]

def append(x):
    global size, capacity, arr
    if size == capacity:
        # double the capacity and copy the old elements over: O(size) work
        new_arr = [0] * (2 * capacity)
        for i in range(size):
            new_arr[i] = arr[i]
        arr = new_arr
        capacity *= 2
    arr[size] = x
    size += 1

Can someone explain why Q3 is not O(N^3 Log(N))?

int N = sc.nextInt();
int[][] ar = new int[N][N];
for (int i = 0; i < N; i++) {
  for (int j = 0; j < N; j++) {
    ar[i][j] = sc.nextInt();
  }
}
for (int i = 0; i < N; i++) {
  Arrays.sort(ar[i]);
}

To me it looks like the first two loops are N^2
And the final sort is N Log(N)
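
If it helps, here's how I'd tally that snippet (my reading, not an official answer): the two blocks run one after the other, so their costs add rather than multiply.

import math

N = 1000
reading = N * N                    # nested input loops fill the N x N array: O(N^2) work
sorting = N * (N * math.log2(N))   # N rows, each sorted in O(N log N): O(N^2 log N) work
print(reading + sorting)           # total = O(N^2 + N^2 log N) = O(N^2 log N),
                                   # not O(N^2) * O(N log N) = O(N^3 log N)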

If I am understanding a pattern here from the previous discussions, it seems like most people are looking for worst-case time complexities to answer the quiz questions at the end, but it looks like the answers are based on average time complexity? Is this a correct assumption?

For question 4, the answer states…

O(N^2) — The outer loop iterates O(N) times, the inner loops O(N / 2) times which is O(N), and swapping is O(1).

Isn't the inner loop O(log n) since it cuts n in half…?
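
For what it's worth, here's a toy contrast between the two loop shapes (not the question's actual code, just an illustration): a loop that runs N/2 times still does a linear number of iterations, whereas a loop that halves N itself on each pass only runs about log2(N) times.

N = 16

count_half = 0
for i in range(N // 2):   # runs N/2 times -> O(N)
    count_half += 1

count_halving = 0
n = N
while n > 1:              # n itself is halved each pass -> O(log N)
    n //= 2
    count_halving += 1

print(count_half, count_halving)   # 8 vs 4 for N = 16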

Can someone explain #5? I don't get it: if the string concatenation is O(1), as written in the JavaScript version's comment, then the time complexity should be O(n) and not O(n*log n).

The sentence in the solution is not clear, because it says strings are formed and then we form them again: "O(N log(N)) — By logarithm rules, O(2^log(N)) = O(N) strings are formed and each string takes O(log(N)) to form and print."

For #5, there are two parts to calculate:

  • How many function calls we need before reaching idx == len. It is O(2^len) = O(2^log(N)) = O(N).
  • For each call that reaches idx == len, it needs to print str, which is a string of length len, and that requires O(len) = O(log(N)) to print.

So the answer is O(N) * O(log(N)) = O(N log(N))
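
Here's a small sketch of how I picture that recursion (the function and variable names are my guesses, not the article's actual code): the input is len = log2(N), each call branches twice, so 2^len = N calls reach the base case, and each of those prints a string of length len = log2(N).

import math

def build(idx, length, s):
    if idx == length:
        print(s)                         # printing a string of size `length`: O(log N)
        return
    build(idx + 1, length, s + "0")      # two branches per level ...
    build(idx + 1, length, s + "1")      # ... so 2^length = N calls reach the base case

N = 16
build(0, int(math.log2(N)), "")          # prints the 4-bit strings 0000 .. 1111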
