Dynamic Programming

Recursive problems tend to follow very similar patterns. A good hint that a problem is recursive is that it can be built from sub-problems.

How to approach them?

By definition, recursive solutions are built from solutions to sub-problems. Often this simply means computing f(n) by adding something to, removing something from, or otherwise changing the solution for f(n-1). In other cases, you might solve the problem for the first half of the data set, then for the second half, and then merge the two results.
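To make the f(n)-from-f(n-1) pattern concrete, here is a small hypothetical sketch (not from the original text) that sums the first n positive integers, where f(n) is simply f(n - 1) plus n:

int sumFirstN(int n) {
    if (n == 0) return 0;        // base case: the empty sum
    return sumFirstN(n - 1) + n; // build f(n) from the sub-problem f(n - 1)
}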

Recursive vs Iterative Solutions

It is important to note that recursive solutions can be very inefficient when it comes to space. Each recursive call adds a new layer to the call stack, so if your recursive algorithm goes to a depth of N, the space required will be at least O(N).

But then why do we use recursion at all, instead of implementing every algorithm iteratively?

Mainly because some solutions are a lot cleaner when implemented recursively. Try implementing some of your usual recursive algorithms iteratively; you'll notice how much more difficult it is.

Now, let's explore the most common ways to divide a problem into sub-problems.

Bottom-Up Approach

This is the most common and intuitive approach. We start by knowing how to solve the problem for a simple case, such as an array with only one element. Then we figure out how to solve it for two elements, and so on.
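As a tiny hypothetical illustration (not from the original text), here is that idea applied to finding the maximum of an array: we know the answer for a single element and extend it one element at a time:

int max(int[] values) {
    int best = values[0]; // the problem is trivially solved for one element
    for (int i = 1; i < values.length; i++) {
        best = Math.max(best, values[i]); // extend the solution to one more element
    }
    return best;
}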

Top-Down Approach

For this approach, we think of ways to divide the problem into a number of sub-problems. This can be more complex, as it is less concrete.

Half and Half Approach

In this approach, we often look to divide the data set in half. Binary search, for instance, works this way: when we look for an element in a sorted array, we first figure out which half of the array contains the value, then we recurse and search within that half, and so on.
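As a hedged sketch (the method name and signature are illustrative, assuming a sorted int array; this code is not from the original text), a recursive binary search might look like this:

int binarySearch(int[] sorted, int target, int low, int high) {
    if (low > high) return -1; // base case: the value is not present
    int mid = low + (high - low) / 2;
    if (sorted[mid] == target) return mid;
    if (sorted[mid] < target) {
        return binarySearch(sorted, target, mid + 1, high); // recurse into the right half
    }
    return binarySearch(sorted, target, low, mid - 1); // recurse into the left half
}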

Dynamic Programming

The idea is to take a recursive algorithm and find the overlapping sub-problems, that is, the repeated calls. We then cache those results so we don't compute them again.

Let's look at a trivial example, Fibonacci, in a memoized form.

Your typical Fibonacci algorithm would look something like this:

int fibonacci(int i) {
    if (i == 0) return 0; // base case
    if (i == 1) return 1; // base case
    return fibonacci(i - 1) + fibonacci(i - 2);
}

Let's walk through how many calls are made here to calculate the final number:

[Figure: the tree of recursive fibonacci calls]

Can you recall what the runtime of this function is?

Tip: when analysing a recursive algorithm, drawing the recursive calls as a tree is a great way to figure out the runtime.

In case you need to refresh your memory, the Big-O Notation section contains recursion examples. The general formula for such a recursive algorithm is O(branches ^ depth).

Observe in the tree that the base cases are the leaves f(1) and f(0). The total number of nodes represents the runtime, since each call does only O(1) work outside of its recursive calls.

In the case of Fibonacci, each call makes two recursive calls (two branches) and the tree has depth N, so the node count is roughly 2^0 + 2^1 + ... + 2^N, giving a runtime of O(2 ^ N).

The problem

If you look at the recursion tree, you can see a lot of identical nodes: f(3) appears twice, whilst f(2) appears three times. We are recomputing these values every single time; if we saved them in a cache, we would avoid all the duplicated computations.

Top-Down Dynamic Programming - Memoization

If you think about it, we shouldn't be making more than O(N) calls, since there are only O(N) distinct values we can throw at our recursive function. This is what we call memoization: with a small modification, we can change the algorithm to run in O(N) time.

int memoized(int n) {
    int[] cache = new int[n + 1]; // a cache indexed from 0 to n, initialised to 0
    return memoized(n, cache);
}

private int memoized(int n, int[] cache) {
    if (n == 1 || n == 0) return n; // base cases
    if (cache[n] == 0) { // 0 means "not yet computed": compute and store the result
        cache[n] = memoized(n - 1, cache) + memoized(n - 2, cache);
    }
    return cache[n]; // return the cached result
}

Whilst the first solution may take over a minute to generate the 50th Fibonacci number on an ordinary computer, the memoized approach can generate the 10,000th number in a fraction of a millisecond. That is how powerful dynamic programming is. (Strictly speaking, Java's int cannot hold numbers that large; see the note at the end of this section.)

Let's see what the recursion tree looks like now:

[Figure: the recursion tree of memoized fibonacci, with cached values shown as black boxes]

The black boxes in the diagram represent the precomputed, cached values. Caching leaves far fewer nodes in the tree, even though the depth remains the same, and this time we have successfully achieved an O(N) runtime.

Bottom-Up Dynamic Programming

We can also implement this with a bottom-up approach. Think of doing the same work as the recursive approach, but in reverse.

First, we compute f(0) and f(1), which we already know from the base cases. Then we use those to compute f(2). Then we use the previous answers to compute f(3), then f(4), and so on.

int bottomUp(int n) {
    if (n == 1) return 1;
    if (n == 0) return 0;

    int[] cache = new int[n];
    cache[0] = 0; // f(0)
    cache[1] = 1; // f(1)

    for (int i = 2; i < n; i++) {
        cache[i] = cache[i - 1] + cache[i - 2]; // f(i) = f(i - 1) + f(i - 2)
    }
    return cache[n - 1] + cache[n - 2]; // f(n) = f(n - 1) + f(n - 2)
}

If we think about it, we only ever use the last two values in the cache and never touch the rest, so we could simply use two int variables to hold the previous computations instead of an array.

int simplifiedBottomUp(int n) {
    if (n == 1) return 1;
    if (n == 0) return 0;

    int a = 0; // f(i - 2)
    int b = 1; // f(i - 1)

    for (int i = 2; i < n; i++) {
        int c = a + b; // f(i)
        a = b;
        b = c;
    }
    return a + b; // after the loop, a = f(n - 2) and b = f(n - 1)
}
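One caveat on the performance claim above: Java's int overflows past fib(46) and long past fib(92), so actually producing the 10,000th Fibonacci number requires arbitrary-precision arithmetic. A minimal sketch of the same two-variable loop using java.math.BigInteger (not part of the original code) could look like this:

import java.math.BigInteger;

BigInteger bigFibonacci(int n) {
    BigInteger a = BigInteger.ZERO; // f(i - 2), starting at f(0)
    BigInteger b = BigInteger.ONE;  // f(i - 1), starting at f(1)
    for (int i = 2; i <= n; i++) {
        BigInteger c = a.add(b);    // f(i) = f(i - 1) + f(i - 2)
        a = b;
        b = c;
    }
    return n == 0 ? a : b;
}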

This might seem like an overly detailed explanation for such a simple problem, but truly understanding this process will make much more difficult problems far easier.
