5 Minutes to Tell Recursion from Dynamic Programming

Recursion and dynamic programming are where the difficulty of algorithm problems usually lies. The core of these algorithms is finding the state transition equation, i.e., expressing the solution of the original problem in terms of its subproblems. At algo.monster, we have summarized some learning points to help tell the two apart. This article is also shared at huffingtonmedia.com, where you can read it as well.

What is recursion?

Recursion means that a function calls itself within its own definition.

Recursion makes it easy to express computations that would otherwise need loops, such as the in-order (left-root-right) traversal of a binary tree. Recursion is also widely used in functional programming, which is becoming increasingly popular.
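For instance, the in-order traversal just mentioned is almost effortless to express recursively. The sketch below uses a small TreeNode class of our own; it is only an illustration, not code from any particular library.

```python
# A minimal sketch of recursive in-order (left-root-right) traversal.
# TreeNode and the sample tree below are illustrative assumptions.

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def inorder(node):
    """Visit the left subtree, then the node, then the right subtree."""
    if node is None:                 # base case: empty subtree
        return []
    return inorder(node.left) + [node.val] + inorder(node.right)

# Usage: a tiny tree with root 2, left child 1, right child 3
root = TreeNode(2, TreeNode(1), TreeNode(3))
print(inorder(root))                 # [1, 2, 3]
```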

Purely functional programming has no loops at all; every repetition is expressed through recursion.

A meaningful recursive algorithm decomposes the problem into similar subproblems of reduced size; when a subproblem shrinks to a trivial base case, we know its solution directly. We then connect the recursive calls to solve the original problem, which is the whole point of using recursion. Keep reminding yourself that recursion is not an algorithm in itself. It is a programming technique, the counterpart of iteration; we simply use it, most of the time, to decompose a problem.

For a problem solved by recursion, there must be a termination condition (this is what keeps the algorithm finite), i.e., the recursion must gradually reduce the problem size until it reaches a base case.

Practicing recursion

Change all the iterations you write into recursive form. Let’s start with this easy practice. For example, a program that outputs a string in reverse order is effortless to write with a loop, but can you write it with recursion? This exercise will get you used to expressing programs recursively.
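As a quick sketch of that exercise, here are both versions side by side; the function names are our own, and the recursive one is just one of several possible ways to write it.

```python
# Reversing a string iteratively and recursively. Illustrative sketch only.

def reverse_iterative(s):
    out = []
    for i in range(len(s) - 1, -1, -1):    # walk the string from the back
        out.append(s[i])
    return "".join(out)

def reverse_recursive(s):
    if len(s) <= 1:                        # termination condition
        return s
    return reverse_recursive(s[1:]) + s[0] # reverse the tail, then append the head

print(reverse_iterative("hello"))   # olleh
print(reverse_recursive("hello"))   # olleh
```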

If you are already familiar with recursion, then let’s move on to the next section.

Repeated computations in recursion

There can be a great deal of repeated computation in recursion, and a simple way to eliminate it is memoization: the recursion keeps a “record table” (such as a hash table or an array) of what has already been computed. When we encounter the same subproblem again, we return the recorded answer directly instead of computing it twice. The DP array in dynamic programming plays the same role as this “record table”.
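A minimal sketch of such a record table, using Fibonacci as our own example of a recursion full of repeated subproblems; the dict plays the role of the record table.

```python
# Memoized recursion: the "record table" here is a plain dict.

def fib(n, memo=None):
    if memo is None:
        memo = {}                    # record table: n -> fib(n)
    if n in memo:                    # already computed: return it directly
        return memo[n]
    if n <= 1:                       # base cases
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))   # 12586269025, with each subproblem computed only once
```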

What is dynamic programming?

If you are already comfortable with recursion, solving problems recursively is intuitive and the code is easy to write. So let’s focus on another issue – repeated computation. Analyze the recursion (try drawing the recursion tree) to see whether it recomputes the same subproblems as it reduces the size of the problem. If there is no repeated computation, there is no need for memoized recursion or dynamic programming at all. In essence, dynamic programming is still enumeration; it simply enumerates every possibility carefully.

But compared with brute-force enumeration, dynamic programming avoids repeated computation. Therefore, making sure the enumeration has no duplicates and no omissions is one of the critical points. Recursion stores its data on the function call stack, so if the recursion gets deep, it is easy to blow up the stack.
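As a small illustration of the stack issue (our own example, not from the article): CPython limits recursion depth to roughly 1000 frames by default, so a deep, non-memoized recursion fails long before the computation itself becomes a problem.

```python
# Deep recursion exhausts the call stack; this sketch only shows the failure mode.

import sys

def count_down(n):
    if n == 0:
        return 0
    return count_down(n - 1)

print(sys.getrecursionlimit())       # typically 1000 in CPython
try:
    count_down(100_000)              # far deeper than the default limit
except RecursionError as err:
    print("blew up the stack:", err)
```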

How do you work out the recursion and the dynamic programming?

Determining the base (boundary) conditions is usually relatively simple and can be mastered quickly by doing a few more problems.

Determining the state transition equation is more complicated, but fortunately the common patterns are fairly fixed. For example, the state of a single string is usually dp[i], the answer for the prefix ending at index i. For two strings, the state is usually dp[i][j], the answer for s1 ending at i and s2 ending at j. If you get stuck, draw a diagram and keep observing; your feel for these problems will improve.
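As one concrete instance of the two-string dp[i][j] pattern, here is a sketch of the longest common subsequence; the choice of problem is ours, and dp[i][j] is read as “the answer for the first i characters of s1 and the first j characters of s2”.

```python
# Longest common subsequence as an example of the dp[i][j] pattern.
# dp[i][j] = length of the LCS of s1[:i] and s2[:j].

def lcs(s1, s2):
    n, m = len(s1), len(s2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1              # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])   # drop one character
    return dp[n][m]

print(lcs("abcde", "ace"))   # 3  ("ace")
```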

As for enumerating the states, the transition equation determines the order in which to fill them when no rolling array is used. If you do use a rolling array, pay attention to how the states correspond before and after the compression.

Does dynamic programming use recurrence?

Typically, dynamic programming algorithms are built on recurrence relations involving optimal solutions, so the proof of correctness focuses primarily on proving why that recurrence relation is correct.

How do you develop recurrence relationships for dynamic programming problems?

Just as you would when developing a recursive algorithm:

1. Assume you have already solved the problem for all possible inputs i < n.

2. Find the solution for input n in terms of those solutions (e.g. f(n) = f(n-1) + f(n-2)).

3. Find the base case(s), that is, the inputs for which the problem is easy to solve. For example, f(0) = 1 and f(1) = 1.

You have now developed the recurrence relationship: it is just connecting the problem with its easier subproblems. To turn it into an actual dynamic programming algorithm, store the solutions of the subproblems in a table so they never have to be recomputed (see the sketch below).
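A minimal bottom-up sketch of exactly this recurrence, f(n) = f(n-1) + f(n-2) with f(0) = f(1) = 1; the table layout is just one straightforward way to store the subproblem answers.

```python
# Turning the recurrence into a bottom-up DP: store every subproblem in a table.

def f(n):
    table = [0] * (n + 1)
    table[0] = 1                     # base case
    if n >= 1:
        table[1] = 1                 # base case
    for i in range(2, n + 1):        # build up from the easy inputs
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(f(10))   # 89
```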

Why do I need to draw tables for dynamic programming?

Some people do not know why they need to draw a table; they just assume that a table is an unavoidable, necessary part of dynamic programming.

Dynamic programming is essentially the transformation of a large problem into smaller problems, relating the solution of the large problem to the solutions of the small ones. In other words, we compute the large problem from the small problems. In that sense it is the same as recursion, but dynamic programming works like a lookup table, which keeps the time and space costs down.

The purpose of drawing a table is to carry out the state transitions step by step. Each cell in the table is a small problem, and filling in the table is solving the problem.

Usually, the bottom-right corner of the table corresponds to the maximum size of the problem, which is exactly the size we want to solve.

For example, when we use dynamic programming to solve the knapsack problem, we are in fact constantly asking, based on the smaller subproblems A[i-1][w] (do not take item i) and A[i-1][w - w_i] + v_i (take item i): should we choose the item or not? The criterion is straightforward: whichever gives the larger value. So all we have to do is compute the value for both cases, choose and do not choose, take the maximum, and finally update the cell.
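Here is a sketch of that table-filling for the 0/1 knapsack; the variable names (weights, values, capacity) are ours, and each cell simply takes the better of “do not choose item i” and “choose item i”.

```python
# 0/1 knapsack: dp[i][w] = best value using the first i items with capacity w.

def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]                  # do not choose item i
            if weights[i - 1] <= w:                  # choosing item i is allowed
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]                           # bottom-right corner of the table

print(knapsack([1, 3, 4], [15, 20, 30], 4))   # 35 (take the items of weight 1 and 3)
```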

Most dynamic programming problems boil down to “choose” or “do not choose”, that is, to a choice. Most dynamic programming problems also admit a space optimization (the rolling array), which is the advantage of dynamic programming over traditional memoized recursion.
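And a sketch of the rolling-array version of the same knapsack: only one row is kept, and the capacity is scanned downwards so each item is still used at most once. This is the space optimization referred to above.

```python
# Rolling-array 0/1 knapsack: dp[w] reuses the previous row in place.

def knapsack_rolling(weights, values, capacity):
    dp = [0] * (capacity + 1)
    for i in range(len(weights)):
        # iterate capacity downwards so dp[w - weights[i]] still holds the
        # previous row's value, i.e. item i is taken at most once
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]

print(knapsack_rolling([1, 3, 4], [15, 20, 30], 4))   # 35, same answer, O(capacity) space
```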

To summarize:

When you learn dynamic programming, first try solving the problem with memoized recursion, then convert it to dynamic programming; practice this a few times to get a feel for it. After that, practice the rolling array. The technique is very useful and relatively simple.

The hard parts of dynamic programming are enumerating the states (with no duplicates and no omissions) and finding the state transition equation.

If you can only remember one phrase, remember this: recursion works backward from the result of the problem until the problem shrinks to a base case, while dynamic programming starts from the base cases and builds up, through the optimal substructure, to the full answer.