Prerequisites:
- Install Go using my guide
Introduction to Dynamic Programming
Dynamic Programming (DP) is a powerful algorithmic technique that solves problems by breaking them down into smaller overlapping subproblems. It stores the results of those subproblems and reuses them to solve larger instances of the problem. DP is widely used across many domains, including optimization, sequence alignment, and graph algorithms. This post explores the implementation of dynamic programming in Go in depth.
Core Principles of Dynamic Programming
Overlapping Subproblems
Overlapping subproblems are the smaller instances of a problem that recur many times within the larger computation. DP eliminates this repeated work by solving each such subproblem once, storing its result, and retrieving the stored answer whenever the same subproblem appears again, which avoids redundant computation and improves overall efficiency.
Optimal Substructure
Optimal substructure, the other key principle of DP, means that an optimal solution to the larger problem can be assembled from optimal solutions to its subproblems. Because optimal answers at the small scale compose into an optimal answer for the overall problem, DP can systematically build solutions for larger and more complex problems by breaking them into manageable subparts.
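To make these two ideas concrete, consider a naive recursive Fibonacci function (a small illustrative sketch, not from the original post): fib(n) is assembled from the optimal answers to fib(n-1) and fib(n-2), and the same calls such as fib(n-2) recur many times, which is exactly the redundancy DP removes.

// Naive recursion: correct, but fib(n-2), fib(n-3), ... are recomputed over and over,
// so the running time grows exponentially with n.
func fib(n int) int {
    if n <= 1 {
        return n
    }
    return fib(n-1) + fib(n-2)
}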
Implementing Dynamic Programming in Go
Memoization
Memoization optimizes computation by storing previously solved subproblem results in a data structure such as an array or a map. By caching computed values, subsequent calls for the same subproblem retrieve the stored solution instead of re-executing the calculation, which can reduce the time complexity dramatically.
Memoization is typically used in top-down dynamic programming and trades space for time. It pays off whenever identical subproblems recur, as in Fibonacci sequence calculations or other naturally recursive problems: the recursive structure is kept, but each result is computed only once.
// memo caches previously computed Fibonacci values keyed by n.
// It must be initialized; writing to a nil map would panic.
var memo = make(map[int]int)

func fibonacci(n int) int {
    // Return the cached result if this subproblem was already solved.
    if val, ok := memo[n]; ok {
        return val
    }
    // Base cases: fib(0) = 0, fib(1) = 1.
    if n <= 1 {
        return n
    }
    // Solve the two smaller subproblems, cache the result, and reuse it later.
    memo[n] = fibonacci(n-1) + fibonacci(n-2)
    return memo[n]
}
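A quick way to exercise the memoized version is a small driver in the same package (a minimal sketch; the input 40 is just illustrative):

package main

import "fmt"

func main() {
    // With memoization, each value from 2..40 is computed exactly once.
    fmt.Println(fibonacci(40)) // 102334155
}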
Tabulation
Tabulation is a bottom-up technique in dynamic programming that constructs solutions iteratively, commencing with the smallest subproblems and gradually progressing toward the larger problem. Unlike memoization, which uses recursion and storage of solved subproblems, tabulation directly computes and stores results in a table-like structure.
By systematically filling in values for each subproblem in a predefined order, this method eliminates redundancy and ensures that each computation depends only on previously computed smaller subproblems. Tabulation simplifies complex problems by iteratively solving smaller components, gradually building towards resolving the overarching problem, making it an efficient strategy for dynamic programming implementations.
func fibonacci(n int) int {
    if n <= 1 {
        return n
    }
    // dp[i] holds the i-th Fibonacci number; fill it bottom-up from the base cases.
    dp := make([]int, n+1)
    dp[0], dp[1] = 0, 1
    for i := 2; i <= n; i++ {
        dp[i] = dp[i-1] + dp[i-2]
    }
    return dp[n]
}
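Because each table entry depends only on the previous two, the table can be collapsed to two variables. A minimal sketch of this constant-space variant (the function name fibonacciO1 is ours, not from the original post):

func fibonacciO1(n int) int {
    if n <= 1 {
        return n
    }
    prev, curr := 0, 1
    for i := 2; i <= n; i++ {
        // Slide the two-value window forward instead of keeping the whole table.
        prev, curr = curr, prev+curr
    }
    return curr
}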
Examples of Dynamic Programming in Go
Longest Common Subsequence (LCS)
Given two sequences, find the longest subsequence present in both of them.
Below is a solution to this problem using DP in Go.
func longestCommonSubsequence(text1 string, text2 string) int {
    m, n := len(text1), len(text2)
    // dp[i][j] is the length of the LCS of text1[:i] and text2[:j].
    dp := make([][]int, m+1)
    for i := range dp {
        dp[i] = make([]int, n+1)
    }
    for i := 1; i <= m; i++ {
        for j := 1; j <= n; j++ {
            if text1[i-1] == text2[j-1] {
                // Matching characters extend the LCS of the shorter prefixes.
                dp[i][j] = dp[i-1][j-1] + 1
            } else {
                // Otherwise take the better of dropping a character from either string.
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])
            }
        }
    }
    return dp[m][n]
}
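Note that the max call relies on the built-in max function added in Go 1.21. On older versions you would need a small helper, and a quick sanity check might look like the sketch below (the example strings are ours; the driver assumes the function above lives in a main package with fmt imported):

// Needed only on Go versions before 1.21, where max is not built in.
func max(a, b int) int {
    if a > b {
        return a
    }
    return b
}

func main() {
    fmt.Println(longestCommonSubsequence("abcde", "ace")) // 3, the subsequence "ace"
}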
Advanced Dynamic Programming Techniques in Go
Dynamic Programming (DP) is a versatile technique that involves breaking down complex problems into simpler subproblems to optimize solutions. While basic DP involves memoization and tabulation, advanced DP techniques further enhance problem-solving capabilities by utilizing more intricate strategies.
1. Bitmask DP
Bitmask Dynamic Programming (DP) leverages bitwise operations and masks to represent states effectively in solving problems. This technique efficiently handles scenarios related to subsets, permutations, or combinations by using the binary representation of integers to denote the presence or absence of elements within a set.
By assigning bits to elements and manipulating these masks through bitwise operations like AND, OR, XOR, and shifts, Bitmask DP efficiently explores various states and their combinations. This approach proves invaluable in optimizing solutions for problems involving sets, subsets, permutations, or combinations, offering a compact and speedy means to handle state representations in dynamic programming paradigms.
Let us look at a solution to the Minimum Subset Sum Difference problem using DP in Go. The idea is to track which subset sums up to total/2 are reachable and pick the reachable sum closest to half of the total.
func minimumSubsetSumDifference(nums []int) int {
    total := 0
    for _, num := range nums {
        total += num
    }
    target := total / 2
    // dp[j] is true if some subset of nums sums exactly to j.
    dp := make([]bool, target+1)
    dp[0] = true
    for _, num := range nums {
        // Iterate downward so each number is used at most once.
        for j := target; j >= num; j-- {
            dp[j] = dp[j] || dp[j-num]
        }
    }
    // The best split uses the largest reachable sum i <= total/2.
    for i := target; i >= 0; i-- {
        if dp[i] {
            return total - 2*i
        }
    }
    return -1
}
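Strictly speaking, the boolean table above is not yet a bitmask. The same reachable-sums set can be packed into the bits of an integer, which is closer to the spirit of Bitmask DP. Here is a minimal sketch of that variant using math/big as an arbitrary-width bitset (our own adaptation, not from the original post):

import "math/big"

// minimumSubsetSumDifferenceBitset mirrors the boolean-table solution, but bit i of
// reachable is set when some subset of nums sums to i.
func minimumSubsetSumDifferenceBitset(nums []int) int {
    total := 0
    for _, num := range nums {
        total += num
    }
    reachable := big.NewInt(1) // only the empty-subset sum 0 is reachable initially
    for _, num := range nums {
        // Shifting left by num adds num to every currently reachable sum.
        shifted := new(big.Int).Lsh(reachable, uint(num))
        reachable.Or(reachable, shifted)
    }
    for i := total / 2; i >= 0; i-- {
        if reachable.Bit(i) == 1 {
            return total - 2*i
        }
    }
    return -1
}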
2. Convex Hull Optimization (CHT)
The convex hull of a set of points is the smallest convex polygon that encloses all of them, and computing it is a fundamental computational geometry task. The way the hull is built here shares the spirit of dynamic programming: the partial hull for the first k sorted points is extended to the hull for k+1 points, reusing the work already done instead of reconsidering all earlier points.
The Go example below processes a set of 2D points using Andrew's monotone chain algorithm. The points are sorted by x (with ties broken by y), and the lower and upper hulls are each built in a single pass; whenever a new point would create a non-convex turn, previously added points are popped from the partial hull. Joining the two chains yields the complete convex hull.
package main

import (
    "fmt"
    "sort"
)

type Point struct {
    X, Y int
}

// crossProduct returns the z-component of (a-o) x (b-o); a positive value means a counter-clockwise turn.
func crossProduct(o, a, b Point) int {
    return (a.X-o.X)*(b.Y-o.Y) - (a.Y-o.Y)*(b.X-o.X)
}

func convexHull(points []Point) []Point {
    n := len(points)
    if n <= 1 {
        return points
    }
    // Sort by x, breaking ties by y, so the chains can be built left to right.
    sort.Slice(points, func(i, j int) bool {
        return points[i].X < points[j].X || (points[i].X == points[j].X && points[i].Y < points[j].Y)
    })
    // Build the lower hull: pop points that would make a clockwise (or collinear) turn.
    lower := make([]Point, 0)
    for _, p := range points {
        for len(lower) >= 2 && crossProduct(lower[len(lower)-2], lower[len(lower)-1], p) <= 0 {
            lower = lower[:len(lower)-1]
        }
        lower = append(lower, p)
    }
    // Build the upper hull by scanning the sorted points in reverse.
    upper := make([]Point, 0)
    for i := len(points) - 1; i >= 0; i-- {
        p := points[i]
        for len(upper) >= 2 && crossProduct(upper[len(upper)-2], upper[len(upper)-1], p) <= 0 {
            upper = upper[:len(upper)-1]
        }
        upper = append(upper, p)
    }
    // Drop the last point of each chain (it is the first point of the other chain) and join them.
    upper = upper[:len(upper)-1]
    lower = lower[:len(lower)-1]
    lower = append(lower, upper...)
    return lower
}

func main() {
    points := []Point{
        {0, 3}, {2, 2}, {1, 1}, {2, 1}, {3, 0}, {0, 0}, {3, 3},
    }
    convexHullPoints := convexHull(points)
    fmt.Println("Convex Hull Points:", convexHullPoints)
}
3. Divide and Conquer DP
Divide and Conquer Dynamic Programming (DP) dissects a complex problem into smaller, self-contained subproblems, often recursively, solves each one independently, and combines the results to obtain the solution to the overall problem. Because each subproblem is solved only once and its answer is reused when larger, interrelated subproblems are combined, redundant computation is avoided and resources are used efficiently.
Below is a solution to the Burst Balloons problem using dynamic programming in Go, where dp[left][right] stores the maximum coins obtainable by bursting every balloon strictly between positions left and right.
func maxCoins(nums []int) int {
    // Pad the array with virtual balloons of value 1 at both ends.
    nums = append([]int{1}, nums...)
    nums = append(nums, 1)
    n := len(nums)
    // dp[left][right] is the best score for bursting all balloons strictly between left and right.
    dp := make([][]int, n)
    for i := range dp {
        dp[i] = make([]int, n)
    }
    for length := 2; length < n; length++ {
        for left := 0; left < n-length; left++ {
            right := left + length
            // k is the last balloon burst in the open interval (left, right).
            for k := left + 1; k < right; k++ {
                dp[left][right] = max(dp[left][right], nums[left]*nums[k]*nums[right]+dp[left][k]+dp[k][right])
            }
        }
    }
    return dp[0][n-1]
}
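As a quick check, the classic test input can be run like this (a minimal sketch assuming the function above sits in a main package with fmt imported; 167 is the expected answer for the well-known example [3,1,5,8]):

func main() {
    fmt.Println(maxCoins([]int{3, 1, 5, 8})) // 167
}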
4. Tree DP
Tree Dynamic Programming (DP) is a technique focused on solving dynamic programming problems on tree structures. Leveraging traversal methods like Depth-First Search (DFS) or Breadth-First Search (BFS), Tree DP efficiently computes solutions by exploring and analyzing nodes within the tree.
By employing DFS or BFS, Tree DP navigates the tree’s nodes, enabling efficient examination and computation of various subproblems. It involves strategically traversing the tree, considering parent-child relationships, and utilizing memoization or tabulation techniques to store and reuse computed results.
Through DFS, Tree DP systematically processes nodes, visiting deeper levels first, while BFS explores the tree level by level. These traversal strategies aid in efficiently solving problems that involve tree-related scenarios like finding the longest path, calculating subtree sums, determining optimal values considering parent-child relations, or analyzing node connections.
Tree DP’s utilization of DFS or BFS facilitates effective problem-solving on tree structures, providing a robust approach for dynamic programming challenges inherent to trees.
Let us walk through the solution to the well-known problem Maximum Product of Splitted Binary Tree using Tree DP in Go.
package main

import "fmt"

type TreeNode struct {
    Val   int
    Left  *TreeNode
    Right *TreeNode
}

func maxProduct(root *TreeNode) int {
    totalSum := calculateSum(root) // total sum of all node values in the tree
    var maxProduct int
    calculateProduct(root, totalSum, &maxProduct)
    return maxProduct % int(1e9+7)
}

// calculateSum returns the sum of all values in the subtree rooted at node.
func calculateSum(node *TreeNode) int {
    if node == nil {
        return 0
    }
    return node.Val + calculateSum(node.Left) + calculateSum(node.Right)
}

// calculateProduct returns the sum of the subtree rooted at node and, as a side effect,
// records the best product obtained by cutting the edge above any visited subtree.
func calculateProduct(node *TreeNode, totalSum int, maxProduct *int) int {
    if node == nil {
        return 0
    }
    leftSum := calculateProduct(node.Left, totalSum, maxProduct)
    rightSum := calculateProduct(node.Right, totalSum, maxProduct)
    subtreeSum := leftSum + rightSum + node.Val
    // Cutting the edge above this node splits the tree into subtreeSum and totalSum-subtreeSum.
    product := subtreeSum * (totalSum - subtreeSum)
    if product > *maxProduct {
        *maxProduct = product
    }
    return subtreeSum
}

func main() {
    // Sample tree
    root := &TreeNode{
        Val: 1,
        Left: &TreeNode{
            Val:   2,
            Left:  &TreeNode{Val: 4},
            Right: &TreeNode{Val: 5},
        },
        Right: &TreeNode{
            Val:  3,
            Left: &TreeNode{Val: 6},
        },
    }
    result := maxProduct(root)
    fmt.Println("Maximum product of splitted binary tree:", result) // prints 110 for this tree
}
Conclusion
Dynamic Programming is a powerful algorithmic technique that optimizes solutions by breaking down problems into smaller overlapping subproblems. In Go, employing memoization or tabulation helps efficiently solve problems, offering performance benefits by avoiding redundant computations.
Advanced Dynamic Programming techniques in Go provide powerful tools to solve complex optimization problems. Bitmask DP, Convex Hull Optimization, Divide and Conquer DP, and Tree DP offer efficient strategies for tackling a wide array of problems, enhancing the versatility of DP algorithms.
Dynamic Programming is easy to understand but hard to apply to new problems, and that is the true test of your mastery of it. It is particularly challenging during interviews due to time pressure. The core of the challenge lies in identifying the overlapping subproblems and the recurrence relation between them. If you practice new DP problems with this approach in mind, you will surely master it.
Enjoy the post!!