Watch 10 video solutions for Longest Binary Subsequence Less Than or Equal to K, a medium-level problem involving String, Dynamic Programming, and Greedy. This walkthrough by codestorywithMIK has 9,438 views. Want to try solving it yourself? Practice on FleetCode or read the detailed text solution.
You are given a binary string s and a positive integer k.
Return the length of the longest subsequence of s that makes up a binary number less than or equal to k.
Note:
The subsequence can contain leading zeroes, and the empty string is considered to be equal to 0.
Example 1:
Input: s = "1001010", k = 5
Output: 5
Explanation: The longest subsequence of s that makes up a binary number less than or equal to 5 is "00010", as this number is equal to 2 in decimal. Note that "00100" and "00101" are also possible, which are equal to 4 and 5 in decimal, respectively. The length of this subsequence is 5, so 5 is returned.
Example 2:
Input: s = "00101001", k = 1
Output: 6
Explanation: "000001" is the longest subsequence of s that makes up a binary number less than or equal to 1, as this number is equal to 1 in decimal. The length of this subsequence is 6, so 6 is returned.
Constraints:
1 <= s.length <= 1000
s[i] is either '0' or '1'.
1 <= k <= 10^9

Problem Overview: You receive a binary string s and an integer k. The task is to pick a subsequence of s that forms a binary number less than or equal to k, while maximizing the subsequence length. The subsequence must preserve order, but you can skip characters.
Approach 1: Dynamic Programming with Memoization (O(n * log k) time, O(n * log k) space)
This approach models the decision process for each index: include the current bit or skip it. The state tracks the current position and the value of the binary number formed so far. Because the value grows exponentially with appended bits, you prune any branch whose value would exceed k, so every stored value stays within the roughly log₂(k) bits needed to represent k. Use memoization to avoid recomputing overlapping states. Each recursive step branches into two choices and stores the best result. This method is useful when demonstrating classic dynamic programming thinking or when reasoning about subsequences systematically.
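A minimal sketch of this memoized formulation, assuming a top-down recursion keyed on (index, value built so far); the function name `longest_subsequence_dp` is illustrative, not from the original article:

```python
import sys
from functools import lru_cache

def longest_subsequence_dp(s: str, k: int) -> int:
    n = len(s)
    # Recursion goes one frame per character; raise the limit for n up to 1000.
    sys.setrecursionlimit(max(3000, n + 100))

    @lru_cache(maxsize=None)
    def dp(i: int, value: int) -> int:
        if i == n:
            return 0
        # Choice 1: skip s[i].
        best = dp(i + 1, value)
        # Choice 2: append s[i]; doubling shifts the number left one bit.
        nxt = value * 2 + (s[i] == '1')
        if nxt <= k:  # prune any branch whose value would exceed k
            best = max(best, 1 + dp(i + 1, nxt))
        return best

    return dp(0, 0)
```

Note that appending a '0' while the value is still 0 keeps the value at 0, which is how leading zeroes come along for free.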
Approach 2: Greedy from Right to Left (O(n) time, O(1) space)
The optimal solution relies on a key observation about binary numbers. Bits on the right contribute less to the value than bits on the left. When building the subsequence, you want to include as many low-weight bits as possible. Start scanning from the rightmost character of the string. Keep track of the current value and the bit weight (1, 2, 4, 8...). If adding a '1' keeps the value ≤ k, include it and update the value. Always include '0' because it does not increase the number's value but increases subsequence length. Stop considering '1's once the bit weight exceeds k, since any further '1' would push the value beyond the limit. This greedy logic works because choosing smaller positional weights first maximizes the number of usable bits. The algorithm performs a single pass and uses constant extra memory, making it ideal for interview settings involving greedy algorithms.
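The single-pass logic above can be sketched as follows; the function name `longest_subsequence_greedy` is illustrative, not from the original article:

```python
def longest_subsequence_greedy(s: str, k: int) -> int:
    length = 0   # bits taken so far
    value = 0    # numeric value of the chosen bits
    weight = 1   # positional weight of the next bit: 1, 2, 4, 8, ...
    for ch in reversed(s):
        if ch == '0':
            # A '0' never increases the value, so always take it.
            length += 1
        elif weight <= k and value + weight <= k:
            # Take this '1' only if the total stays within k.
            value += weight
            length += 1
        if weight <= k:
            # Once the weight exceeds k, stop doubling: no further '1'
            # can fit, and this also avoids needlessly huge numbers.
            weight <<= 1
    return length
```

On Example 1 ("1001010", k = 5) the pass takes the low-order '1' (weight 2) plus all four '0's, giving length 5, matching the expected output.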
Recommended for interviews: The greedy solution is what interviewers typically expect. It shows you understand how binary positional weights affect numeric value and how to exploit that structure for an O(n) pass. Discussing the dynamic programming formulation first can help demonstrate problem‑solving depth, but implementing the greedy approach shows stronger optimization skills.
| Approach | Time | Space | When to Use |
|---|---|---|---|
| Dynamic Programming with Memoization | O(n * log k) | O(n * log k) | When exploring all subsequence decisions or explaining a systematic DP formulation |
| Greedy Right-to-Left Bit Selection | O(n) | O(1) | Best choice for interviews and large inputs where a single linear pass is required |