The Living Giant Python Syntax and Traps LeetCode Document
Yes, this document is long and that is entirely by design. It serves as the ultimate all in one compilation of every technical Python tip, idiom, syntax pattern, and common trap you will encounter when solving LeetCode questions. It is designed to be your continuously updatable single source of truth. The goal is to help you internalize syntax so completely that you can focus all your mental energy entirely on algorithmic logic and problem solving during high pressure interviews.
The document is organized strictly from basic Python syntax fundamentals to advanced algorithmic structures.
Table of Contents
- Variables & Syntax Basics
- Math & Numbers
- Strings & Characters
- Iteration & Loops
- Lists & Arrays
- Dictionaries & Sets
- Queues & Stacks
- Heaps
- Linked Lists
- Trees & Graphs
- Advanced Patterns (Intervals, Sliding Window)
- Recursion & Caching
Variables & Syntax Basics
Unpacking
# Unpack list/tuple
a, b, c = [1, 2, 3]
# Unpack with rest
first, *rest = [1, 2, 3, 4, 5] # first=1, rest=[2,3,4,5]
*start, last = [1, 2, 3, 4, 5] # start=[1,2,3,4], last=5
# Unpack in loop
queries = [[0, 1], [1, 3]]
for x, y in queries:
    print(x, y)
Slicing
arr = [0, 1, 2, 3, 4, 5]
arr[1:4] # [1, 2, 3]
arr[:3] # [0, 1, 2] (from start)
arr[3:] # [3, 4, 5] (to end)
arr[::-1] # [5, 4, 3, 2, 1, 0] (reverse)
arr[::2] # [0, 2, 4] (every 2nd element)
arr[-1] # Last element
arr[-2] # Second to last element
Mutable vs Immutable Multiplication (* n)
One of the most common Python traps is using the * operator to initialize a list of lists (or any container).
The Trap: [[]] * n
This does not create n independent empty lists. In Python, list * n is essentially repeated concatenation (+).
# Create 3 "references" to the SAME inner list
g = [[]] * 3
# is equivalent to: g = [list_a] + [list_a] + [list_a]
# Add a neighbor to node 0
g[0].append(1)
print(g)
# Expected: [[1], [], []]
# Actual: [[1], [1], [1]] <-- Every index changed!
The “Double List” Principle:
- `[] * 3` is `[] + [] + []`, which results in an empty list `[]`. (Concatenating "nothing" is still nothing.)
- `[[]] * 3` is `[ref] + [ref] + [ref]`, which results in a list of 3 pointers to the same memory address.
The Fix: List Comprehension
To create independent mutable objects, you must use a list comprehension.
g = [[] for _ in range(3)]
g[0].append(1)
print(g) # Output: [[1], [], []] <-- Independent!
Why is [False] * n safe?
You might notice we often use seen = [False] * n. Why doesn’t this break?
- Immutable types (integers, booleans, strings) cannot be mutated.
- When you do `seen[0] = True`, you aren't "changing" the `False` object; you are replacing the reference at index 0 with a completely new object (`True`).
- Mutable types (lists, dicts, sets) can be changed in-place (e.g., `.append()`, `.add()`). These changes are seen by all references pointing to that object.
Mental Rule
- `[constant] * n` is safe for immutable primitives (int, bool, str, None).
- `[container] * n` is dangerous for mutable objects (list, dict, set).
Always use a list comprehension for graph adjacency lists or 2D matrices.
Grid Boundary Helper: valid()
When working with matrices or grids, defining a valid() helper function makes your code much cleaner and less error-prone.
The Tip
Instead of repeating a complex if condition in every loop, define a helper (often a nested function) to handle boundary checks and even visited/blocked status.
def solve(grid):
    ROWS, COLS = len(grid), len(grid[0])

    def is_valid(r, c):
        return 0 <= r < ROWS and 0 <= c < COLS and grid[r][c] != "BLOCKED"

    # Use it everywhere
    for r in range(ROWS):
        for c in range(COLS):
            if is_valid(r + 1, c):
                # logic...
Why it’s Awesome
- Readability: `if is_valid(nr, nc):` reads like English.
- Maintainability: If you need to add a condition (e.g., "don't walk into walls"), you only change it in one place.
- Error Reduction: Prevents "off-by-one" errors or swapping `ROWS` and `COLS` in multiple locations.
- Nested Visibility: By defining it inside your main function, it automatically has access to `ROWS`, `COLS`, and `grid` via closure.
Lambda Functions
# Anonymous function
square = lambda x: x ** 2
# Often used with map, filter, sorted
numbers = [1, 2, 3, 4, 5]
squared = list(map(lambda x: x ** 2, numbers))
evens = list(filter(lambda x: x % 2 == 0, numbers))
sorted_by_abs = sorted([-3, 1, -2], key=lambda x: abs(x))
The if not x Pitfall (Truthiness)
In Python, many values evaluate to False in a boolean context. This is called “falsiness”.
The Danger
If you are checking if a variable exists or has been assigned, and that variable could be 0, "", or [], using if not x: will lead to bugs.
def find_closest(node, target, best_val):
    # BAD: if node.val is 0, this logic might incorrectly skip it
    if not best_val or abs(node.val - target) < abs(best_val - target):
        best_val = node.val
The Solution: Explicit None Check
Always use is None or is not None when 0 is a valid piece of data (like in LeetCode tree/array problems).
def find_closest(node, target, best_val):
    # GOOD: Explicitly check for None
    if best_val is None or abs(node.val - target) < abs(best_val - target):
        best_val = node.val
Falsy Values in Python
The following are all “Falsy”:
- `None`
- `False`
- `0` (int)
- `0.0` (float)
- `""` (empty string)
- `[]` (empty list)
- `{}` (empty dict)
- `set()` (empty set)
Interview Tip
If you’re writing a tree problem and you catch yourself writing if not left:, ask yourself: “Could the left-subtree-result be 0?”. If yes, change it to if left is None:. This single habit prevents many “silent” bugs that are hard to debug during an interview.
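To make the habit concrete, here is a small runnable sketch (the helper name `closest_to_target` is hypothetical, not from a specific problem):

```python
def closest_to_target(values, target):
    """Track the value closest to target; 0 must remain a valid answer."""
    best = None
    for v in values:
        # GOOD: explicit None check, so a best of 0 is not treated as "unset"
        if best is None or abs(v - target) < abs(best - target):
            best = v
    return best

# 0, "", and [] are all falsy, but 0 is still real data:
assert not 0 and not "" and not []
assert closest_to_target([5, 0, 9], 1) == 0  # `if not best` would skip the 0
```

With `if not best:` instead of the `is None` check, the `0` found mid-loop would look "unset" and could be overwritten by a worse candidate.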
Collection Truthiness (Implicit Empty Check)
In Python, sequences (lists, strings, tuples) and collections (sets, dictionaries) are “falsy” if they are empty and “truthy” if they contain at least one element.
Idiomatic Way
stack = []
# To check if empty:
if not stack:
    print("Stack is empty")
# To check if NOT empty (has elements):
if stack:
    print("Stack has elements")
# Very common in loops:
while stack:
    item = stack.pop()
    # process item
Less Idiomatic
Avoid checking length explicitly unless you actually need the count.
if len(stack) == 0: # Use 'if not stack' instead
...
if len(stack) > 0: # Use 'if stack' instead
...
Using Underscore for Unused Loop Variables
In Python, it is a common convention to use an underscore (_) as a variable name when you need to loop for a specific number of times but don’t actually need the index within the loop body.
Example: Finding the kth Node from the End
This technique is often used in linked list patterns, such as finding the kth node from the end.
def find_node(head, k):
    slow = head
    fast = head
    # Use _ because we don't need the index value
    for _ in range(k):
        fast = fast.next
    while fast:
        slow = slow.next
        fast = fast.next
    return slow
Why Use _?
- Readability: It explicitly signals to other developers (and your future self) that the loop index is intentionally ignored.
- Linting: Many linters will warn about unused variables. Using `_` is the standard way to bypass these warnings for loop indices.
- Clarity: It keeps the focus on the loop's purpose (repetitive action) rather than the iteration state.
Swap Two Variables (No Temp Variable Needed)
# Traditional swap with temp variable
temp = a
a = b
b = temp
# Python's elegant way using tuple unpacking
a, b = b, a
# Swap array elements
arr[i], arr[j] = arr[j], arr[i]
Useful Built-in Functions
all([True, True, False]) # False (all must be True)
any([True, False, False]) # True (any must be True)
sum([1, 2, 3]) # 6
len([1, 2, 3]) # 3
range(5) # 0, 1, 2, 3, 4 (stops before 5)
range(1, 5) # 1, 2, 3, 4 (start inclusive, end exclusive)
range(0, 10, 2) # 0, 2, 4, 6, 8 (step by 2)
bin(5) # '0b101' (binary string)
'110'.count('1') # 2 (count occurrences in string)
Bit manipulation tricks:
- `i >> 1`: Shift right by 1 (same as `i // 2`).
- `i & 1`: Get the last bit (same as `i % 2`).
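Both tricks are easy to verify, and they combine into the classic bit-counting loop; a minimal sketch:

```python
# i >> 1 drops the last bit (floor-divide by 2); i & 1 reads it (i % 2)
for i in range(16):
    assert i >> 1 == i // 2
    assert i & 1 == i % 2

def count_bits(i):
    """Count set bits using both tricks together."""
    count = 0
    while i:
        count += i & 1   # add the last bit
        i >>= 1          # shift it away
    return count

assert count_bits(5) == 2   # 0b101 has two 1-bits
```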
Common Python Syntax Pitfalls
1. Operator Precedence (The Midpoint Bug)
When calculating the middle of two numbers, the order of operations matters.
[!CAUTION] Wrong: `mid = low + high // 2`
Python sees `high // 2` first, then adds it to `low`. Example: `low=10, high=20`. Calculation: `10 + (20 // 2) = 20`. You never found the middle!
[!TIP] Right: `mid = (low + high) // 2`
The parentheses force the addition to happen first. Even better: `mid = low + (high - low) // 2`, which avoids integer overflow in fixed-width languages (Python ints don't overflow, but the habit transfers to C++/Java).
2. Generator Expressions with sum()
Python allows you to sum up items in a single line, but the placement of the loop matters.
[!CAUTION] Wrong: `total = sum(math.ceil(x / y)) for x in items`
This tries to call `sum()` on a single number, then tries to start a loop afterward.
[!TIP] Right: `total = sum(math.ceil(x / y) for x in items)`
The entire `... for ... in ...` expression must be inside the `sum()` parentheses. This is called a "generator expression." It's fast and memory-efficient.
3. Floor Division (//) vs. True Division (/)
[!CAUTION] The Trap: If you use `range()`, `list[index]`, or binary search pointers, you MUST use an integer. Using `/` will cause a `TypeError` because it always returns a float (e.g., `4 / 2 = 2.0`).
[!TIP] The Fix: Use floor division `//` to ensure you get an integer (e.g., `5 // 2 = 2`).
4. list.append() and list.sort() return None
In Python, methods that modify a list in-place return None. You cannot chain them.
[!CAUTION] Wrong: `result.append(val).count('1')`
`result.append(val)` returns `None`. You are trying to call `None.count('1')`.
[!TIP] Right:
val_count = val.count('1')
result.append(val_count)
5. String/List Slicing [start:stop]
The second argument is the stop index, NOT the length.
[!CAUTION] Wrong: `bin(i)[2:n]` (thinking you want n characters).
[!TIP] Right: `bin(i)[2:]` (to slice from index 2 to the end).
6. Bitwise Operator Precedence
Arithmetic operators (+, -, *, /) have higher precedence than bitwise operators (&, |, ^).
[!CAUTION] Wrong: `dp[i >> 1] + i & 1`
Evaluated as: `(dp[i >> 1] + i) & 1`
[!TIP] Right: `dp[i >> 1] + (i & 1)`
Always use parentheses when mixing arithmetic and bitwise logic.
Math & Numbers
Modulo
10 % 3 # 1 (remainder)
10 // 3 # 3 (division without remainder)
Floor Division Assignment
# //= is floor division assignment operator
curr = 10
curr //= 3 # Same as: curr = curr // 3
# Result: curr = 3 (integer division, no remainder)
# Common in sliding window problems
curr //= nums[left] # Divide curr by nums[left] using integer division
Power
2 ** 3 # 8
pow(2, 3) # 8
Python Tip: Arbitrary Precision Integers (The Bit-Depth Cheat Code)
In many languages (Java, C++, Go), integers have a fixed size (usually 32 or 64 bits). If you exceed 2^63 - 1, the number "wraps around" or overflows, leading to negative results and broken logic.
In Python, integers have arbitrary precision. They will grow to consume as much memory as your computer has.
The “Maximum Width” Cheat Code
When solving problems like Maximum Width of Binary Tree, you need to index nodes as 2i, 2i+1.
- In a tree with depth 1000, the index would be 2^1000.
- Java/C++: You must “normalize” the level (subtract the leftmost index) to prevent overflow.
- Python: You can ignore overflow entirely. Just keep doubling the numbers. Python will handle the 300-digit number without breaking a sweat.
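A quick demonstration (the 302-digit count follows from log10(2^1000) ≈ 301.03):

```python
# Python ints grow as needed: 2**1000 is exact, no overflow
big = 2 ** 1000
assert len(str(big)) == 302          # a 302-digit number
assert big + 1 - big == 1            # arithmetic stays exact at any size

# Contrast: a 64-bit signed int tops out at 2**63 - 1
assert big > 2 ** 63 - 1
```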
The Engineering WHY
Python’s int is actually a struct that points to a list of “digits” (usually in base 2^30). As the number gets larger, Python dynamically allocates more “digits” to store it.
Pitfalls
- Performance: While arbitrary precision is convenient, math on 10,000-digit numbers is slower than math on 64-bit hardware-level integers.
- Memory: If you create enough massive integers, you can eventually hit a `MemoryError`.
- The "Normalize" Habit: In a real interview, even if you use Python, you should mention the normalization technique. It shows "Senior Signal": you understand how lower-level memory works and know that your code might not be portable to other languages without it.
#python #integers #overflow #senior-signal
Min/Max
min(1, 2, 3) # 1
max([1, 2, 3]) # 3
min(arr, key=len) # Min by length
Infinity
import math
# Initialize a number to infinity
max_pattern_count = math.inf # Positive infinity
min_value = -math.inf # Negative infinity
# Common use case: Initialize min to infinity when finding minimum
min_val = math.inf
for num in nums:
    min_val = min(min_val, num)
Calculate Sum of Digits
def digit_sum(num):
    total = 0
    while num:
        total += num % 10  # Get last digit
        num //= 10         # CRITICAL: Use //= not /= (integer division)
    return total
# Example: digit_sum(123) -> 1 + 2 + 3 = 6
# num % 10 gets the last digit (remainder when dividing by 10)
# num //= 10 removes the last digit (integer division by 10)
# CRITICAL: Must use //= (integer division), not /= (floating point division)
# WRONG: num /= 10 # This creates float, breaks the loop condition
# CORRECT: num //= 10 # Integer division, removes last digit correctly
Strings & Characters
Character Arithmetic: ord() and chr()
Python does not allow direct arithmetic on characters (like char + 1 or char - 'a'). Instead, you must use the “bridge” functions to convert between characters and their ASCII/Unicode integer values.
The Core Functions
- `ord(char)`: Character -> Integer (ASCII code)
- `chr(int)`: Integer -> Character
# 1. Increment/Decrement
next_char = chr(ord('a') + 1) # 'b'
prev_char = chr(ord('z') - 1) # 'y'
# 2. Get Alphabetical Index (0-25)
# Mnemonic: "Python makes you say 'ord' out loud"
index = ord('c') - ord('a') # 2
# 3. Handle Wrap-around (z -> a)
char = 'z'
next_wrapped = chr((ord(char) - ord('a') + 1) % 26 + ord('a')) # 'a'
Comparisons across Languages
| Task | C++ / Java | Python |
|---|---|---|
| Index | c - 'a' | ord(c) - ord('a') |
| Shift | 'a' + 2 | chr(ord('a') + 2) |
When to use what?
- Use `ord`/`chr`: When you need to iterate through the alphabet or treat letters like a numeric range.
- Use a lookup string: When the alphabet is custom or small (e.g., "ACGT").

alphabet = "ACGT"
next_gene = alphabet[(alphabet.index('A') + 1) % 4] # 'C'
Grid Coordinates: r, c vs. x, y
In grid problems, never use x and y for variables. This is a common trap that leads to “Cartesian Thinking” and swap bugs.
The Cartesian Trap (x, y)
In geometry:
- `x` = horizontal (left/right) = Columns
- `y` = vertical (up/down) = Rows
In coding:
- `grid[x][y]` tempts you to check `0 <= x < rows`, even though `x` corresponds to columns in standard geometry. The mismatch invites swap bugs.
The Matrix Standard (r, c)
Always name your variables r (row) and c (column) to match the indexing of the matrix: grid[r][c].
rows, cols = len(grid), len(grid[0])
# Move vertically -> row change
for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    nr, nc = r + dr, c + dc
    # Logic remains perfectly consistent:
    # nr belongs with rows, nc belongs with cols
    if 0 <= nr < rows and 0 <= nc < cols:
        ...
Mental Rule
- `r` (row) -> Height -> compare with `len(grid)`
- `c` (column) -> Width -> compare with `len(grid[0])`
By using r and c, you physically cannot mix them up.
String Methods
s = "Hello World"
s.split() # ['Hello', 'World']
s.split('l') # ['He', '', 'o Wor', 'd']
''.join(['a', 'b']) # 'ab'
s.strip() # Remove whitespace
s.replace('l', 'L') # 'HeLLo WorLd'
s.startswith('He') # True
s.endswith('ld') # True
s.count('l') # 2 (Handy for counting characters/bits)
# Convert string to array of characters
list("hello") # ['h', 'e', 'l', 'l', 'o']
String Formatting
name = "Alice"
age = 30
# f-strings (Python 3.6+)
f"My name is {name} and I'm {age}"
f"Value: {value:.2f}" # Format float to 2 decimals
Common f-string mistakes
# WRONG: Forgetting the curly braces (other languages interpolate with $ or %)
f"x=x, y=y" # No error, but it prints the literal text "x=x, y=y"
# CORRECT: Use curly braces
f"x={x}, y={y}"
Swapcase for Case-Insensitive Comparison
When you need to check if a character is the same letter as another but with the opposite case (common in “string reduction” or “Great String” problems):
# The "Double Case" Trick
if stack and c == stack[-1].swapcase():
    stack.pop()
else:
    stack.append(c)
Why use it?
Instead of the verbose:
if stack and c.lower() == stack[-1].lower() and c != stack[-1]:
The swapcase() method handles the “same letter, different case” logic in a single call.
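Put together, the trick yields the full stack-reduction pass; a minimal sketch (the function name `reduce_string` is illustrative):

```python
def reduce_string(s):
    """Repeatedly remove adjacent pairs that are the same letter
    in opposite cases, e.g. 'aA' or 'Bb'."""
    stack = []
    for c in s:
        # swapcase() matches 'a' with 'A' in one call
        if stack and c == stack[-1].swapcase():
            stack.pop()
        else:
            stack.append(c)
    return "".join(stack)

assert reduce_string("abBA") == ""            # everything cancels
assert reduce_string("leEeetcode") == "leetcode"
```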
Slicing Efficiency in Recursion
When slicing strings in recursive functions (e.g., s[1:] or s[2:]), you create a new string copy at every call. This requires O(n) memory and time on every single recursion stack frame.
# Slicing creates a full copy of the trailing string
def dfs_slow(s: str):
    ...
    res = dfs_slow(s[1:])
For high performance (like tight LeetCode algorithms with strings of 10,000+ length), pass an expanding index i instead. Looking up an index is O(1).
# Passing an index guarantees O(1) step performance
def dfs_fast(i: int):
    # Instead of s[1:], we just access i + 1
    if i < len(s):
        char = s[i]
        res = dfs_fast(i + 1)
Instead of sending substrings around, keep the raw string intact globally and only send “pointers” defining where you are looking.
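A side-by-side sketch of the two styles, using a trivial recursive character count (function names are illustrative; both return the same answer, but only the index version does O(1) work per call):

```python
def count_chars_slice(s):
    # O(n) copy at every call: total O(n^2) work across the recursion
    if not s:
        return 0
    return 1 + count_chars_slice(s[1:])

def count_chars_index(s, i=0):
    # O(1) per call: only an integer moves between stack frames
    if i == len(s):
        return 0
    return 1 + count_chars_index(s, i + 1)

assert count_chars_slice("hello") == 5
assert count_chars_index("hello") == 5
```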
String Concatenation and Join Errors
Mistake 1: Concatenating integers with strings
# WRONG: Can't concatenate int with str using +
x, y = 2, 5
result = "x=" + x + ", y=" + y # TypeError!
# CORRECT: Convert to string first or use f-string
result = "x=" + str(x) + ", y=" + str(y)
# OR use f-string (preferred)
result = f"x={x}, y={y}"
Mistake 2: Wrong join syntax
# WRONG: join is a string method, not a list method
arr = ['a', 'b', 'c']
result = arr.join(', ') # AttributeError!
# CORRECT: Call join on the separator string
result = ', '.join(arr) # 'a, b, c'
Iteration & Loops
Python Iterators and Reversing Patterns
The “Big Three” Reversing Ways
| Method | Returns | Mutates? | Best Use Case |
|---|---|---|---|
| `list(reversed(x))` | list | No | Interviews/LeetCode. Explicit, safe, readable. |
| `x[::-1]` | list | No | Short, idiomatic "slicing" shortcut. |
| `x.reverse()` | `None` | Yes | Performance-sensitive code where you don't need the original. |

**Watch Out:** `reverse(x)` is NOT a thing. Python doesn't have a global reverse function.
The Iterator Family (The “Other Iterators”)
Python has a family of built-in functions that don’t return lists—they return Iterators. They are “lazy” (they only calculate values as you ask for them).
1. reversed(x)
Returns a `list_reverseiterator`.
it = reversed([1, 2, 3])
# Result: [3, 2, 1] when converted to list
2. enumerate(x)
Returns pairs of (index, value).
for i, val in enumerate(["a", "b"]):
    ...  # (0, "a"), (1, "b")
3. zip(a, b)
Iterates through multiple sequences in parallel.
for name, score in zip(["Alice", "Bob"], [10, 20]):
    ...  # ("Alice", 10), ("Bob", 20)
4. map(fn, x) & filter(fn, x)
Transform or filter items lazily.
squares = map(lambda x: x*x, [1, 2, 3])
evens = filter(lambda x: x%2 == 0, [1, 2, 3, 4])
Crucial Iterator Rules
- Lazy Evaluation: No memory is used for the full list until you actually iterate or convert it.
- Single-Use: Once you loop through an iterator (like `reversed(x)`), it is "exhausted." You can't loop through it again.
- Not Indexable: You cannot do `reversed(x)[0]`.
- Materializing: If you need a real list (e.g., to return in LeetCode), wrap it: `list(reversed(x))`.
Summary for Zigzag BFS
When you need to flip a row in a BFS:
Use list(reversed(row)). It follows the Python iterator model perfectly: safe, explicit, and non-mutating.
Enumerate: The Pythonic Way to Track Indices
enumerate() returns an iterable of tuples containing (index, value). It is the standard way to loop when you need both the element and its position.
1. Basic Syntax
arr = ['apple', 'banana', 'cherry']
# Standard usage
for i, val in enumerate(arr):
    print(f"Index {i} has value {val}")
# Starting at a different index (e.g., 1-based indexing)
for i, val in enumerate(arr, start=1):
    print(f"Product #{i}: {val}")
2. Common Interview Patterns
A. Flipping the First Match
Perfect for “Maximum 69 Number” or finding the first occurrence of a target.
for i, d in enumerate(digits):
    if d == '6':
        digits[i] = '9'
        break
B. Building a Map of Values to Indices
Used for “Two Sum” or tracking last-seen positions.
nums = [10, 20, 30]
idx_map = {val: i for i, val in enumerate(nums)}
# Result: {10: 0, 20: 1, 30: 2}
C. Grid Traversal (Flattened)
for i, row in enumerate(matrix):
    for j, cell in enumerate(row):
        # r = i, c = j
        pass
Why Use This?
- Readable: `for i, x in enumerate(L)` is much cleaner than `for i in range(len(L)): x = L[i]`.
- Performance: It uses an iterator, which is memory efficient.
- Less Error-Prone: Prevents “off-by-one” errors common with manual index incrementing.
Tags: #python #basics #clean-code
Zip (iterate multiple lists)
arr1 = [1, 2, 3]
arr2 = ['a', 'b', 'c']
# Iterate both simultaneously
for num, letter in zip(arr1, arr2):
    print(num, letter)
Adjacent Pairs (Sliding Window of 2)
You can zip a list with itself, shifted by 1, to elegantly iterate over adjacent pairs without using indices:
arr = [10, 20, 30, 40]
for prev_item, curr_item in zip(arr, arr[1:]):
    print(prev_item, curr_item)
# Prints:
# 10 20
# 20 30
# 30 40
Range Off-by-One Errors
The most common bug is missing the last element because range is exclusive at the end.
1. Arrays/Lists
arr = [1, 2, 3, 4]
# WRONG: range(1, len(arr) - 1)
# Stops at index 2. Processed: indices 1, 2. Missing: index 3.
for i in range(1, len(arr) - 1):
    print(arr[i])
# CORRECT: range(1, len(arr))
# Processed: indices 1, 2, 3.
for i in range(1, len(arr)):
    print(arr[i])
2. Inclusive N (0 to N inclusive)
If 0 <= i <= n:
[!CAUTION] Wrong: `range(n)` (stops at `n-1`)
[!TIP] Right: `range(n + 1)` (stops at `n`)
Lists & Arrays
Basic List Comprehension
Traditional for loop:
result = []
for x in range(10):
    result.append(x * 2)
One-liner list comprehension:
result = [x * 2 for x in range(10)]
List Comprehension with Condition
Traditional for loop with if:
result = []
for x in range(10):
    if x % 2 == 0:
        result.append(x * 2)
One-liner with condition:
result = [x * 2 for x in range(10) if x % 2 == 0]
Nested List Comprehension
Traditional nested loops:
matrix = []
for i in range(3):
    row = []
    for j in range(3):
        row.append(i * j)
    matrix.append(row)
One-liner nested comprehension:
matrix = [[i * j for j in range(3)] for i in range(3)]
Flatten Lists
If you have a nested list of lists (e.g., [[1, 2], [3, 4]]) and need a flat list ([1, 2, 3, 4]), here are the standard approaches:
1. Using a double loop (Most Readable)
nested_list = [[1, 2], [3, 4]]
flat_list = []
for sublist in nested_list:
    for item in sublist:
        flat_list.append(item)
2. Using List Comprehension (Idiomatic)
This is essentially the double loop above, but written in a single line. The for clauses remain in the exact same left-to-right order as the nested loops.
flat_list = [item for sublist in nested_list for item in sublist]
3. Using itertools.chain (Best for large lazy evaluation)
import itertools
flat_list = list(itertools.chain.from_iterable(nested_list))
Iterate Array/String with Index
# Iterate with index using range(len())
nums = [1, 2, 3, 4, 5]
for right in range(len(nums)):
    print(right, nums[right])  # Access index and value
# Works the same for strings
s = "hello"
for i in range(len(s)):
    print(i, s[i])
Initialize Array with Same Value
When you need an array of a fixed size initialized with a default value (like zeros for a result array), use the * operator.
Common Use Case: Result Arrays
This is extremely common in problems where you need to return an array of the same length as the input, such as in monotonic stack problems (e.g., “Daily Temperatures”).
# Initialize an array of same length as 'temperatures' with zeros
answer = [0] * len(temperatures)
Why use this?
- Concise: `[0] * n` is much shorter than a loop or list comprehension.
- Performance: It is highly optimized in Python.
- Standard: This is the idiomatic way to pre-allocate a list in Python when the size is known.
The 2D Array Pitfall
Only use this for primitive types (integers, strings, booleans).
To initialize a 2D array, do not use [[0] * cols] * rows, as this will create multiple references to the same inner list object.
For 2D arrays, always use a list comprehension:
# Correct way for 2D arrays
matrix = [[0] * cols for _ in range(rows)]
Initialize Two-Dimensional Array
# WRONG: Cannot use [][] syntax
answer = [][] # SyntaxError!
# CORRECT: Initialize as list of lists
answer = []
answer.append([]) # Append the first empty list
answer.append([]) # Append the second empty list
# OR more simply:
answer = [[], []] # List containing two empty lists
List Aliasing (Multiple Assignment Pitfall)
In Python, assigning multiple variables to a mutable object in a single line (chained assignment) makes them all point to the same object.
The Pitfall: Chained Assignment
cs = ts = [] # Both variables point to the same list object!
cs.append(1)
print(ts) # Output: [1] (Wait, I only modified 'cs'!)
The Correct Way: Separate Initialization
Initialize them individually to create two distinct list objects.
cs = []
ts = []
cs.append(1)
print(ts) # Output: [] (Correct, they are independent)
Why this happens?
In Python, = doesn’t copy objects; it creates references. Chained assignment a = b = [] is equivalent to:
- Create an empty list `[]`.
- Point `b` to that list.
- Point `a` to whatever `b` is pointing to.

Both are now "aliases" for the same memory location.
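A related sketch contrasting an alias with an actual copy (using `list.copy()`; a slice `a[:]` works the same way):

```python
a = []
b = a          # alias: b is the SAME object as a
c = a.copy()   # shallow copy: a new, independent list

a.append(1)
assert b == [1]                    # the alias sees the change
assert c == []                     # the copy does not
assert (a is b) and (a is not c)   # `is` checks object identity
```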
Dictionaries & Sets
Dictionary Comprehension
Traditional for loop:
squares = {}
for x in range(5):
    squares[x] = x ** 2
One-liner dictionary comprehension:
squares = {x: x ** 2 for x in range(5)}
Set Comprehension
Traditional for loop:
unique_squares = set()
for x in range(5):
    unique_squares.add(x ** 2)
One-liner set comprehension:
unique_squares = {x ** 2 for x in range(5)}
Set Initialization & Pitfalls
# Define an empty set
a = set()
# Initialize with values using curly braces (not empty braces - that's a dict!)
b = {1, 2, 3, 4}
# Convert iterable to set
c = set([1, 2, 3]) # {1, 2, 3}
Common Pitfalls
Invalid Multiple Arguments:
The set() constructor takes at most one argument (an iterable).
# INVALID: set() takes at most 1 argument
closing = set(']', '}', ')')
# CORRECT: Use curly braces
closing = {']', '}', ')'}
# CORRECT: Pass a single string (which is iterable)
closing = set(')]}')
The String Initialization Pitfall
Be very careful when initializing a set with a single string.
s = "hello"
# WRONG: Splits string into characters
seen = set(s) # {'h', 'e', 'l', 'o'}
# CORRECT: Use curly braces
seen = {s} # {'hello'}
# CORRECT: Wrap in a list
seen = set([s]) # {'hello'}
Hashmap/Dictionary Operations
# Declaration: a hash map is declared like any other variable. The syntax is {}
hash_map = {}
# If you want to initialize it with some key value pairs, use the following syntax:
hash_map = {1: 2, 5: 3, 7: 2}
# Checking if a key exists: simply use the `in` keyword
1 in hash_map # True
9 in hash_map # False
# Accessing a value given a key: use square brackets, similar to an array.
hash_map[5] # 3
# Adding or updating a key: use square brackets, similar to an array.
# If the key already exists, the value will be updated
hash_map[5] = 6
# If the key doesn't exist yet, the key value pair will be inserted
hash_map[9] = 15
# Deleting a key: use the del keyword. Key must exist or you will get an error.
del hash_map[9]
# Get size
len(hash_map) # 3
# Get keys: use .keys(). You can iterate over this using a for loop.
keys = hash_map.keys()
for key in keys:
    print(key)
# Iterating directly over dictionary iterates over keys
for key in hash_map:  # Same as for key in hash_map.keys()
    print(key)
# Get values: use .values(). You can iterate over this using a for loop.
values = hash_map.values()
for val in values:
    print(val)
# Convert values to a list
values_list = list(hash_map.values()) # [2, 6, 2]
# Iterate over both keys and values using .items()
for key, value in hash_map.items():
    print(key, value)  # Prints both key and value
# Common pattern in coding problems
for key, value in summed_map.items():
    # Process both key and value together
    pass
Counter (from collections)
from collections import Counter
arr = [1, 2, 2, 3, 3, 3]
count = Counter(arr)
# count[1] = 1, count[2] = 2, count[3] = 3
# count.most_common(2) returns [(3, 3), (2, 2)]
Check if All Occurrences Are Equal (One-liner)
from collections import Counter
# Check if all character occurrences in a string are equal
s = "aabbcc"
# Counter(s).values() gives all counts, set() removes duplicates
# If all counts are equal, set will have length 1
len(set(Counter(s).values())) == 1 # True if all chars appear same number of times
Defaultdict (from collections)
from collections import defaultdict
# No need to check if key exists
dd = defaultdict(int)
dd['key'] += 1 # Works even if 'key' doesn't exist
dd_list = defaultdict(list)
dd_list['key'].append(1) # Automatically creates list
# Common pattern: Group items by a key
summed_map = defaultdict(list)
summed_map[digits_sum].append(num) # Adding item to list in value of map
# If digits_sum key doesn't exist, defaultdict automatically creates empty list
Hashability Pitfall (List vs. Tuple)
The Pitfall: Adding a List to a Set/Dict
In Python, mutable objects like lists, sets, and dictionaries cannot be used as keys in a dictionary or elements in a set.
result = set()
result.add([1, 2, 3])
# TypeError: unhashable type: 'list'
The Fix: Convert to Tuple
Tuples are immutable and therefore hashable. Convert your list to a tuple before adding it to a set or using it as a dictionary key.
result = set()
result.add((1, 2, 3)) # Works!
Why this matters?
To provide O(1) lookup, sets and dictionaries use a hash function to calculate the object’s “fingerprint.” If the object is mutable (like a list), its contents could change, which would change its hash and break the data structure’s internal mapping.
Common Interview Scenario
In problems like 3Sum or Group Anagrams, you often need to store a “triplet” or a “frequency signature.” Always use a tuple for these cases.
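For example, a minimal Group Anagrams sketch where a hashable tuple serves as the dictionary key (a list key would raise `TypeError: unhashable type`):

```python
from collections import defaultdict

def group_anagrams(words):
    """Group words that are anagrams of each other.
    The sorted-letter tuple is immutable, hence hashable,
    so it can act as a dict key."""
    groups = defaultdict(list)
    for w in words:
        key = tuple(sorted(w))   # e.g. ('a', 'e', 't') for "eat" and "tea"
        groups[key].append(w)
    return list(groups.values())

assert group_anagrams(["eat", "tea", "tan"]) == [["eat", "tea"], ["tan"]]
```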
Queues & Stacks
Queue Operations (using deque)
In Python, collections.deque is the standard way to implement a queue because it provides O(1) time complexity for both append and pop operations from both ends.
import collections
# Declaration: we will use deque from the collections module
queue = collections.deque()
# If you want to initialize it with some initial values:
queue = collections.deque([1, 2, 3])
# Enqueueing/adding elements:
queue.append(4)
queue.append(5)
# Dequeuing/removing elements:
queue.popleft() # Returns 1
queue.popleft() # Returns 2
# Check element at front of queue (next element to be removed)
queue[0] # 3
# Get size
len(queue) # 3
Why not use a list?
While you can use list.pop(0), it is an O(n) operation because all other elements have to be shifted. deque.popleft() is O(1).
The pop(0) Trap
A common mistake is trying to call queue.pop(0) on a deque object.
- `list.pop(index)` accepts an index.
- `deque.pop()` takes no arguments and only pops from the RIGHT.
- If you want the left, you MUST use `popleft()`. Calling `deque.pop(0)` will raise a `TypeError`.
List as Stack
In Python, a standard list is the most common way to implement a stack.
Conceptual Clarification
Think of a list-stack like a stack of plates:
- The “Top” of the stack is the LAST element of the list.
- `append()` adds a plate to the top (the end of the list).
- `pop()` removes a plate from the top (the end of the list).
- Crucial: We never touch the beginning (index 0) because shifting elements is O(n), while working at the end is O(1).
# Declaration
stack = []
# Pushing elements: O(1)
# Adds to the END of the list (the "Top")
stack.append(1) # [1]
stack.append(2) # [1, 2]
stack.append(3) # [1, 2, 3] <-- 3 is the top
# Popping elements: O(1)
# Removes from the END of the list
stack.pop() # Returns 3 (now stack is [1, 2])
stack.pop() # Returns 2 (now stack is [1])
# Check element at top (Peek)
# Always use [-1] for the top element
stack[-1] # 1
# Check if empty (True if empty, False otherwise)
not stack # False
# Get size
len(stack) # 1
When to Use a Stack
Beyond the obvious LIFO property, a stack is a powerful tool whenever elements in the input interact with each other.
Key Recognition Patterns
- LIFO Interaction: Elements need to be matched or compared with the most recent element seen.
- Matching Elements: Classic examples include valid parentheses or matching opening/closing tags.
- Property Queries: Finding the “next largest element” or “next smallest element” (Monotonic Stacks).
- Expression Evaluation: Mathematical equations provided as strings, where operator precedence or nested sub-expressions require temporary storage.
- Abstract Comparison: Any problem where you need to compare the current element against a “history” that changes as you process the input.
Tip: If the LIFO property is hard to see, ask yourself: “Does the current element need to interact with the most recently stored element?” If yes, a stack is likely the answer.
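As a concrete instance of the "matching elements" pattern, here is a minimal valid-parentheses checker (a sketch; `is_valid` is an illustrative name):

```python
def is_valid(s: str) -> bool:
    # Map each closer to the opener it must match
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in pairs:
            # A closer must interact with the MOST RECENT opener (LIFO)
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            stack.append(ch)
    return not stack  # Leftover openers mean the string is invalid

print(is_valid("([]{})"))  # True
print(is_valid("(]"))      # False
```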
Atomic Deque Pop & Update Trick
When maintaining a running total or count while removing elements from a queue (common in sliding windows or streams), you can use the return value of popleft() directly in an expression.
# The "Double Action" Trick
# No need to peek with queue[0] first!
self.running_sum -= self.queue.popleft()
Why use it?
It combines two operations into one line:
- Extracts the value being removed.
- Removes the element from the deque.
Instead of:
val = queue[0]
running_sum -= val
queue.popleft()
Use:
running_sum -= queue.popleft()
This is cleaner and prevents bugs where you might subtract the wrong element if you’re not careful with the order of operations.
Monotonic Decreasing Stack (Storing Indices)
A Monotonic Decreasing Stack is a powerful pattern used to find the Next Greater Element. By keeping the stack sorted in decreasing order, we “hold onto” values until we encounter a larger one that “resolves” them.
The Index Pattern
In many problems, you should store indices in the stack instead of values. This is crucial because indices allow you to:
- Access the value: `array[stack[-1]]`
- Calculate distance: `current_index - popped_index` (as seen in "Daily Temperatures").
Implementation Template
When you see a higher value, you pop from the stack until the decreasing invariant is restored.
class Solution:
def dailyTemperatures(self, temperatures: List[int]) -> List[int]:
stack = [] # Stores ONLY indices
answer = [0] * len(temperatures)
for i in range(len(temperatures)):
# While the current temp is HIGHER than the temp at the top of the stack
# It means we've found the "next warmer day" for the item at the top.
while stack and temperatures[stack[-1]] < temperatures[i]:
j = stack.pop()
answer[j] = i - j # Calculate the distance
stack.append(i) # Push current index to maintain the decreasing stack
return answer
Mental Model
- Decreasing Stack: “I’m waiting for someone bigger than me.”
- The `while` loop: "Now that I've found someone bigger, I can calculate my result and leave the stack."
- Time Complexity: O(n) because each element is pushed and popped exactly once.
Heaps
Heap Operations (heapq)
In Python, the heapq module provides an implementation of the heap queue algorithm, also known as the priority queue algorithm.
Note: heapq implements min heaps only (Python 3.14 added native max-heap helpers; see the Max Heap section below).
from heapq import *
# Declaration: heapq does not give you a heap data structure.
# You just use a normal list, and heapq provides you with
# methods that can be used on this list to perform heap operations
heap = []
# Add to heap
heappush(heap, 1)
heappush(heap, 2)
heappush(heap, 3)
# Check minimum element (O(1))
heap[0] # 1
# Pop minimum element (O(log n))
heappop(heap) # 1
# Get size
len(heap) # 2
# Bonus: convert a list to a heap in linear time (O(n))
nums = [43, 2, 13, 634, 120]
# PITFALL: heapify is IN-PLACE and returns None
heapify(nums)
# Now 'nums' is a valid heap. Do NOT do: nums = heapify(nums)
# Now, you can use heappush and heappop on nums
# and nums[0] will always be the minimum element
Max Heap in Python
Python 3.14+ (Native Support)
As of Python 3.14, heapq provides native public functions for max-heaps:
- `heapify_max(list)`: In-place transformation.
- `heappush_max(heap, item)`
- `heappop_max(heap)`
- `heapreplace_max(heap, item)`
Pre-Python 3.14 (The Negation Trick)
Since heapq was historically a MIN heap only, use negative values to simulate a Max Heap:
max_heap = []
heappush(max_heap, -10)
heappush(max_heap, -20)
# Pop the largest
largest = -heappop(max_heap) # 20
The “Kth Largest” Pattern (The Best Interview Answer)
Don’t use a Max-Heap for finding the Kth largest element unless you have to. Use a Min-Heap of size K.
Strategy (The “Bouncer” Logic):
- Maintain a min-heap of size K.
- The root (`heap[0]`) is the "shortest person in the club."
- If a new number is larger than the root, kick the root out and let the new number in.
- After processing everything, the root is the Kth largest.
Complexity:
- Time: O(N log K), better than O(N log N) when K << N.
- Space: O(K), better than O(N) for streaming data.
import heapq

def findKthLargest(nums, k):
heap = nums[:k]
heapq.heapify(heap) # O(K)
for i in range(k, len(nums)):
if nums[i] > heap[0]:
heapq.heapreplace(heap, nums[i]) # O(log K)
return heap[0]
Common Heap Pitfalls
- Mixing Logic: Never use `heappop()` on a heap created with `heapify_max()` (pre-3.14). Standard `heappop` uses Min-Heap logic and will corrupt your Max-Heap structure.
- `heapify` In-Place: Remember `n = heapify(nums)` sets `n` to `None`.
- Index Access: Only `heap[0]` is guaranteed. `heap[-1]` is not the maximum/minimum.
Tags: #python #heap #priority-queue #data-structures #complexity #top-k
Linked Lists
Linked List Insertion Logic
When inserting a new node into a single linked list, the order of operations is critical. It can be counter-intuitive because you must update two .next pointers in a specific order to avoid losing the rest of the list.
Implementation
class ListNode:
def __init__(self, val):
self.val = val
self.next = None
# Let prev_node be the node at position i - 1
def add_node(prev_node, node_to_add):
# 1. First, point the new node to the rest of the list
node_to_add.next = prev_node.next
# 2. Then, point the previous node to the new node
prev_node.next = node_to_add
Why This Order?
Think of it as “securing the rest of the list first”.
- Step 1 (`node_to_add.next = prev_node.next`): You first connect your new node to the "tail" of the list (the part that comes after the insertion point). This ensures you have a pointer to the rest of the list.
- Step 2 (`prev_node.next = node_to_add`): Once the rest of the list is safely "held" by the new node, you can safely update the `prev_node` to point to the new node.
The Pitfall: If you reversed these steps, you would point prev_node to the new node first. But then you would lose the reference to the original prev_node.next, making it impossible to connect your new node to the rest of the list (the rest of the list becomes “orphaned”).
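A quick usage sketch of `add_node`, inserting 2 between 1 and 3:

```python
class ListNode:
    def __init__(self, val):
        self.val = val
        self.next = None

def add_node(prev_node, node_to_add):
    node_to_add.next = prev_node.next  # 1. Secure the rest of the list first
    prev_node.next = node_to_add       # 2. Then link the new node in

# Build 1 -> 3, then insert 2 after the head
head = ListNode(1)
head.next = ListNode(3)
add_node(head, ListNode(2))

curr, vals = head, []
while curr:
    vals.append(curr.val)
    curr = curr.next
print(vals)  # [1, 2, 3]
```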
Linked Lists with Sentinel Nodes
Sentinel nodes (dummy nodes) simplify linked list operations by eliminating edge cases like empty lists or deleting the last node.
Core Concept
- Head Sentinel: `head.next` points to the first "real" node.
- Tail Sentinel: `tail.prev` points to the last "real" node.
- Operations are always O(1) when adding/removing from both ends.
Implementation
class ListNode:
def __init__(self, val):
self.val, self.next, self.prev = val, None, None
# Initialization
head, tail = ListNode(None), ListNode(None)
head.next, tail.prev = tail, head
def add_to_start(node):
node.prev, node.next = head, head.next
head.next.prev = node
head.next = node
def remove_from_start():
if head.next == tail: return # Empty list
to_remove = head.next
to_remove.next.prev = head
head.next = to_remove.next
Why Use Them?
Without sentinels, you’d need if node.next is None checks everywhere. With sentinels, every “real” node is guaranteed to have a neighbor, so node.next.prev or node.prev.next never fails.
Middle of Linked List (Manual Counting)
If you prefer counting nodes/steps manually instead of using Fast and Slow pointers, you can avoid messy if statements for odd/even lengths by using a specific integer division pattern.
The “Step Counting” Pattern
If you count the number of jumps (edges) rather than nodes, you have to handle the parity manually unless you use this formula:
class Solution:
def middleNode(self, head: Optional[ListNode]) -> Optional[ListNode]:
count = 0
curr = head
# Count the "steps" (number of next pointers)
while curr.next:
count += 1
curr = curr.next
# The 'if count % 2 == 0' can be replaced with (count + 1) // 2
mid = (count + 1) // 2
curr = head
for _ in range(mid):
curr = curr.next
return curr
Why (count + 1) // 2?
| Nodes (length) | Jumps (count) | Resulting mid | Target Index |
|---|---|---|---|
| 5 (Odd) | 4 | (4 + 1) // 2 = 2 | 2 (Node 3) |
| 6 (Even) | 5 | (5 + 1) // 2 = 3 | 3 (Node 4) |
This pattern ensures you always hit the “second middle” node for even-length lists as required by most LeetCode-style problems, without needing an explicit if check.
Linked List Reversal Pitfalls (Reverse Sub-list)
When reversing a sub-segment of a linked list (e.g., Reverse Linked List II), there are five critical pitfalls to watch out for.
1. The “Vanishing Head”
Pitfall: Returning the original head pointer.
Why: If left = 1, the first node moves, and head is no longer the start of the list.
Fix: Always use a dummy node (dummy = ListNode(0, head)) and return dummy.next.
2. Reconnection Confusion
Pitfall: Connecting lag.next to the wrong node or creating a cycle.
Why: After the loop, prev is the new head of the sub-segment, and cur is the start of the remaining list.
Fix:
- `lag.next = prev` (Connect the node before the reversal to the new head).
- `left_node.next = cur` (Connect the original start of the sub-segment to the rest of the list).
3. Pointer Ending Positions
Pitfall: Assuming cur is the last node of the reversed segment.
Why: The for loop logic or while cur logic usually pushes cur one step past the segment being processed.
Fix:
- `prev` is on the last processed node (the new head).
- `cur` is on the next unprocessed node (the tail's successor).
4. The lag Initialization
Pitfall: Initializing lag at head.
Why: If left = 1, your while index < left loop won’t run, and lag remains at head. This breaks the connection logic.
Fix: Initialize lag = dummy. This ensures lag is always exactly one node before the reversal start.
5. Variable Scope & Off-by-Ones
Pitfall: Using a temporary variable like after outside the loop.
Why: If the range is small or empty, after might not be defined or might point to an old state.
Fix: Use cur for reconnection instead of the temporary after variable used inside the loop.
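The five pitfalls can be seen together in one hedged sketch of Reverse Linked List II (the names `dummy`, `lag`, `prev`, `cur` follow the discussion; this is one possible implementation, not the only one):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def reverseBetween(head, left, right):
    dummy = ListNode(0, head)        # Pitfall 1: never return the raw head
    lag = dummy                      # Pitfall 4: start at dummy, not head
    for _ in range(left - 1):        # lag ends exactly one node BEFORE the segment
        lag = lag.next
    prev, cur = None, lag.next
    left_node = cur                  # Original segment start; becomes its tail
    for _ in range(right - left + 1):
        nxt = cur.next
        cur.next = prev
        prev, cur = cur, nxt
    # Pitfalls 2/3: prev = new segment head, cur = first node AFTER the segment
    lag.next = prev
    left_node.next = cur
    return dummy.next
```

For `1->2->3->4->5` with `left=2, right=4` this yields `1->4->3->2->5`, and `left=1` is handled with no special case because `lag` starts at the dummy.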
Quiz Question Ideas:
- If reversing nodes 2 to 4, where does `prev` sit after the loop?
- Why is `lag = dummy` safer than `lag = head`?
- What happens if you return `head` when `left = 1`?
Trees & Graphs
The Recursive Reattachment Pattern
This is one of the most critical patterns for Tree problems where you modify the tree structure.
The Problem
In Python, if you pass root.left to a function, you are passing the object it points to. If that function returns a new node, root.left doesn’t magically update to point to it.
The Solution: “Return and Catch”
Every recursive call must return the “root” of its subtree, and the caller must “catch” it and assign it to the correct pointer.
def modifyTree(root):
if not root:
return NewNode() # 1. CREATE
# 2. ASSIGN / CONNECT
root.left = modifyTree(root.left)
root.right = modifyTree(root.right)
# 3. PROPAGATE
return root
Why this is tricky
Most of the time, modifyTree(root.left) returns the exact same node that was already there. It feels like you are doing redundant work by re-assigning root.left = root.left.
However, at the internal leaf/insertion point:
- `modifyTree(None)` returns a brand new `TreeNode`.
- The caller (the parent) does `root.left = [New Node]`.
- This is the only time the pointer actually changes!
Where you’ll use this
- BST Insert: `root.left = insert(root.left, val)`
- BST Delete: `root.left = delete(root.left, val)`
- Invert Binary Tree: `root.left, root.right = invert(root.right), invert(root.left)`
- Pruning: `root.left = prune(root.left)`
Mental Checklist
- Does my base case return a node?
- Am I assigning the result of the recursive call to `root.left` or `root.right`?
- Am I returning `root` at the end of the function?
Rule of Thumb: If you are changing where a pointer points, you probably need `root.left = recurse(...)`.
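A concrete sketch of the pattern for BST insertion (assuming a standard `TreeNode` class):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def insert(root, val):
    if not root:
        return TreeNode(val)                   # 1. CREATE at the empty spot
    if val < root.val:
        root.left = insert(root.left, val)     # 2. CATCH and re-attach
    else:
        root.right = insert(root.right, val)
    return root                                # 3. PROPAGATE the (unchanged) root

root = None
for v in [5, 3, 8]:
    root = insert(root, v)  # Even the top-level call must "catch" the result
print(root.val, root.left.val, root.right.val)  # 5 3 8
```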
BFS Pattern: Layer-by-Layer Traversal
Most Breadth-First Search (BFS) implementations use a deque from collections to achieve O(1) popleft() operations. The “Level-by-Level” variation is critical for problems requiring distance or layer processing.
The Template
from collections import deque
def bfs_traversal(root):
if not root:
return
queue = deque([root])
while queue:
# 1. Capture the exact number of nodes in the CURRENT layer
nodes_in_current_level = len(queue)
# [Optional] Logic that happens once per level (e.g., depth tracking)
for _ in range(nodes_in_current_level):
node = queue.popleft()
# 2. Logic for the INDIVIDUAL node
print(node.val)
# 3. Queue up the NEXT level
if node.left:
queue.append(node.left)
if node.right:
queue.append(node.right)
Why len(queue) inside the while loop?
The for _ in range(nodes_in_current_level) loop ensures that you process precisely one “generation” of nodes at a time. Without this, you wouldn’t be able to distinguish between levels, which is required for problems like Level Order Traversal or Right Side View.
When to use
- Shortest Path in an unweighted graph/grid.
- Level Order Traversal (Binary Tree).
- Multi-source BFS (e.g., Rotting Oranges).
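For the multi-source case, the same template works by seeding the queue with every source at distance 0. A hedged sketch on a small grid (the function name and grid are illustrative):

```python
from collections import deque

def multi_source_distances(grid):
    """Distance from every cell to its nearest 0-cell, via layered BFS."""
    rows, cols = len(grid), len(grid[0])
    dist = [[-1] * cols for _ in range(rows)]
    q = deque()
    # Seed ALL sources at once, each at distance 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

print(multi_source_distances([[0, 1], [1, 1]]))  # [[0, 1], [1, 2]]
```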
DFS Pattern: Iterative Stack Traversal
For Depth-First Search (DFS), while recursion is common, an iterative approach using a stack is often safer for very deep trees (avoiding RecursionError) and is a standard interview pattern.
The Template
def dfs_iterative(root):
if not root:
return
# 1. Initialize the stack with an array containing the root
stack = [root]
while stack:
# 2. Pop the LATEST added node (LIFO)
node = stack.pop()
# 3. Process the node
print(node.val)
# 4. Push children onto the stack
# To maintain the same order as recursive DFS (Left then Right),
# we push Right THEN Left because it's a stack.
if node.right:
stack.append(node.right)
if node.left:
stack.append(node.left)
Key Differences from BFS
- Data Structure: Uses a standard Python list `[]` as a Stack (O(1) `pop()`) instead of a `deque` (O(1) `popleft()`).
- Order of Children: We push children in reverse order (Right then Left) if we want the Left child to be the first one popped and processed next.
- Initialization: Just like BFS, we initialize the structure with `[root]`, but the behavior changes entirely based on whether we use `pop()` or `popleft()`.
When to use
- Preorder Traversal (Iterative).
- Path-finding where you want to go as deep as possible before backtracking.
- When you want to avoid recursion limits.
BFS Invariant: Cleaning up Layer-by-Layer Logic
The Problem
Many BFS implementations over-complicate things by manually tracking levels, using multiple passes, or duplicating logic.
Key Insight
In any layer-by-layer BFS (using len(q)), the last level you process is the deepest level.
Instead of tracking max_level or running a first pass to find depth:
- Initialize `ans = 0` (or `level_sum`) inside the `while q:` loop but outside the `for _ in range(len(q)):` loop.
- Process the level.
- When the queue is empty, the `ans` from the last finished iteration is your result.
BFS Solution (Cleanest Canonical Version)
from collections import deque
def deepest_leaves_sum(root):
if not root:
return 0
q = deque([root])
while q:
level_sum = 0 # <--- Reset for EVERY level
for _ in range(len(q)):
node = q.popleft()
level_sum += node.val
if node.left: q.append(node.left)
if node.right: q.append(node.right)
# When loop finishes, level_sum holds the LAST level's total
return level_sum
Mental Model to Remember
BFS invariant: “Each loop iteration = one tree level” “The last computed level sum = deepest leaves sum”
Benefits
- Single traversal: O(N)
- Minimal state: No level counters or `max_depth` variables.
- Obvious Intent: The code structure mirrors the problem's logic.
Advanced Patterns (Intervals, Sliding Window)
Interval Sorting
When dealing with interval problems (e.g., Meeting Rooms, Merge Intervals, Interval Scheduling), the sorting criteria is critical.
1. Sort by Start Time (Default)
Use this for Merge Intervals or Meeting Rooms. It helps you process intervals as they “arrive”.
# Python sorts by the first element of the sub-lists by default
intervals.sort()
# Explicitly:
intervals.sort(key=lambda x: x[0])
2. Sort by End Time
Use this for Interval Scheduling / Maximum Non-overlapping Intervals. Picking the interval that finishes earliest (greedy) leaves the most space for others.
# MUST use key=lambda or itemgetter
intervals.sort(key=lambda x: x[1])
# Or using itemgetter (slightly faster for large lists)
from operator import itemgetter
intervals.sort(key=itemgetter(1))
Pitfalls
- The Lambda Syntax: `sorted(arr, lambda x: x[1])` will error. You must use the `key=` keyword argument: `sorted(arr, key=lambda x: x[1])`.
- In-place vs. New List: `intervals.sort()` modifies the list in place and returns `None`. `sorted(intervals)` returns a new sorted list.
- Comparing Start with End: In overlap checks, always compare the current start with the previous end.
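A minimal Merge Intervals sketch that applies the sort-by-start rule and the "compare current start with previous end" overlap check:

```python
def merge(intervals):
    intervals.sort()  # Sort by start time, in place
    merged = []
    for start, end in intervals:
        # Overlap test: current START vs. previous END
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # Extend the last interval
        else:
            merged.append([start, end])
    return merged

print(merge([[1, 3], [2, 6], [8, 10]]))  # [[1, 6], [8, 10]]
```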
Efficient Sliding Window (Fixed Size k)
The “Elegant” Refactor
The most idiomatic way to write a fixed-size sliding window in Python is to iterate through the list once and use the loop index as your “right” boundary. This eliminates manual index incrementing and awkward “peek-ahead” logic.
def maxSum(nums: List[int], k: int):
# 1. Initial window sum
cur_sum = sum(nums[:k])
max_sum = cur_sum
# 2. Start from the first element AFTER the initial window
for i in range(k, len(nums)):
# Slide: Add the new element, subtract the one that fell off
cur_sum += nums[i] - nums[i - k]
if cur_sum > max_sum:
max_sum = cur_sum
return max_sum
Why this version feels better:
- No "Off-by-One" Anxiety: By using `range(k, len(nums))`, you eliminate the need to check `r + 1`. The loop naturally stops when the data ends.
- Simplified Pointers: You only manage one index (`i`). The "left" side of your window is always just `i - k`.
- Readability: It's immediately clear that you are processing the array from the k-th element to the end.
The Analogy
That while (r + 1) < len(nums) check feels a bit like trying to look over a fence while standing on your tiptoes—it works, but it’s not the most comfortable position. Letting the for loop handle the pointer management for you is much more ergonomic.
Edge Cases
In a real-world scenario or technical interview, always add a quick check:
if not nums or k <= 0:
return 0
This prevents the code from exploding if you receive an empty list or a window size of zero.
Range for Sliding Window
# For window of size k: last valid start = len(nums) - k
# Use range(len(nums) - k + 1) to include all starting positions
for i in range(len(nums) - k + 1):
window = nums[i:i+k]
Dummy Pointers for Traversal
When traversing a linked list, use a dummy pointer (often named curr or dummy) to iterate through the nodes instead of moving the head pointer itself.
Importance
Moving the head pointer during a traversal causes you to lose the reference to the start of the list. By using a dummy pointer, you preserve the head reference so you can return it or traverse the list again later.
Example
def get_sum(head):
ans = 0
# Use a dummy pointer for traversal
curr = head
while curr:
ans += curr.val
curr = curr.next
# We still have the 'head' pointer at the start of the list
return ans
Fast and Slow Pointers
The fast and slow pointer technique (also known as Tortoise and Hare) is a common pattern for linked list problems, such as finding the middle of a list or detecting a cycle.
Implementation
# head is the head node of a linked list
def fn(head):
slow = head
fast = head
while fast and fast.next:
# Do something here
slow = slow.next
fast = fast.next.next
Why the fast.next Check?
The reason we need the while condition to check for both fast and fast.next is to prevent an AttributeError:
- If `fast` is `None`, the loop stops (handles even-length lists or empty lists).
- If `fast` is the final node, then `fast.next` is `None`. Trying to access `fast.next.next` would result in an error (e.g., `AttributeError: 'NoneType' object has no attribute 'next'`). Checking `fast.next` ensures we only advance `fast` when it is safe to skip two nodes ahead.
Common Pitfalls (Manual Counting)
Initially, you might try to find the middle by counting nodes first. This is prone to “off-by-one” errors and messy if statements.
The “Step Count” Trap
If you count “steps” (using while curr.next), you end up with (length - 1) counts.
count = 0
while dummy.next: # Counting jumps, not nodes
count += 1
dummy = dummy.next
# Wrong approach for even-length lists (returns first middle):
mid = count // 2
# Correct approach without using an 'if':
mid = (count + 1) // 2
The “Node Count” Solution
Counting total nodes (using while curr) makes the math much cleaner:
count = 0
while curr: # Counting nodes
count += 1
curr = curr.next
# Works for both odd and even lists (returns second middle):
mid = count // 2
Using Fast and Slow Pointers avoids this entire counting overhead and math logic.
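Putting the template to work, the middle node falls out directly; this sketch returns the second middle for even-length lists, matching the counting approach above:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def middle_node(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next       # Moves 1 step
        fast = fast.next.next  # Moves 2 steps
    return slow                # When fast hits the end, slow is at the middle

# 1 -> 2 -> 3 -> 4: returns the SECOND middle (3)
head = ListNode(1, ListNode(2, ListNode(3, ListNode(4))))
print(middle_node(head).val)  # 3
```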
Recursion & Caching
Nested Functions & Scope (Closures)
In Python, an internal (nested) function has access to the variables defined in its parent function’s scope. This is called a closure.
The Tip: Don’t Re-Pass Outer Arguments
Many people waste time passing variables like target, k, or graph into their helper dfs function. If those variables don’t change, you don’t need to pass them!
Redundant Passing
def solution(root, target):
def dfs(node, target): # <--- target is redundant
if not node: return
if node.val == target:
# logic
pass
dfs(node.left, target)
dfs(node.right, target)
dfs(root, target)
Clean & Fast
def solution(root, target):
def dfs(node): # <--- target is inherited from outer scope
if not node: return
if node.val == target:
# logic
pass
dfs(node.left)
dfs(node.right)
dfs(root)
Important: Mutation vs Access
- Accessing: You can read any outer variable for free.
- Mutating: If the outer variable is a list or dict, you can mutate it (e.g., `res.append(val)`) WITHOUT `nonlocal`.
- Updating: If you want to reassign an outer variable (e.g., `count = count + 1`), you have two choices:
  - The `nonlocal` keyword: Declare `nonlocal count` inside the nested function.
  - The `self` pattern (Recommended): Use a class member. This is often cleaner and avoids scope confusion.
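A minimal sketch of the `nonlocal` option, for contrast with the `self` pattern (function names are illustrative):

```python
def count_sevens(values):
    count = 0
    def visit(v):
        nonlocal count  # Required because we REASSIGN the outer variable
        if v == 7:
            count += 1
    for v in values:
        visit(v)
    return count

print(count_sevens([7, 1, 7, 3]))  # 2
```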
The “Self” Pattern
In LeetCode, since your code is inside a Solution class, you can anchor state to the instance. This is often more “elegant” than nonlocal.
class Solution:
def solve(self, root):
self.count = 0 # Anchor state to the instance
def dfs(node):
if not node: return
if node.val == 7:
self.count += 1 # No nonlocal needed!
dfs(node.left)
dfs(node.right)
dfs(root)
return self.count
Interview Why
During an interview, typing target or res 5-6 extra times in your recursion adds up and increases the chance of a typo. Keep your internal signatures as small as possible—usually just node and any state that actually changes per call (like curr_sum or depth).
Built-in Memoization with @cache
When writing recursive functions (especially in dynamic programming problems like “Decode Ways” or “Fibonacci”), you risk hitting exponential time complexities because of redundant subtree calculations (e.g. O(2^n)).
Python provides a built-in memory/cache notebook called @cache.
Usage
from functools import cache
class Solution:
@cache
def dfs(self, s: str) -> int:
if not s:
return 1
# ... logic ...
By adding @cache directly above the recursive function, it will save the results of past function calls and reuse them if the same arguments are seen again. This drops the complexity from O(2^n) to O(n).
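You can watch the cache at work with `cache_info()`, which is available because `@cache` is `functools.lru_cache(maxsize=None)` under the hood. A Fibonacci sketch:

```python
from functools import cache

@cache
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))                    # 832040
print(fib.cache_info().misses)    # 31 -- only 31 distinct states ever computed
```

Without the decorator, `fib(30)` would make over a million recursive calls; with it, each of the 31 states (n = 0..30) is computed exactly once.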
Minimum Depth of Binary Tree Pitfall
The Conceptual Trap
When calculating the minimum depth of a tree, a common mistake is to treat a missing child (None) as a path of depth 0.
Why the standard recursion fails
# INCORRECT LOGIC
def minDepth(root):
if not root: return 0
return min(minDepth(root.left), minDepth(root.right)) + 1
In a tree where a node has only one child (e.g., 1 -> 2), the logic above will:
- See
root.rightisNone-> return0. - Take
min(left_depth, 0) + 1. - Conclude the min depth is
1(as if the root itself was a leaf).
The “Interview Gold” Rule
A leaf is a node with no left AND no right children. A missing child is an empty set of paths, not a path of length 0.
Correct Conceptual Logic
- Both children exist: Take
min(left, right) + 1. - Only one child exists: You must follow that path. Return
1 + depth_of_existing_child. - No children (Leaf): Return
1. - No root: Return
0.
Key Insight for Interviews
“I must filter out sentinel values (0 for None) before using min(). In maxDepth, this doesn’t matter because 0 never wins against a positive depth, but in minDepth, the sentinel ‘cheats’ the comparison.”
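The corrected conceptual rules translate directly into code; a sketch with a minimal `TreeNode` for illustration:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def minDepth(root):
    if not root:
        return 0
    # A missing child is NOT a path of depth 0: follow the existing side
    if not root.left:
        return 1 + minDepth(root.right)
    if not root.right:
        return 1 + minDepth(root.left)
    return 1 + min(minDepth(root.left), minDepth(root.right))

# 1 -> 2 (right child only): min depth is 2, not 1
print(minDepth(TreeNode(1, None, TreeNode(2))))  # 2
```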
The @cache Decorator (Instant Memoization)
In Python, you can convert a slow recursive function into a fast Dynamic Programming (DP) solution by adding a single line. This is the ultimate “cheat code” for top-down DP.
The “Cheat Code”
Instead of manually creating a memo dictionary and checking if state in memo, just use @cache.
from functools import cache
@cache
def solve(i, j):
# Standard recursive logic...
return result
Phrase -> Logic -> Code
- Phrase: “Just remember what you did so you don’t do it again.”
- The WHY: Recursion without memoization is exponential (O(2^n)) because it re-solves the same subproblems. `@cache` automatically stores function arguments as keys and results as values in a hidden dictionary, reducing the work to one evaluation per distinct state (e.g. O(N * M) for a two-argument DP).
- The Code:
from functools import cache

class Solution:
    def climbStairs(self, n: int) -> int:
        @cache
        def dp(i):
            if i <= 2:
                return i
            return dp(i - 1) + dp(i - 2)
        return dp(n)
Pitfalls & Requirements
- Hashable Arguments: All arguments to the function (e.g., `i`, `j`, `state_tuple`) must be hashable (integers, strings, tuples). You cannot pass a `list` or `set` directly; convert them to a `tuple` or `frozenset` first.
- Recursion Limit: Large constraints might hit Python's default recursion limit (usually 1000). Use `sys.setrecursionlimit()` if needed.
- Python Version: `@cache` was added in Python 3.9. For older versions, use `@lru_cache(None)`.
When to use it?
Whenever you are writing Top-Down DP or DFS with memoization. It keeps the code clean and lets you focus on the transition logic rather than state management.
Tags: #python #dp #memoization #trick