Recursive Algorithms for Tree Traversal
Recursion is a natural fit for data structures that are themselves defined recursively. A tree, for instance, is a node connected to smaller trees. A linked list is a node connected to a smaller linked list. This self-similar structure is exactly why recursive algorithms work so well here: the structure of the code mirrors the structure of the data.
That said, recursive solutions come with real trade-offs in time and space complexity, particularly around call stack usage. This section covers the major applications of recursion in trees, graphs, and linked lists, along with how to analyze their efficiency.
Recursive algorithms for tree traversal
Tree traversals visit every node in a tree exactly once. The three classic orderings differ only in when you process the current node relative to its children:
- Preorder (root-left-right): Visit the root first, then recursively traverse the left subtree, then the right subtree. Useful for creating a copy of the tree or generating a prefix expression.
- Inorder (left-root-right): Recursively traverse the left subtree, visit the root, then traverse the right subtree. On a binary search tree (BST), this visits nodes in sorted (ascending) order.
- Postorder (left-right-root): Recursively traverse the left subtree, then the right subtree, then visit the root. This is the right choice when you need to process children before their parent, such as when deleting or freeing nodes.
All three traversals follow the same recursive pattern. The only thing that changes is where the "visit" step happens relative to the two recursive calls.
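The shared pattern is easiest to see side by side. Here is a minimal sketch in Python, assuming a simple Node class (the names Node, val, left, and right are illustrative, not fixed by the text); in Python the empty tree is represented by None rather than null:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def preorder(node, out):
    if node is None:
        return
    out.append(node.val)          # visit BEFORE the recursive calls
    preorder(node.left, out)
    preorder(node.right, out)

def inorder(node, out):
    if node is None:
        return
    inorder(node.left, out)
    out.append(node.val)          # visit BETWEEN the recursive calls
    inorder(node.right, out)

def postorder(node, out):
    if node is None:
        return
    postorder(node.left, out)
    postorder(node.right, out)
    out.append(node.val)          # visit AFTER the recursive calls
```

On the small BST with root 2, left child 1, and right child 3, preorder yields [2, 1, 3], inorder yields the sorted order [1, 2, 3], and postorder yields [1, 3, 2].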

Recursion in tree and graph problems
Height calculation of a binary tree
The height of a tree is the number of edges on the longest path from the root to a leaf. Recursion handles this cleanly because the height of any node depends on the heights of its subtrees:
- If the node is null (empty tree), return -1 (or 0 if you define height as the number of nodes on the longest path rather than edges; be consistent with your course's convention).
- Recursively calculate the height of the left subtree.
- Recursively calculate the height of the right subtree.
- Return 1 + max(left subtree height, right subtree height).
This runs in O(n) time since every node is visited once.
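The steps above translate almost line for line into code. A minimal sketch, using the edge-counting convention (empty tree has height -1) and an illustrative Node class:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def height(node):
    # Base case: an empty tree has -1 edges on its longest path.
    if node is None:
        return -1
    # Each node's height is one more than its taller subtree's height.
    return 1 + max(height(node.left), height(node.right))
```

A single leaf has height 0, and a root with a grandchild has height 2, matching the edge count.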
Shortest path finding using recursive DFS
Recursive depth-first search can find paths in a graph by exploring as deep as possible before backtracking:
- From the starting vertex, recursively explore each unvisited neighbor, building up the current path.
- When you reach the target vertex, compare the current path length to the shortest found so far and update if shorter.
- Pruning optimization: If the current path length already exceeds the best known shortest path, stop exploring that branch early.
Note that DFS is not the most efficient way to find shortest paths in general. BFS finds shortest paths in unweighted graphs in O(V + E) time, and algorithms like Dijkstra's handle weighted graphs better. Recursive DFS for shortest paths is mainly useful as a learning exercise or in constrained scenarios.
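The backtracking-with-pruning idea can be sketched as follows. This assumes the graph is given as an adjacency dict mapping each vertex to a list of neighbors (a representation chosen for illustration), and measures path length in edges:

```python
def shortest_path_dfs(graph, start, target):
    best = {"path": None}  # best complete path found so far

    def dfs(vertex, path, visited):
        # Pruning: a partial path at least as long as the best complete
        # path can never be extended into something shorter.
        if best["path"] is not None and len(path) >= len(best["path"]):
            return
        if vertex == target:
            best["path"] = path[:]  # record a copy of the current path
            return
        for nxt in graph.get(vertex, []):
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                dfs(nxt, path, visited)
                path.pop()          # backtrack: undo before trying the
                visited.remove(nxt) # next neighbor

    dfs(start, [start], {start})
    return best["path"]
```

For example, in the graph {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}, the call shortest_path_dfs(graph, 'A', 'D') returns a three-vertex path from 'A' to 'D'.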

Recursion for data structure implementation
Recursive linked list operations
Since a linked list is a node followed by a smaller linked list, many operations translate directly into recursive solutions:
- Insertion: Recurse through the list until you reach the desired position, then create and link the new node.
- Deletion: Recurse until you find the target node, then update the previous node's pointer to skip over it.
- Reversal: Recurse to the end of the list, then on each return, flip the next pointer so each node points back to the one before it. The original tail becomes the new head (returned up through every call).
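Reversal is the trickiest of the three, so here is a sketch of just that operation, assuming a minimal singly linked node class (the ListNode name is illustrative):

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse(head):
    # Base cases: an empty or single-node list is already reversed.
    if head is None or head.next is None:
        return head
    new_head = reverse(head.next)  # recurse all the way to the tail
    head.next.next = head          # flip the pointer on the way back
    head.next = None               # old head becomes the new tail
    return new_head
```

Note that new_head is the original tail, and every level of the recursion simply passes it back unchanged.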
Recursive construction of a binary tree from a sorted array
Building a balanced BST from a sorted array is a classic divide-and-conquer application:
- If the subarray is empty, return null (base case).
- Find the middle element and create a new node with that value (this becomes the root of the current subtree).
- Recursively build the left subtree from the left half of the subarray (elements before the middle).
- Recursively build the right subtree from the right half (elements after the middle).
Because you split the array in half each time, the resulting tree is balanced, and the algorithm runs in O(n) time.
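The steps above can be sketched as follows. Recursing on index bounds rather than array slices keeps the total work at O(n), since slicing would copy elements at every level; the Node class is the same illustrative one used for traversals:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def sorted_array_to_bst(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr)               # [lo, hi) is the current subarray
    if lo >= hi:
        return None                 # base case: empty subarray
    mid = (lo + hi) // 2
    node = Node(arr[mid])           # middle element becomes the root
    node.left = sorted_array_to_bst(arr, lo, mid)       # left half
    node.right = sorted_array_to_bst(arr, mid + 1, hi)  # right half
    return node
```

An inorder traversal of the resulting tree reproduces the original sorted array, which is a quick sanity check that the BST property holds.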
Efficiency of recursive algorithms
Time complexity analysis
To determine the time complexity of a recursive algorithm:
- Write the recurrence relation that describes the running time. For example, traversing a balanced binary tree gives T(n) = 2T(n/2) + O(1).
- Solve the recurrence using the Master Theorem, recursion tree method, or substitution method.
- Express the result in Big O notation (e.g., O(n), O(n log n)).
Space complexity analysis
Recursive algorithms use stack space proportional to the maximum depth of recursion:
- Each recursive call adds a frame to the call stack, consuming memory for local variables, parameters, and the return address.
- The maximum recursion depth determines the stack space. For a balanced binary tree, that's O(log n). For a skewed tree or a linked list, it's O(n).
- Deep recursion on large inputs can cause a stack overflow if the recursion depth exceeds the system's stack size limit.
When analyzing space complexity, account for both the call stack and any additional data structures your algorithm uses. The total space is the sum of both.
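You can observe the depth limit directly. A minimal sketch in Python, where a chain of recursive calls simulates recursing down a fully skewed tree (CPython's default limit is roughly 1000 frames, though the exact value varies by environment):

```python
import sys

def depth_of_chain(n):
    # Each call adds one stack frame, just like descending one level
    # of a skewed tree or one node of a linked list.
    if n == 0:
        return 0
    return 1 + depth_of_chain(n - 1)

depth_of_chain(500)        # fine: well under the default limit
try:
    depth_of_chain(10**6)  # far deeper than the stack allows
except RecursionError:
    print("recursion depth exceeded the interpreter's stack limit")
```

This is why production code often converts deep recursions into iterative loops with an explicit stack: the explicit stack lives on the heap, which is far larger than the call stack.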