Recursion vs. iteration: time complexity

Recursion and iteration often share the same asymptotic complexity. For example, a memoized recursive factorial that caches results in a Python dict (which has amortized O(1) insert/update/delete times) runs in the same O(n) order as the basic iterative solution.
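A minimal sketch of that comparison (the function names are mine, not from the original):

```python
cache = {0: 1}

def factorial_memo(n):
    # Dict lookups and inserts are amortized O(1), so filling the cache
    # up to n costs O(n) total: the same order as the loop below.
    if n not in cache:
        cache[n] = n * factorial_memo(n - 1)
    return cache[n]

def factorial_iter(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_memo(5), factorial_iter(5))  # → 120 120
```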

 
Some say that recursive code is more "compact" and simpler to understand than the equivalent loop. The sections below weigh that claim against the run-time costs.

Recursion is slower than iteration, since it has the overhead of maintaining and updating the call stack; in terms of (asymptotic) time complexity, the two are usually the same. We can choose between recursion and iteration by considering time complexity and the size of the code. Some problems are hard no matter what: the Towers of Hanoi, a puzzle consisting of three poles and a number of disks of different sizes that can slide onto any pole, has exponential complexity regardless of the algorithm used. Iteration can at times lead to algorithms that are difficult to understand but are easily expressed via recursion, and the reverse also holds: iteration is better suited for problems that can be solved by performing the same operation multiple times on a single input. Recursion will use more stack space when you have many items to traverse, and reading it requires first grasping the concept of a function calling itself.

Why, then, is recursion so praised despite typically using more memory and not being any faster than iteration? Written naively it can be far worse: a naive approach to calculating Fibonacci numbers recursively yields a time complexity of O(2^n) and uses up much more memory due to the calls added to the stack, versus an iterative approach whose time complexity is O(n). To analyze a recursive algorithm, write down its recurrence, identify a pattern in the sequence of terms, and simplify the recurrence relation to obtain a closed-form expression for the number of operations performed. Tail-recursion optimization can essentially eliminate any noticeable run-time difference, because it turns the whole call sequence into a jump.
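The exponential bound for the Towers of Hanoi is visible directly in code: moving n disks means moving n-1 disks twice plus one extra move, so T(n) = 2T(n-1) + 1 = 2^n - 1 moves. A minimal sketch (the pole labels are arbitrary):

```python
def hanoi(n, src, aux, dst, moves):
    # Move n disks from src to dst using aux as the spare pole,
    # recording each move as a (from, to) pair.
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)  # clear the top n-1 disks out of the way
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)  # put the n-1 disks back on top

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # 2**3 - 1 = 7 moves
```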
Consider factorial. The base case is n = 0, where the result is 1 immediately; the recursive step is n > 0, where we compute the result with the help of a recursive call to obtain (n-1)!, then complete the computation by multiplying by n. A recursive function solves a problem by calling a copy of itself on smaller subproblems of the original; iteration instead repeats a loop body and terminates when the condition in the loop fails. The rationale behind "recursive is slower than iterative" is the overhead of the recursive stack: saving and restoring the environment between calls. Recursive functions can be inefficient in space as well as time, since intermediate results accumulate on the system's stack, whereas a simple loop needs only O(1), constant, extra space; the looping version, on the other hand, often has a larger amount of code.

The asymptotic picture is usually unchanged by the choice. In quicksort, the partition process is the same in both the recursive and iterative versions, and the O(N log N) behavior typical of sorting algorithms such as quicksort, merge sort, and heapsort holds either way. A recursive tree traversal uses O(h) memory, where h is the depth of the tree. Occasionally a recursive formulation leads to lower computational complexity than the obvious non-recursive one: compare insertion sort with the recursive merge sort. Lisp was set up for recursion from the start, and functional languages in general tend to encourage it. Finally, tail-call optimization applies when the recursive call is the very last thing in the function; a compiler can then reuse the frame instead of growing the stack.
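The base case and recursive step described above translate directly (a minimal sketch):

```python
def factorial(n):
    # Base case: 0! is defined to be 1.
    if n == 0:
        return 1
    # Recursive step: obtain (n-1)!, then complete the computation
    # by multiplying by n.
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```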
Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit call stack, while iteration can be replaced with tail recursion. For factorial, the idea is to use one more argument and accumulate the factorial value in the second argument; we add an accumulator as an extra argument to make the function tail recursive, and for an input of 5 the result is 120 either way. In both cases (recursion or iteration) there will be some load on the system as n grows, since each recursive frame consumes extra memory for local variables and the address of the caller; with constant-time arithmetic, both versions perform O(n) operations, and whenever you want the time taken by an algorithm, time complexity is the right measure.

The same equivalence appears elsewhere. Euclid's algorithm, an efficient method for finding the GCD (greatest common divisor) of two integers, has natural recursive and iterative forms. In quicksort, after the first pass you have two partitions, each of size about n/2; each subsequent pass has more partitions, but the partitions are smaller. An iterative breadth-first traversal uses a queue to maintain the current nodes, while a recursive version may use any structure to persist them. And for naive Fibonacci, the time taken to calculate fib(n) is equal to the sum of the times taken to calculate fib(n-1) and fib(n-2), which is precisely the recurrence one solves to find its complexity. (In Python, the big_O module can even estimate the time complexity of code from its execution time.)
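The accumulator version can be sketched as follows. (Note: CPython does not perform tail-call optimization, so this still grows the stack; a language with TCO would compile it to a loop.)

```python
def factorial_tail(n, acc=1):
    # The accumulator carries the running product, so the recursive
    # call is the last action the function performs: tail position.
    if n == 0:
        return acc
    return factorial_tail(n - 1, acc * n)

print(factorial_tail(5))  # → 120
```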
By examining the structure of the recursion tree, we can determine the number of recursive calls made and the work done in each. Iterative time complexity is fairly easy to calculate by comparison: count the number of times the loop body gets executed. For example, summing the integers 0 through n with a loop, as in this tri function (Scala):

    def tri(n: Int): Int = {
      var result = 0
      for (count <- 0 to n)
        result = result + count
      result
    }

has runtime complexity O(n), because we are required to iterate n times. Every recursive algorithm can be converted into an iterative algorithm that simulates a stack on which recursive function calls are executed; the iterative function then runs in the same frame throughout. Going the other way, the naive recursive Fibonacci is the cautionary example: its time complexity is O(2^n), though this is only a rough upper bound, because each call spawns two more.
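The recursion tree's growth can be made concrete by counting calls (a small sketch; the counter is my addition):

```python
call_count = 0

def fib(n):
    # Naive recursive Fibonacci; every call with n >= 2 spawns two more.
    global call_count
    call_count += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
print(call_count)  # 177 calls for n = 10; roughly doubles as n grows by 1
```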
Readability cuts both ways. Personally, I find it much harder to debug typical "procedural" code: there is a lot of bookkeeping going on, as the evolution of all the variables has to be kept in mind. In data structures and algorithms, iteration and recursion are two fundamental problem-solving approaches, and the use of either depends on the problem, its complexity, and performance. Iteration is generally faster, and some compilers will actually convert certain recursive code into iteration. Recursion, depending on the language, is likely to use the stack that programs always have, whereas a manual stack structure would require dynamic memory allocation; each recursive frame consumes extra memory for local variables and the address of the caller. Thus factorial utilizing recursion has O(N) time complexity with O(n) auxiliary space due to the recursion call stack, against the loop's O(1). Counting operations works for loops too: in the bubble sort algorithm there are two kinds of tasks, comparisons and swaps, and tallying them gives the familiar O(n^2). Likewise, since mid is calculated for every iteration or recursion of a binary search, we are dividing the array in half each time, whether we loop or recurse.

One common recursive definition of the Fibonacci sequence sets Fib(n) = 1 when n == 0 or n == 1, and Fib(n) = Fib(n-1) + Fib(n-2) otherwise. As a smaller first example, though, consider a function findR that finds the smallest element in mylist between the indices first and last.
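The original's findR listing was elided, so this is a plausible reconstruction (the exact signature is my assumption):

```python
def findR(mylist, first, last):
    # Recursively find the smallest element of mylist[first..last], inclusive.
    if first == last:              # base case: one element is its own minimum
        return mylist[first]
    rest_min = findR(mylist, first + 1, last)
    return mylist[first] if mylist[first] < rest_min else rest_min

print(findR([7, 3, 9, 1, 4], 0, 4))  # → 1
```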
It turns out there exists an iterative version of merge sort with the same time complexity but even better, O(1), space complexity. To handle divide-and-conquer recurrences in general, the master theorem solves relations of the form T(n) = aT(n/b) + f(n); other methods to achieve similar objectives are the iteration method (repeated substitution) and the recursion-tree method, in which you draw a recursive tree for the given recurrence relation and sum the work per level. By contrast, the time complexity of a simple implementation of shell sort is O(n^2), obtained by plain loop counting, though it may vary with the gap sequence.

Deep recursion also has a failure mode iteration lacks: stack overflow. It happens because the amount of stack space allocated to each process is limited, and far less than the amount of heap space allocated to it. A function that calls itself directly or indirectly is called a recursive function, and such function calls are called recursive calls; in a recursive step, we compute the result with the help of one or more recursive calls to this same function, but with the inputs somehow reduced in size or complexity, closer to a base case. In a functional language the shape is explicit. Suppose we have a recursive function over integers (OCaml):

    let rec f_r n = if n = 0 then i else op n (f_r (n - 1))

Here i is the base-case value and op combines n with the result for n - 1. Iterative functions instead explicitly manage the memory allocation for their partial results.
We can count operations exactly. Take the recursive factorial:

    factorial(n):
      if n is 0 return 1
      return n * factorial(n-1)

Then we notice that factorial(0) is only a comparison (1 unit of time), while factorial(n) is 1 comparison, 1 multiplication, 1 subtraction, and the time for factorial(n-1). From the above analysis we can write T(n) = T(n-1) + 3 with T(0) = 1, which solves to O(n). For the naive recursive Fibonacci the count explodes instead: each call of the function creates two more calls (each call does exactly one addition, or returns 1), so the time complexity is O(2^n), and even if we don't store any value, the call stack makes the space complexity O(n). The deeper problem is that the same subproblem is computed twice for each recursive call, which an optimized divide-and-conquer or memoized solution avoids. For the iterative approach, by contrast, the amount of space required is the same for fib(6) and fib(100).

Traversals show the same trade. Recursion is natural for depth-first search, but it is stack based, and the stack is always a finite resource; a preorder traversal can equally be written iteratively with collections.deque as an explicit stack, and a deque performs better than a set or a list in those kinds of cases.
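The truncated preorder3 snippet can be reconstructed along these lines (the Node class and child layout are my assumptions, since the original was cut off):

```python
from collections import deque

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def preorder3(initial_node):
    # Iterative preorder: a deque used as an explicit stack replaces recursion.
    stack = deque([initial_node])
    order = []
    while stack:
        node = stack.pop()
        order.append(node.value)
        # Push children right-to-left so the leftmost is processed first.
        stack.extend(reversed(node.children))
    return order

tree = Node(1, [Node(2, [Node(4), Node(5)]), Node(3)])
print(preorder3(tree))  # → [1, 2, 4, 5, 3]
```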
When the condition that marks the end of the recursion is met, the stack is then unraveled from the bottom to the top, so factorialFunction(1) is evaluated first, and factorialFunction(5) is evaluated last. When you're k levels deep, you've got k stack frames, so the space complexity ends up being proportional to the depth you have to search. Recursion does reduce problem complexity in one sense: it solves complex problems by breaking them down into smaller and smaller pieces. But it pays in overhead, since the repetitive calling of the same function increases the running time, and a simulated stack is not free either; in that version, for each item, a call to st_push is needed and then another to st_pop. For a loop, the first step of complexity analysis is simply to determine the number of iterations it performs; the iterative solution here uses O(1) space.
Note: to prevent integer overflow we use M = L + (H - L)/2 to calculate the middle element, instead of M = (H + L)/2. Iterative complexity is then easy to read off by counting loop executions; code of the shape

    for (int i = 0; i < m; i++)
      for (int j = 0; j < n; j++)
        // your code

has runtime complexity O(m*n), because you have two nested loops. If an algorithm consists of consecutive phases, the total time complexity is the largest time complexity of a single phase, and every recursive function should have at least one base case, though there may be multiple. Beware non-termination in both styles: iteration uses the CPU cycles again and again when an infinite loop occurs, just as unbounded recursion exhausts the stack. Amortization matters too: a full traversal by repeated "find next node" is not O(n log n) overall, because even though the work of finding the next node is O(log n) in the worst case for an AVL tree (for a general binary tree it is even O(n)), the total over all nodes is still O(n). Recursion is also natural for hierarchical data: the Java library represents the file system using java.io.File, and a filesystem consists of named files and directories. For contrast, radix sort: if the maximum length of the elements to sort is known and the basis is fixed, then the time complexity is O(n).
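A sketch of the iterative binary search with the overflow-safe midpoint. (Python integers don't overflow, but the form carries over to C or Java, where it matters; the array is the example used later in the text.)

```python
def binary_search(arr, x):
    # Iterative binary search; returns the index of x in sorted arr, or -1.
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = low + (high - low) // 2  # overflow-safe form of (low + high) // 2
        if arr[mid] == x:
            return mid
        if arr[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([5, 6, 77, 88, 99], 88))  # → 3
```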
Constant factors matter even when the asymptotics agree. In one comparison, the recursive approach traversed the huge array three times and, on top of that, removed an element every time, which itself takes O(n) time since all other 999 elements need to be shifted in memory, whereas the iterative approach traversed the input array only once, doing some operations at every iteration; unsurprisingly, the recursive approach took far longer to finish. Both approaches create repeated patterns of computation. Iteration is the process of repeatedly executing a set of instructions until the condition controlling the loop becomes false; a recursive call instead means leaving the current invocation on the stack and calling a new one, so a recursive process takes non-constant (typically linear) space. We don't consider these factors in asymptotic analysis, but they are why recursion has a large amount of overhead as compared to iteration, and why we mostly prefer recursion only when there is no concern about time complexity and the size of the code is small.
Analyzing recursion is different from analyzing iteration because n (and the other local variables) change each time, and it might be hard to catch this behavior; per pass, a loop costs only a single conditional jump and some bookkeeping for the loop counter. The two idioms can be summarized as follows. Iteration: "solve a problem by performing the same operation multiple times." Recursion: "solve a large problem by breaking it up into smaller and smaller pieces until you can solve it; combine the results." To read recursive code, first you have to grasp the concept of a function calling itself on a subset of its original argument; second, you have to understand the difference between the base case and the recursive step, and things get way more complex when there are multiple recursive calls. Function calls also involve overheads like storing activation records, which is why the speed of recursion is slow while iteration, being sequential, is easier to debug; whenever you get an option to choose between recursion and iteration, the pragmatic default is iteration. In the worst case of a halving search, we are left with a single element at one far end of the array, which is where the log factor comes from. For recurrences like factorial's, you can count the operations exactly by unrolling and evaluating with repeated multiply-and-add steps; this way of solving such equations is called Horner's method. And unlike the naive recursive method, the time complexity of the tabulated Fibonacci code is linear and takes much less time to compute, as its loop runs from 2 to n.
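Horner's method itself is short enough to show in full, here applied to evaluating a polynomial (the coefficient order is my convention):

```python
def horner(coeffs, x):
    # Evaluate a polynomial at x by repeated multiply-and-add.
    # coeffs run from the highest-degree term down, so
    # [2, -3, 1] means 2*x**2 - 3*x + 1.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([2, -3, 1], 4))  # 2*16 - 3*4 + 1 → 21
```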
If the shortness of the code is the issue rather than the time complexity, it is better to use recursion; the major driving factor for choosing recursion over an iterative approach is complexity, in the sense of how involved the code would otherwise be. Dynamic programming abstracts away from the specific implementation, which may be either recursive or iterative (with loops and a table); in DP, we find solutions for subproblems before building solutions for larger subproblems, and for Fibonacci we first create an array f to save the values already computed, setting f[1] and f[2] to 1. The time complexity of a method may vary depending on whether it is implemented using recursion or iteration, but here both the memoized and tabulated forms are linear. Recursion trees aid in analyzing the time complexity of recursive algorithms: draw the tree for the recurrence, count the nodes and the work at each level, count the total number of nodes in the last level and calculate its cost separately, then sum up the cost of all the levels. To visualize the execution of a recursive function, trace each call with its modified set of inputs until it reaches a base case. For the Fibonacci sequence you can start at the bottom with f(1) and f(2) and work up (the iterative method), or start at the top, working down from f(n) (the recursive method); graphs of the two compare their time and space (memory) complexity, and the call trees show which elements are calculated.
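A bottom-up (iterative) Fibonacci with a table, a minimal sketch using the f[1] = f[2] = 1 convention from the text:

```python
def fib(n):
    # Bottom-up dynamic programming: fill a table of computed values.
    if n <= 2:
        return 1
    f = [0] * (n + 1)
    f[1] = f[2] = 1
    for i in range(3, n + 1):   # the loop runs up to n, so time is O(n)
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

print(fib(10))  # → 55
```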
Since this is the first value of the list, it would be found in the first iteration. mat mul(m1,m2)in Fig. Functional languages tend to encourage recursion. Iteration produces repeated computation using for loops or while. the search space is split half. There are two solutions for heapsort: iterative and recursive. Given an array arr = {5,6,77,88,99} and key = 88; How many iterations are. Recursion $&06,*$&71HZV 0DUFK YRO QR For any problem, if there is a way to represent it sequentially or linearly, we can usually use. However, as for the Fibonacci solution, the code length is not very long. Iteration is almost always the more obvious solution to every problem, but sometimes, the simplicity of recursion is preferred. Tail recursion is a special case of recursion where the recursive function doesn’t do any more computation after the recursive function call i. Our iterative technique has an O(N) time complexity due to the loop's O(N) iterations (N). There are O(N) recursive calls in our recursive approach, and each call uses O(1) operations. But it has lot of overhead. There are many other ways to reduce gaps which leads to better time complexity. Learn more about recursion & iteration, differences, uses. It is faster because an iteration does not use the stack, Time complexity. Where have I gone wrong and why is recursion different from iteration when analyzing for Big-O? recursion; iteration; big-o; computer-science; Share. As such, the time complexity is O(M(lga)) where a= max(r). The total time complexity is then O(M(lgmax(m1))). The recursive version’s idea is to process the current nodes, collect their children and then continue the recursion with the collected children. Introduction This reading examines recursion more closely by comparing and contrasting it with iteration. Because of this, factorial utilizing recursion has an O time complexity (N). But, if recursion is written in a language which optimises the. 
The time complexity of recursion can be found by expressing the cost of the nth recursive call in terms of the previous calls and solving the recurrence; timing how long a program takes to compute the 8th versus the 80th versus the 800th Fibonacci number makes the growth concrete. Recursion is less common in C than in functional languages, but still very useful and powerful, and needed for some problems. It adds clarity and sometimes reduces the time needed to write and debug code, but it doesn't necessarily reduce space requirements or speed of execution; that's why we sometimes need to convert recursive algorithms to iterative ones. A recursive implementation and an iterative implementation do the same exact job, but the way they do the job is different: recursion often results in relatively short code but uses more memory when running, because all the call levels accumulate on the stack, while iteration executes the same code multiple times with changed values of some variables, maybe better approximations or whatever else. Some conversions are ingenious. In Morris traversal, we first create links to the inorder successor and print the data using these links, and finally revert the changes to restore the original tree, giving an inorder walk with no stack at all.
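A sketch of Morris inorder traversal along those lines (the Node class is my assumption; the collected list stands in for printing):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def morris_inorder(root):
    # Inorder traversal in O(1) extra space: temporarily thread each left
    # subtree's rightmost node to the current node, then undo the change.
    out, cur = [], root
    while cur:
        if cur.left is None:
            out.append(cur.val)
            cur = cur.right
        else:
            pred = cur.left
            while pred.right is not None and pred.right is not cur:
                pred = pred.right
            if pred.right is None:
                pred.right = cur     # create the link to the inorder successor
                cur = cur.left
            else:
                pred.right = None    # revert the change, restoring the tree
                out.append(cur.val)
                cur = cur.right
    return out

tree = Node(4, Node(2, Node(1), Node(3)), Node(5))
print(morris_inorder(tree))  # → [1, 2, 3, 4, 5]
```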
With memoization, each call in our recursive technique consumes O(1) operations after its first evaluation, and there are O(N) recursive calls overall, or O(NW) states in the knapsack problem. A graph plotting the naive recursive approach's time complexity against the dynamic programming approach's makes the gap vivid: exponential versus linear, and as N changes, the space and memory used by the iterative version remain the same. Recursion shines in scenarios where the problem is recursive, such as traversing a DOM tree or a file directory; the Tower of Hanoi, for example, is more easily solved using recursion. At the other extreme, the addition of two scalar variables has no recursive structure at all. We often come across this question of whether to use recursion or iteration, and measurement settles it: when I compared the solution times for the two cases, recursive and iterative, I got different results even though the runtime complexity of both algorithms was still O(n). As one commenter put it, "That said, I find it to be an elegant solution :)" (Martin Jespersen).
To summarize the Fibonacci case: because each call of the function creates two more calls, the time complexity is O(2^n), and even if we don't store any value, the call stack makes the space complexity O(n). The recursive function is of exponential time complexity, whereas the iterative one is linear, and the difference between O(n) and O(2^n) is gigantic, which makes the naive recursive method way slower. Using a dict in Python (which has amortized O(1) insert/update/delete times) for memoization restores the same O(n) order as the basic iterative solution. Note that a function which calls itself recursively only once per invocation is still O(n), like the loop it replaces; in that case our most costly operation is assignment. A more complete explanation of the recursive Fibonacci's complexity can be derived from its recurrence, but the simplest check is empirical: you should be able to time the execution of each of your methods and find out how much faster one is than the other.
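A quick way to run that timing comparison (function names are mine; absolute times will vary by machine, so no expected figures are shown):

```python
import time

def fib_rec(n):
    # Naive recursion: O(2^n) time, O(n) call-stack space.
    return n if n < 2 else fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    # Iteration: O(n) time, O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for f in (fib_rec, fib_iter):
    start = time.perf_counter()
    result = f(28)
    elapsed = time.perf_counter() - start
    print(f"{f.__name__}: fib(28) = {result} in {elapsed:.4f}s")
```

Even at n = 28 the gap is already dramatic; pushing the recursive version much past n = 35 quickly becomes impractical.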
If you are using a functional language, go with recursion; the original Lisp was truly a functional language, and it is partly a matter of how a language processes the code, since some compilers transform a recursion into a loop in the emitted binary. As a rule of thumb, when calculating recursive runtimes, use the formula branches^depth: a function with two branches per call to depth n gives O(2^n). Exponential! For practice with the iteration method (also known as the iterative method, backwards substitution, substitution method, or iterative substitution), use the sum of the first n integers as the example and unroll its recurrence. Composition rules help too: a routine like countBinarySubstrings(), which calls a helper isValid() n times, costs n times the helper's cost, and two nested O(n) loops give O(n * n) = O(n^2). One caution applies equally to both styles: if the limiting criteria are not met, a while loop or a recursive function will never converge, leading to a break in program execution. And a single point of comparison has a bias towards its own use case; in this one, iteration is much faster. Naive sorts like bubble sort and insertion sort are inefficient, hence we use more efficient algorithms such as quicksort and merge sort, each of which has both recursive and iterative renderings.
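The sum-of-the-first-n-integers exercise, written both ways (a minimal sketch):

```python
def sum_rec(n):
    # Recursive sum of the first n integers: T(n) = T(n-1) + O(1),
    # so O(n) time and O(n) stack space.
    return 0 if n == 0 else n + sum_rec(n - 1)

def sum_iter(n):
    # Iterative version: the same O(n) time, but O(1) space.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_rec(100), sum_iter(100))  # both → 5050
```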
Recursion takes additional stack space: each recursive call adds a frame, thus potentially giving a larger space complexity than the equivalent iteration. An algorithm with a loop has the same time complexity as the same algorithm written with recursion, since the asymptotic count of operations does not change, but because of the constant-factor stack overhead, iteration is usually the more efficient of the two in practice.