Sorting Algorithm

A sorting algorithm is used to arrange elements of an array/list in a specific order. For example,

Sorting an array

Here, we are sorting the array in ascending order.

There are various sorting algorithms that can be used to perform this operation, and we can choose one based on the requirement.

Complexity of Sorting Algorithms

The efficiency of any sorting algorithm is determined by the time complexity and space complexity of the algorithm.

1. Time Complexity: Time complexity refers to the time taken by an algorithm to complete its execution with respect to the size of the input. It can be represented in different forms:

  • Big-O notation (O)
  • Omega notation (Ω)
  • Theta notation (Θ)

2. Space Complexity: Space complexity refers to the total amount of memory used by the algorithm for a complete execution. It includes both the auxiliary memory and the input.

The auxiliary memory is the additional space occupied by the algorithm apart from the input data. Usually, auxiliary memory is considered for calculating the space complexity of an algorithm.

Let's see a complexity analysis of different sorting algorithms.

Sorting Algorithm | Time Complexity (Best) | Time Complexity (Worst) | Time Complexity (Average) | Space Complexity
Bubble Sort | n | n² | n² | 1
Selection Sort | n² | n² | n² | 1
Insertion Sort | n | n² | n² | 1
Merge Sort | n log n | n log n | n log n | n
Quicksort | n log n | n² | n log n | log n
Counting Sort | n+k | n+k | n+k | max
Radix Sort | n+k | n+k | n+k | max
Bucket Sort | n+k | n² | n | n+k
Heap Sort | n log n | n log n | n log n | 1
Shell Sort | n log n | n² | n log n | 1

(Here n is the number of elements, k is the range of the input values, and max is the largest element in the array.)
Stability of Sorting Algorithms

A sorting algorithm is considered stable if two or more items with the same value keep their relative order even after sorting.

For example, in the image below, there are two items with the same value 3. An unstable sorting algorithm may or may not preserve the original order of the two 3s, so both outcomes are possible.

Unstable Sorting

However, after a stable sorting algorithm, there is always one possibility where the positions are maintained as in the original array.

Stable sorting
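To see stability concretely, here is a small Python example using the built-in sorted(), which is stable (the data values are made up for illustration):

    # Sort pairs by their first component only. The two items
    # with key 3 keep their original relative order because
    # sorted() is a stable sort.
    items = [(3, "a"), (1, "b"), (3, "c"), (2, "d")]
    print(sorted(items, key=lambda pair: pair[0]))
    # Output: [(1, 'b'), (2, 'd'), (3, 'a'), (3, 'c')]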

Here's a table showing the stability of different sorting algorithms.

Sorting Algorithm | Stable
Bubble Sort | Yes
Selection Sort | No
Insertion Sort | Yes
Merge Sort | Yes
Quicksort | No
Counting Sort | Yes
Radix Sort | Yes
Bucket Sort | Yes
Heap Sort | No
Shell Sort | No


Sorting Algorithms Explained with Examples in JavaScript, Python, Java, and C++

What is a Sorting Algorithm?

Sorting algorithms are a set of instructions that take an array or list as an input and arrange the items into a particular order.

Sorts are most commonly performed in numerical or alphabetical (lexicographical) order, and can be ascending (A-Z, 0-9) or descending (Z-A, 9-0).

Why Sorting Algorithms are Important

Since they can often reduce the complexity of a problem, sorting algorithms are very important in computer science. These algorithms have direct applications in searching algorithms, database algorithms, divide and conquer methods, data structure algorithms, and many more.

Trade-Offs of Sorting Algorithms

When choosing a sorting algorithm, some questions have to be asked – How big is the collection being sorted? How much memory is available? Does the collection need to grow?

The answers to these questions may determine which algorithm is going to work best for each situation. Some algorithms like merge sort may need a lot of space or memory to run, while insertion sort is not always the fastest, but doesn't require many resources to run.

You should determine what your requirements are, and consider the limitations of your system before deciding which sorting algorithm to use.

Some Common Sorting Algorithms

Some of the most common sorting algorithms are:

  • Selection sort
  • Bubble sort
  • Insertion sort
  • Counting sort
  • Bucket sort

But before we get into each of these, let's learn a bit more about what classifies a sorting algorithm.

Classification of a Sorting Algorithm

Sorting algorithms can be categorized based on the following parameters:

  • The number of swaps or inversions required: This is the number of times the algorithm swaps elements to sort the input. Selection sort requires the minimum number of swaps.
  • The number of comparisons: This is the number of times the algorithm compares elements to sort the input. Using Big-O notation, the comparison-based sorting algorithm examples listed above require at least O(n log n) comparisons in the best case and O(n^2) comparisons in the worst case for most inputs.
  • Whether or not they use recursion: Some sorting algorithms, such as quick sort, use recursive techniques to sort the input. Other sorting algorithms, such as selection sort or insertion sort, use non-recursive techniques. Finally, some sorting algorithms, such as merge sort, make use of both recursive as well as non-recursive techniques to sort the input.
  • Whether they are stable or unstable: Stable sorting algorithms maintain the relative order of elements with equal values, or keys. Unstable sorting algorithms do not maintain the relative order of elements with equal values / keys. For example, imagine you have the input array [1, 2, 3, 2, 4] . And to help differentiate between the two equal values, 2 , let's update them to 2a and 2b , making the input array [1, 2a, 3, 2b, 4] . Stable sorting algorithms will maintain the order of 2a and 2b , meaning the output array will be [1, 2a, 2b, 3, 4] . Unstable sorting algorithms do not maintain the order of equal values, and the output array may be [1, 2b, 2a, 3, 4] . Insertion sort, merge sort, and bubble sort are stable. Heap sort and quick sort are unstable.
  • The amount of extra space required: Some sorting algorithms can sort a list without creating an entirely new list. These are known as in-place sorting algorithms, and require a constant O(1) extra space for sorting. Meanwhile, out-of-place sorting algorithms create a new list while sorting. Insertion sort and quick sort are in-place sorting algorithms: insertion sort shifts elements within the array, quick sort moves elements around a pivot point, and neither uses a separate array. Merge sort is an example of an out-of-place sorting algorithm, as space the size of the input must be allocated to store the output during the sort process, which requires extra memory.

Bucket Sort

Bucket sort is a sorting algorithm that operates by dividing elements into different buckets and then sorting those buckets individually. Each bucket is sorted using a separate sorting algorithm, like insertion sort, or by applying the bucket sort algorithm recursively.

Bucket sort is mainly useful when the input is uniformly distributed over a range. For example, imagine you have a large array of floating point numbers distributed uniformly between an upper and lower bound.

You could use another sorting algorithm like merge sort, heap sort, or quick sort. However, those algorithms guarantee a best case time complexity of O(n log n).

Using bucket sort, sorting the same array can be completed in O(n) time on average.

Pseudo Code for Bucket Sort:
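A minimal Python sketch of the idea, assuming the input values are uniformly distributed in the half-open interval [0, 1):

    def bucket_sort(array, num_buckets=10):
        # Scatter: place each value into the bucket covering
        # its sub-range of [0, 1)
        buckets = [[] for _ in range(num_buckets)]
        for value in array:
            index = min(int(value * num_buckets), num_buckets - 1)
            buckets[index].append(value)

        # Sort each bucket individually; insertion sort (or a
        # recursive bucket sort) is a common choice here
        for bucket in buckets:
            bucket.sort()

        # Gather: concatenate the sorted buckets in order
        return [value for bucket in buckets for value in bucket]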

Counting Sort

The counting sort algorithm works by first creating a list of the counts or occurrences of each unique value in the list. It then creates a final sorted list based on the list of counts.

One important thing to remember is that counting sort can only be used when you know the range of possible values in the input beforehand.

  • Space complexity: O(k)
  • Best case performance: O(n+k)
  • Average case performance: O(n+k)
  • Worst case performance: O(n+k)
  • Stable: Yes ( k is the range of the elements in the array)
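A minimal Python sketch for non-negative integers (this simple form rebuilds the array from the counts; a prefix-sum formulation is what makes counting sort stable for records with attached data):

    def counting_sort(array):
        # The range of possible values (0..k) must be known,
        # or at least discoverable, in advance
        if not array:
            return array
        k = max(array)

        # Count the occurrences of each unique value
        counts = [0] * (k + 1)
        for value in array:
            counts[value] += 1

        # Rebuild the array in ascending order from the counts
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)
        return result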

Insertion Sort

Insertion sort is a simple sorting algorithm for a small number of elements.

In insertion sort, you compare the key element with the elements before it. If a previous element is greater than the key element, you move that element one position forward.

Start from index 1 and move to the end of the input array, comparing each element with the ones before it. For example, consider the array [ 8 3 5 1 4 2 ].

The algorithm shown below is a slightly optimized version that avoids swapping the key element in every iteration. Instead, the key element is placed into its position at the end of each iteration (step).
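A Python sketch of that optimized version; the key element is written into place once, at the end of each step:

    def insertion_sort(items):
        for i in range(1, len(items)):
            key = items[i]
            j = i - 1
            # Shift larger previous elements one position to the
            # right instead of swapping at every comparison
            while j >= 0 and items[j] > key:
                items[j + 1] = items[j]
                j -= 1
            # Place the key element once, at the end of the step
            items[j + 1] = key
        return items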


Properties:

  • Space Complexity: O(1)
  • Time Complexity: O(n), O(n^2), O(n^2) for best, average, and worst cases respectively.
  • Best Case: array is already sorted
  • Average Case: array is randomly sorted
  • Worst Case: array is sorted in reverse order.
  • Sorting In Place: Yes
  • Stable: Yes

Heapsort

Heapsort is an efficient sorting algorithm based on the use of max/min heaps. A heap is a tree-based data structure that satisfies the heap property: for a max heap, the key of any node is less than or equal to the key of its parent (if it has a parent).

This property can be leveraged to repeatedly extract the maximum element: swap it with the last element of the heap and restore the heap property over the remaining elements using the maxHeapify method, which takes O(log n) time. We perform this operation n times, each time moving the current maximum out of the heap and into the sorted portion of the array. Thus, after n iterations we will have a sorted version of the input array.

The heap can be built within the input array itself, so no separate data structure is required. The algorithm is unstable, which means that when comparing objects with the same key, the original ordering is not preserved.

This algorithm runs in O(n log n) time and O(1) additional space [O(n) including the space required to store the input data] since all operations are performed entirely in place.

The best, worst, and average case time complexity of heapsort is O(n log n). Although heapsort has a better worst-case complexity than quicksort, a well-implemented quicksort runs faster in practice. Because it is a comparison-based algorithm, it can be used for non-numerical data sets as long as an ordering can be defined over the elements.
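An in-place Python sketch; the helper keeps the text's maxHeapify name (as max_heapify), everything else is illustrative:

    def max_heapify(array, heap_size, root):
        # Sift the value at `root` down until the subtree rooted
        # there satisfies the max-heap property
        largest = root
        left, right = 2 * root + 1, 2 * root + 2
        if left < heap_size and array[left] > array[largest]:
            largest = left
        if right < heap_size and array[right] > array[largest]:
            largest = right
        if largest != root:
            array[root], array[largest] = array[largest], array[root]
            max_heapify(array, heap_size, largest)

    def heap_sort(array):
        n = len(array)
        # Build a max heap in place, starting from the last parent
        for i in range(n // 2 - 1, -1, -1):
            max_heapify(array, n, i)
        # Repeatedly swap the maximum to the end of the array and
        # re-heapify the shrunken prefix
        for end in range(n - 1, 0, -1):
            array[0], array[end] = array[end], array[0]
            max_heapify(array, end, 0)
        return array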


Radix Sort

Prerequisite: Counting Sort

QuickSort, MergeSort, and HeapSort are comparison-based sorting algorithms. CountSort is not. It has a complexity of O(n+k), where k is the maximum element of the input array. So, if k is O(n), CountSort becomes linear sorting, which is better than comparison-based sorting algorithms with O(n log n) time complexity.

The idea is to extend the CountSort algorithm to get a better time complexity even when k is on the order of n^2. Here comes the idea of Radix Sort.

The Algorithm:

For each digit i, from the least significant digit to the most significant digit of a number, sort the input array on the ith digit using counting sort. Counting sort is used here because it is a stable sort.

Example: Assume the input array is:

10, 21, 17, 34, 44, 11, 654, 123

Based on the algorithm, we will sort the input array according to the one's digit (least significant digit).

  • 0: 10
  • 1: 21, 11
  • 3: 123
  • 4: 34, 44, 654
  • 7: 17

(all other buckets are empty)

So, the array becomes 10, 21, 11, 123, 34, 44, 654, 17.

Now, we'll sort according to the ten's digit:

  • 1: 10, 11, 17
  • 2: 21, 123
  • 3: 34
  • 4: 44
  • 5: 654

(all other buckets are empty)

Now, the array becomes 10, 11, 17, 21, 123, 34, 44, 654.

Finally, we sort according to the hundred's digit (most significant digit):

  • 0: 010, 011, 017, 021, 034, 044
  • 1: 123
  • 6: 654

(all other buckets are empty)

The array becomes 10, 11, 17, 21, 34, 44, 123, 654, which is sorted. This is how our algorithm works.
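A Python sketch for non-negative integers, applying a stable counting sort to one digit at a time (function names are illustrative):

    def counting_sort_by_digit(array, exp):
        # Stable counting sort keyed on the digit selected by `exp`
        # (`exp` is 1 for the ones digit, 10 for the tens, ...)
        counts = [0] * 10
        for value in array:
            counts[(value // exp) % 10] += 1
        # Turn the counts into ending positions for each digit
        for i in range(1, 10):
            counts[i] += counts[i - 1]
        # Walk the input backwards so equal digits keep their order
        output = [0] * len(array)
        for value in reversed(array):
            digit = (value // exp) % 10
            counts[digit] -= 1
            output[counts[digit]] = value
        return output

    def radix_sort(array):
        if not array:
            return array
        exp = 1
        # Process digits from least to most significant
        while max(array) // exp > 0:
            array = counting_sort_by_digit(array, exp)
            exp *= 10
        return array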


Selection Sort

Selection Sort is one of the simplest sorting algorithms. This algorithm gets its name from the way it iterates through the array: it selects the current smallest element, and swaps it into place.

Here's how it works:

  • Find the smallest element in the array and swap it with the first element.
  • Find the second smallest element and swap it with the second element in the array.
  • Find the third smallest element and swap it with the third element in the array.
  • Repeat the process of finding the next smallest element and swapping it into the correct position until the entire array is sorted.

But, how would you write the code for finding the index of the second smallest value in an array?

An easy way is to notice that the smallest value has already been swapped into index 0, so the problem reduces to finding the smallest element in the array starting at index 1.

Selection sort always takes the same number of key comparisons — N(N − 1)/2.
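A minimal Python sketch of the procedure just described; each pass scans the unsorted suffix for its minimum and swaps it into place:

    def selection_sort(array):
        n = len(array)
        for i in range(n - 1):
            # Find the index of the smallest element in array[i:]
            min_index = i
            for j in range(i + 1, n):
                if array[j] < array[min_index]:
                    min_index = j
            # Swap the smallest remaining element into position i
            array[i], array[min_index] = array[min_index], array[i]
        return array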

Properties:

  • Space Complexity: O(1) (auxiliary)
  • Time Complexity: O(n^2)
  • Sorting in Place:   Yes
  • Stable:   No

Bubble Sort

Just like the way bubbles rise from the bottom of a glass, bubble sort is a simple algorithm that sorts a list, allowing either lower or higher values to bubble up to the top. The algorithm traverses a list and compares adjacent values, swapping them if they are not in the correct order.

With a worst-case complexity of O(n^2), bubble sort is very slow compared to other sorting algorithms like quicksort. The upside is that it is one of the easiest sorting algorithms to understand and code from scratch.

From a technical perspective, bubble sort is reasonable for sorting small arrays, especially when executing sorting algorithms on computers with remarkably limited memory resources.

First pass through the list:

  • Starting with [4, 2, 6, 3, 9] , the algorithm compares the first two elements in the array, 4 and 2. It swaps them because 2 < 4: [2, 4, 6, 3, 9]
  • It compares the next two values, 4 and 6. As 4 < 6, these are already in order, and the algorithm moves on: [2, 4, 6, 3, 9]
  • The next two values are also swapped because 3 < 6: [2, 4, 3, 6, 9]
  • The last two values, 6 and 9, are already in order, so the algorithm does not swap them.

Second pass through the list:

  • 2 < 4, so there is no need to swap positions: [2, 4, 3, 6, 9]
  • The algorithm swaps the next two values because 3 < 4: [2, 3, 4, 6, 9]
  • No swap as 4 < 6: [2, 3, 4, 6, 9]
  • Again, 6 < 9, so no swap occurs: [2, 3, 4, 6, 9]

The list is already sorted, but the bubble sort algorithm doesn't realize this. Rather, it needs to complete an entire pass through the list without swapping any values to know the list is sorted.

Third pass through the list:

  • [2, 3, 4, 6, 9] => [2, 3, 4, 6, 9] (no swaps occur, so the algorithm now knows the list is sorted)

Clearly bubble sort is far from the most efficient sorting algorithm. Still, it's simple to wrap your head around and implement yourself.

  • Space complexity: O(1)
  • Best case performance: O(n)
  • Average case performance: O(n^2)
  • Worst case performance: O(n^2)
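A short Python sketch, including the full-pass-without-swaps check described above:

    def bubble_sort(items):
        n = len(items)
        for i in range(n - 1):
            swapped = False
            # Compare each pair of adjacent values, swapping
            # them when they are out of order
            for j in range(n - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
                    swapped = True
            # An entire pass with no swaps means the list is sorted
            if not swapped:
                break
        return items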

Quick Sort

Quick sort is an efficient divide and conquer sorting algorithm. Its average case time complexity is O(n log(n)), and its worst case time complexity is O(n^2), depending on how the selected pivot element divides the current array into two subarrays.

For instance, the time complexity of quick sort is approximately O(n log(n)) when the selected pivot divides the original array into two nearly equal-sized subarrays.

On the other hand, if the pivot selection consistently produces two subarrays with a large difference in size, quick sort degrades to its worst case time complexity of O(n^2).

The steps involved in Quick Sort are:

  • Choose an element to serve as a pivot, in this case, the last element of the array is the pivot.
  • Partitioning: Rearrange the array so that all elements less than the pivot are to its left, and all elements greater than the pivot are to its right.
  • Call quicksort recursively on the left and right subarrays, taking into account the previous pivot's final position. (A sketch follows this list.)
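A Python sketch of these steps using the last element as the pivot (the helper names are mine, not from the original article):

    def partition(array, low, high):
        # Choose the last element as the pivot and move every
        # value smaller than it to the left side of the range
        pivot = array[high]
        i = low - 1
        for j in range(low, high):
            if array[j] <= pivot:
                i += 1
                array[i], array[j] = array[j], array[i]
        # Put the pivot in its final sorted position
        array[i + 1], array[high] = array[high], array[i + 1]
        return i + 1

    def quick_sort(array, low=0, high=None):
        if high is None:
            high = len(array) - 1
        if low < high:
            # Partition, then recursively sort both sides
            pivot_index = partition(array, low, high)
            quick_sort(array, low, pivot_index - 1)
            quick_sort(array, pivot_index + 1, high)
        return array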


The space complexity of quick sort is O(n). This is an improvement over other divide and conquer sorting algorithms, which take O(n log(n)) space.

Quick sort achieves this by changing the order of elements within the given array. Compare this with the merge sort algorithm which creates 2 arrays, each length n/2 , in each function call.

However, this sorting algorithm does degrade to O(n^2) time if the chosen pivot consistently ends up near the smallest or largest remaining element. This can be overcome by utilizing a random pivot.

Best: n log(n). Average: n log(n). Worst: n^2. Memory: log(n). It's not a stable algorithm, and quicksort is usually done in place with O(log(n)) stack space.


Timsort

Timsort is a fast, stable sorting algorithm with O(n log(n)) complexity.

Timsort is a blend of Insertion Sort and Mergesort. This algorithm is implemented in Java’s Arrays.sort() as well as Python’s sorted() and sort(). The smaller parts are sorted using Insertion Sort and are later merged together using Mergesort.

A quick implementation in Python:
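The run size and helper names below are illustrative rather than from the original article; a compact sketch:

    MIN_RUN = 32

    def merge(a, b):
        # Standard two-way merge of two sorted lists
        result, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                result.append(a[i])
                i += 1
            else:
                result.append(b[j])
                j += 1
        return result + a[i:] + b[j:]

    def tim_sort(array):
        n = len(array)
        # Sort each small run with insertion sort
        for start in range(0, n, MIN_RUN):
            end = min(start + MIN_RUN, n)
            for i in range(start + 1, end):
                key = array[i]
                j = i - 1
                while j >= start and array[j] > key:
                    array[j + 1] = array[j]
                    j -= 1
                array[j + 1] = key
        # Merge runs of doubling size, as in merge sort
        size = MIN_RUN
        while size < n:
            for left in range(0, n, 2 * size):
                mid = min(left + size, n)
                right = min(left + 2 * size, n)
                array[left:right] = merge(array[left:mid], array[mid:right])
            size *= 2
        return array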

Complexity:

Timsort has a stable O(n log(n)) complexity and compares very well with quicksort.

Merge Sort

Merge sort is a divide and conquer algorithm. It divides the input array into two halves, calls itself for the two halves, and then merges the two sorted halves. The major portion of the algorithm is: given two sorted arrays, merge them into a single sorted array. The whole process of sorting an array of N integers can be summarized in three steps:

  • Divide the array into two halves.
  • Sort the left half and the right half using the same recurring algorithm.
  • Merge the sorted halves.

There is something known as the Two Finger Algorithm that helps us merge two sorted arrays together. Using this subroutine and calling the merge sort function on the array halves recursively will give us the final sorted array we are looking for.

Since this is a recursion based algorithm, we have a recurrence relation for it. A recurrence relation is simply a way of representing a problem in terms of its subproblems.

T(n) = 2 * T(n / 2) + O(n)

Putting it in plain English, we break down the subproblem into two parts at every step and we have some linear amount of work that we have to do for merging the two sorted halves together at each step.

The biggest advantage of using merge sort is that the time complexity is only O(n log(n)) to sort an entire array. That is a lot better than the O(n^2) running time of bubble sort or insertion sort.

Before we write code, let us understand how merge sort works with an example.


  • Initially we have an array of 6 unsorted integers Arr(5, 8, 3, 9, 1, 2).
  • We split the array into two halves, Arr1 = (5, 8, 3) and Arr2 = (9, 1, 2).
  • Again, we divide them into two halves: Arr3 = (5, 8), Arr4 = (3), Arr5 = (9, 1) and Arr6 = (2).
  • Again, we divide: Arr7 = (5), Arr8 = (8), Arr9 = (9) and Arr10 = (1). The single-element arrays Arr4 = (3) and Arr6 = (2) need no further splitting.
  • We now compare the elements in these subarrays in order to merge them.

Properties:

  • Space Complexity: O(n)
  • Time Complexity: O(n log(n)). The time complexity of merge sort might not be obvious at first glance. The log(n) factor comes from the recurrence relation mentioned before.
  • Sorting In Place: No, in a typical implementation
  • Parallelizable: Yes (several parallel variants are discussed in the third edition of Cormen, Leiserson, Rivest, and Stein's Introduction to Algorithms.)

Implementation

First we check the length of the array. If it is 1, we simply return the array; this is our base case. Otherwise, we find the middle value and divide the array into two halves, then sort both halves with recursive calls to the merge sort function.

When we merge the two halves, we store the result in an auxiliary array. We compare the starting element of the left array to the starting element of the right array. Whichever is smaller is pushed into the results array and removed from its respective array (using the shift() method in JavaScript). If values remain in either the left or right array once the other is exhausted, we simply concatenate them onto the end of the result.
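A Python rendering of this approach, with list.pop(0) playing the role of JavaScript's shift():

    def merge_sort(items):
        # Base case: an array of length 0 or 1 is already sorted
        if len(items) <= 1:
            return items

        # Split the array into halves and sort each recursively
        middle = len(items) // 2
        left = merge_sort(items[:middle])
        right = merge_sort(items[middle:])

        # Repeatedly move the smaller of the two front elements
        # into the result, then concatenate whatever remains
        result = []
        while left and right:
            if left[0] <= right[0]:
                result.append(left.pop(0))
            else:
                result.append(right.pop(0))
        return result + left + right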


Merging Two Sorted Arrays

Let us consider array A = {2, 5, 7, 8, 9, 12, 13} and array B = {3, 5, 6, 9, 15}; we want the merged array C to be in ascending order as well.
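Using the merge_sort() sketch above (the variable names A, B, and C come from the text):

    A = [2, 5, 7, 8, 9, 12, 13]
    B = [3, 5, 6, 9, 15]

    # Sorting the concatenation gives the merged result; a direct
    # two-way merge of the already-sorted inputs would also work
    C = merge_sort(A + B)
    print(C)  # [2, 3, 5, 5, 6, 7, 8, 9, 9, 12, 13, 15]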


Sorting Algorithms in Python

Sorting is a basic building block that many other algorithms are built upon. It’s related to several exciting ideas that you’ll see throughout your programming career. Understanding how sorting algorithms in Python work behind the scenes is a fundamental step toward implementing correct and efficient algorithms that solve real-world problems.

In this tutorial, you’ll learn:

  • How different sorting algorithms in Python work and how they compare under different circumstances
  • How Python’s built-in sort functionality works behind the scenes
  • How different computer science concepts like recursion and divide and conquer apply to sorting
  • How to measure the efficiency of an algorithm using Big O notation and Python’s timeit module

By the end of this tutorial, you’ll understand sorting algorithms from both a theoretical and a practical standpoint. More importantly, you’ll have a deeper understanding of different algorithm design techniques that you can apply to other areas of your work. Let’s get started!

The Importance of Sorting Algorithms in Python

Sorting is one of the most thoroughly studied algorithms in computer science. There are dozens of different sorting implementations and applications that you can use to make your code more efficient and effective.

You can use sorting to solve a wide range of problems:

Searching: Searching for an item on a list works much faster if the list is sorted.

Selection: Selecting items from a list based on their relationship to the rest of the items is easier with sorted data. For example, finding the kth-largest or smallest value, or finding the median value of the list, is much easier when the values are in ascending or descending order.

Duplicates: Finding duplicate values on a list can be done very quickly when the list is sorted.

Distribution: Analyzing the frequency distribution of items on a list is very fast if the list is sorted. For example, finding the element that appears most or least often is relatively straightforward with a sorted list.

From commercial applications to academic research and everywhere in between, there are countless ways you can use sorting to save yourself time and effort.

Python's Built-In Sorting Algorithm

The Python language, like many other high-level programming languages, offers the ability to sort data out of the box using sorted(). Here's an example of sorting an integer array:
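A quick REPL session (the values are arbitrary):

    >>> array = [8, 2, 6, 4, 5]
    >>> sorted(array)
    [2, 4, 5, 6, 8]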

You can use sorted() to sort any list as long as the values inside are comparable.

Note: For a deeper dive into how Python’s built-in sorting functionality works, check out How to Use sorted() and .sort() in Python and Sorting Data With Python .

The Significance of Time Complexity

This tutorial covers two different ways to measure the runtime of sorting algorithms:

  • For a practical point of view, you’ll measure the runtime of the implementations using the timeit module.
  • For a more theoretical perspective, you’ll measure the runtime complexity of the algorithms using Big O notation .

Timing Your Code

When comparing two sorting algorithms in Python, it's always informative to look at how long each one takes to run. The specific time each algorithm takes will be partly determined by your hardware, but you can still use the proportional time between executions to help you decide which implementation is more time efficient.

In this section, you’ll focus on a practical way to measure the actual time it takes to run to your sorting algorithms using the timeit module. For more information on the different ways you can time the execution of code in Python, check out Python Timer Functions: Three Ways to Monitor Your Code .

Here’s a function you can use to time your algorithms:

In this example, run_sorting_algorithm() receives the name of the algorithm and the input array that needs to be sorted. Here’s a line-by-line explanation of how it works:

Line 8 imports the name of the algorithm using the magic of Python’s f-strings . This is so that timeit.repeat() knows where to call the algorithm from. Note that this is only necessary for the custom implementations used in this tutorial. If the algorithm specified is the built-in sorted() , then nothing will be imported.

Line 11 prepares the call to the algorithm with the supplied array. This is the statement that will be executed and timed.

Line 15 calls timeit.repeat() with the setup code and the statement. This will call the specified sorting algorithm ten times, returning the number of seconds each one of these executions took.

Line 19 identifies the shortest time returned and prints it along with the name of the algorithm.

Note: A common misconception is that you should find the average time of each run of the algorithm instead of selecting the single shortest time. Time measurements are noisy because the system runs other processes concurrently. The shortest time is always the least noisy, making it the best representation of the algorithm’s true runtime.

Here’s an example of how to use run_sorting_algorithm() to determine the time it takes to sort an array of ten thousand integer values using sorted() :

If you save the above code in a sorting.py file, then you can run it from the terminal to see its output.

Remember that the time in seconds of every experiment depends in part on the hardware you use, so you’ll likely see slightly different results when running the code.

Note: You can learn more about the timeit module in the official Python documentation .

Measuring Efficiency With Big O Notation

The specific time an algorithm takes to run isn't enough information to get the full picture of its time complexity. To solve this problem, you can use Big O (pronounced "big oh") notation. Big O is often used to compare different implementations and decide which one is the most efficient, skipping unnecessary details and focusing on what's most important in the runtime of an algorithm.

The time in seconds required to run different algorithms can be influenced by several unrelated factors, including processor speed or available memory. Big O, on the other hand, provides a platform to express runtime complexity in hardware-agnostic terms. With Big O, you express complexity in terms of how quickly your algorithm’s runtime grows relative to the size of the input, especially as the input grows arbitrarily large.

Assuming that n is the size of the input to an algorithm, the Big O notation represents the relationship between n and the number of steps the algorithm takes to find a solution. Big O uses a capital letter “O” followed by this relationship inside parentheses. For example, O(n) represents algorithms that execute a number of steps proportional to the size of their input.

Although this tutorial isn’t going to dive very deep into the details of Big O notation, here are five examples of the runtime complexity of different algorithms:

Big O | Complexity | Description
O(1) | constant | The runtime is constant regardless of the size of the input. Finding an element in a hash table is an example of an operation that can be performed in constant time.
O(n) | linear | The runtime grows linearly with the size of the input. A function that checks a condition on every item of a list is an example of an O(n) algorithm.
O(n²) | quadratic | The runtime is a quadratic function of the size of the input. A naive implementation of finding duplicate values in a list, in which each item has to be checked twice, is an example of a quadratic algorithm.
O(2ⁿ) | exponential | The runtime grows exponentially with the size of the input. These algorithms are considered extremely inefficient. An example of an exponential algorithm is the three-coloring problem.
O(log n) | logarithmic | The runtime grows linearly while the size of the input grows exponentially. For example, if it takes one second to process one thousand elements, then it will take two seconds to process ten thousand, three seconds to process one hundred thousand, and so on. Binary search is an example of a logarithmic runtime algorithm.

This tutorial covers the Big O runtime complexity of each of the sorting algorithms discussed. It also includes a brief explanation of how to determine the runtime on each particular case. This will give you a better understanding of how to start using Big O to classify other algorithms.

Note: For a deeper understanding of Big O, together with several practical examples in Python, check out Big O Notation and Algorithm Analysis with Python Examples .

The Bubble Sort Algorithm in Python

Bubble Sort is one of the most straightforward sorting algorithms. Its name comes from the way the algorithm works: With every new pass, the largest element in the list “bubbles up” toward its correct position.

Bubble sort consists of making multiple passes through a list, comparing elements one by one, and swapping adjacent items that are out of order.

Here’s an implementation of a bubble sort algorithm in Python:

Since this implementation sorts the array in ascending order, each step “bubbles” the largest element to the end of the array. This means that each iteration takes fewer steps than the previous iteration because a continuously larger portion of the array is sorted.

The two nested loops determine the way the algorithm runs through the list. Notice how j initially goes from the first element in the list to the element immediately before the last. During the second iteration, j runs until two items from the last, then three items from the last, and so on. At the end of each iteration, the end portion of the list will be sorted.

As the loops progress, the comparison in the inner loop checks each element against its adjacent value, and the swap on the following line exchanges them if they are in the incorrect order. This ensures a sorted list at the end of the function.

Note: The already_sorted flag in the code above is an optimization to the algorithm, and it's not required in a fully functional bubble sort implementation. However, it allows the function to skip unnecessary steps if the list ends up wholly sorted before the loops have finished.

As an exercise, you can remove the use of this flag and compare the runtimes of both implementations.

To properly analyze how the algorithm works, consider a list with values [8, 2, 6, 4, 5] . Assume you’re using bubble_sort() from above. Here’s a figure illustrating what the array looks like at each iteration of the algorithm:

Bubble Sort Algorithm

Now take a step-by-step look at what’s happening with the array as the algorithm progresses:

The code starts by comparing the first element, 8 , with its adjacent element, 2 . Since 8 > 2 , the values are swapped, resulting in the following order: [2, 8, 6, 4, 5] .

The algorithm then compares the second element, 8 , with its adjacent element, 6 . Since 8 > 6 , the values are swapped, resulting in the following order: [2, 6, 8, 4, 5] .

Next, the algorithm compares the third element, 8 , with its adjacent element, 4 . Since 8 > 4 , it swaps the values as well, resulting in the following order: [2, 6, 4, 8, 5] .

Finally, the algorithm compares the fourth element, 8 , with its adjacent element, 5 , and swaps them as well, resulting in [2, 6, 4, 5, 8] . At this point, the algorithm completed the first pass through the list ( i = 0 ). Notice how the value 8 bubbled up from its initial location to its correct position at the end of the list.

The second pass ( i = 1 ) takes into account that the last element of the list is already positioned and focuses on the remaining four elements, [2, 6, 4, 5] . At the end of this pass, the value 6 finds its correct position. The third pass through the list positions the value 5 , and so on until the list is sorted.

Your implementation of bubble sort consists of two nested for loops in which the algorithm performs n - 1 comparisons, then n - 2 comparisons, and so on until the final comparison is done. This comes at a total of (n - 1) + (n - 2) + (n - 3) + … + 2 + 1 = n(n-1)/2 comparisons, which can also be written as ½n² - ½n.

You learned earlier that Big O focuses on how the runtime grows in comparison to the size of the input. That means that, in order to turn the above equation into the Big O complexity of the algorithm, you need to remove the constants because they don’t change with the input size.

Doing so simplifies the notation to n² - n. Since n² grows much faster than n, this last term can be dropped as well, leaving bubble sort with an average- and worst-case complexity of O(n²).

In cases where the algorithm receives an array that’s already sorted—and assuming the implementation includes the already_sorted flag optimization explained before—the runtime complexity will come down to a much better O(n) because the algorithm will not need to visit any element more than once.

O(n) , then, is the best-case runtime complexity of bubble sort. But keep in mind that best cases are an exception, and you should focus on the average case when comparing different algorithms.

Using your run_sorting_algorithm() from earlier in this tutorial, here’s the time it takes for bubble sort to process an array with ten thousand items. Line 8 replaces the name of the algorithm and everything else stays the same:
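Assuming the same sorting.py script as before:

    if __name__ == "__main__":
        # Generate an array of `ARRAY_LENGTH` items consisting
        # of random integer values between 0 and 1000
        array = [randint(0, 1000) for i in range(ARRAY_LENGTH)]

        # Call the function using the name of the sorting algorithm
        # and the array you just created
        run_sorting_algorithm(algorithm="bubble_sort", array=array)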

You can now run the script to get the execution time of bubble_sort.

It took 73 seconds to sort the array with ten thousand elements. This represents the fastest execution out of the ten repetitions that run_sorting_algorithm() runs. Executing this script multiple times will produce similar results.

Note: A single execution of bubble sort took 73 seconds, but the algorithm ran ten times using timeit.repeat() . This means that you should expect your code to take around 73 * 10 = 730 seconds to run, assuming you have similar hardware characteristics. Slower machines may take much longer to finish.

The main advantage of the bubble sort algorithm is its simplicity . It is straightforward to both implement and understand. This is probably the main reason why most computer science courses introduce the topic of sorting using bubble sort.

As you saw before, the disadvantage of bubble sort is that it is slow, with a runtime complexity of O(n²). Unfortunately, this rules it out as a practical candidate for sorting large arrays.

The Insertion Sort Algorithm in Python

Like bubble sort, the insertion sort algorithm is straightforward to implement and understand. But unlike bubble sort, it builds the sorted list one element at a time by comparing each item with the rest of the list and inserting it into its correct position. This “insertion” procedure gives the algorithm its name.

An excellent analogy to explain insertion sort is the way you would sort a deck of cards. Imagine that you’re holding a group of cards in your hands, and you want to arrange them in order. You’d start by comparing a single card step by step with the rest of the cards until you find its correct position. At that point, you’d insert the card in the correct location and start over with a new card, repeating until all the cards in your hand were sorted.

The insertion sort algorithm works exactly like the example with the deck of cards. Here’s the implementation in Python:
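One way to write it; the line numbers in the walkthrough below assume exactly this layout:

    def insertion_sort(array):
        # Loop from the second element of the array until
        # the last element
        for i in range(1, len(array)):
            # This is the element we want to position in its
            # correct place
            key_item = array[i]

            # Initialize the variable that will be used to
            # find the correct position of the element referenced
            # by `key_item`
            j = i - 1

            # Run through the list of items (the left
            # portion of the array) and find the correct position
            # of the element referenced by `key_item`. Do this only
            # if `key_item` is smaller than its adjacent values.
            while j >= 0 and array[j] > key_item:
                # Shift the value one position to the left
                # and reposition `j` to point to the next element
                # (from right to left)
                array[j + 1] = array[j]
                j -= 1

            # When you finish shifting the elements, you can position
            # `key_item` in its correct location
            array[j + 1] = key_item

        return array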

Unlike bubble sort, this implementation of insertion sort constructs the sorted list by pushing smaller items to the left. Let’s break down insertion_sort() line by line:

Line 4 sets up the loop that determines the key_item that the function will position during each iteration. Notice that the loop starts with the second item on the list and goes all the way to the last item.

Line 7 initializes key_item with the item that the function is trying to place.

Line 12 initializes a variable that will consecutively point to each element to the left of key item . These are the elements that will be consecutively compared with key_item .

Line 18 compares key_item with each value to its left using a while loop, shifting the elements to make room to place key_item .

Line 27 positions key_item in its correct place after the algorithm shifts all the larger values to the right.

Here’s a figure illustrating the different iterations of the algorithm when sorting the array [8, 2, 6, 4, 5] :

Insertion Sort Algorithm

Now here’s a summary of the steps of the algorithm when sorting the array:

The algorithm starts with key_item = 2 and goes through the subarray to its left to find the correct position for it. In this case, the subarray is [8] .

Since 2 < 8 , the algorithm shifts element 8 one position to its right. The resultant array at this point is [8, 8, 6, 4, 5] .

Since there are no more elements in the subarray, the key_item is now placed in its new position, and the final array is [2, 8, 6, 4, 5] .

The second pass starts with key_item = 6 and goes through the subarray located to its left, in this case [2, 8] .

Since 6 < 8 , the algorithm shifts 8 to its right. The resultant array at this point is [2, 8, 8, 4, 5] .

Since 6 > 2 , the algorithm doesn’t need to keep going through the subarray, so it positions key_item and finishes the second pass. At this time, the resultant array is [2, 6, 8, 4, 5] .

The third pass through the list puts the element 4 in its correct position, and the fourth pass places element 5 in the correct spot, leaving the array sorted.

Similar to your bubble sort implementation, the insertion sort algorithm has a couple of nested loops that go over the list. The inner loop is pretty efficient because it only goes through the list until it finds the correct position of an element. That said, the algorithm still has an O(n²) runtime complexity on the average case.

The worst case happens when the supplied array is sorted in reverse order. In this case, the inner loop has to execute every comparison to put every element in its correct position. This still gives you an O(n²) runtime complexity.

The best case happens when the supplied array is already sorted. Here, the inner loop is never executed, resulting in an O(n) runtime complexity, just like the best case of bubble sort.

Although bubble sort and insertion sort have the same Big O runtime complexity, in practice, insertion sort is considerably more efficient than bubble sort. If you look at the implementation of both algorithms, then you can see how insertion sort has to make fewer comparisons to sort the list.

To prove the assertion that insertion sort is more efficient than bubble sort, you can time the insertion sort algorithm and compare it with the results of bubble sort. To do this, you just need to replace the call to run_sorting_algorithm() with the name of your insertion sort implementation:
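In the driver block from before, only the call changes:

    run_sorting_algorithm(algorithm="insertion_sort", array=array)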

You can execute the script as before.

Notice how the insertion sort implementation took around 17 fewer seconds than the bubble sort implementation to sort the same array. Even though they're both O(n²) algorithms, insertion sort is more efficient.

Just like bubble sort, the insertion sort algorithm is very uncomplicated to implement. Even though insertion sort is an O(n²) algorithm, it's also much more efficient in practice than other quadratic implementations such as bubble sort.

There are more powerful algorithms, including merge sort and Quicksort, but these implementations are recursive and usually fail to beat insertion sort when working on small lists. Some Quicksort implementations even use insertion sort internally if the list is small enough to provide a faster overall implementation. Timsort also uses insertion sort internally to sort small portions of the input array.

That said, insertion sort is not practical for large arrays, opening the door to algorithms that can scale in more efficient ways.

The Merge Sort Algorithm in Python

Merge sort is a very efficient sorting algorithm. It’s based on the divide-and-conquer approach, a powerful algorithmic technique used to solve complex problems.

To properly understand divide and conquer, you should first understand the concept of recursion . Recursion involves breaking a problem down into smaller subproblems until they’re small enough to manage. In programming, recursion is usually expressed by a function calling itself.

Note : This tutorial doesn’t explore recursion in depth. To better understand how recursion works and see it in action using Python, check out Thinking Recursively in Python and Recursion in Python: An Introduction .

Divide-and-conquer algorithms typically follow the same structure:

  • The original input is broken into several parts, each one representing a subproblem that’s similar to the original but simpler.
  • Each subproblem is solved recursively.
  • The solutions to all the subproblems are combined into a single overall solution.

In the case of merge sort, the divide-and-conquer approach divides the set of input values into two equal-sized parts, sorts each half recursively, and finally merges these two sorted parts into a single sorted list.

The implementation of the merge sort algorithm needs two different pieces:

  • A function that recursively splits the input in half
  • A function that merges both halves, producing a sorted array

Here’s the code to merge two different arrays:

merge() receives two different sorted arrays that need to be merged together. The process to accomplish this is straightforward:

Lines 4 and 9 check whether either of the arrays is empty. If one of them is, then there’s nothing to merge, so the function returns the other array.

Line 17 starts a while loop that ends whenever the result contains all the elements from both of the supplied arrays. The goal is to look into both arrays and combine their items to produce a sorted list.

Line 21 compares the elements at the head of both arrays, selects the smaller value, and appends it to the end of the resultant array.

Lines 31 and 35 append any remaining items to the result if all the elements from either of the arrays were already used.

With the above function in place, the only missing piece is a function that recursively splits the input array in half and uses merge() to produce the final result:
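A version of that function, reusing merge() from above:

    def merge_sort(array):
        # If the input array contains fewer than two elements,
        # then return it as the result of the function
        if len(array) < 2:
            return array

        midpoint = len(array) // 2

        # Sort the array by recursively splitting the input
        # into two equal halves, sorting each half and merging
        # them together into the final result
        return merge(
            left=merge_sort(array[:midpoint]),
            right=merge_sort(array[midpoint:]))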

Here’s a quick summary of the code:

The if statement at the top acts as the stopping condition for the recursion. If the input array contains fewer than two elements, then the function returns the array. Notice that this condition could be triggered by receiving either a single item or an empty array. In both cases, there's nothing left to sort, so the function should return.

The next statement computes the middle point of the array.

Finally, the return statement calls merge(), passing both sorted halves as the arrays.

Notice how this function calls itself recursively , halving the array each time. Each iteration deals with an ever-shrinking array until fewer than two elements remain, meaning there’s nothing left to sort. At this point, merge() takes over, merging the two halves and producing a sorted list.

Take a look at a representation of the steps that merge sort will take to sort the array [8, 2, 6, 4, 5] :

Merge Sort Algorithm

The figure uses yellow arrows to represent halving the array at each recursion level. The green arrows represent merging each subarray back together. The steps can be summarized as follows:

The first call to merge_sort() with [8, 2, 6, 4, 5] defines midpoint as 2 . The midpoint is used to halve the input array into array[:2] and array[2:] , producing [8, 2] and [6, 4, 5] , respectively. merge_sort() is then recursively called for each half to sort them separately.

The call to merge_sort() with [8, 2] produces [8] and [2] . The process repeats for each of these halves.

The call to merge_sort() with [8] returns [8] since that’s the only element. The same happens with the call to merge_sort() with [2] .

At this point, the function starts merging the subarrays back together using merge() , starting with [8] and [2] as input arrays, producing [2, 8] as the result.

On the other side, [6, 4, 5] is recursively broken down and merged using the same procedure, producing [4, 5, 6] as the result.

In the final step, [2, 8] and [4, 5, 6] are merged back together with merge() , producing the final result: [2, 4, 5, 6, 8] .

To analyze the complexity of merge sort, you can look at its two steps separately:

merge() has a linear runtime. It receives two arrays whose combined length is at most n (the length of the original input array), and it combines both arrays by looking at each element at most once. This leads to a runtime complexity of O(n) .

The second step splits the input array recursively and calls merge() for each half. Since the array is halved until a single element remains, the total number of halving operations performed by this function is log₂n. Since merge() is called for each half, we get a total runtime of O(n log₂n).

Interestingly, O(n log₂n) is the best possible worst-case runtime that can be achieved by a sorting algorithm.

To compare the speed of merge sort with the previous two implementations, you can use the same mechanism as before and replace the name of the algorithm in the call to run_sorting_algorithm():
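    run_sorting_algorithm(algorithm="merge_sort", array=array)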

You can execute the script to get the execution time of merge_sort.

Compared to bubble sort and insertion sort, the merge sort implementation is extremely fast, sorting the ten-thousand-element array in less than a second!

Thanks to its runtime complexity of O(n log₂n), merge sort is a very efficient algorithm that scales well as the size of the input array grows. It's also straightforward to parallelize because it breaks the input array into chunks that can be distributed and processed in parallel if necessary.

That said, for small lists, the time cost of the recursion allows algorithms such as bubble sort and insertion sort to be faster. For example, running an experiment with a list of ten elements shows this in practice.

Both bubble sort and insertion sort beat merge sort when sorting a ten-element list.

Another drawback of merge sort is that it creates copies of the array when calling itself recursively. It also creates a new list inside merge() to sort and return both input halves. This makes merge sort use much more memory than bubble sort and insertion sort, which are both able to sort the list in place.

Due to this limitation, you may not want to use merge sort to sort large lists in memory-constrained hardware.

The Quicksort Algorithm in Python

Just like merge sort, the Quicksort algorithm applies the divide-and-conquer principle to divide the input array into two lists, the first with small items and the second with large items. The algorithm then sorts both lists recursively until the resultant list is completely sorted.

Dividing the input list is referred to as partitioning the list. Quicksort first selects a pivot element and partitions the list around the pivot , putting every smaller element into a low array and every larger element into a high array.

Putting every element from the low list to the left of the pivot and every element from the high list to the right positions the pivot precisely where it needs to be in the final sorted list. This means that the function can now recursively apply the same procedure to low and then high until the entire list is sorted.

Here’s a fairly compact implementation of Quicksort:

Here’s a summary of the code:

Line 6 stops the recursive function if the array contains fewer than two elements.

Line 12 selects the pivot element randomly from the list and proceeds to partition the list.

Lines 19 and 20 put every element that’s smaller than pivot into the list called low .

Lines 21 and 22 put every element that’s equal to pivot into the list called same .

Lines 23 and 24 put every element that’s larger than pivot into the list called high .

Line 28 recursively sorts the low and high lists and combines them along with the contents of the same list.

Here’s an illustration of the steps that Quicksort takes to sort the array [8, 2, 6, 4, 5] :

Quick Sort Algorithm

The yellow lines represent the partitioning of the array into three lists: low , same , and high . The green lines represent sorting and putting these lists back together. Here’s a brief explanation of the steps:

The pivot element is selected randomly. In this case, pivot is 6 .

The first pass partitions the input array so that low contains [2, 4, 5] , same contains [6] , and high contains [8] .

quicksort() is then called recursively with low as its input. This selects a random pivot and breaks the array into [2] as low , [4] as same , and [5] as high .

The process continues, but at this point, both low and high have fewer than two items each. This ends the recursion, and the function puts the array back together. Adding the sorted low and high to either side of the same list produces [2, 4, 5] .

On the other side, the high list containing [8] has fewer than two elements, so the algorithm returns the sorted low array, which is now [2, 4, 5] . Merging it with same ( [6] ) and high ( [8] ) produces the final sorted list.

Why does the implementation above select the pivot element randomly? Wouldn’t it be the same to consistently select the first or last element of the input list?

Because of how the Quicksort algorithm works, the number of recursion levels depends on where pivot ends up in each partition. In the best-case scenario, the algorithm consistently picks the median element as the pivot. That would make each generated subproblem exactly half the size of the previous problem, leading to at most log₂n levels.

On the other hand, if the algorithm consistently picks either the smallest or largest element of the array as the pivot , then the generated partitions will be as unequal as possible, leading to n-1 recursion levels. That would be the worst-case scenario for Quicksort.

As you can see, Quicksort’s efficiency often depends on the pivot selection. If the input array is unsorted, then using the first or last element as the pivot will work the same as a random element. But if the input array is sorted or almost sorted, using the first or last element as the pivot could lead to a worst-case scenario. Selecting the pivot at random makes it more likely Quicksort will select a value closer to the median and finish faster.

Another option for selecting the pivot is to find the median value of the array and force the algorithm to use it as the pivot. This can be done in O(n) time. Although the process is a little bit more involved, using the median value as the pivot for Quicksort guarantees you will have the best-case Big O scenario.

With Quicksort, the input list is partitioned in linear time, O(n), and this process repeats recursively an average of log₂n times. This leads to a final complexity of O(n log₂n).

That said, remember the discussion about how the selection of the pivot affects the runtime of the algorithm. The O(n log₂n) best-case scenario happens when the selected pivot is close to the median of the array, and an O(n²) scenario happens when the pivot is the smallest or largest value of the array.

Theoretically, if the algorithm focuses first on finding the median value and then uses it as the pivot element, then the worst-case complexity will come down to O(n log₂n). The median of an array can be found in linear time, and using it as the pivot guarantees the Quicksort portion of the code will perform in O(n log₂n).

By using the median value as the pivot, you end up with a final runtime of O(n) + O(n log₂n). You can simplify this down to O(n log₂n) because the logarithmic portion grows much faster than the linear portion.

Note: Although achieving O(n log₂n) is possible in Quicksort's worst-case scenario, this approach is seldom used in practice. Lists have to be quite large for the implementation to be faster than a simple randomized selection of the pivot.

Randomly selecting the pivot makes the worst case very unlikely. That makes random pivot selection good enough for most implementations of the algorithm.

By now, you’re familiar with the process for timing the runtime of the algorithm. Just change the name of the algorithm in line 8 :

You can execute the script as you have before:

Not only does Quicksort finish in less than one second, but it’s also much faster than merge sort ( 0.11 seconds versus 0.61 seconds). Increasing the number of elements specified by ARRAY_LENGTH from 10,000 to 1,000,000 and running the script again ends up with merge sort finishing in 97 seconds, whereas Quicksort sorts the list in a mere 10 seconds.

True to its name, Quicksort is very fast. Although its worst-case scenario is theoretically O(n²), in practice, a good implementation of Quicksort beats most other sorting implementations. Also, just like merge sort, Quicksort is straightforward to parallelize.

One of Quicksort's main disadvantages is the lack of a guarantee that it will achieve the average runtime complexity. Although worst-case scenarios are rare, certain applications can't afford to risk poor performance, so they opt for algorithms that stay within O(n log₂n) regardless of the input.

Just like merge sort, Quicksort also trades off memory space for speed. This may become a limitation for sorting larger lists.

A quick experiment sorting a list of ten elements leads to the following results:

The results show that Quicksort also pays the price of recursion when the list is sufficiently small, taking longer to complete than both insertion sort and bubble sort.

The Timsort Algorithm in Python

The Timsort algorithm is considered a hybrid sorting algorithm because it employs a best-of-both-worlds combination of insertion sort and merge sort. Timsort is near and dear to the Python community because it was created by Tim Peters in 2002 to be used as the standard sorting algorithm of the Python language .

The main characteristic of Timsort is that it takes advantage of already-sorted elements that exist in most real-world datasets. These are called natural runs . The algorithm then iterates over the list, collecting the elements into runs and merging them into a single sorted list.

In this section, you’ll create a barebones Python implementation that illustrates all the pieces of the Timsort algorithm. If you’re interested, you can also check out the original C implementation of Timsort .

The first step in implementing Timsort is modifying the implementation of insertion_sort() from before:
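
Here's a sketch of the modified function (a minimal version consistent with the description below; the parameter defaults are chosen so it still works as a plain insertion sort):

    def insertion_sort(array, left=0, right=None):
        # By default, sort the entire array
        if right is None:
            right = len(array) - 1

        # Loop from the second element of the slice until the last
        for i in range(left + 1, right + 1):
            # The element we want to position in its correct place
            key_item = array[i]

            # Walk left through the sorted portion of the slice
            j = i - 1
            while j >= left and array[j] > key_item:
                # Shift the larger value one position to the right
                array[j + 1] = array[j]
                j -= 1

            # Place key_item in its correct location
            array[j + 1] = key_item

        return array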

This modified implementation adds a couple of parameters, left and right , that indicate which portion of the array should be sorted. This allows the Timsort algorithm to sort a portion of the array in place. Modifying the function instead of creating a new one means that it can be reused for both insertion sort and Timsort.

Now take a look at the implementation of Timsort:
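
A sketch of one possible implementation (assuming the merge() helper from the merge sort section earlier, which handles empty inputs):

    def timsort(array):
        min_run = 32
        n = len(array)

        # Slice the array into runs of size `min_run` and sort
        # each run in place with insertion sort
        for i in range(0, n, min_run):
            insertion_sort(array, i, min((i + min_run - 1), n - 1))

        # Merge the sorted runs, doubling the run size each pass,
        # until a single sorted run remains
        size = min_run
        while size < n:
            for start in range(0, n, size * 2):
                midpoint = start + size - 1
                end = min((start + size * 2 - 1), (n - 1))
                merged = merge(array[start:midpoint + 1],
                               array[midpoint + 1:end + 1])
                array[start:start + len(merged)] = merged
            size *= 2

        return array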

Although the implementation is a bit more complex than the previous algorithms, we can summarize it quickly in the following way:

The first for loop creates small slices, or runs, of the array and sorts them using insertion sort. You learned previously that insertion sort is speedy on small lists, and Timsort takes advantage of this. Timsort uses the newly introduced left and right parameters in insertion_sort() to sort the list in place without having to create new arrays like merge sort and Quicksort do.

The while loop that follows merges these smaller runs, with each run being of size 32 initially. With each iteration, the size of the runs is doubled, and the algorithm continues merging these larger runs until a single sorted run remains.

Notice how, unlike merge sort, Timsort merges subarrays that were previously sorted. Doing so decreases the total number of comparisons required to produce a sorted list. This advantage over merge sort will become apparent when running experiments using different arrays.

Finally, the first statement in the function defines min_run = 32. There are two reasons for using 32 as the value here:

Sorting small arrays using insertion sort is very fast, and min_run has a small value to take advantage of this characteristic. Initializing min_run with a value that’s too large will defeat the purpose of using insertion sort and will make the algorithm slower.

Merging two balanced lists is much more efficient than merging lists of disproportionate size. Picking a min_run value that’s a power of two ensures better performance when merging all the different runs that the algorithm creates.

Combining both conditions above offers several options for min_run . The implementation in this tutorial uses min_run = 32 as one of the possibilities.

Note: In practice, Timsort does something a little more complicated to compute min_run . It picks a value between 32 and 64 inclusive, such that the length of the list divided by min_run is exactly a power of 2. If that’s not possible, it chooses a value that’s close to, but strictly less than, a power of 2.
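
For illustration, CPython computes this value with a short loop; here is a sketch of that computation (mirroring the approach described in CPython's listsort.txt, not the code of this tutorial):

    def compute_min_run(n):
        # For n < 64, the whole list is sorted with insertion sort.
        # Otherwise this yields a value between 32 and 64 such that
        # n / min_run is a power of 2, or close to but less than one.
        r = 0
        while n >= 64:
            r |= n & 1  # remember whether any low bit was shifted out
            n >>= 1
        return n + r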

If you’re curious, you can read the complete analysis on how to pick min_run under the Computing minrun section.

On average, the complexity of Timsort is O(n log₂n), just like merge sort and Quicksort. The logarithmic part comes from doubling the size of the run to perform each linear merge operation.

However, Timsort performs exceptionally well on already-sorted or close-to-sorted lists, leading to a best-case scenario of O(n). In this case, Timsort clearly beats merge sort and matches the best-case scenario for Quicksort. But the worst case for Timsort is also O(n log₂n), which surpasses Quicksort's O(n²).

You can use run_sorting_algorithm() to see how Timsort performs sorting the ten-thousand-element array:
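
For example (a sketch, reusing the same __main__ pattern as before):

    if __name__ == "__main__":
        array = [randint(0, 1000) for i in range(ARRAY_LENGTH)]
        run_sorting_algorithm(algorithm="timsort", array=array)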

Now execute the script to get the execution time of timsort:

At 0.51 seconds, this Timsort implementation is a full 0.1 seconds, or 17 percent, faster than merge sort, though it doesn't match the 0.11 seconds of Quicksort. It's also a ridiculous 11,000 percent faster than insertion sort!

Now try to sort an already-sorted list using these four algorithms and see what happens. You can modify your __main__ section as follows:
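
One way to do it (a sketch; the range() call builds an already-sorted array):

    if __name__ == "__main__":
        # An already-sorted array of ARRAY_LENGTH elements
        array = list(range(ARRAY_LENGTH))

        for algorithm in ["insertion_sort", "merge_sort",
                          "quicksort", "timsort"]:
            run_sorting_algorithm(algorithm=algorithm, array=array)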

If you execute the script now, then all the algorithms will run and output their corresponding execution time:

This time, Timsort comes in at a whopping thirty-seven percent faster than merge sort and five percent faster than Quicksort, flexing its ability to take advantage of the already-sorted runs.

Notice how Timsort benefits from two algorithms that are much slower when used by themselves. The genius of Timsort is in combining these algorithms and playing to their strengths to achieve impressive results.

The main disadvantage of Timsort is its complexity. Despite implementing a very simplified version of the original algorithm, it still requires much more code because it relies on both insertion_sort() and merge() .

One of Timsort's advantages is its ability to predictably perform in O(n log₂n) regardless of the structure of the input array. Contrast that with Quicksort, which can degrade down to O(n²). Timsort is also very fast for small arrays because the algorithm turns into a single insertion sort.

For real-world usage, in which it’s common to sort arrays that already have some preexisting order, Timsort is a great option. Its adaptability makes it an excellent choice for sorting arrays of any length.

Sorting is an essential tool in any Pythonista’s toolkit. With knowledge of the different sorting algorithms in Python and how to maximize their potential, you’re ready to implement faster, more efficient apps and programs!

In this tutorial, you learned:

  • How Python’s built-in sort() works behind the scenes
  • What Big O notation is and how to use it to compare the efficiency of different algorithms
  • How to measure the actual time spent running your code
  • How to implement five different sorting algorithms in Python
  • What the pros and cons are of using different algorithms

You also learned about different techniques such as recursion , divide and conquer , and randomization . These are fundamental building blocks for solving a long list of different algorithms, and they’ll come up again and again as you keep researching.

Take the code presented in this tutorial, create new experiments, and explore these algorithms further. Better yet, try implementing other sorting algorithms in Python. The list is vast, but selection sort , heapsort , and tree sort are three excellent options to start with.


Sorting Algorithms


A sorting algorithm is an algorithm made up of a series of instructions that takes an array as input, performs specified operations on the array, sometimes called a list, and outputs a sorted array. Sorting algorithms are often taught early in computer science classes as they provide a straightforward way to introduce other key computer science topics like Big-O notation, divide-and-conquer methods, and data structures such as binary trees and heaps. There are many factors to consider when choosing a sorting algorithm to use.

Properties of Sorting Algorithms

Common Sorting Algorithms

Choosing a Sorting Algorithm

In other words, a sorted array is an array that is in a particular order. For example, \([a,b,c,d]\) is sorted alphabetically, \([1,2,3,4,5]\) is a list of integers sorted in increasing order, and \([5,4,3,2,1]\) is a list of integers sorted in decreasing order.

A sorting algorithm takes an array as input and outputs a permutation of that array that is sorted.

There are two broad types of sorting algorithms: integer sorts and comparison sorts .

Comparison Sorts

Comparison sorts compare elements at each step of the algorithm to determine if one element should be to the left or right of another element.

Comparison sorts are usually more straightforward to implement than integer sorts, but comparison sorts are limited by a lower bound of \(O(n \log n)\), meaning that, on average, comparison sorts cannot be faster than \(O(n \log n)\). A lower bound for an algorithm is the worst-case running time of the best possible algorithm for a given problem. The "on average" part here is important: there are many algorithms that run in very fast time if the inputted list is already sorted, or has some very particular (and overall unlikely) property. There is only one permutation of a list that is sorted, but \(n!\) possible lists, so the chance that the input is already sorted is very small, and on average, the list will not be very sorted.

The running time of comparison-based sorting algorithms is bounded by \(\Omega(n \log n)\). A comparison sort can be modeled as a large binary tree called a decision tree, where each node represents a single comparison. Because the sorted list is some permutation of the input list, for an input list of length \(n\), there are \(n!\) possible permutations of that list. Each leaf of the decision tree represents one of those \(n!\) orderings, and the path from the root to a leaf is the series of comparisons and outcomes that yields that particular ordering. At each level of the tree, a comparison is made, and each comparison cuts off a portion of the tree that the algorithm never visits (if the algorithm takes the right edge out of a node, it will not search the nodes and paths connected to the left edge). Since there is a leaf for each permutation, the tree has \(n!\) leaves, and the worst-case number of comparisons is the height \(h\) of the tree. Any binary tree with height \(h\) has at most \(2^h\) leaves. From this, \[2^h \geq n!.\] Taking the logarithm results in \[h \geq \log(n!).\] From Stirling's approximation, \[n! > \left(\frac{n}{e}\right)^n.\] Therefore, \[\begin{align} h &\geq \log\left(\frac{n}{e}\right)^n \\ &= n\log \left(\frac{n}{e}\right) \\ &= n\log n - n \log e\\ &= \Omega(n\log n). \end{align}\]

Integer Sorts

Integer sorts are sometimes called counting sorts (though there is a specific integer sort algorithm called counting sort). Integer sorts do not make comparisons, so they are not bounded by \(\Omega(n\log n)\). Integer sorts determine for each element \(x\) how many elements are less than \(x\). If there are \(14\) elements that are less than \(x\), then \(x\) will be placed in the \(15^\text{th}\) slot. This information is used to place each element into the correct slot immediately; there is no need to rearrange lists.

All sorting algorithms share the goal of outputting a sorted list, but the way that each algorithm goes about this task can vary. When working with any kind of algorithm, it is important to know how fast it runs and in how much space it operates: in other words, its time complexity and space complexity. As shown in the section above, comparison-based sorting algorithms have a time complexity of \(\Omega(n\log n)\), meaning the algorithm can't be faster than \(n \log n\). However, the running time of algorithms is usually discussed in terms of big O, and not Omega. For example, if an algorithm has a worst-case running time of \(O(n\log n)\), then it is guaranteed that the algorithm will never be slower than \(O(n\log n)\), and if an algorithm has an average-case running time of \(O(n^2)\), then on average, it will not be slower than \(O(n^2)\).

The running time describes how many operations an algorithm must carry out before it completes. The space complexity describes how much space must be allocated to run a particular algorithm. For example, if an algorithm takes in a list of size \(n\) and, for some reason, makes a new list of size \(n\) for each element of that list, then the algorithm needs \(O(n^2)\) space.

Find the big-O running time of a sorting program that does the following:

  • It takes in a list of integers.
  • It iterates once through the list to find the largest element, and moves that element to the end.
  • It repeatedly finds the largest element in the unsorted portion by iterating once through, and moves that element to the end of the unsorted portion.

At the end, the list is sorted low to high.

(Also, try implementing this program in your language of choice.)
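
For reference, here is one possible Python implementation of the described program (the function name is illustrative):

    def sort_by_moving_largest(nums):
        end = len(nums)
        while end > 1:
            # One pass through the unsorted portion to find the largest element
            largest = 0
            for i in range(1, end):
                if nums[i] > nums[largest]:
                    largest = i
            # Move it to the end of the unsorted portion
            nums[largest], nums[end - 1] = nums[end - 1], nums[largest]
            end -= 1
        return nums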

Additionally, for sorting algorithms, it is sometimes useful to know if a sorting algorithm is stable.

Stability A sorting algorithm is stable if it preserves the original order of elements with equal key values (where the key is the value the algorithm sorts by). For example, consider sorting a hand of playing cards that contains two 5s.[1] When the cards are sorted by value with a stable sort, the two 5s must remain in the same order in the sorted output that they were originally in. When they are sorted with a non-stable sort, the 5s may end up in the opposite order in the sorted output.

There are many different sorting algorithms, with various pros and cons. Here are a few examples of common sorting algorithms.

  • Insertion Sort

Insertion sort is a comparison-based algorithm that builds a final sorted array one element at a time. It iterates through an input array and removes one element per iteration, finds the place the element belongs in the array, and then places it there.

  • Bubble Sort

Bubble sort is a comparison-based algorithm that repeatedly steps through the list, compares each pair of adjacent elements, and swaps them if they are in the wrong order, until no more swaps are needed.

  • Merge Sort

Merge sort is a comparison-based algorithm that uses divide-and-conquer: it splits the list in half, recursively sorts each half, and then merges the two sorted halves back together.

  • Quicksort

Quicksort is a comparison-based algorithm that uses divide-and-conquer to sort an array. The algorithm picks a pivot element, \(A[q]\), and then rearranges the array into two subarrays \(A[p \dots q-1]\), such that all elements are less than \(A[q]\), and \(A[q+1 \dots r]\), such that all elements are greater than or equal to \(A[q]\).

  • Heapsort

Heapsort is a comparison-based algorithm that uses a binary heap data structure to sort elements. It divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element and moving that to the sorted region.

  • Counting Sort

Counting sort is an integer sorting algorithm that assumes that each of the \(n\) input elements in a list has a key value ranging from \(0\) to \(k\), for some integer \(k\). For each element in the list, counting sort determines the number of elements that are less than it. Counting sort can use this information to place the element directly into the correct slot of the output array.
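
A minimal Python sketch of this idea (assuming non-negative integer keys no larger than k):

    def counting_sort(arr, k):
        # counts[v] will hold the number of elements less than or equal to v
        counts = [0] * (k + 1)
        for v in arr:
            counts[v] += 1
        for v in range(1, k + 1):
            counts[v] += counts[v - 1]
        # Place each element directly into its slot; scanning the
        # input backwards keeps the sort stable
        out = [0] * len(arr)
        for v in reversed(arr):
            counts[v] -= 1
            out[counts[v]] = v
        return out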

To choose a sorting algorithm for a particular problem, consider the running time, space complexity, and the expected format of the input list.

Algorithm       | Best-case          | Worst-case         | Average-case       | Space Complexity           | Stable?
Merge Sort      | \(O(n \log n)\)    | \(O(n \log n)\)    | \(O(n \log n)\)    | \(O(n)\)                   | Yes
Insertion Sort  | \(O(n)\)           | \(O(n^2)\)         | \(O(n^2)\)         | \(O(1)\)                   | Yes
Bubble Sort     | \(O(n)\)           | \(O(n^2)\)         | \(O(n^2)\)         | \(O(1)\)                   | Yes
Quicksort       | \(O(n \log n)\)    | \(O(n^2)\)         | \(O(n \log n)\)    | \(\log n\) best, \(n\) avg | Usually not*
Heapsort        | \(O(n \log n)\)    | \(O(n \log n)\)    | \(O(n \log n)\)    | \(O(1)\)                   | No
Counting Sort   | \(O(k+n)\)         | \(O(k+n)\)         | \(O(k+n)\)         | \(O(k+n)\)                 | Yes

*Most quicksort implementations are not stable, though stable implementations do exist.

When choosing a sorting algorithm to use, weigh these factors. For example, quicksort is a very fast algorithm but can be pretty tricky to implement, while bubble sort is a slow algorithm that is very easy to implement. To sort small sets of data, bubble sort may be a better option since it can be implemented quickly, but for larger datasets, the speedup from quicksort might be worth the trouble of implementing the algorithm.

  • [1] Sorting stability playing cards. Retrieved May 18, 2016, from https://en.wikipedia.org/wiki/File:Sorting_stability_playing_cards.svg



Introduction to Computer Science and Programming in Python

Lecture 12: Searching and Sorting

Description: In this lecture, Prof. Grimson explains basic search and sort algorithms, including linear search, bisection search, bubble sort, selection sort, and merge sort.

Instructor: Prof. Eric Grimson


Lecture 5/15: Sorting

May 15, 2020

📂Associated files

  • Sorting.zip
  • Questions and Answers

CS 106B: Programming Abstractions

Spring 2020, Stanford University Computer Science Department. Lecturers: Chris Gregg and Julie Zelenski.

[Image: A Lego man sorting red and blue Lego pieces into boxes.]

Announcements

  • Make sure to double check that your assessment release time is correctly displayed on the online access portal . This time should be 5 minutes before the time slot that you signed up for on Paperless. Please let Nick know ASAP if the displayed time is incorrect.
  • Do not close BlueBook after your prep time is over! You will want to have it open while you discuss with your section leader.
  • Do not submit your exam when you're done! We are not collecting submissions, so you will likely run into an error.
  • Assignment 5 will be released over the weekend and due in a week (Friday, May 22).

Introduction to Sorting

Insertion Sort, Selection Sort

  • Other sorts you might want to look at:
  • Heap Sort (we will cover heaps later in the course)
  • Sort you don't want to look at: BubbleSort
  • A sorted output must be in nondecreasing order (each element must be no smaller than the previous element)
  • and it must be a permutation of the input
  • Fundamentally, comparison sorts at best have a complexity of O(n log n).
  • We also need to consider the space complexity: some sorts can be done in place, meaning the sorting does not take extra memory. This can be an important factor when choosing a sorting algorithm!
  • In-place sorting can be stable or unstable : a stable sort retains the order of elements with the same key, from the original unsorted list to the final, sorted, list
  • see the Sorting Algorithm Animations website
  • here is a great animation site
  • 15 sorts in 6 minutes video
  • There are many different ways to sort elements in a list.
  • Insertion sort
  • Selection sort
  • Mergesort (which you basically saw in assignment 4!)
  • Insertion sort: orders a list of values by repetitively inserting a particular value into a sorted subset of the list
  • More specifically: – consider the first item to be a sorted sublist of length 1 – insert second item into sorted sublist, shifting first item if needed – insert third item into sorted sublist, shifting items 1-2 as needed – … – repeat until all values have been inserted into their proper positions

Insertion Sort Algorithm

  • Assume we have the following values, in this order: 9 5 10 8 12 11 14 2 22 43
  • We want to rearrange (sort) the values so that the lowest is on the left, and the highest is on the right, and the entire list is ordered correctly.
  • iterate through the list (starting with the second element)
  • at each element, shuffle the neighbors below that element up until the proper place is found for the element, and place it there.
  • The 9 is already in place (as far as we know), so we start with the 5.
  • We compare the 5 to the one below it, and see that we have to shuffle the 9 to the right and put the 5 into index 0: ←→ 5 9 10 8 12 11 14 2 22 43
  • Next, we look at the 10. When we compare it to the 9 to its left, we see that we don't need to make any changes.
  • Next, we look at the 8. Compared with its left neighbor, 10, we see that we need to shift 10 to the right. We also need to shift the 9 to the right (because it, too, is greater than 8). Finally, we put the 8 into index 1: ⤺←←← 5 9 10 8 12 11 14 2 22 43 ⤻ ⤻ 5 8 9 10 12 11 14 2 22 43
  • Next, we look at the 12, and it is in the right place compared to the 10 and does not need to move.
  • Looking at 11, we see that we need to move the 12 to the right, and put the 11 into index 4. ⤺ 5 8 9 10 12 11 14 2 22 43 ⤻ 5 8 9 10 11 12 14 2 22 43
  • The 14 is in the correct place (bigger than 12)
  • Now we look at the 2. We traverse to the left and see that we need to move the 14, 12, 11, 10, 9, 8, and 5 to the right! Then, we put the 2 into index 0: ⤺←←←←←←←←←←←←←← 5 8 9 10 11 12 14 2 22 43 ⤻⤻⤻ ⤻ ⤻ ⤻ ⤻ 2 5 8 9 10 11 12 14 22 43
  • The 22 does not need to be moved (it is greater than 14)
  • Finally, the 43 doesn't need to be moved, because it is greater than 22.
  • We have sorted the array!
  • Worst performance: O(n^2) (why?)
  • Best performance: O(n)
  • Average performance: O(n^2), but very fast for small arrays, as it is a simple algorithm.

Insertion sort Code
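
The lecture presents this in C++; as a sketch, the same algorithm looks like this in Python:

    def insertion_sort(values):
        # Everything to the left of index i is a sorted sublist
        for i in range(1, len(values)):
            current = values[i]
            j = i - 1
            # Shuffle larger neighbors one slot to the right
            while j >= 0 and values[j] > current:
                values[j + 1] = values[j]
                j -= 1
            # Place the current value into its proper position
            values[j + 1] = current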

  • Selection Sort is another in-place sort that has a simple algorithm:
  • Find the smallest item in the list, and exchange it with the left-most unsorted element.
  • Repeat the process from the first unsorted element.
  • Here is a good animation of selection sort.

Selection Sort Example

  • Find the smallest item in the list , and exchange it with the left-most unsorted element.

Selection sort is particularly slow, because it needs to go through the entire list each time to find the smallest item.

  • For the array above, we first look through each element one at a time, and we determine that the 2 is the lowest. So, we swap it with the 9 at index 0: 2 5 10 8 12 11 14 9 22 43
  • Next, starting from index 1, we look through all the rest of the elements and find that the 5 is smallest, and we don't have to swap because it is already at index 1.
  • Next, we loop from index 2, and we find that the 8 is lowest. We then swap with index 2: 2 5 8 10 12 11 14 9 22 43
  • We continue this process, and end up with the following steps: ↓----------↓------- 2 5 8 9 12 11 14 10 22 43 ↓--------↓------- 2 5 8 9 10 11 14 12 22 43 ↓------------ 2 5 8 9 10 11 14 12 22 43 (no swap necessary) ↓--↓------ 2 5 8 9 10 11 12 14 22 43 ↓------- 2 5 8 9 10 11 12 14 22 43 (no swap necessary) ↓---- 2 5 8 9 10 11 12 14 22 43 (no swap necessary) ↓ 2 5 8 9 10 11 12 14 22 43 (no swap necessary)
  • Worst performance: O(n^2)
  • Best performance: O(n^2)
  • Average performance: O(n^2)
  • However: there is one use case for it that makes it useful: If you want to find the "top X" number of elements for a very small X and a large n, selection sort actually works fine.
  • It's trivial why it works well for this: first, it finds the top element, then the next, then the next, by looking through the entire list each time.

Selection sort Code
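
Again as a sketch in Python (the lecture's version is in C++):

    def selection_sort(values):
        for i in range(len(values) - 1):
            # Find the smallest item in the unsorted portion
            smallest = i
            for j in range(i + 1, len(values)):
                if values[j] < values[smallest]:
                    smallest = j
            # Exchange it with the left-most unsorted element
            values[i], values[smallest] = values[smallest], values[i]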

  • The next sort we are going to talk about is merge sort, which you've already seen in assignment 4!
  • As you've seen (and done), merge sort can be coded recursively
  • L1 = { 3 , 5 , 11 } L2 = { 1 , 8 , 10 }
  • merge ( L1 , L2 ) = { 1 , 3 , 5 , 8 , 10 , 11 }

Merging two sorted lists

  • Merging two sorted lists is straightforward; at each step, we compare the front (smallest remaining) elements of the two lists and move the smaller one to the result:

    L1: 3 5 11 15    L2: 1 8 10    Result so far: (empty)
    L1: 3 5 11 15    L2: 1 8 10    Result so far: 1
    L1: 3 5 11 15    L2: 1 8 10    Result so far: 1 3
    L1: 3 5 11 15    L2: 1 8 10    Result so far: 1 3 5
    L1: 3 5 11 15    L2: 1 8 10    Result so far: 1 3 5 8
    L1: 3 5 11 15    L2: 1 8 10    Result so far: 1 3 5 8 10

Because L2 is done, we simply have to take the rest (11 and 15, in this case) from L1, and we're done:

Merge Sort full algorithm

  • Divide the unsorted list into n sublists, each containing 1 element (a list of 1 element is considered sorted).
  • Repeatedly merge sublists to produce new sorted sublists until there is only 1 sublist remaining. This will be the sorted list.
  • Code (using vectors instead of queues):

    // Rearranges the elements of vec into sorted order using
    // the merge sort algorithm.
    void mergeSort(Vector<int>& vec) {
        int n = vec.size();
        if (n <= 1) return;
        Vector<int> v1;
        Vector<int> v2;
        for (int i = 0; i < n; i++) {
            if (i < n / 2) {
                v1.add(vec[i]);
            } else {
                v2.add(vec[i]);
            }
        }
        mergeSort(v1);
        mergeSort(v2);
        vec.clear();
        merge(vec, v1, v2);
    }
  • Here is the code to merge two vectors:

    // Merges the left/right elements into a sorted result.
    // Precondition: left/right are sorted, and vec is empty
    void merge(Vector<int>& vec, Vector<int>& v1, Vector<int>& v2) {
        int n1 = v1.size();
        int n2 = v2.size();
        int p1 = 0;
        int p2 = 0;
        while (p1 < n1 && p2 < n2) {
            if (v1[p1] <= v2[p2]) {
                vec.add(v1[p1++]);
            } else {
                vec.add(v2[p2++]);
            }
        }
        while (p1 < n1) { vec.add(v1[p1++]); }
        while (p2 < n2) { vec.add(v2[p2++]); }
    }

Merge Sort, full example

  • Assume we have the following vector to sort: [96 6 86 15 58 35 86 4 0]
  • We recursively break into two halves until we are left with single elements (the base case):

    [96 6 86 15 58 35 86 4 0]
    [96 6 86 15] [58 35 86 4 0]
    [96 6] [86 15] [58 35] [86 4 0]
    [96] [6] [86] [15] [58] [35] [86] [4 0]
    [4] [0]
  • Now that we have single-element vectors, we merge back up, two at a time:

    [4] ⇔ [0] → [0 4]
    [96] ⇔ [6] → [6 96]    [86] ⇔ [15] → [15 86]    [58] ⇔ [35] → [35 58]    [86] ⇔ [0 4] → [0 4 86]
    [6 96] ⇔ [15 86] → [6 15 86 96]    [35 58] ⇔ [0 4 86] → [0 4 35 58 86]
    [6 15 86 96] ⇔ [0 4 35 58 86] → [0 4 6 15 35 58 86 86 96]

Merge Sort: Time Complexity

  • Worst case: O(n log n)
  • Best case: O(n log n)
  • Average case (all cases): O(n log n)
  • We have log₂n levels, because we are splitting the arrays into two each time (divide and conquer).
  • At each level, we have to work on all n elements. Therefore, the total complexity is O(n * log₂n), or O(n log n)

Merge Sort: Space Complexity

  • Merge sort is the first sort we've looked at where we must keep producing more and more lists – this is a tradeoff!
  • It would be possible to do merge sort in place , where we swap individual elements, but it is difficult and there is more overhead.
  • If you're tight on memory, merge sort isn't necessarily the best sort

Is our merge sort stable?

  • A sorting algorithm is stable if the elements that are the same end up in the same relative ordering after the end of the sort. For our example above, we can see if the original 86s are in the same order before and after: [96 6 86 15 58 35 86 4 0]
  • In other words, at the end, we should have the following, where the 86 that originally appeared first (shown in blue in the lecture) still comes before the other 86: [0 4 6 15 35 58 86 86 96]
  • You might ask, why does it matter? But what if the values were keys for records that have other fields associated with them, too? If you first sort by one key and then by another, you might want to retain the ordering of elements that share the same second key.
  • It turns out that our implementation of merge sort is stable, and it hinges on one line in the merge function: if (v1[p1] <= v2[p2]) {
  • Your book has a less than instead of a less than-and-equal sign, making it not stable!
  • See here for more information about stable sorting.
  • Quicksort is a sorting algorithm that is often faster than most other types of sorts.
  • However, although it has an average O(n log n) time complexity, it also has a worst-case O(n^2) time complexity, though this rarely occurs if you code it wisely.
  • The low elements go on one side of the list (in any order)
  • The high elements go on the other side of the list (in any order)
  • Then, recursively sort the sub-lists.
  • We choose a pivot element, and then base the high and low elements on that element.

Quicksort Algorithm

  • Pick an element , called the pivot , from the list. Choosing the pivot is important, as we will see later.
  • Reorder the list so that all elements with values less than the pivot come before the pivot , while all elements with values greater than the pivot come after the pivot . After this partitioning, the pivot is in its final position. This is called the partition operation.
  • Recursively apply the above steps to the sub-list of elements with smaller values and separately to the sub-list of elements with greater values.
  • The base case of the recursion is for lists of 0 or 1 elements , which do not need to be sorted (because they are already trivially sorted).

Quicksort: different methods, same underlying algorithm

  • The naive algorithm: create new lists for each sub-list, in a similar fashion to merge sort.
  • The in-place algorithm: perform the sort by swapping elements in a single list.

Quicksort: Naive implementation

  • Assume the following list. We will choose the pivot to be the first element in the list: pivot (6) ↓ [6 5 9 12 3 4]
  • We then create three lists, a list with elements less than the pivot, the pivot itself, and a list with elements greater than the pivot: < 6 > 6 [5 3 4] [6] [9 12]
  • Even if all elements go into one of the less than / greater than lists, that's the way it goes.
  • We continue this with the sub-lists: pivot (5) pivot(9) ↓ ↓ [5 3 4] [6] [9 12] < 5 > 5 < 9 > 9 [3 4] [5] [] [6] [] [9] [12]
  • We continue this with the sub-lists (and there is only one more to sort): pivot (3) ↓ [3 4] [5] [] [6] [] [9] [12] < 3 > 3 [] [3] [4] [5] [] [6] [] [9] [12]
  • Now we can simply merge back up the sub-lists, and it is easier than merge sort, because they are already sorted completely, from left to right: [] [3] [4] [5] [] [6] [] [9] [12] becomes [3 4 5 6 9 12]

Quicksort algorithm: Naive code
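
A sketch of the naive method in Python (first element as the pivot, with new sub-lists created on each call):

    def quicksort_naive(values):
        # Base case: lists of 0 or 1 elements are already sorted
        if len(values) <= 1:
            return values
        pivot = values[0]
        # Elements less than the pivot, and everything else
        less = [x for x in values[1:] if x < pivot]
        greater_or_equal = [x for x in values[1:] if x >= pivot]
        return quicksort_naive(less) + [pivot] + quicksort_naive(greater_or_equal)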

Quicksort algorithm: in-place.

  • The in-place, recursive quicksort algorithm has a prototype in C++ that looks like this: int quicksort(Vector<int>& v, int start, int finish);
  • Pick your pivot as the left element (might not be a good choice…)
  • Traverse the list from the end (right) backwards until the value should be to the left of the pivot, or it hits the left.
  • Traverse the list from the beginning (left, after pivot) forwards until the value should be to the right of the pivot, or until it hits the right.
  • Swap the pivot with the element where the left/right cross, unless it happens to be the pivot.
  • This is best described with a detailed example. Assume the following initial list, with the index values below. We will call the function as quicksort(v, 0, 7);

    [56 25 37 58 95 19 73 30]
      0  1  2  3  4  5  6  7
  • We first pick a pivot on the left, then start our traversals: pivot (56) ↓ [(56) 25 37 58 95 19 73 30] 0 1 2 3 4 5 6 7
  • We then start traversing from the right side towards the left until we find a value that should be to the left of the pivot: pivot (56) ↓ ↓ (30 is already smaller than 56) [(56) 25 37 58 95 19 73 30] 0 1 2 3 4 5 6 7
  • Then, we traverse the list from the beginning (after the pivot) forwards until the value should be to the right of the pivot: pivot (56) ↓ ↓ (30 is already smaller than 56) [(56) 25 37 58 95 19 73 30] 0 1 2 3 4 5 6 7
  • We've reached 58, so we swap the two elements where the left and right cross: pivot (56) ↓ ↓ [(56) 25 37 30 95 19 73 58] 0 1 2 3 4 5 6 7
  • Now we keep traversing from the right towards the left until the value should be to the left of the pivot, or until it hits the left marker: pivot (56) ↓ ↓ 19 is less than 56 [(56) 25 37 30 95 19 73 58] 0 1 2 3 4 5 6 7
  • Again, traverse from the left marker towards the right until we find a value that should be to the right of the pivot (95 in this case) pivot (56) ↓ ↓ 19 is less than 56 [(56) 25 37 30 95 19 73 58] 0 1 2 3 4 5 6 7
  • Now we swap the 95 and the 19: pivot (56) ↓ ↓ [(56) 25 37 30 19 95 73 58] 0 1 2 3 4 5 6 7
  • We then start from the right marker and traverse backwards until the value should be to the left of the pivot, or until it hits the left marker. In this case, it hits the left marker: pivot (56) ↓↓ [(56) 25 37 30 19 95 73 58] 0 1 2 3 4 5 6 7
  • Finally, we swap the pivot with the value we've reached (which will always be smaller than or the same as the pivot): pivot (56) ↓↓ [19 25 37 30 (56) 95 73 58] 0 1 2 3 4 5 6 7
  • Notice that the 56 is in its correct place, and will never need to be moved again. We now call quicksort(v, 0, 3) and quicksort(v, 5, 7) recursively, and we will end up with a completely sorted list that we sorted in-place.

Quicksort in-place code

  • Here is the partition code:

    int partition(Vector<int>& vec, int start, int finish) {
        int pivot = vec[start];
        int lh = start + 1;
        int rh = finish;
        while (true) {
            while (lh < rh && vec[rh] >= pivot) rh--;
            while (lh < rh && vec[lh] < pivot) lh++;
            if (lh == rh) break;
            // swap
            int tmp = vec[lh];
            vec[lh] = vec[rh];
            vec[rh] = tmp;
        }
        if (vec[lh] >= pivot) return start;
        vec[start] = vec[lh];
        vec[lh] = pivot;
        return lh;
    }
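
  • The recursive driver that wraps partition is only a few lines. As a sketch (in Python here, assuming a partition() with the same contract as the C++ version above, returning the pivot's final index):

    def quicksort(vec, start, finish):
        # Base case: ranges of 0 or 1 elements are already sorted
        if start >= finish:
            return
        boundary = partition(vec, start, finish)
        quicksort(vec, start, boundary - 1)
        quicksort(vec, boundary + 1, finish)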

Quicksort Complexity

  • What if we had a sorted list to begin with and we picked a pivot that was always at the beginning?
  • We would always have all values in our sub-lists after the pivot! This degrades to O(n^2) behavior!
  • Best-case: O(n log n) (similar analysis to merge sort)
  • Average complexity: O(n log n)
  • Not a stable sort!
Sorting Big-O Cheat Sheet
Sort Worst Case Best Case Average Case
Insertion O(n^2) O(n) O(n^2)
Selection O(n^2) O(n^2) O(n^2)
Merge O(n log n) O(n log n) O(n log n)
Quicksort O(n^2) O(n log n) O(n log n)


Computing Requirements

Practice problems.

Experiment with the Python implementations of the various search algorithms:

  • Sequential Search on unsorted data
  • Sorted Sequential Search on sorted data
  • Binary Search (on sorted data)

Experiment with the Python implementations of the various sorting algorithms:

  • Bubble Sort     ( version instrumented with additional output information )
  • Selection Sort

Consider the following sorted list of numbers:

index 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
value 2 4 7 10 13 16 17 20 24 28 31 38 41 43 45

What sequence of 'middle' values are compared to the target when performing a binary search with target 28?

What sequence of 'middle' values are compared to the target when performing a binary search with target 16?

What sequence of 'middle' values are compared to the target when performing a binary search with target 5?

Note: The solution for the above questions can be found by running a Binary Search Demonstration with parameter "#items: 15" and "seed: 15849" and with the target "key" value set appropriately.

Problems to be Submitted   (20 points)

When you turn in your assignment, you must include a signed cover sheet with your assignment (your assignment will not be graded without a completed cover sheet).

You are allowed to submit your assignment via email, but if you choose to do so, you must bring a hardcopy of your assignment along with a completed cover sheet to the instructor at the next class. ( Note: Do not email the instructor any .zip file attachments, as SLU's email may not accept these emails, i.e. the instructor may not receive your email. )

  • Exercise #66, part d , at the end of Chapter 3 (p. 241-242)
  • Exercise #66, part c , at the end of Chapter 3 (p. 241-242)
  • Exercise #67, part d , at the end of Chapter 3 (p. 241-242)
  • Exercise #67, part c , at the end of Chapter 3 (p. 241-242)
  • If the data in the list was unsorted , using a sequential search, how many comparisons would it take to determine that the value 45 is not in the list?

Using the following sorted list of words to perform a binary search:

index  value
0      babka
1      baklava
2      cheesecake
3      cupcake
4      danish
5      eclair
6      funnelcake
7      kringle
8      lamington
9      profiterole
10     sopaipilla
11     strudel
12     tiramisu
13     torte
14     turnover

What sequence of 'middle' values are compared to the target when performing a binary search with target cupcake ?

What sequence of 'middle' values are compared to the target when performing a binary search with target doughnut ?

What sequence of 'middle' values are compared to the target when performing a binary search with target tiramisu ?

index 0 1 2 3 4 5 6 7
value 9 20 6 10 14 8 60 11

Using this same list, do the following:

Show the series of steps taken by the Bubble Sort algorithm while sorting this list.

Show the series of steps taken by the Selection Sort algorithm while sorting this list.

The book, in Section 7.4, shows the series of steps taken during the first pass of sorting the list. In particular, Figure 7.16 on p. 231 shows the steps taken to determine the split point, and then p. 229 shows how the split point splits the list into two halves (Note: halves do not need to be equal-sized). Once the list is split in two, the Quicksort algorithm is applied again to each half, so it is applied to the sub-list [6, 8] and the sub-list [10, 14, 20, 60, 11]. It does not need to sort the split point, 9, any further, since its final position is now known.

Show all the remaining steps (i.e. the second and subsequent passes) performed by the Quicksort algorithm to complete the sorting of the list.

First, you need to instrument the code (i.e. make a minor addition to the code) for each of the algorithms in order to count the number of elements in the array that are checked during a search.

In order to do this, you will need to add a count variable to the function implementing the search algorithm. You will want to initialize this count to zero at the beginning of the function, add 1 to it each time the function checks a new distinct element of the list to see if it is the search value, and finally print out the count at the end of the function.
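
For example, an instrumented sequential search might look like the sketch below (the function name and structure are illustrative, not the book's exact code; the three added lines are marked):

    def sequential_search(a_list, item):
        count = 0                              # added: elements checked so far
        pos = 0
        found = False
        while pos < len(a_list) and not found:
            count = count + 1                  # added: checking a new element
            if a_list[pos] == item:
                found = True
            else:
                pos = pos + 1
        print("Elements checked:", count)      # added: report the count
        return found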

Note 1: You only need to add 3 lines of code to the search procedure in each case!

Note 2: Since the initial versions of the sorted and unsorted sequential search are searching for strings, you will need to change the user query for input to convert the input to a number. In other words, you will need to change this line:

Using your instrumented code for the three search algorithms, perform the following searches, and report the number of elements checked by each algorithm for each search (i.e. you will report nine results in total -- for each of the 3 searches below, you will report the results from the 3 different search algorithms).

A = [6, 19, -3, 5, 12, 7, 21, -8, 25, 10, 0, 28, -6, 1, 33, 18, 9, 2, -13, 43]

A = [6, 19, -3, 5, 12, 7, 21, -8, 25, 10, 0, 28, -6, 1, 33, 18, 9, 2, -13, 43, -15, 4, 22, 38, -5, 13, 23, -11, 29, -20, 41, 31, -23, 35, 40, 14, 8, -18, 16, 36]

Extra Credit (2 points)

Computer Science 245: Data Structures and Algorithms

  • Lecture Notes

Sorting (Due April 19th, 2017)

Coding sorting algorithms.

  • insertionSort
  • selectionSort
  • optimizedQuickSort
  • insertionSortLL
  • selectionSortLL
  • mergeSortLL
  • quickSortLL
  • Sorting class containing static methods (skeleton provided)
  • Linked List element class (provided)
  • SortTest class which contains your main program (You need to create this)
  • Your sorting class must be named Sort
  • You must not change any of the function signatures of any of the provided function stubs in the Sort class

Efficiency Testing

Building a better sorting algorithm.

  • Use quicksort until the list gets small enough, and then use another sort or insertion sort to sort the small lists
  • Use quicksort to "mostly" sort the list. That is, use quicksort to sort the list until a cutoff size is reached, and then stop. The list will now be mostly sorted, and you can use insertion sort on the entire list to quickly complete the sorting (not unlike the strategy used in Shell Sort)
  • Try to make partition (where the work happens!) as efficient as possible
  • Write a version of quicksort that is (partially) tail-recursive
  • Some other method of your own devising.

Sorting Algorithms in Detail

  • static <T extends Comparable<T>> void insertionSort(T[] array, int lowIndex, int highIndex, boolean reversed) This is the most straightforward of the sorting algorithms to code - there are only two wrinkles -- your insertion sort needs to work over a range of indices in the array, just like quickSort, and you need to be able to sort the list backwards, if the reversed flag is true. Your algorithm should sort all elements in the array in the range lowindex..highindex (inclusive). You should not touch any of the data elements outside the range lowindex .. highindex. Note that you need to be able to sort any Comparable object.
  • public static <T extends Comparable<T>> void selectionSort(T[] A, int lowIndex, int highIndex, boolean reversed) Also very straightforward. As with insertion sort above, your sorting algorithm needs to work over a range of indices in the array, and you need to be able to sort the list backwards, if the reversed flag is true. Your algorithm should sort all elements in the array in the range lowindex..highindex (inclusive). You should not touch any of the data elements outside the range lowindex .. highindex.
  • public static <T extends Comparable<T>> void shellSort(T[] array, int lowindex, int highindex, boolean reversed) Your implementation of Shell Sort needs to use Hibbard's increments: 1, 3, 7, 15, ... 2^k - 1. Thus if the range of elements contains 100 elements, the first sort would be a 63-sort, followed by a 31-sort, 15-sort, 7-sort, 3-sort and 1-sort. (The code in the notes uses Shell's increments - in this case 50, 25, 12, 6, 3, 1.) As with insertion sort, you need to be able to sort only a range of the array, and also be able to inverse-sort the array. This function and insertionSort should share code!
  • public static <T extends Comparable<T>> void heapSort(T[] array, int lowindex, int highindex, boolean reversed) As with insertion sort, you need to sort a range of indices. Do not copy the range to be sorted into a temporary array, sort it, and then copy back -- you need to sort the data in place. (Perhaps you should consider parent / child functions parameterized based on lowindex ...) You also need to be able to inverse sort the list.
  • public static <T extends Comparable<T>> void quicksort(T[] array, int lowindex, int highindex, boolean reversed) This is the standard, unmodified version of quicksort. You should use a median-of-three to pick the pivot. That is, pick three elements (first, middle, and last, or three random) and use the median of those 3 elements as the pivot. This version of quicksort should not be a hybrid. Note that you will need to do some special-case work on small lists (since obviously you cannot find the median of three on a list with 2 elements)
  • public static <T extends Comparable<T>> void optimizedQuickSort(T[] array, int lowindex, int highindex, boolean reversed) This should be a hybrid of quicksort and some other sorting algorithm. Make it as efficient as possible.
  • public static <T extends Comparable<T>> LLNode<T> insertionSortLL(LLNode<T> list, boolean reversed) This function uses Insertion Sort to sort a linked list.  It should be called in a similar fashion to tree functions, as follows: A = insertionSortLL(A, false); Note that when using linked lists, you need to implement insertion sort in a slightly different way (for instance, an inverse sorted list will probably give best-case performance, while a sorted list will probably give you worst-case performance)
  • public static <T extends Comparable<T>> LLNode<T> selectionSortLL(LLNode<T> list, boolean reversed) Much as above, this function sorts a linked-list using selection sort. You should not allocate any extra memory for this version of selection sort (no calls to new!)  Instead, you need to rearrange the linked list elements that are passed in. (Alternately, you may wish to consider moving the data elements around and keeping the structure of the linked list the same)
  • public static <T extends Comparable<T>> LLNode<T> mergeSortLL(LLNode<T> list, boolean reversed) Much as above, this function sorts a linked-list using merge sort. You should not allocate any extra memory for this version of merge sort (no calls to new!)  Instead, you need to rearrange the linked list elements that are passed in.
  • public static <T extends Comparable<T>> LLNode<T> quickSortLL(LLNode<T> list, boolean reversed) As with all the other linked list sorting algorithms, we will need to sort the list by moving the linked list nodes around and relinking them -- not by calling new! You can have no calls to new in this method (or in any methods that it calls!). It is easiest to have the pivot be the first element in the list. Partition should break the list into two sublists, call itself recursively, and then splice the lists together (which will, alas, take O(n) time). If you want to write helper methods that return a pointer to both the head and tail of the sorted list to make splicing easier, you are welcome to, but that is not required.
  • public static void bucketSort(int[] array, int lowindex, int highindex, boolean reversed) Your implementation of Bucket Sort should use half as many buckets as there are elements to be sorted. Thus, if highindex - lowindex + 1 == 100, you should use 50 buckets. Assume that the data values are evenly distributed over the range of the list. You will need to do a quick run through the list to find the range of values stored in the list. You need to be able to handle sorting negative as well as positive values. While the list that you are sorting will be ints, you might need to use longs in some places when calculating bucket size. As before, you need to be able to inverse-sort the list. Note that you are sorting ints here and not Comparables -- bucketSort is not a comparison-based sorting algorithm!
  • public void radixSort(String[] array, int lowindex, int highindex, boolean reversed) As with all of the other sorting algorithms, radix sort needs to be able to sort elements in the range lowindex to highindex. All versions of radix sort which we have seen so far sort lists of integers, not strings, but radix sort can easily be extended to sort strings (since you can think of strings as having a "most significant digit (character)", a "second most significant digit (character)", and so on). The only wrinkle is for strings that differ in length -- the most significant character is always the first one, and shorter strings have fewer "digits". We can get around this by first sorting the strings by their length (using a counting sort!), then running a counting sort on just the least significant characters of the longest strings, continuing until all strings are sorted. Let's look at an example. Say we are sorting the list of strings [ "BABAB", "BA", "CB", "BAABB", "CCCAA", "C" ]. First, we sort the strings by length, giving us: "C" "BA" "CB" "BABAB" "BAABB" "CCCAA". The first 3 passes of our counting sort only look at the strings of length 5. Once that round is done, we have: "C" "BA" "CB" "BAABB" "BABAB" "CCCAA". Now we can add back in the strings of length 2 -- the next pass sorts strings of lengths 2 - 5, based on the second character: "C" "BA" "BAABB" "BABAB" "CB" "CCCAA". Finally, we sort all the strings using the first character, to get: "BA" "BAABB" "BABAB" "C" "CB" "CCCAA".

What to Turn In

  • Your copy of Sort.java
  • Source code for your main program, which you used for performance testing
  • Running time for all 12 algorithms for lists of sizes 10, 50, 100, 200, 500, 1000, 2000, 5000, 10000, 50000 and 100000. Be sure to subtract out the overhead time costs! You should include sorted and inverse sorted lists as well as random lists for each list size in these tests. The results should not be handwritten! Use your favorite word processor / spreadsheet / etc. instead, and submit the results as a .pdf file so we can read it easily.
  • A brief (one page is enough) document on how you created your hybrid sorting algorithm -- which approaches you tried, what different parameters you chose, and how much of a difference these changes made to the efficiency of your algorithm. This document should be either plaintext or .pdf

Collaboration

Supporting files.

  • LLNode.java


Sorting Various Types of Data

Which Sorting Algorithm Should I Use? · Reductions · A Brief Survey of Sorting Applications

public int compareTo(String t) {
    String s = this;
    if (s == t) return 0;  // this line
    int n = Math.min(s.length(), t.length());
    for (int i = 0; i < n; i++) {
        if (s.charAt(i) < t.charAt(i)) return -1;
        if (s.charAt(i) > t.charAt(i)) return +1;
    }
    return s.length() - t.length();
}
public class Customer implements Comparable<Customer> {
    private String name;
    private double balance;

    public int compareTo(Customer that) {
        if (this.balance < that.balance - 0.005) return -1;
        if (this.balance > that.balance + 0.005) return +1;
        return 0;
    }
}
R W Q O J M V A H B S G Z X N T C I E K U P D Y F L
String[] a = new String[N];
for (int i = 0; i < N; i++)
    a[i] = StdIn.readString();  // read N strings from standard input
public class CaseInsensitive implements Comparator<String> {
    public int compare(String a, String b) {
        return a.compareToIgnoreCase(b);
    }
}

public class Descending implements Comparator<String> {
    public int compare(String a, String b) {
        return b.compareToIgnoreCase(a);
    }
}

import java.util.Arrays;
import java.text.Collator;
...
Arrays.sort(words, Collator.getInstance(Locale.FRENCH));

Selection Sort Algorithm

Selection sort is a simple and efficient sorting algorithm that works by repeatedly selecting the smallest (or largest) element from the unsorted portion of the list and moving it to the sorted portion of the list. 

The algorithm repeatedly selects the smallest (or largest) element from the unsorted portion of the list and swaps it with the first element of the unsorted part. This process is repeated for the remaining unsorted portion until the entire list is sorted. 

How does Selection Sort Algorithm work?

Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}

First pass: For the first position in the sorted array, the whole array is traversed from index 0 to 4 sequentially. The first position is where 64 is stored presently, and after traversing the whole array it is clear that 11 is the lowest value. Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value in the array, appears in the first position of the sorted list.

Second pass: For the second position, where 25 is present, again traverse the rest of the array in a sequential manner. After traversing, we find that 12 is the second lowest value in the array, and it should appear at the second place, so swap these values.

Third pass: Now, for the third place, where 25 is present, again traverse the rest of the array and find the third least value. While traversing, 22 comes out to be the third least value, and it should appear at the third place in the array, so swap 22 with the element present at the third position.

Fourth pass: Similarly, for the fourth position, traverse the rest of the array and find the fourth least element. As 25 is the fourth lowest value, it is placed at the fourth position.

Fifth pass: At last, the largest value present in the array is automatically placed at the last position. The resulting array is the sorted array.

Below is the implementation of the above approach:
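
A sketch in Python (the original article provides implementations in several languages; this one mirrors the approach described above):

    def selection_sort(arr):
        n = len(arr)
        for i in range(n - 1):
            # Find the index of the minimum element in the unsorted portion
            min_idx = i
            for j in range(i + 1, n):
                if arr[j] < arr[min_idx]:
                    min_idx = j
            # Swap it with the first element of the unsorted portion
            arr[i], arr[min_idx] = arr[min_idx], arr[i]

    arr = [64, 25, 12, 22, 11]
    selection_sort(arr)
    print(arr)  # [11, 12, 22, 25, 64]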

Complexity Analysis of Selection Sort

Time Complexity: The time complexity of Selection Sort is O(N²) as there are two nested loops:

  • One loop to select an element of Array one by one = O(N)
  • Another loop to compare that element with every other Array element = O(N)
  • Therefore overall complexity = O(N) * O(N) = O(N*N) = O(N²)

Auxiliary Space: O(1) as the only extra memory used is for temporary variables while swapping two values in Array. The selection sort never makes more than O(N) swaps and can be useful when memory writing is costly. 

Advantages of Selection Sort Algorithm

  • Simple and easy to understand.
  • Works well with small datasets.

Disadvantages of the Selection Sort Algorithm

  • Selection sort has a time complexity of O(n^2) in the worst and average case.
  • Does not work well on large datasets.
  • Does not preserve the relative order of items with equal keys which means it is not stable.

Applications of Selection Sort Algorithm

  • Mainly works as a basis for some more efficient algorithms like Heap Sort . Heap Sort mainly uses Heap Data Structure along with the Selection Sort idea.
  • Used when memory writes (or swaps) are costly, for example with EEPROM or Flash memory. When compared to other popular sorting algorithms, it takes relatively few memory writes (or swaps) for sorting. But Selection sort is not optimal in terms of memory writes; cycle sort requires even fewer memory writes than selection sort.
  • Simple technique and used to introduce sorting in teaching.
  • Used as a benchmark for comparison with other algorithms.

Frequently Asked Questions on Selection Sort

Q1. Is Selection Sort Algorithm stable?

The default implementation of the Selection Sort Algorithm is not stable. However, it can be made stable, as the sketch below shows.
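One way to make it stable (a sketch, not the only approach): instead of swapping, remove the minimum and shift the skipped elements one step to the right, so equal keys keep their relative order at the cost of extra writes:

def stable_selection_sort(arr):
    for i in range(len(arr)):
        # Locate the minimum of the unsorted portion, as before
        min_index = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_index]:
                min_index = j
        # Shift elements right instead of swapping, preserving the
        # relative order of equal keys (this costs extra memory writes)
        key = arr[min_index]
        while min_index > i:
            arr[min_index] = arr[min_index - 1]
            min_index -= 1
        arr[i] = key
    return arr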

Q2. Is Selection Sort Algorithm in-place?

Yes, Selection Sort Algorithm is an in-place algorithm, as it requires only O(1) extra space (a single temporary variable for swapping).

Algorithms: Sorting Assignment (SebastianAle / algos_sorting_answers.txt)
Exercises
1. Write pseudocode for bubble sort.
A:
FUNCTION bubbleSort(collection)
    REPEAT
        SET swapped to FALSE
        FOR i = FIRST INDEX of collection to LAST INDEX of collection - 1
            IF collection[i] > collection[i + 1] THEN
                SET tmp to collection[i]
                SET collection[i] to collection[i + 1]
                SET collection[i + 1] to tmp
                SET swapped to TRUE
            END IF
        END FOR
    UNTIL swapped is FALSE
    RETURN collection
END FUNCTION
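For reference, a direct Python translation of this answer might look like the following (names are illustrative):

def bubble_sort(collection):
    swapped = True
    while swapped:
        # Stop as soon as a full pass makes no swaps
        swapped = False
        for i in range(len(collection) - 1):
            if collection[i] > collection[i + 1]:
                # Swap adjacent out-of-order elements
                collection[i], collection[i + 1] = collection[i + 1], collection[i]
                swapped = True
    return collection

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]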
2. Write pseudocode for quicksort.
A:
FUNCTION quickSort(collection, low, high)
    IF low < high THEN
        SET pivotIndex to partition WITH collection, low, high
        CALL quickSort WITH collection, low, pivotIndex - 1
        CALL quickSort WITH collection, pivotIndex + 1, high
    END IF
END FUNCTION

FUNCTION partition(collection, low, high)
    SET pivot to collection[low]
    SET leftwall to low
    FOR i = low + 1 to high
        IF collection[i] < pivot THEN
            swap collection[i] with collection[leftwall + 1]
            SET leftwall to leftwall + 1
        END IF
    END FOR
    swap collection[low] with collection[leftwall]
    RETURN leftwall
END FUNCTION

(The initial call is quickSort WITH collection, 0, LENGTH of collection - 1.)
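For reference, a direct Python translation of the corrected pseudocode (names are illustrative; it uses the same Lomuto-style partition with the first element as pivot):

def quick_sort(collection, low=0, high=None):
    if high is None:
        high = len(collection) - 1
    if low < high:
        # Partition, then recurse on the two sides of the pivot
        pivot_index = partition(collection, low, high)
        quick_sort(collection, low, pivot_index - 1)
        quick_sort(collection, pivot_index + 1, high)
    return collection

def partition(collection, low, high):
    pivot = collection[low]
    leftwall = low
    for i in range(low + 1, high + 1):
        if collection[i] < pivot:
            leftwall += 1
            collection[i], collection[leftwall] = collection[leftwall], collection[i]
    # Put the pivot into its final position
    collection[low], collection[leftwall] = collection[leftwall], collection[low]
    return leftwall

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # [10, 30, 40, 50, 70, 80, 90]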
3. We talked about time complexity in a previous checkpoint, and how to get an idea of the efficiency of an algorithm. After looking at the pseudocode for the above sorting methods, identify why merge sort and quick sort are much more efficient than the others. Walking through each algorithm with a few sample collections may help.
A: Merge sort is efficient because it does not repeatedly iterate over the same collection: it
divides the collection into sub-collections, sorts each half recursively, and then merges the
sorted halves back together in order, which takes O(n log n) time overall.
Quick sort is efficient for the same divide-and-conquer reason, and it partitions the collection
in place, so it uses little extra memory. Each pass compares items only against the pivot rather
than comparing every item with every other item, unlike the O(n^2) sorts above.
4. All of the sorts addressed in this checkpoint are known as comparison sorts. Research bucket sort and explain how it works.
What is the ideal input for bucket sort?
A: Bucket sort starts with an unsorted array; we set up an array of empty buckets and, based on
each item's value range, place the items into the buckets. We then sort the items within each
bucket and finally place them back into the original array, bucket by bucket.
Bucket sort works best when the data are more or less uniformly distributed, or when there is an
intelligent way to choose the buckets from a quick set of heuristics based on the input array.
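A minimal sketch of this idea (assuming floats uniformly distributed in [0, 1) and one bucket per element; the bucket_sort name is illustrative):

def bucket_sort(values):
    # Assumes values are floats in [0, 1), roughly uniformly distributed
    n = len(values)
    buckets = [[] for _ in range(n)]
    for v in values:
        # Larger values land in higher-indexed buckets
        buckets[int(v * n)].append(v)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))  # sort within each small bucket
    return result

print(bucket_sort([0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]))
# [0.32, 0.33, 0.37, 0.42, 0.47, 0.51, 0.52]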
