Just a small post today. I recently stumbled upon a paper [1] on ArXiv about a surprising sorting algorithm that was only discovered in October 2021.
Here is a quick implementation I wrote in Python:
import numpy as np


def swap_list_element(data: list, pos1, pos2):
    """
    Swap two elements in a list based on their indexes.
    Nothing is returned because the input list is modified in place.
    :param data: input list
    :param pos1: index of the first element to swap
    :param pos2: index of the second element to swap
    """
    tmp = data[pos2]
    data[pos2] = data[pos1]
    data[pos1] = tmp


def cant_believe_it_sort(data: list):
    """
    Sort the list in ascending order.
    :param data: input list with elements to sort
    :return: sorted copy of the list
    """
    data_tmp = data.copy()
    # Work on a copy of the input list to keep the original intact in the main block
    for i in range(len(data)):
        for j in range(len(data)):
            if data_tmp[i] < data_tmp[j]:
                swap_list_element(data_tmp, i, j)
    return data_tmp


if __name__ == '__main__':
    random_list = list(np.random.rand(100) * 100)
    print(cant_believe_it_sort(random_list))
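If you want to convince yourself that it really does sort, a quick sanity check against Python’s built-in sorted() can be added to the main block. This is just a sketch, assuming the cant_believe_it_sort function above is in scope:

import numpy as np

random_list = list(np.random.rand(100) * 100)
result = cant_believe_it_sort(random_list)
# The result should match Python's built-in sort exactly,
# since both contain the same elements in ascending order
assert result == sorted(random_list)
print("Sorted correctly:", result[:5], "...")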
It’s obviously not the most efficient algorithm, since it always performs n² comparisons: it is O(n²) in both the best case and the worst case. There are much more advanced algorithms out there that run in O(n log n) on average, so they are much faster and more efficient.
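As a rough illustration of that gap, here is an informal timing sketch (again assuming the cant_believe_it_sort function above is in scope) that compares it to Python’s built-in sorted(), which uses Timsort and runs in O(n log n); exact numbers will of course depend on your machine:

import time
import numpy as np

data = list(np.random.rand(2000) * 100)

start = time.perf_counter()
cant_believe_it_sort(data)  # always performs n * n comparisons
print(f"cant_believe_it_sort: {time.perf_counter() - start:.4f} s")

start = time.perf_counter()
sorted(data)                # Timsort, O(n log n)
print(f"built-in sorted:      {time.perf_counter() - start:.4f} s")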
This algorithm is more of a curiosity, both because of how simply the sorting is performed, with a single comparison-and-swap condition whose direction even looks wrong at first glance yet still produces an ascending order, and because nobody seems to have described it before this paper on ArXiv. After all, Merge Sort was discovered in 1945, Quick Sort was published in 1961, and Bubble Sort was first used in 1962. You would think that all the simple sorting algorithms would have been discovered and published somewhere by now, but no. Some are still being discovered, even to this day…