Search Algorithms and Sorted Lists
There are sorted lists, and then there are sorted lists. When ordering items, we tend to do so either alphabetically or numerically. If the list contains no duplicate values, it's guaranteed to have a unique minimum and a unique maximum value. We also know exactly where those values live: in an ascending list, the minimum is the first element and the maximum is the last (a descending list simply swaps them). Therefore, when a list is sorted, a search algorithm can begin with the first and last elements.
I can guarantee that no matter how big your list is, a whole lotta values will be excluded from it. So why not begin by determining whether your target value (the value you're searching for) falls outside the range defined by your list's minimum and maximum? You don't have to check every single value in a sorted list (all N elements) to know that the value you're searching for isn't in that list. In fact, sometimes you only need to check one element to determine that a target value isn't there. For example, if the list is ascending and the target value is lower than the minimum value, there's no need to check any further elements. (I demonstrate this in JavaScript…)
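Here's one way that bounds check might look in JavaScript. This is a sketch of the idea rather than a full search: the function name `isPossiblyInList` and the assumption of an ascending list are mine.

```javascript
// Sketch: rule a target out using only the first and last elements.
// Assumes `sortedList` is sorted in ascending order.
function isPossiblyInList(sortedList, target) {
  if (sortedList.length === 0) return false;
  const min = sortedList[0];                     // first element = minimum
  const max = sortedList[sortedList.length - 1]; // last element = maximum
  // At most two comparisons can exclude the target entirely.
  return target >= min && target <= max;
}

console.log(isPossiblyInList([3, 7, 12, 25], 1));  // false: below the minimum
console.log(isPossiblyInList([3, 7, 12, 25], 99)); // false: above the maximum
console.log(isPossiblyInList([3, 7, 12, 25], 10)); // true: might be present
```

Note that a `true` result only means the target *might* be in the list; a full search is still needed to confirm it.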
If we think about this in terms of time complexity, the worst case for an unordered search is checking all N list values and determining that the target isn't in the list. That complexity is typically expressed as O(N). With a sorted list, a target value's non-membership can sometimes be determined in a single comparison. And as it turns out, there's never a need to check every single value in a sorted list. The worst-case scenario becomes one where the target is greater than the minimum and less than the maximum, but isn't contained in the list. Once the target has been compared to the minimum and maximum values, the usual binary search algorithm can be applied. This means that searching a sorted list (of unique values) should never require time complexity greater than O(log N).
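To make that concrete, here's a sketch that combines the min/max check with a standard binary search. It assumes an ascending list of unique numbers, and the function name `sortedSearch` is my own.

```javascript
// Sketch: bounds check first, then standard binary search.
// Assumes `list` is sorted ascending with unique values.
function sortedSearch(list, target) {
  if (list.length === 0) return -1;
  // Step 1: two comparisons may exclude the target outright.
  if (target < list[0] || target > list[list.length - 1]) return -1;
  // Step 2: binary search narrows the rest in O(log N) steps.
  let lo = 0;
  let hi = list.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (list[mid] === target) return mid;  // found: return its index
    if (list[mid] < target) lo = mid + 1;  // discard the lower half
    else hi = mid - 1;                     // discard the upper half
  }
  return -1; // worst case: target was inside the bounds but not in the list
}

console.log(sortedSearch([2, 5, 8, 13, 21], 13)); // 3
console.log(sortedSearch([2, 5, 8, 13, 21], 9));  // -1 (in range, but absent)
```

Each loop iteration halves the remaining search space, which is where the O(log N) bound comes from.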