
Anatomy of a Problem, Part 2 by Vincent A. '17

and the power of the many

A cornerstone of 6.046, MIT’s infamous algorithms class, is navigating around a variety of limitations (algorithm runtime, computer memory, accuracy) and understanding the compromises made in working around them. As a simple example, if I gave you a very long list of numbers (2, 3, 9, 15, …) and told you to find their sum, you might store all the numbers in memory and then add them up. But if memory were limited, so that you could see the numbers one at a time as a stream but couldn’t possibly store them all, you would need a different tactic. In fact, you’d only need to keep track of one thing, the running sum, which you update with each additional number you see from the stream. With a list of numbers like 2, 3, 9, 15, …, the running sum starts out at 0. You see 2, so you update the running sum to 2. Then you see 3, so you update the running sum to 2 + 3 = 5. Then you see 9, so you update the running sum to 5 + 9 = 14, and so on. In this way, you never need to actually store 2, 3, 9, 15, … in memory.
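To make that concrete, here’s a minimal sketch of the streaming approach in Python (the function name is mine, and the iterator stands in for a stream we can’t afford to store):

```python
def streaming_sum(stream):
    """Sum a stream of numbers while storing only the running total."""
    total = 0
    for x in stream:   # each number is seen once, then forgotten
        total += x
    return total

# Summing 2, 3, 9, 15 one at a time: 0 -> 2 -> 5 -> 14 -> 29
print(streaming_sum(iter([2, 3, 9, 15])))  # 29
```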

For this blogpost, I’m going to talk about a more involved problem that arose in 6.046. I’ll discuss it at a high level, leaving out a good deal of the mathy details, though some familiarity with probability helps. For our purposes, you can think of probability as a value between 0 and 1 assigned to events. When you flip a fair coin, the event of getting a head occurs with 50% probability, or 0.5 probability, as does the event of getting a tail. When you roll a fair six-sided die, the probability of getting a number smaller than 3 is 2/6.

Now, the problem we’re going to explore (briefly mentioned in a companion post) will be as follows: given a list of numbers in random order (say 3, 19, 2, 17, 14, 4), I want you to sort them, that is, arrange them in ascending order: 2, 3, 4, 14, 17, 19. Doing this on a computer requires a 2-number comparator: the simple ability to say things like “2 is smaller than 3”, which usually amounts to one line of code. Think about how you would go about sorting 5, 9, 1 to yield 1, 5, 9. You would use a mental 2-number comparator three times: “1 is smaller than 5”, “1 is smaller than 9”, so 1 comes first. “5 is smaller than 9”, so 5 comes second, and 9 comes third. General sorting algorithms rely on the ability to compare 2 different numbers multiple times.
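As a quick illustration (the function name is mine, not from the class), the comparator really is one line, and sorting just calls it over and over:

```python
def is_smaller(x, y):
    """The 2-number comparator: a single line of code."""
    return x < y

# Sorting 5, 9, 1 by hand, exactly as described above:
print(is_smaller(1, 5))  # True, so 1 comes before 5
print(is_smaller(1, 9))  # True, so 1 comes first overall
print(is_smaller(5, 9))  # True, so 5 comes second and 9 third
```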

But what if your comparator was broken? What if it was accurate only 80% of the time? That means that 80 times out of 100, your comparator would say “2 is smaller than 3”, and 20 times out of 100, it would say “3 is smaller than 2”, potentially (and independently) giving different results on the same 2 numbers. Notice that even in the simple case of sorting 5, 9, 1, we had to use our comparator quite a few times, and just a single wrong answer from it (say “9 is smaller than 5”) would ultimately result in an incorrectly sorted list. Hence, if we have an 80% accurate comparator, can we still use it to correctly and quickly sort a very large list of numbers? It turns out that we can. In fact, we can develop an algorithm (perhaps inelegantly, we will call it ALG) that correctly sorts a list, using this broken comparator, with 99% accuracy (i.e. in 99 runs out of 100, ALG will produce a correctly sorted list). Notice that this also means we can achieve 99% sorting accuracy if our 2-number comparator works only 20% of the time, since by flipping every answer this comparator gives, we obtain a new comparator that works 80% of the time, which we can use in ALG.
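To make the brokenness concrete, here’s one way to simulate an 80%-accurate comparator (a sketch; broken_compare is my name for it, and 0.8 is just the accuracy from the problem statement):

```python
import random

def broken_compare(x, y, accuracy=0.8):
    """Answer 'is x smaller than y?', but lie with probability 1 - accuracy."""
    truth = x < y
    if random.random() < accuracy:
        return truth       # 80% of the time: the honest answer
    return not truth       # 20% of the time: the flipped answer
```

(A 20%-accurate comparator is the same function with accuracy=0.2, and negating everything it says gets you back to 80%.)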

Before diving into the details, I’ll mention the intuition, which is pretty simple: since our comparator works more often than not, we can achieve high sorting accuracy by making each comparison, like “Is 2 smaller than 3?”, several times and taking the majority result as our answer.
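In code, the intuition is just a loop and a vote (a sketch reusing the broken_compare function from above; with an even number of trials a tie is possible, and here a tie is resolved arbitrarily as “no”):

```python
def majority_compare(x, y, trials):
    """Ask broken_compare the same question many times; go with the crowd."""
    yes_votes = sum(broken_compare(x, y) for _ in range(trials))
    return yes_votes > trials // 2   # majority verdict on 'is x smaller than y?'
```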
**

More formally, given a list of N numbers in random order, where N is very large, and assuming a broken world in which any comparison between 2 numbers (“Is X smaller than Y?”) is accurate only 80% of the time, develop an algorithm that sorts our list with 99% accuracy.

To start with, if our comparator always worked, instead of working 80% of the time, we could, as noted earlier with the “1, 5, 9” example, produce a sorted list by making multiple comparisons between pairs of numbers in our list. Mergesort is one popular algorithm that sorts an N-element list by making k * N log N comparisons, where k is a small constant (it could be 5, 10, 15, etc.). The number of comparisons an algorithm makes is a good informal benchmark of how fast it is. If an algorithm made k * N^2 comparisons, it would be slower, and if it made k * N^N comparisons, it would be horrendously, awfully slow.
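For reference, here’s a standard mergesort written so that the comparator is pluggable, which is all ALG will need to change (a sketch, not the class’s exact code):

```python
def mergesort(lst, compare):
    """Sort lst using compare(x, y), which answers 'is x smaller than y?'."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left = mergesort(lst[:mid], compare)
    right = mergesort(lst[mid:], compare)
    # Merge the two sorted halves, one comparison at a time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if compare(left[i], right[j]):
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(mergesort([3, 19, 2, 17, 14, 4], lambda x, y: x < y))
# [2, 3, 4, 14, 17, 19]
```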

It turns out the algorithm we develop, ALG, will make k * N * (log N)^2 comparisons, which is reasonably fast. Since k is a constant, we can just choose k to be 40 (this works out well for mathematical reasons, but many choices of k would work here). ALG will simply be a duplicate of mergesort, with the following caveat: for each single comparison made in the original mergesort, which assumes a fully functional comparator, ALG will make that same comparison (“Is 2 smaller than 3?”, for example) with our broken comparator 40 log N times, and take a majority answer. So where the original mergesort might compare 2 and 3 once and decide that 2 is smaller than 3, ALG will compare 2 and 3 a total of 40 log N times, and if our broken comparator says 2 is smaller than 3 a majority of the time, ALG decides that 2 is indeed smaller than 3. The problem is, if we get unlucky, our broken comparator may tell us that 3 is smaller than 2 a majority of the time, which will result in ALG producing a wrongly sorted list. Remarkably, we will demonstrate that ALG produces a correctly sorted list 99% of the time.
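Putting the pieces together, ALG is just mergesort with each single comparison replaced by a majority vote of 40 log N noisy comparisons (a sketch combining the earlier snippets; I’m assuming log base 2, since any fixed base only changes the constant):

```python
import math

def alg(lst):
    """Mergesort, where each comparison is a majority of 40 log N noisy votes."""
    n = len(lst)
    if n <= 1:
        return lst[:]
    trials = round(40 * math.log2(n))  # 40 log N comparisons per majority vote
    return mergesort(lst, lambda x, y: majority_compare(x, y, trials))

print(alg([3, 19, 2, 17, 14, 4]))  # almost always [2, 3, 4, 14, 17, 19]
```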

To give you a better sense of ALG, observe that it hinges on two things:

1) Correctly sorting a list involves making several correct 2-number comparisons.

2) ALG boosts its accuracy by making each 2-number comparison 40 log N times, instead of just once as mergesort does, and taking a majority answer.

Since mergesort makes on the order of N log N comparisons in total, ALG will end up calculating on the order of N log N majority answers, where each majority answer is computed by running our comparator on the same 2 numbers 40 log N times. For ALG to work, every single one of these majority answers should be correct.

To help us understand how ALG achieves a 99% accuracy, we will start by calculating the probability that a single majority answer is correct (remember that we need all N log N of these to be correct).

A single majority answer is obtained by comparing 2 numbers in our list, X and Y, 40 log N times, and taking the majority answer of these comparisons (recall that a single comparison is only 80% accurate). Using a specific tool, the Chernoff bound inequality, we can prove the following: a majority of 40 log N comparisons on X and Y (“is X smaller than Y?”) will yield the wrong answer with probability less than 1/N^4. That is, the probability that a single majority answer is wrong is rather small. The Chernoff bound develops this reasoning rigorously but requires a setup too laborious for the blogs, so let the intuition suffice for now: our comparator is right 80% of the time, so when we make 40 log N comparisons, we expect about 80% of them (well above a majority) to be correct. For a majority answer to be wrong, more than 50% of our comparisons must be wrong, which is such a deviation from the expected behavior that the odds are smaller than 1/N^4.
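Rather than reproduce the Chernoff machinery here, we can at least sanity-check the claim empirically: simulate lots of majority votes and count how often more than half of the 40 log N comparisons come out wrong (a rough experiment, not a proof; the 10,000 repetitions and log base 2 are my choices):

```python
import math
import random

def majority_fails(num_comparisons, accuracy=0.8):
    """Did the wrong answer win a majority of the noisy comparisons?"""
    wrong = sum(random.random() > accuracy for _ in range(num_comparisons))
    return wrong > num_comparisons // 2

N = 1000
comparisons = round(40 * math.log2(N))  # roughly 400 comparisons per vote
failures = sum(majority_fails(comparisons) for _ in range(10_000))
print(failures / 10_000)  # prints 0.0 in practice, consistent with < 1/N^4
```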

However, for ALG itself to be correct, it needs something stronger than one majority answer being correct: all N log N majority answers it computes must be correct, and if at least one of them is incorrect, then ALG fails. It thus helps to compute the probability that at least one of the majority answers is incorrect, which is the probability that ALG fails.

We can utilize a simple rule. Given a set of events, the probability that at least one of the events happens is no greater than the sum of probabilities of the individual events. For instance, if you have a 3% chance of winning the lottery, and a 7% chance of receiving a letter from the Hogwarts School of Witchcraft and Wizardry, the probability of winning the lottery or receiving the letter is no greater than 3% + 7% = 10%.

How does this help us with regard to ALG? Well, we can think of a single majority answer being wrong as an event, which we know happens with probability less than 1/N^4. The rule from the preceding paragraph implies that the probability of at least one majority answer being wrong is no greater than the sum of the probabilities of each majority answer being wrong. There are N log N majority answers, so the probability that at least one majority answer is wrong is less than N log N * 1/N^4 = log N / N^3. This bounds the probability that ALG fails. Therefore, ALG succeeds with probability at least 1 – (log N / N^3).

Now, it turns out that N^3 > 100 log N for very large N, which in turn implies that 1/N^3 < 1/(100 log N), and thus log N / N^3 < log N / (100 log N) = 1/100. Therefore, 1 – (log N / N^3) > 99/100.
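A quick numeric spot check of that chain of inequalities (using log base 2, as an assumption; any fixed base only shifts where “very large” starts):

```python
import math

for N in [10, 100, 1000, 10**6]:
    print(N, math.log2(N) / N**3 < 1 / 100)
# All print True: even N = 10 gives log N / N^3 of about 0.0033 < 0.01
```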

But ALG succeeds with probability at least 1 – (log N / N^3), which is greater than 99/100. Therefore, our algorithm has better than 99% accuracy, an accuracy that can be improved arbitrarily by making more than 40 log N comparisons to compute each majority answer.

**

This is an example of a randomized algorithm. ALG could still fail, but the odds of that are smaller than 1%, and this is the compromise we accept in navigating around a broken comparator. Randomized algorithms have many applications, from cybersecurity to load balancing on high-traffic websites to building efficient data structures.

Yet, they represent a tiny subset of the detailed, fast-paced, diverse and often overwhelming concepts presented in one of my most terrifying/painful/highly enjoyable classes at MIT.