Sample Pooling: An Opportunity for a 40x Improvement in Covid-19 Testing

Testing capacity has been a major obstacle in the battle against Covid-19. Scarce capacity has led to the rationing of testing and to overloaded testing resources. Even today, it is not unusual to hear of 7-day waits for the results of a test that can be run in several hours. Such data suggest a process that is over 98% queue time, a clear symptom of overload. This is a big problem.

The Solution Already Exists

Fortunately, there is an approach known as sample pooling that has the potential to increase testing capacity by at least an order of magnitude. And, it is already in use today. It was mentioned in a New York Times article of April 4, 2020 that discussed the unusual effectiveness of Germany’s Covid response (1). It stated:

“Medical staff, at particular risk of contracting and spreading the virus are regularly tested.  To streamline the procedure, some hospitals have started doing block tests, using the swabs of 10 employees, and following up with individual tests only if there is a positive result.”

On April 6, 2020, the JAMA website published a letter entitled “Sample Pooling as a Strategy to Detect Community Transmission of SARS-CoV-2” (2). It examined the effectiveness of processing 2,888 specimens in 292 pooled samples. Both of these publications demonstrated the potential to improve existing capacity by close to 10x with sample pooling approaches.

What is Sample Pooling?

Sample pooling combines individual specimens into a pool or block. If a pool of ten specimens tests negative, this establishes that all ten specimens are negative using a single test. Only if the pool tests positive is it necessary to allocate scarce capacity to individual tests. This approach exploits the high sensitivity of PCR tests. In fact, the CDC mentions sample pooling in its publications, but only as a strategy to lower testing cost, stating:

Available evidence indicates that pooling might be a cost-effective alternative to testing individual specimens with minimal if any loss of sensitivity or specificity. (3)

Sample pooling is particularly valuable in the early stages of an epidemic for two reasons. First, disease prevalence is still low, which means that a high percentage of pooled samples will test negative, making pooling highly efficient. Second, the early stages of an epidemic typically face the greatest capacity limitations, so improvements in capacity produce disproportionate benefits in finding and stopping disease propagation.

Sample Pooling and Information Theory

Interestingly, the mechanism of action behind sample pooling has strong parallels with ideas that have been exploited in software engineering for decades, ideas that might aid the use of this technique in medicine. In this article, I’d like to explore some insights from the engineering world that might be useful to the world of medicine.

The problem of finding infected patients in a large population is similar to the problem of finding an individual item in a large pool of data. In software engineering, one approach for doing this is known as the binary search algorithm. For example, let’s say we want to find the number 12,345 in a sorted list of numbers. We would split the list in half and ask if the number was in the lower half or the upper half. We would then repeat this step on progressively smaller subsets until we found the answer. How much does this improve efficiency? For a list of 1,000,000 numbers, a binary search can find the number with 20 probes rather than the average of 500,000 probes required to search the same list sequentially. This difference has caught the attention of software engineers.
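For readers who want to see this concretely, here is a minimal binary search sketch. It is a generic illustration of the algorithm, not something taken from either publication:

```python
def binary_search(sorted_list, target):
    """Return (index, probes) for target in sorted_list, or (-1, probes) if absent."""
    low, high = 0, len(sorted_list) - 1
    probes = 0
    while low <= high:
        probes += 1
        mid = (low + high) // 2  # each probe halves the remaining search range
        if sorted_list[mid] == target:
            return mid, probes
        elif sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, probes

# Searching 1,000,000 sorted numbers takes at most about 20 probes,
# because 2^20 > 1,000,000.
numbers = list(range(1_000_000))
index, probes = binary_search(numbers, 12_345)
```

Each probe eliminates half of the remaining candidates, which is why the probe count grows with the logarithm of the list size rather than the size itself.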

The high performance of the binary search comes from exploiting insights from the field of Information Theory. This field shows us how to generate the maximum possible information from each probe. By generating more information per probe, we need fewer probes to complete our search.

From Information Theory, we know that a test with a binary answer, such as True/False or Infected/Uninfected, will generate maximum information per test if the probability of passing (or failing) the test is 50 percent. Tests with a low probability of failure are actually very inefficient at generating information. For example, if the probability of a Covid-positive outcome is 1 out of 1,000, then the test generates surprisingly little information. If 1 out of 1,000 patients is positive, then an individual Covid test will identify 0.999 Covid-negative patients per test. Let’s start with the numbers that appear in the JAMA letter, examine what was achieved, and ask if Information Theory would help us gain further improvements.
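This effect can be quantified with Shannon’s entropy formula, which gives the information, in bits, produced by a binary test that comes back positive with probability p. The sketch below is my own illustration of the point, not a calculation from the articles:

```python
import math

def bits_per_test(p):
    """Shannon entropy (bits) of a binary test that is positive with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a test whose outcome is certain yields no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A test with a 50 percent positive rate yields the maximum 1 bit per test.
max_info = bits_per_test(0.5)
# A test run on a population with 1-in-1,000 prevalence yields far less,
# roughly 0.011 bits per test.
rare_info = bits_per_test(0.001)
```

The roughly 90-fold gap between these two numbers is the headroom that pooling strategies try to capture.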

Round 1 Block Size 10 Gives 9.3x Improvement

If the subjects in the JAMA study were tested individually, it would require 2,888 tests. By pooling the samples of 9 to 10 patients together, the probability that the pooled sample would test positive was increased. This raised the information generated by each test, thereby increasing its efficiency. The same information was generated in fewer tests because there was more information per test. As a figure of merit for a testing strategy, I have calculated the average number of negative patients produced per test, which I have labeled as Average Cleared per Test in the following table. As the table indicates, the pooling strategy in the JAMA letter was able to generate 9.3 negative patients per test, compared to 0.999 negative patients per test with non-pooled tests. That raises productivity by 9.3x.
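The arithmetic behind this figure of merit can be reconstructed directly. The 292 pools and 2 positives come from the JAMA letter; the assumption that each positive pool held 10 specimens (and so required 20 individual follow-up tests) is mine, which is why this sketch lands slightly below the quoted 9.3:

```python
patients = 2888
positives = 2

round1_pools = 292                     # pooled tests reported in the JAMA letter
positive_pools = 2                     # pools that required individual follow-up
followup_tests = positive_pools * 10   # assumed pool size of 10

total_tests = round1_pools + followup_tests        # 312 tests in all
negatives_cleared = patients - positives           # 2,886 patients cleared
cleared_per_test = negatives_cleared / total_tests # = 9.25, close to the 9.3 above
```

The exact figure depends on how many of the positive pools held 9 versus 10 specimens, but the order of magnitude is unaffected.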

If we recognize this as an Information Theory problem, we can exploit ideas from this field to improve our solution. For example, maximum information is generated when a probe has a 50 percent success rate. In the JAMA study, disease prevalence was 0.07 percent. At this level, there is only a 0.7 percent chance that a pooled block of 10 samples will test positive. This is nowhere close to the 50 percent rate required for optimum information generation.

Raising Round 1 Block Size to 40 Gives 19x Improvement

So, let’s just look at what would happen if we raised our block size to 40 samples. As the table below indicates, we can generate 19 negative patients per test, more than doubling our productivity per test. As Information Theory suggests, we can get more information out of a Round 1 test when we increase the likelihood of a positive test.

Note that to keep these calculations comparable, I consistently use the JAMA study numbers of 2,888 patients with 2 positive cases. I also make the conservative assumption that the two positive cases appear in different blocks. And, because 2,888 specimens is not an integral multiple of 40, I used 71 blocks of 40 and one block of 48. The 80 potentially positive samples go into a second round of individual testing, which finds 78 more negatives and 2 positives.
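The block-of-40 arithmetic follows the conventions just described (71 blocks of 40 plus one block of 48, with the two positives in different blocks of 40) and can be checked in a few lines:

```python
patients = 2888
positives = 2                          # assumed to fall in two different blocks of 40

round1_tests = 71 + 1                  # 71 blocks of 40 plus one block of 48
round2_tests = 2 * 40                  # individually retest both positive blocks
total_tests = round1_tests + round2_tests          # 152 tests in all

negatives_cleared = patients - positives           # 2,886 patients cleared
cleared_per_test = negatives_cleared / total_tests # = 18.99, about 19
```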

Using Three Rounds Gives 41.8x Improvement

There is a disadvantage when large Round 1 block sizes are followed by a second round of individual tests. We are passing a large number of potentially positive samples into the intrinsically inefficient Round 2 of individual testing. For example, while a negative test on a Round 1 block of 500 could yield 500 negatives, a single positive within the block would require individual testing of all 500 samples within that block. Thus, we’ve created a need for a lot of low-efficiency tests. We’d like to get the efficiency benefit that occurs when a pool tests negative, but we’d also like to reduce the penalty of using individual testing when we find a positive pool.

We can do this by using a more efficient strategy to search the positive pool. Instead of following Round 1 with individual testing, let’s insert an additional round of pooled testing before the round of individual testing. Why don’t we use a Round 1 block of 100, a Round 2 block of 10, and a final Round 3 of individual tests? In effect, Round 2 will filter out additional negatives, allowing Round 3 to find the positives with fewer tests. In the example below, Round 2 drops the number of samples that continue on to Round 3 from 200 to 20. This additional intermediate round allows us to generate 41.8 negative patients per test, creating another doubling in productivity.
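The three-round arithmetic can be reconstructed the same way. I assume, as before, that the remainder is absorbed into one oversized Round 1 block (28 blocks of 100 plus one of 88) and that the two positives stay in separate blocks at every round:

```python
patients = 2888
positives = 2                        # assumed to land in different blocks at every round

round1_tests = 28 + 1                # 28 blocks of 100 plus one block of 88
round2_tests = 2 * (100 // 10)       # each positive block splits into 10 pools of 10
round3_tests = 2 * 10                # each positive Round 2 pool is tested individually

total_tests = round1_tests + round2_tests + round3_tests   # 69 tests in all
cleared_per_test = (patients - positives) / total_tests    # = 41.8
```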

The additional intermediate round makes a big difference because it ensures that the pool of candidates reaching the inefficient final stage (1 test per patient) has a higher proportion of positives. This, in turn, enables the final stage to generate more information per test. It works like the binary search algorithm, which is extremely efficient because it operates each round of testing near the theoretically optimum 50 percent failure rate.

Moving to Four Rounds Gives 54.5x Improvement

Let’s look at the effect of adding another intermediate round, which permits us to use a higher block size on the first round. The approach below uses 4 rounds of testing with blocks of 125, 25, 5, and 1. Now we are up to 54.5 negative patients per test. This is a further improvement, although not a doubling.
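Again the figure can be reconstructed under the same conventions: the two positives stay in separate blocks throughout, and the remainder of the 2,888 specimens is folded into one oversized Round 1 block (a reconstruction on my part, since the article’s table is not reproduced here):

```python
patients = 2888
positives = 2                        # assumed to land in different blocks at every round

round1_tests = 22 + 1                # 22 blocks of 125 plus one oversized block of 138
round2_tests = 2 * (125 // 25)       # each positive block splits into 5 pools of 25
round3_tests = 2 * (25 // 5)         # each positive pool splits into 5 pools of 5
round4_tests = 2 * 5                 # each positive Round 3 pool is tested individually

total_tests = round1_tests + round2_tests + round3_tests + round4_tests  # 53 tests
cleared_per_test = (patients - positives) / total_tests                  # = 54.5
```

Note the diminishing returns: going from three rounds to four saves 16 tests, whereas going from two rounds to three saved 83.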

Ten Million Tests with Less Than One Negative per Test

Let’s look at the way we do Covid testing today in this context. As of today, we’ve done at least 10 million Covid-19 tests worldwide; almost all of these tests have been done by inefficiently generating slightly less than 1 negative patient per test. Even worse, this inefficiency has occurred in the early stages of a pandemic, when capacity was severely constrained and when test productivity has the maximum possible impact.

If we generate information inefficiently our tests do less work. They give us fewer negative patients per test, larger backlogs, and massive queues. Yet, we vitally need test results to detect and control community transmission. We need them to find asymptomatic and presymptomatic spreaders. We need test results to trace contacts and isolate them before they start spreading. The queues we create with overloaded capacity delay testing and exponentially raise the number of secondary infections that we create while waiting for results. When we increase the efficiency with which we generate information we help almost every aspect of epidemic control.

The Crucial Role of Disease Prevalence

Sample pooling does not work equally well in all stages of an epidemic. Large blocks are efficient because they use a single test to find many negative cases. This works well when disease prevalence is low. However, if a single positive case shows up in the pool, the test will generate no negative cases. This means that as disease prevalence rises, sample pooling must use smaller and smaller block sizes. This in turn produces fewer and fewer negative patients per test. In fact, at the point where 23 percent of the population is infected, a Round 1 block size of 10 produces zero efficiency improvement over individual tests.

It is vital to recognize that the greatest benefits of sample pooling occur early in an epidemic. If we permit months of exponential spread before we use this powerful method, the epidemic may attain a prevalence at which sample pooling loses most of its benefits.

It’s Late but Not Too Late

Some opportunities to use Sample Pooling are gone, but future opportunities may be even greater. The explosion of Covid-19 spread has not yet reached highly vulnerable developing countries. Such countries have much more severe constraints in both testing capacity and the availability of people to perform the tests. They will most certainly face massive testing bottlenecks. This is still a perfect time to take what we have learned and use it to save lives.

Don Reinertsen



Additional Recommendations and Comments

1. Base the initial block size on the prevalence of the disease within the tested subpopulation. Large blocks only improve efficiency when disease prevalence is low. Once the disease has spread to more than 7 percent of a group, a Round 1 block of 10 decreases in effectiveness. By the time 30 percent of a group has the disease, a Round 1 block size of 10 will have a 97.2 percent chance of testing positive, so it will almost never yield negative patients. Thus, sample pooling must be used early. This powerful tool loses its power when a disease becomes more prevalent.

2. Today’s multiple-round testing strategies tend to use pooled blocks in Round 1 to prefilter samples going into a final round of individual tests that have higher specificity and sensitivity. If we cascaded multiple levels of pooling prior to this final round of individual testing, these higher quality individual tests become more productive because they receive a flow of samples with more positives.

3. It is almost certain that larger blocks produce more sample dilution. Nevertheless, it seems reasonable to expect that nucleic acid amplification tests (NAAT), like qPCR tests, would be robust with respect to this dilution. This is a matter for true experts to decide. It seems likely that the intrinsic amplification of signals by 10^6 or more would provide sufficient margin to tolerate the 100-fold dilution caused by pooling.

4. Regulatory risk could also be a huge obstacle. The CDC may view sample pooling primarily as a technique to reduce costs, rather than a technique to improve throughput. The FDA may view sample pooling as a modification of FDA-cleared procedures and therefore require special clearance under CLIA. There is quite likely a tension between the sincere desire of regulatory institutions to control risk, and the desire of clinicians to gain control over a rapidly growing epidemic. When infected cases double every 3 to 8 days, regulatory delays are clearly very costly. It would not be unreasonable to ask whether the damage caused by allowing thousands of untested people to contribute to the spread of a disease is more or less than the expected damage likely to come from a new method. Judging pooled specimens as an unproven and therefore unusable technology may have great human costs.


1. Bennhold, K. (2020, April 4). A German Exception? Why the Country’s Coronavirus Death Rate Is Low. The New York Times.

2. Hogan CA, Sahoo MK, Pinsky BA. Sample Pooling as a Strategy to Detect Community Transmission of SARS-CoV-2. JAMA. Published online April 06, 2020. doi:10.1001/jama.2020.5445

3. Centers for Disease Control and Prevention. Screening Tests To Detect Chlamydia trachomatis and Neisseria gonorrhoeae Infections — 2002. MMWR 2002; 51(No. RR-15): p.15.