
Management Tips

Covid-19 Testing Scarcity: A Self-Inflicted Wound

It seems obvious that the smart way to control an epidemic is to prevent infected patients from triggering exponential chains of secondary infections. It also seems obvious that to do this we must first find infected patients. So, this is how we fight the war against Covid-19 today. We try to optimize the use of a single test to find a single infected patient. We focus on improving the accuracy, cost, and speed of this individual test. And we’ve done a great job; we test with a precision, efficiency, and speed that our predecessors could never dream of. Yet there is a dark side. By focusing on the one test/one person problem, we have neglected another critical problem: what we might call the many tests/many people problem, a problem that is vitally important during an epidemic. What may not be obvious is that the many people problem is very different from the single patient problem, and it has a very different solution.

Optimize a Single Test or a System of Many Tests?

Let’s use a simple analogy. We know that a chain is only as strong as its weakest link. One way to make a strong chain is to separately test each individual link in the chain. We reason that if all the links are strong, then the chain will be strong. This frames the testing problem as a one link/one test problem. A different way to view this problem is to see the chain as a system composed of many links. Viewed this way, we realize that we could attach 100 links together and test them in a single test. This allows us to use a single test to establish that 100 links are good. The many links/many tests problem has a different solution than the one link/one test problem.

What does this have to do with Covid-19 testing? By now we have done about 20 million Covid-19 tests worldwide. Most of the time, a test establishes that 1 patient is negative. Less frequently, it establishes that 1 patient is positive. What is certain is that we never identify more than one negative per test. What would happen if there were a way to identify as many as 10 to 30 negative patients in a single test? We would increase our testing capacity by a factor of 10 to 30. The 20 million tests we’ve already done could have done the work of 200 to 600 million individual tests.

The Magic of Sample Pooling/Block Testing

The method for producing more results per test is already in use. It is called sample pooling or block testing. It has been described in the Journal of the American Medical Association. It has been reported on in the New York Times. It is used in Germany and Korea. It works by combining samples from 10 patients into a single batch and testing this batch with a single test. If the test is negative, which happens most of the time, it has identified 10 negative patients in a single test. If the test is positive, which happens less frequently, we need additional tests to determine which patients were positive. The crucial difference is that we need much less individual testing. If a disease is present in only 1 percent of the population, a pool of ten samples will test negative about 90 percent of the time, requiring no additional tests. Sample pooling is perfectly suited for Covid-19 PCR testing because any dilution caused by combining 10 samples is trivial in the face of the power of a PCR test to amplify the presence of DNA by a factor of a million.
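For readers who want to check that arithmetic, here is a short Python sketch. The 1 percent prevalence and the pool size of 10 are illustrative assumptions, and infections are assumed to be independent.

```python
# Sketch: the payoff of pooling 10 samples at 1 percent prevalence.
# Assumes independent infections; the prevalence and pool size are illustrative.
prevalence = 0.01
pool_size = 10

# Probability that every sample in the pool is negative.
p_pool_negative = (1 - prevalence) ** pool_size
print(f"Chance a pool of {pool_size} tests negative: {p_pool_negative:.1%}")  # ~90.4%

# Expected tests per pool: one pooled test, plus individual retests if it is positive.
expected_tests = 1 + pool_size * (1 - p_pool_negative)
negatives_per_test = pool_size * (1 - prevalence) / expected_tests
print(f"Negatives cleared per test: {negatives_per_test:.1f}")  # ~5.1 vs ~1.0 for individual testing
```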

What’s Stopping Us?

A key question is, why don’t we use this higher productivity approach to testing in America today? Quite simply, because our clinical laboratories are required to follow testing procedures mandated by the FDA and CDC. A clinical laboratory can lose its certification if it does not follow mandated procedures. Unfortunately, the test currently prescribed by the CDC, the CDC 2019 Novel Coronavirus (2019-nCoV) Real-Time RT-PCR Diagnostic Panel dated 3/30/2020, has no procedure for pooled samples; it only permits testing individual samples for individual patients. In other words, inefficient Covid testing is mandated by the US government.

We pay a high price for this inefficiency. Because we require 1 test per patient, we have a scarcity of tests. Because we have a scarcity of tests, we focus them on people who already show Covid-19 symptoms. Since about 20 percent of Covid-19 infections are asymptomatic, this leaves thousands of untested people spreading Covid-19 through our communities. And, since scarce testing prevents us from locating the sources of infection in our communities, we resort to brute force approaches like locking down our entire economy.

Even worse, in addition to forcing lockdowns, inefficient testing cripples our ability to reopen the economy. Reopening restarts the free movement of asymptomatic or pre-symptomatic people within a large pool of uninfected people. This is only workable if we can find new infections quickly, trace their contacts, and isolate them quickly. While we can’t prevent all chains of infection, we can keep these chains short by shutting them down quickly. And what does it take to find a chain quickly? Frequent testing. Frequent testing is the key to preventing new waves of infection, and testing efficiency is what makes frequent testing cost-effective.

We Can Do Something

In fairness, the choice of the CDC and FDA to mandate inefficient testing is not motivated by malice. They are keenly aware of the danger of trying to fight an epidemic with unreliable testing and they are simply trying to do their job and promote public welfare. Test scarcity is simply an unintended and perhaps unexamined consequence of their choices. Now is a great time to change this and save lives.

Don Reinertsen

Sample Pooling: An Opportunity for a 40x Improvement in Covid-19 Testing

Testing capacity has been a major obstacle in the battle against Covid-19. Scarce capacity has led to the rationing of testing and to overloaded testing resources. Even today, it is not unusual to hear of 7-day waits for the results of a test that can be run in several hours. Such data suggests a process that is over 98 percent queue time, a clear symptom of overload. This is a big problem.
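As a rough sketch of that arithmetic, assume an illustrative 3-hour processing time (actual hands-on lab time varies):

```python
# Sketch: how much of a 7-day turnaround is pure waiting?
# The 3-hour processing time is an illustrative assumption; actual lab times vary.
turnaround_hours = 7 * 24   # a 7-day wait for results
processing_hours = 3        # time the sample actually spends being tested

queue_fraction = (turnaround_hours - processing_hours) / turnaround_hours
print(f"Share of turnaround spent waiting in queue: {queue_fraction:.1%}")  # ~98.2%
```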

The Solution Already Exists

Fortunately, there is an approach known as sample pooling that has the potential to increase testing capacity by at least an order of magnitude. And, it is already in use today. It was mentioned in a New York Times article of April 4, 2020 that discussed the unusual effectiveness of Germany’s Covid response. It stated:

“Medical staff, at particular risk of contracting and spreading the virus, are regularly tested. To streamline the procedure, some hospitals have started doing block tests, using the swabs of 10 employees, and following up with individual tests only if there is a positive result.”

On April 6, 2020, the JAMA website published a letter entitled “Sample Pooling as a Strategy to Detect Community Transmission of SARS-CoV-2” (2). It examined the effectiveness of processing 2,888 specimens in 292 pooled samples. Both of these articles demonstrated the potential to improve existing capacity by close to 10x with sample pooling approaches.

What is Sample Pooling?

Sample pooling combines individual specimens into a pool or block. If a pool of ten specimens tests negative, this establishes that all ten specimens are negative using a single test. Only if the pool tests positive is it necessary to allocate scarce capacity to individual tests. This approach exploits the high sensitivity of PCR tests. In fact, the CDC mentions sample pooling in its publications, but only as a strategy to lower testing cost, stating:

Available evidence indicates that pooling might be a cost-effective alternative to testing individual specimens with minimal if any loss of sensitivity or specificity. (3)

Sample pooling is particularly valuable in the early stages of an epidemic for two reasons. First, disease prevalence is still low, which means that a high percentage of pooled samples will test negative, making pooling highly efficient. Second, the early stages of an epidemic typically face the greatest capacity limitations, so improvements in capacity produce disproportionate benefits in finding and stopping disease propagation.

Sample Pooling and Information Theory

Interestingly, the mechanism of action behind sample pooling has strong parallels with ideas that have been exploited in software engineering for decades. In this article, I’d like to explore some insights from the engineering world that might aid the use of this technique in medicine.

The problem of finding infected patients in a large population is similar to the problem of finding an individual item in a large pool of data. In software engineering, one approach for doing this is known as the binary search algorithm. For example, let’s say we want to find the number 12,345 in a sorted list of numbers. We would split the list in half and ask if the number was in the lower half or the upper half. We would then repeat this step on progressively smaller subsets until we found the answer. How much does this improve efficiency? For a list of 1,000,000 numbers, a binary search can find the number with about 20 probes, rather than the average of 500,000 probes required to search the same list sequentially. This difference has caught the attention of software engineers.
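Here is a minimal Python sketch of that binary search; the list contents are purely illustrative:

```python
# Sketch: binary search locates a value in a sorted list of 1,000,000 numbers
# in at most about 20 probes, versus roughly 500,000 on average for a sequential scan.
def binary_search(sorted_list, target):
    probes = 0
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, probes
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

numbers = list(range(1_000_000))
index, probes = binary_search(numbers, 12_345)
print(f"Found 12,345 at index {index} after {probes} probes")
```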

The high performance of the binary search comes from exploiting insights from the field of Information Theory. This field shows us how to generate the maximum possible information from each probe. By generating more information per probe, we need fewer probes to complete our search.

From Information Theory, we know that a test with a binary answer, such as True/False or Infected/Uninfected, will generate maximum information per test if the probability of passing (or failing) the test is 50 percent. Tests with a low probability of failure are actually very inefficient at generating information. For example, if the probability of a Covid-positive outcome is 1 out of 1000, then the test generates surprisingly little information. If 1 out of 1000 patients is positive, then an individual Covid test will identify 0.999 Covid-negative patients per test. Let’s start with the numbers that appear in the JAMA letter, examine what was achieved, and ask whether Information Theory would help us gain further improvements.
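As a small illustration, the information produced by a binary test can be computed directly from Shannon entropy; the probabilities below are illustrative:

```python
# Sketch: information (in bits) produced by a test with a binary outcome.
# Shannon entropy peaks at 1 bit per test when the two outcomes are equally likely.
import math

def bits_per_test(p_positive):
    """Shannon entropy of a yes/no test with the given probability of a positive."""
    probs = (p_positive, 1 - p_positive)
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(f"{bits_per_test(0.5):.3f} bits per test at a 50% positive rate")    # 1.000
print(f"{bits_per_test(0.001):.3f} bits per test at 1-in-1000 positives")  # ~0.011
```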

Round 1 Block Size 10 Gives 9.3x Improvement

If the subjects in the JAMA study were tested individually, it would require 2,888 tests. By pooling the samples of 9 to 10 patients together, the probability that a pooled sample would test positive was increased. This raised the information generated by each test, thereby increasing its efficiency. The same information was generated in fewer tests because there was more information per test. As a figure of merit for a testing strategy, I have calculated the average number of negative patients produced per test, which I have labeled Average Cleared per Test in the following table. As the table indicates, the pooling strategy in the JAMA letter was able to generate 9.3 negative patients per test, compared to 0.999 negative patients per test with non-pooled tests. That raises productivity by 9.3x.
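The sketch below reproduces that figure of merit. The pool composition and the number of follow-up tests are my assumptions for illustration; the JAMA letter reports only the aggregate results.

```python
# Sketch: "Average Cleared per Test" for the JAMA pooling numbers.
# Assumptions (mine, for illustration): 2,888 specimens in 292 pools of 9 to 10,
# the 2 positives in different pools, and ~19 individual follow-up tests to
# resolve those two positive pools.
specimens, positives = 2888, 2
pooled_tests = 292
followup_tests = 19

negatives = specimens - positives
print(f"Pooled:     {negatives / (pooled_tests + followup_tests):.1f} negatives cleared per test")  # ~9.3
print(f"Individual: {negatives / specimens:.3f} negatives cleared per test")                        # ~0.999
```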

If we recognize this as an Information Theory problem, we can exploit ideas from this field to improve our solution. For example, maximum information is generated when a probe has a 50 percent success rate. In the JAMA study, disease prevalence was 0.07 percent. At this level, there is only a 0.7 percent chance that a pool of 10 samples will test positive. This is nowhere close to the 50 percent rate required for optimum information generation.

Raising Round 1 Block Size to 40 Gives 19x Improvement

So, let’s look at what would happen if we raised our block size to 40 samples. As the table below indicates, we can generate 19 negative patients per test, more than doubling our productivity per test. As Information Theory suggests, we can get more information out of a Round 1 test when we increase the likelihood of a positive test.

Note that to keep these calculations comparable, I consistently use the JAMA study numbers of 2,888 patients with 2 positive cases. I also make the conservative assumption that the two positive cases appear in different blocks. And, because 2,888 samples is not an integral multiple of 40, I used 71 blocks of 40 samples and one block of 48 samples. The 80 potentially positive samples go into a second round of individual testing, which finds 78 more negatives and 2 positives.
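A sketch of that arithmetic, under the assumptions just described:

```python
# Sketch of the block-of-40 arithmetic described above.
specimens, positives = 2888, 2
round1_tests = 71 + 1         # 71 blocks of 40 plus one block of 48
round2_tests = 2 * 40         # individually retest every sample in the two positive blocks
total_tests = round1_tests + round2_tests   # 152 tests in all

print(f"{(specimens - positives) / total_tests:.1f} negatives cleared per test")  # ~19.0
```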

Using Three Rounds Gives 41.8x Improvement

There is a disadvantage when large Round 1 block sizes are followed by a second round of individual tests. We are passing a large number of potentially positive samples into the intrinsically inefficient Round 2 of individual testing. For example, while a negative test on a Round 1 block of 500 could clear 500 negatives, a single positive within the block would require individual testing of all 500 samples in that block. Thus, we’ve created a need for a lot of low-efficiency tests. We’d like to keep the efficiency benefit that occurs when a pool tests negative, but we’d also like to reduce the penalty of individual testing when we find a positive pool.

We can do this by using a more efficient strategy to search the positive pool. Instead of following Round 1 with individual testing, let’s insert an additional round of pooled testing before the round of individual testing. Why don’t we use a Round 1 block of 100, a Round 2 block of 10, and a final Round 3 of individual tests? In effect, Round 2 will filter out additional negatives, allowing Round 3 to find the positives with fewer tests. In the example below, Round 2 drops the number of samples that continue on to Round 3 from 200 to 20. This additional intermediate round allows us to generate 41.8 negative patients per test, creating another doubling in productivity.
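A sketch of the three-round arithmetic; the block counts are my reconstruction of the example:

```python
# Sketch of the three-round example: Round 1 blocks of 100, Round 2 blocks of 10,
# Round 3 individual tests. Block counts are my reconstruction, and the 2 positives
# are assumed to land in different full-size blocks at every round.
specimens, positives = 2888, 2
round1_tests = 28 + 1            # 28 blocks of 100 plus one block of 88
round2_tests = 2 * (100 // 10)   # the 2 positive blocks re-pooled into blocks of 10
round3_tests = 2 * 10            # the 2 positive Round 2 blocks tested individually
total_tests = round1_tests + round2_tests + round3_tests   # 69 tests

print(f"{(specimens - positives) / total_tests:.1f} negatives cleared per test")  # ~41.8
```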

The additional intermediate round makes a big difference because it ensures that the pool of candidates reaching the inefficient final stage (1 test per patient) has a higher proportion of positives. This, in turn, enables the final stage to generate more information per test. It works like the binary search algorithm, which is extremely efficient because it operates each round of testing near the theoretically optimal 50 percent failure rate.

Moving to Four Rounds Gives 54.5x Improvement

Let’s look at the effect of adding another intermediate round, which permits us to use a larger block size in the first round. The approach below uses four rounds of testing with blocks of 125, 25, 5, and 1. Now we are up to 54.5 negative patients per test. This is a further improvement, although not a doubling.
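A sketch of the four-round arithmetic, again with block counts reconstructed under the same assumptions:

```python
# Sketch of the four-round arithmetic: blocks of 125, 25, 5, and 1.
# Block counts are my reconstruction; the positives are assumed to land in
# different full-size blocks at every round.
specimens, positives = 2888, 2
round1_tests = 23                # the 2,888 specimens pooled into blocks of roughly 125
round2_tests = 2 * (125 // 25)   # each positive block re-pooled into blocks of 25
round3_tests = 2 * (25 // 5)     # each positive block re-pooled into blocks of 5
round4_tests = 2 * 5             # final individual tests
total_tests = round1_tests + round2_tests + round3_tests + round4_tests   # 53 tests

print(f"{(specimens - positives) / total_tests:.1f} negatives cleared per test")  # ~54.5
```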

Ten Million Tests with Less Than One Negative per Test

Let’s look at the way we do Covid testing today in this context. As of today, we’ve done at least 10 million Covid-19 tests worldwide; almost all of these tests have been done inefficiently, generating slightly less than 1 negative patient per test. Even worse, this inefficiency has occurred in the early stages of a pandemic, when capacity was severely constrained and when test productivity has the maximum possible impact.

If we generate information inefficiently our tests do less work. They give us fewer negative patients per test, larger backlogs, and massive queues. Yet, we vitally need test results to detect and control community transmission. We need them to find asymptomatic and presymptomatic spreaders. We need test results to trace contacts and isolate them before they start spreading. The queues we create with overloaded capacity delay testing and exponentially raise the number of secondary infections that we create while waiting for results. When we increase the efficiency with which we generate information we help almost every aspect of epidemic control.

The Crucial Role of Disease Prevalence

Sample pooling does not work equally well in all stages of an epidemic. Large blocks are efficient because they use a single test to find many negative cases. This works well when disease prevalence is low. However, if a single positive case shows up in the pool, the test will generate no negative cases. This means that as disease prevalence rises, sample pooling must use smaller and smaller block sizes. This in turn produces fewer and fewer negatives cleared per test. In fact, at the point where 23 percent of the population is infected, a Round 1 block size of 10 produces zero efficiency improvement over individual tests.

It is vital to recognize that the greatest benefits of sample pooling occur early in an epidemic. If we permit months of exponential spread before we use this powerful method, the epidemic may attain a prevalence at which sample pooling loses most of its benefits.

It’s Late but Not Too Late

Some opportunities to use sample pooling are gone, but future opportunities may be even greater. The explosion of Covid-19 spread has not yet reached highly vulnerable developing countries. Such countries have much more severe constraints in both testing capacity and the availability of people to perform the tests. They will almost certainly face massive testing bottlenecks. This is still a perfect time to take what we have learned and use it to save lives.

Don Reinertsen

donreinertsen@gmail.com

310-373-5332


Additional Recommendations and Comments

1. Base the initial block size on the prevalence of the disease within the tested subpopulation. Large blocks only improve efficiency when disease prevalence is low. Once the disease has spread to more than 7 percent of a group, a Round 1 block of 10 begins to lose effectiveness. By the time 30 percent of a group has the disease, a Round 1 block of 10 will have a 97.2 percent chance of testing positive, so it will almost never yield negative patients (see the sketch after this list). Thus, sample pooling must be used early. This powerful tool loses its power when a disease becomes more prevalent.

2. Today’s multiple-round testing strategies tend to use pooled blocks in Round 1 to prefilter samples going into a final round of individual tests that have higher specificity and sensitivity. If we cascade multiple levels of pooling before this final round of individual testing, these higher-quality individual tests become more productive because they receive a flow of samples with more positives.

3. It is almost certain that larger blocks produce more sample dilution. Nevertheless, it seems reasonable to expect that nucleic acid amplification tests (NAAT), like qPCR tests, would be robust with respect to this dilution. This is a matter for true experts to decide. It seems likely that the intrinsic amplification of signals by a factor of 10^6 or more would provide sufficient margin to tolerate the 100-fold dilution caused by pooling.

4. Regulatory risk could also be a huge obstacle. The CDC may view sample pooling primarily as a technique to reduce costs, rather than a technique to improve throughput. The FDA may view sample pooling as a modification of FDA-cleared procedures and therefore require special clearance under CLIA. There is quite likely a tension between the sincere desire of regulatory institutions to control risk, and the desire of clinicians to gain control over a rapidly growing epidemic. When infected cases double every 3 to 8 days, regulatory delays are clearly very costly. It would not be unreasonable to ask whether the damage caused by allowing thousands of untested people to contribute to the spread of a disease is more or less than the expected damage likely to come from a new method.  Judging pooled specimens as an unproven and therefore unusable technology may have great human costs.
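As a quick check of the prevalence figures in item 1, here is a short sketch; infections are assumed independent and the prevalence values are illustrative:

```python
# Sketch for item 1 above: how quickly a Round 1 block of 10 saturates as
# prevalence rises. Assumes independent infections; prevalence values are illustrative.
pool_size = 10
for prevalence in (0.001, 0.01, 0.07, 0.30):
    p_positive_pool = 1 - (1 - prevalence) ** pool_size
    print(f"prevalence {prevalence:>5.1%} -> pooled block positive {p_positive_pool:.1%}")
# At 7 percent prevalence the block is positive about half the time (past the
# information-optimal 50 percent point); at 30 percent it is positive ~97.2 percent of the time.
```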

References

1. Bennhold, K. (2020, April 4). A German Exception? Why the Country’s Coronavirus Death Rate Is Low. The New York Times. https://www.nytimes.com/2020/04/04/world/europe/germany-coronavirus-death-rate.html

2. Hogan CA, Sahoo MK, Pinsky BA. Sample Pooling as a Strategy to Detect Community Transmission of SARS-CoV-2. JAMA. Published online April 6, 2020. doi:10.1001/jama.2020.5445. https://jamanetwork.com/journals/jama/fullarticle/2764364

3. Centers for Disease Control and Prevention. Screening Tests To Detect Chlamydia trachomatis and Neisseria gonorrhoeae Infections — 2002. MMWR 2002;51(No. RR-15):15. https://www.cdc.gov/mmwr/pdf/rr/rr5115.pdf

Technical Debt: Adding Math to the Metaphor

Ward Cunningham introduced the “debt metaphor” to describe what can happen when some part of software work is postponed in order to get other parts out the door faster. He used this metaphor to highlight the idea that such postponement, while often a good choice, could have higher costs than people suspect. He likened the ongoing cost of postponed work to financial interest, and the eventual need to complete the work to the repayment of principal, observing that the interest charges could eventually become high enough to reduce the capacity to do other important work. Continue reading

The Dark Side of Robustness

Nobody likes a fragile design; when you provide it with the tiniest excuse to fail, it will. Everybody likes robust systems. Robustness can be defined many ways, but I think of it as the ability to perform as intended in the presence of a wide range of both expected and unexpected conditions. Thus, a robust system is relatively imperturbable. Continue reading

The Four Impostors: Success, Failure, Knowledge Creation, and Learning

Some product developers observe that failures are almost always present on the path to economic success.  “Celebrate failures,” they say. Others argue that failures are irrelevant as long as we extract knowledge along the way. “Create knowledge,” they advise. Still others reason that, if our real goal is success, perhaps we should simply aim for success. “Prevent failures and do it right the first time,” they suggest. And others assert that we can move beyond the illusion of success and failure by learning from both. “Create learning,” they propose.  Unfortunately, by focusing on failure rates, or knowledge creation, or success rates, or even learning we miss the real issue in product development. Continue reading

Is One-Piece Flow the Lower Limit for Product Developers?

In Lean Manufacturing one-piece flow is the state of perfection that we aspire to. Why? It unlocks many operational and economic benefits: faster cycle time, higher quality, and greater efficiency. The economically optimal batch size for a manufacturing process is an economic tradeoff, but it is a tradeoff that can be shifted to favor small batches. We’ve learned to make one-piece flow economically feasible by driving down transaction cost per batch. One-piece flow is unquestionably a sound objective for a manufacturing process. Continue reading

Going to Gemba and Its Limits

It is important to go to where the action is taking place. I was taught this as a young officer in the Navy, where, as in other areas of the military, we emphasized “leading from the front.” In warfare the reason is obvious: it is difficult to assess a complex situation from a distance. The further you are from the action, the more your view is obscured by what the great military writer Clausewitz called “the fog of war.” Continue reading

The Cult of the Root Cause

“Why?” is my favorite question because it illuminates relationships between cause and effect. And when we ask this question more than once, we expose even deeper causal relationships. Unfortunately, my favorite question has been hijacked by the Cult of the Root Cause and transformed into the ritual of “The Five Whys”. The concept behind this ritual is simple: when trying to solve a problem, ask “Why” at least five times. Each “Why” will bring you closer to the ultimate cause of the problem. Finally, you will arrive at the root cause, and once there, you can fix the real problem instead of merely treating symptoms. Continue reading

Please Wear Your Clown Hat When You Celebrate Failure

A recent column in Wired magazine recounted the story of the 5,127 prototypes used to create the first Dyson vacuum cleaner. In this column, Sir James Dyson notes his similarity to Thomas Edison, who said, “I have not failed. I’ve just found 10,000 ways that won’t work.” Dyson appears to take pride in his 5,127 prototypes as emblematic of the persistence and fortitude of an entrepreneur. In contrast, I think this extraordinary number of unsuccessful trials may illustrate a very fundamental misconception about innovation. Continue reading

The Lean Approach to Context Switching

A great insight of lean manufacturing was recognizing the pivotal importance of reducing changeover costs. American manufacturers would run the same parts on their stamping machines for two weeks because it took 24 hours to change over the machine. Along came the Japanese, who reduced the changeover time by 100x, and suddenly short run lengths became cost-effective. With shorter run lengths, batch sizes became smaller, and this improved quality, efficiency, and flow-through time. The great blindness of the American manufacturers was accepting the cost of changeovers as immutable. This condemned them to use large batch sizes. Continue reading