
Interesting. Oddly enough, I tend to agree with them here - the data is wrong - because the CDC simply doesn't want a record of how many vaccinated people are getting reinfected multiple times.


It is one of the more fascinating paradoxes of empirical data sets for any phenomenon that they are invariably "wrong" while the phenomenon is occurring. Simple sources of measurement error such as reporting lags, clerical mistakes, and, particularly in the case of infectious disease, reporting gaps make every empirical data set less than 100% accurate.

Yet even when tainted, empirical data remains the best broad depiction of the phenomenon that we have. This is mainly because the sources of error--including willful manipulation of the data--appear primarily in the absolute numbers: case counts and occupied hospital beds will be over- or under-stated, distributions between inoculated and non-inoculated patients will be skewed, et cetera. However, as the Boston Public Health dashboard illustrates with its threshold markers, decision making rests less on absolute numbers and more on trends and magnitudes of change. The actual case numbers are less probative than whether cases are rising or falling, and likewise for hospital bed occupancy.

Trend analysis, and examination of the magnitude of the deltas within the data set, frequently cancel out many sources of error. For example: even if flaws in PCR testing lead to positive cases being overstated by as much as 90% (meaning only 1 in 10 positive tests indicates actual infection), a 20% rise in cases is still a 20% rise in cases, because the 90% error rate will be consistent throughout. Similarly, a 20% rise in cases indicates less impact than a doubling of cases, regardless of the extent of the false positives within the data set.
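
To make that arithmetic concrete, here is a minimal sketch (the numbers are made up, not real surveillance data) showing why a constant overstatement factor drops out of the percentage change:

```python
# Hypothetical numbers, just to illustrate: a constant overstatement factor
# inflates the levels but drops out of the trend.

true_cases_week1 = 1_000
true_cases_week2 = 1_200          # a genuine 20% rise

overstatement = 10                # suppose only 1 in 10 positives is a real infection

reported_week1 = true_cases_week1 * overstatement
reported_week2 = true_cases_week2 * overstatement

pct_change_true = (true_cases_week2 - true_cases_week1) / true_cases_week1
pct_change_reported = (reported_week2 - reported_week1) / reported_week1

print(pct_change_true)       # 0.2
print(pct_change_reported)   # 0.2 -- the 20% rise survives the inflated counts
```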

Moreover, effective decision making requires consideration of ALL the data at hand. Thus one looks at case counts, hospital bed occupancy, ER visits, and other available metrics in conjunction with one another. A doubling of case counts means one thing when hospital bed occupancy is also doubling and quite another when occupancy is declining.
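
A rough sketch of that cross-metric reading, again with invented numbers rather than any real dashboard feed; the point is only that the same case trend reads differently depending on what occupancy is doing:

```python
# Invented numbers: identical case trends, different occupancy trends.

def pct_change(old, new):
    return (new - old) / old

cases = pct_change(1_000, 2_000)            # case counts doubled (+100%)

scenarios = {
    "A": pct_change(400, 800),              # bed occupancy also doubled
    "B": pct_change(400, 300),              # bed occupancy declined 25%
}

for label, occupancy in scenarios.items():
    if cases > 0 and occupancy > 0:
        reading = "cases and occupancy both rising -- impact is growing"
    else:
        reading = "cases rising while occupancy falls -- a much milder picture"
    print(f"Scenario {label}: {reading}")
```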

By its very nature, anecdotal information is exclusionary. It ignores trends and magnitudes of change, as well as the broad array of metrics necessary to make informed decisions. This is why even expert opinion is considered among the lowest-quality evidence for any phenomenon.

Yes, the data is wrong, and yes, much of it is being fudged (or errors are being willfully overlooked, which is substantially the same thing). And yet it remains a more rational foundation for decision making and policy debate than the anecdotal information that the "experts" are now seriously putting forward as evidence to support their opinions.

When someone says they "know" the reality is different from what the empirical data indicates, the first question to ask is: HOW do they know? Until that is answered satisfactorily, their assertion is worth exactly nothing.
