
Wesolowski et al.: Fraudulent “HIV” Study Confirms Difference between True and False Positive is a Matter of Quantity, not Infection with a Unique Retrovirus

Here is the Spanish translation of this article:

The observation originally made by the Perth Group that HIV has never been isolated, and that the existence of HIV is therefore in doubt, predicts that the HIV tests are not specific to a unique viral entity. In The Circular Reasoning Scandal of HIV Testing, an article based on the Perth Group’s work, Neville Hodgkinson elaborates on the way it was determined which proteins belong to HIV in the absence of a proper isolate:

There is an association between testing HIV-positive and risk of developing Aids. This is the main reason why scientists believe HIV is the cause of Aids. But the link is artificial, a consequence of the way the test kits were made.

It never proved possible to validate the tests by culturing, purifying and analysing particles of the purported virus from patients who test positive, then demonstrating that these are not present in patients who test negative. This was despite heroic efforts to make the virus reveal itself in patients with Aids or at risk of Aids, in which their immune cells were stimulated for weeks in laboratory cultures using a variety of agents.

After the cells had been activated in this way, HIV pioneers found some 30 proteins in filtered material that gathered at a density characteristic of retroviruses. They attributed some of these to various parts of the virus. But they never demonstrated that these so-called “HIV antigens” belonged to a new retrovirus.

So, out of the 30 proteins, how did they select the ones to be defined as being from HIV? The answer is shocking, and goes to the root of what is probably the biggest scandal in medical history. They selected those that were most reactive with antibodies in blood samples from Aids patients and those at risk of Aids.

This means that “HIV” antigens are defined as such not on the basis of being shown to belong to HIV, but on the basis that they react with antibodies in Aids patients. Aids patients are then diagnosed as being infected with HIV on the basis that they have antibodies which react with those same antigens. The reasoning is circular.

Gay men leading “fast-track” sex lives, drug addicts, blood product recipients and others whose immune systems are exposed to multiple challenges and who are at risk of Aids are much more likely to have raised levels of the antibodies looked for by the tests than healthy people – because the antigens in the tests were chosen on the basis that they react with antibodies in Aids patients. But this association does not prove the presence of a lethal new virus.

If there is no unique HIV, we can expect that the particular risk groups on the basis of which the “HIV” proteins were selected will continue to test positive more often than other groups, but we can also expect that they will test “almost positive” (“indeterminate” or “false positive”) more often than non-risk groups. Especially in the case of the screening tests, people from the risk groups will tend to have elevated levels of the relevant antibodies and therefore be closer to or above the cut-off level for a positive screening test result. Similarly, they will be more likely to test positive on at least some of the proteins in the Western Blot test, and when they test positive on a predetermined number or combination of proteins, the test is considered a true positive.
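
To make the last point concrete, here is a minimal sketch of how a “predetermined number or combination of proteins” rule turns band reactivity into a verdict. The bands and the “any two of these” rule below are assumptions loosely modelled on commonly cited CDC-style Western Blot criteria; actual kits and agencies use differing combinations, so this is an illustration of the logic only, not a transcription of any particular protocol.

# Illustrative sketch only: the qualifying bands and the "any two" rule are
# assumptions modelled on commonly cited criteria; real kits and agencies differ.
KEY_BANDS = {"p24", "gp41", "gp120/gp160"}

def interpret_western_blot(reactive_bands):
    """Classify a Western Blot from the set of bands that showed reactivity."""
    reactive = set(reactive_bands)
    if len(reactive & KEY_BANDS) >= 2:    # the predetermined combination is met
        return "positive"
    if reactive:                          # some reactivity, but not the right combination
        return "indeterminate"
    return "negative"                     # no reactive bands at all

print(interpret_western_blot({"p24", "gp41"}))  # -> positive
print(interpret_western_blot({"p24"}))          # -> indeterminate
print(interpret_western_blot(set()))            # -> negative

The point of the sketch is simply that “positive” versus “indeterminate” is a matter of which and how many bands react, which is exactly the quantitative threshold discussed in the next paragraph.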

The non-isolation theory of HIV predicts that the difference between a false positive and a true positive test result, everything else being equal, is a function of quantity (the number and severity of exposures to various antigens, such as semen and recreational drugs) rather than quality (exposure to a unique antigen, HIV), and that people from the original risk groups, gay men, drug users and so on, will test “almost positive” at higher rates than others because of their more frequent exposure to various antigens or health challenges in general.

As one would expect, the otherwise prolific HIV professionals have not been tripping over themselves to produce studies confirming the hypothesis that in any population the rate of false positive HIV tests will correlate with the rate of true positive tests, but in their 2011 paper False-Positive Human Immunodeficiency Virus Enzyme Immunoassay Results in Pregnant Women, Wesolowski et al. attempt to capitalise on the absence of such studies to promote their universal testing agenda. The paper is written to dispel what the authors consider the myth that pregnant women in low-risk populations test false positive in unacceptably high numbers and therefore should not be tested.

The usual way to find out if something is worth the risk or the cost involved is to perform some form of cost/benefit analysis. In this case, part of such an analysis would consist in finding out the ratio of false positives to “true positives” one can expect to get (a “true positive” being defined as one in which the screening test and confirmatory test are both reactive). To take an extreme example, if for every false positive screening test there were 10,000 “true positive” tests, universal testing would seem to be worth it. If, on the other hand, the ratio were 1 “true positive” test to 10,000 false positive tests, universal testing would be a lot less attractive.
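
Because the next few paragraphs turn on how prevalence alone drives this ratio, a short numerical sketch may help. The sensitivity and specificity below are round figures assumed purely for illustration (they are not the performance claims of any actual HIV test); the two prevalences are the ones quoted later in this article for the pregnant and non-pregnant groups.

# How the balance of false to "true" positives shifts with prevalence alone.
# Sensitivity and specificity here are assumed round figures for illustration.
def screening_outcomes(population, prevalence, sensitivity=0.99, specificity=0.998):
    infected = population * prevalence
    uninfected = population - infected
    true_pos = infected * sensitivity            # reactive and confirmed
    false_pos = uninfected * (1 - specificity)   # reactive but not confirmed
    ppv = true_pos / (true_pos + false_pos)      # positive predictive value
    return true_pos, false_pos, ppv

for prevalence in (0.0006, 0.0134):  # roughly the pregnant and non-pregnant prevalences cited below
    tp, fp, ppv = screening_outcomes(1_000_000, prevalence)
    print(f"prevalence {prevalence:.2%}: {fp / tp:.2f} false positives per true positive, PPV {ppv:.0%}")

With the same hypothetical test applied to both prevalences, the low prevalence group ends up with several false positives for every true positive, while the high prevalence group gets far fewer than one – the “mathematical necessity” discussed next.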

Wesolowski et al. do not tell us what ratio of false positives to “true positives” they find acceptable. This is likely because the result was not very impressive. As a matter of mathematical necessity, the lower the prevalence of HIV positives in a population, the higher the ratio of false positive tests to “true positive” tests will be; that is one of the main arguments for not targeting low prevalence populations for screening. In this case Wesolowski et al. examined an extremely low prevalence population of pregnant women and predictably found that there were more than two false positive tests for every “true positive” test, corresponding to a positive predictive value (PPV) of only 30%. Wesolowski et al. duly report these results, but in the discussion and conclusion little weight is placed on them; instead the rate, as opposed to the ratio, of false positives is emphasised:

False-positive HIV EIA results were rare and occurred less frequently among pregnant women than others. (…) False-positive antibody EIA test results are rare, so universal HIV screening among pregnant women should be pursued without hesitation unless a woman declines. However, clinicians should be aware that when HIV prevalence is low, as is often the case among pregnant women in the United States, a reactive EIA result is more likely to be false-positive.

Wesolowski et al. do not tell us what an unacceptable rate of false positives would be, just as they do not tell us what an unacceptable ratio of false positives to “true positives” would be, but it is apparent that they are suggesting a yardstick. They picked a group of “others” (non-pregnant testers) to compare with the pregnant group, in order to be able to conclude that the rate of false positives among pregnant women is suitably low. We can thus infer that they consider the rate of false positives in the “other”, non-pregnant group to be acceptable.

But there was an interesting difference between the pregnant and non-pregnant groups: the “true positive” prevalence in the non-pregnant group was more than 20 times higher than in the pregnant group (1.34% vs. 0.06%). That is hardly comparing like with like, and Wesolowski et al. can only get away with it because they focus exclusively on the rate, not the ratio. The positive predictive value (PPV) was in fact much higher in the non-pregnant group at 87%, which looks far more impressive in a cost/benefit analysis than the 30% PPV for the pregnant group.

Leaving that aside for a moment, let us take a look at Wesolowski’s breakdown of the numbers. There were 921,438 people in the pregnant group and 1,103,961 people in the non-pregnant group, so the non-pregnant group is larger by around a fifth, but the false positive and indeterminate counts in the non-pregnant group are much larger than this difference in size can account for:

                                      Pregnant group    Non-pregnant group
False positive (negative WB):                    951                 1,675
Indeterminate (indeterminate WB):                306                   633
“True positive” (positive WB):                   541                14,788
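
For readers who want to check the arithmetic, the sketch below recomputes the headline figures from the counts in the table. One assumption is made in order to match the quoted numbers: indeterminate Western Blots are grouped with the false positives as non-confirmed reactive results, which yields PPVs of roughly 30% and 86–87%, and a little over two non-confirmed results per “true positive” in the pregnant group.

# Recompute the quoted figures from the counts in the table above.
# Assumption: indeterminate Western Blots count as non-confirmed reactive results.
groups = {
    "pregnant":     {"population": 921_438,   "false_pos": 951,   "indet": 306, "true_pos": 541},
    "non-pregnant": {"population": 1_103_961, "false_pos": 1_675, "indet": 633, "true_pos": 14_788},
}

for name, g in groups.items():
    reactive = g["true_pos"] + g["false_pos"] + g["indet"]  # all reactive screening results
    ppv = g["true_pos"] / reactive                          # share confirmed by Western Blot
    prevalence = g["true_pos"] / g["population"]            # "true positive" prevalence
    unconfirmed = (g["false_pos"] + g["indet"]) / g["true_pos"]
    print(f"{name}: PPV {ppv:.1%}, prevalence {prevalence:.2%}, "
          f"{unconfirmed:.2f} non-confirmed reactive results per true positive")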

The HIV testing industry has had more than 25 years to make the screening tests (EIA, ELISA) and the confirmatory test (WB) correspond perfectly, and still we see these large differences. For every two false positives in the pregnant, or low prevalence, group there were three in the non-pregnant, or high prevalence, group (0.14 vs. 0.21). And when it comes to indeterminate results the ratio is even higher, at two to one.

Since pregnancy in itself is considered a risk factor for testing false positive, how come the false positive and indeterminate rates were so low in that group compared to the non-pregnant group? We propose that the primary predictor of false positive rates in a given population is the “true positive” rate. Because of the lack of suitable studies it remains to be seen whether this is always the case. If it is, this is strong evidence that the causes of a false positive and a true positive test result are the same. In other words, the difference is quantitative rather than qualitative, just as the Perth Group’s and Neville Hodgkinson’s articles on the selection of the “HIV” proteins would suggest.

The objection could be made that in high prevalence populations the same behaviour that puts people at higher risk for HIV infection also puts them at higher risk for a number of other infections and conditions that can cause a false positive test result. In that case our hypothesis would be correct in the sense that the cause of a false positive and a “true positive” test is the same, namely the same behaviour, but there would still be a qualitative difference, namely infection with a unique viral agent, HIV. However, this presents a new problem: if it is accepted among HIV experts that people in the risk groups for HIV/AIDS are also at much higher risk of testing false positive, what would be the point of Wesolowski’s study? Why compare a low prevalence population to a high prevalence population to see which has the higher false positive rates if it is already accepted that the primary predictor of the false positive rate is the “true positive” rate? Did Wesolowski et al. publish a scientifically pointless propaganda piece for universal testing, whose outcome was assured in advance?

Special thanks to Colin Esperson (AKA “Snout”, AKA Kevin Kuritzky), who made us aware of the Wesolowski paper and the sleight of hand involved in switching the focus from ratio to rate of false positives in low risk groups, and who thereby confirmed our suspicion that the paper was fraudulent propaganda. The productive discussion between MacDonald from the TIG team and Colin Esperson can be read in full here.

Claus Jensen
