
Discrepancy Between Gender-Specific HIV Prevalence and Gender-Specific AIDS Mortality in South African Statistics Disproves the HIV > AIDS Theory (Duesberg vs. Chigwedere Part IV)

When Duesberg’s answer to Chigwedere et al.’s Estimating the Lost Benefits of Antiretroviral Drug Use in South Africa appeared in Medical Hypotheses, we published three critical analyses: one of Duesberg et al., one examining the calculations behind Chigwedere’s ARV treatment benefit estimate, and one analysis of the evidence for a steep rise in child mortality caused by a new pathogenic agent. In the meantime, Chigwedere and Essex published their own reply to Duesberg, called AIDS Denialism and Public Health Practice. In this article, the fourth part in our series on Chigwedere vs. Duesberg, we will analyse Groenewald et al., one of Chigwedere’s key references, and show: 1. that it cannot be cited as proof of any aspect of the HIV theory of AIDS, because it assumes that theory as a basic premise; 2. that it is a powerful disproof of the HIV theory of AIDS, because it reveals a glaring discrepancy between gender-specific HIV prevalence and gender-specific AIDS mortality in South African HIV/AIDS statistics.

Chigwedere and Essex’s second article, rebutting Duesberg’s unpublished reply to their first political attack, is similar to it in many ways. It is characteristically shallow, preferring to bombard the reader with a bewildering multitude of hard-to-penetrate references rather than perform meaningful analysis. It is therefore important to identify the relevant points in dispute. The critical assertion in Duesberg et al. is the following:

South African statistics have recorded only about 1 ‘‘HIV death” per 1000 HIV-positives per year (or 12,000 ‘‘HIV-deaths” among 12 million HIV antibody-positives) from 2000 to 2005. (Duesberg et al.)

Chigwedere et al. answered in the following way:

The second part of the argument quotes Statistics South Africa, which recorded an average of 12,000 deaths per year in South Africa between 1997 and 2006 [3]. The shortfall is that these data are ‘‘Findings from Death Notification [118].’’ First, as explained by surveillance experts, ‘‘In resource-poor countries with underdeveloped health infrastructures, reports of AIDS or HIV cases are usually not complete enough to be considered reliable measures of the scope of the epidemic [119]’’. This simply means that the death notification system in South Africa had/has much underreporting. Indeed, the ‘‘former so-called independent homelands of Transkei, Boputhatswana, Venda and Ciskei (TBVC) were not included in the reporting system until 1994’’ when the reporting system began centralization, and a new death certificate was introduced in 1998 to improve reporting [120]. The second shortfall is that of misclassification of deaths. AIDS patients die of the resulting  opportunistic infections and cancers, and these immediate causes of death are often recorded without noting the underlying acquired immunodeficiency. According to the Medical Research Council (SA), up to 61% of HIV deaths are misclassified and the majority of them are recorded as tuberculosis and lower respiratory tract infections, which become the leading causes of death [120].7 It is apparent that Duesberg selected highly deficient statistics. (Chigwedere et al.)

We have previously shown that Duesberg et al. did indeed select deficient statistics to support their Passenger Virus theory. Duesberg was silent in the face of Chigwedere and Essex’s rebuttal, but co-author Henry Bauer addressed the issue indirectly in Galletti et al. and later in a blog post published in January 2011 called Picking cherries in South Africa, just as he had previously published a critique of Chigwedere’s first paper on his blog, anticipating to some extent the approach in our first analysis. But, as was the case with that effort, Galletti et al. and “Picking cherries in South Africa” are sketchy arguments, and as a result the hard questions are never dealt with satisfactorily. Consider for instance Bauer’s facile claim that the UNAIDS statistics have been “officially” debunked:

For some reason, the professionals at South Africa Statistics have been unmoved by this nonsense [Huge number of AIDS deaths]. Their latest published report on “Mortality and causes of death in South Africa, 2008: Findings from death notification” (P0309.3, released 18 November 2010) notes that the completion of reporting of deaths has been at around 80%, and that deaths from “AIDS” or “HIV disease” were a little over 15,000 in 2008, ranking 7th among causes of death, responsible for just 2.5% of all deaths. The UNAIDS model is once more officially declared to be wrong by a factor of 20 or so. (Bauer)

A report whose title specifies that it confines itself to reporting “Findings from death notification” can hardly be taken as conclusive proof of, or even as addressing, the validity of the UNAIDS models. Those are different things, and pretending otherwise is not going to impress an audience that has read Chigwedere and Essex’s well-referenced paper.

Bauer further cites Rian Malan on the continual downward revision of South African AIDS mortality to conclude that the ASSA model used to calculate the numbers is “thoroughly discredited”. But again, revised estimates discredit only the old estimates, not the current ones. It should also be noted that Bauer, still citing Rian Malan, includes instances where not only AIDS deaths but also HIV prevalence modelling was revealed as an overestimation:

Malan also cited computer-modeled estimates of 9.5% “HIV-positive” for college students at Rand Afrikaans University when a large sample of them (nearly 1200) tested poz at only 1.1%; and a computer-modeled estimate of bank employees at 12% when actual testing of 29,000 employees revealed a rate of only 3%. The model is thoroughly discredited, in other words. (Bauer)

It is not clear how those estimates were arrived at, but Duesberg et al. used the huge disconnect between a 25%-30% HIV prevalence and 12,000 annual AIDS deaths to argue that HIV is not responsible for those deaths. If HIV prevalence is in fact overestimated, it only means that HIV prevalence and registered AIDS deaths match much better than Bauer/Duesberg pretend they do, which in turn means that their own statistical argument for HIV being harmless is also “thoroughly discredited”. Even the none too bright Chigwedere and Essex know how to exploit this inherent contradiction in Duesberg’s/Bauer’s argument:

The extreme end of the argument is to suppose that Duesberg’s low statistics of about 12,000 AIDS deaths per year were correct-that would translate to a total of 72,000 deaths from 2000 to 2005, and this would still enable a calculation of the number of persons that could have been treated using ARVs had Mbeki not obstructed, 24,000 lives if we assume a third of them would have been treated. This is not a small number of people to let die because of AIDS denialism. (Chigwedere/Essex)

It should be clear from this that sweeping epidemiological arguments cannot settle the issue in favour of Duesberg et al. At some point one has to get down to the nitty-gritty of examining the specific references cited by Chigwedere and Essex, instead of chasing the red cloth of population curves and adjustable HIV/AIDS estimates, or accusing each other of cherry-picking. But Bauer stays the course over shallow waters in Galletti et al.:

The director of Statistics South Africa, Lehohla (2005; cited in Galletti & Bauer) has explicated the errors committed by those who rely on the UNAIDS models, for example by using long chains of inferences based indirectly on a host of doubtful claims that jump to farfetched conclusions based on changes in the age distributions of deaths and ignoring rises in political and criminal violence that account for those changes. By contrast, Chigwedere and other mainstream doom-purveyors have simply cherry-picked the invalid UNAIDS numbers and ignored the official Statistics South Africa reports. Those who attempted to defend the UNAIDS numbers could only assert, without a shadow of evidence, that causes of death must have been misreported to the extent of almost half of all deaths; yet they have not even attempted to show how or why Statistics South Africa is wrong about its estimate of 80% completeness of counts and accuracy of reporting. (Bauer)

The reference for the categorical statement that political and criminal violence accounts for all changes in the age distribution of deaths is South Africa’s Statistician General JP Lehohla. The only problem is that Lehohla says nothing of the kind. He mentions unnatural deaths (not political and criminal violence, which was decreasing) as one factor that could have impacted the year 2000 death rates in the 15-49 age group, but he does not suggest that they can explain the rise in AIDS indicator diseases such as tuberculosis and pneumonia. We are not dismissive of Bauer’s point as part of a larger argument, but the impressive numbers for unnatural deaths are all from 1996, well before HIV/AIDS is supposed to have impacted South Africa to an appreciable degree. All post-1997 reports from Lehohla’s Statistics South Africa (the reports Bauer reads as gospel on AIDS deaths) show a steady or decreasing level of unnatural deaths and increasing levels of AIDS indicator diseases.

All of this brings us back to Chigwedere and Essex’s key reference 7, cited above for the 61% misclassified HIV deaths. It is a 2005 paper by Groenewald et al. entitled Identifying Deaths from AIDS in South Africa, and it deals specifically with the issues of the age-specific rise in AIDS indicator diseases and erroneous death certificates. It is in fact the “shadow of evidence” Bauer repeatedly claims does not exist, although it features prominently in Chigwedere’s rebuttal of Duesberg et al. We will therefore place our hand on the lever and examine it in detail.

The first thing we learn from Groenewald et al. is how easy it is to make the numbers of AIDS deaths match up with the official HIV/AIDS estimates once one accepts that there is such a thing as an HIV-related AIDS death. Groenewald et al. looked at the age distribution of the deaths recorded as HIV-related (the 12,000 annual AIDS deaths accepted by Duesberg et al.) and found a distinct age pattern. They then looked at all deaths in those same age groups and found a significant rise in mortality in the HIV era, over and above what could be expected (excess mortality), due to recognised AIDS indicator diseases:

Methods: (…) The difference in the age-specific death rates for these two periods was examined to identify conditions where there was a noticeable increase in mortality following the same age pattern as the HIV deaths, thus likely to be misclassified AIDS deaths.

Results: The increase in the age-specific death rates for HIV-related deaths showed a distinct age pattern, which has been observed elsewhere. Out of the 22 potential causes of death investigated, there were nine that increased in the same distinct age pattern (tuberculosis, pneumonia, diarrhoea, meningitis, other respiratory disease, non-infective gastroenteritis, other infectious and parasitic diseases, deficiency anaemias and protein energy malnutrition) and could be considered AIDS-related conditions. The increase in these conditions accounted for 61% of the total deaths related to HIV/AIDS. When added to the deaths classified as HIV-related on the death certificate, the total accounts for 93% of the ASSA2000 model estimates of the number of AIDS deaths in 2000 (Groenewald)
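The logic of the quoted method can be sketched numerically. The figures below are invented purely for illustration (the real analysis used South African vital registration data); the idea is simply that a cause of death is flagged as containing misclassified AIDS deaths when its rise between the two periods follows the same age profile as the deaths already coded as HIV-related.

```python
# Hypothetical sketch of Groenewald et al.'s approach: flag causes of death
# whose rise between two periods follows the same age pattern as deaths
# already coded as HIV-related. All numbers here are invented.

# Age bands and the rise in HIV-coded death rates per 100,000 (peaks in young adults)
age_bands = ["15-24", "25-34", "35-44", "45-54", "55-64"]
hiv_rise = [10, 60, 45, 15, 5]

# Rise in death rates for two candidate causes over the same period
tb_rise = [8, 50, 40, 12, 6]        # tuberculosis: similar age pattern
melanoma_rise = [1, 2, 5, 12, 30]   # melanoma: peaks at older ages instead

def pattern_correlation(a, b):
    """Pearson correlation between two age profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# A cause is treated as hiding misclassified AIDS deaths when its excess
# mortality correlates strongly with the HIV age pattern.
for name, rise in [("tuberculosis", tb_rise), ("melanoma", melanoma_rise)]:
    r = pattern_correlation(hiv_rise, rise)
    verdict = "reclassified as AIDS-related" if r > 0.8 else "left as-is"
    print(f"{name}: r = {r:.2f} -> {verdict}")
```

On these made-up numbers, tuberculosis matches the HIV age profile and gets reclassified, while melanoma, peaking at older ages, does not; this is exactly the pattern-matching step whose assumptions are examined below.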

It is a very reasonable assumption that the same agent, HIV, is responsible for the excess deaths where all three correlates, HIV-related ages of death, HIV-related indicator disease and excess mortality, converge. If the objection is raised as to what makes certain ages and certain diseases HIV-related, the answer is the HIV test. If one accepts the validity of the HIV tests even in principle, one must also accept that there are HIV-related ages of death and HIV-related causes of death.

However, Groenewald’s unquestioning acceptance of the validity of the HIV > AIDS hypothesis presents a problem for Chigwedere et al. For example, Groenewald et al. felt free to consider not only excess mortality from recognised AIDS indicator diseases, but even non-AIDS indicator diseases in order to arrive at the desired figure:

Based on the literature, AIDS indicator conditions, including tuberculosis, pneumonia, diarrhoea, meningitis, wasting, septicaemia, lymphoma, cervical cancer, candidiasis, cryptococcosus and other opportunistic infections, were selected [11,13]. Clinical and autopsy surveys suggested that myocarditis (I40) and cardiomyopathy (I42) [14] should also be considered potential candidates for investigation as HIV-related deaths. On the basis of experience in Zimbabwe, which suggested that many HIV deaths were misclassified to malaria [15], malaria was also investigated. Maternal deaths were also selected on the basis of the most recent confidential inquiry into maternal deaths, which showed that the proportion of deaths from non-pregnancy-related infections (mainly AIDS) had increased dramatically between 1998 and 1999-2001 [16]. Kaposi’s sarcoma (C46) was added to the HIV-related deaths (B20-B24) as the 1985 Bangui definition for AIDS considered the presence of Kaposi’s sarcoma as sufficient for the diagnosis of AIDS for surveillance purposes [11]. Causes of death with no known association with AIDS, which had shown a marked rise during this period, were also investigated to ascertain whether this rise could be attributed to AIDS.

Groenewald et al. are to be applauded for their candidness, less so Chigwedere et al.: the Bangui definition, reports from other countries, autopsy surveys, causes of death with no known association with AIDS, everything but the kitchen sink is thrown into the mix. The authors feel they can pick and choose freely because they assume in advance that the excess mortality in the designated age groups is due to HIV. Since this is the claim in dispute, it should be obvious that Groenewald et al cannot be cited by Chigwedere and Essex as proving it.

It should also be emphasised that, using this method, Groenewald et al. can make the numbers fit almost any ASSA estimate by picking and choosing between the initial 22 conditions considered. If, for example, the estimate is revised downwards, Groenewald et al. can decide that meningitis isn’t a reliable AIDS indicator disease in South Africa and scrap it partly or in full at their discretion. But far more importantly, this also means that where rising death rates do NOT match the age pattern the authors are looking for, or the figures they are trying to approximate, they have no problem attributing excess deaths to causes other than HIV. One example they give is melanoma, which had increased to almost six times previous levels around the age of 55. Had the peak occurred at age 35, the authors would likely have attributed it to HIV; but since it occurs at 55, they are quite happy attributing it to other causes. This illustrates nicely how empirical observation is made to fit theory instead of the other way round.
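A toy calculation illustrates the flexibility being criticised here: given a menu of per-condition excess-death counts, almost any target estimate can be approximated simply by including or excluding conditions. All figures below are invented; only the selection mechanism is the point.

```python
# Toy illustration: with enough candidate conditions to choose from, a subset
# can be picked whose excess deaths approximate almost any target estimate.
# All figures are invented for illustration.

excess_deaths = {            # hypothetical excess deaths per condition
    "tuberculosis": 40_000,
    "pneumonia": 25_000,
    "diarrhoea": 15_000,
    "meningitis": 8_000,
    "other respiratory": 7_000,
    "malnutrition": 5_000,
}

def fit_to_target(conditions, target):
    """Greedily include conditions (largest first) without exceeding the target."""
    chosen, total = [], 0
    for name, n in sorted(conditions.items(), key=lambda kv: -kv[1]):
        if total + n <= target:
            chosen.append(name)
            total += n
    return chosen, total

# Two different 'model estimates' -- both matched exactly, simply by
# changing which conditions count as AIDS-related.
for target in (95_000, 70_000):
    chosen, total = fit_to_target(excess_deaths, target)
    print(f"target {target}: matched {total} using {chosen}")
```

With these six invented conditions, both a 95,000 and a 70,000 target are hit exactly by different subsets, which is the sense in which the method can be made to fit a revised estimate at the authors’ discretion.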

However, Groenewald et al. have a much bigger problem on their hands than mere cherry-picking. Since HIV prevalence is the only variable considered as a cause of excess deaths in the chosen age categories, it follows that excess mortality should track HIV prevalence faithfully. But in Figure 1 we see that male mortality overtakes female mortality at age 35, despite the fact that female HIV prevalence is much higher than male prevalence for almost all age groups. Male HIV prevalence overtakes female prevalence briefly around age 35, but females are decisively back in the lead around age 40, so this cannot explain how the male HIV-related death rate can get in front and stay there.

Further, if we compare the 2002 curves with those for 1996, before the excess, HIV-related deaths supposedly kicked in, there is no age group for which the gender distribution of mortality has changed appreciably in the six years during which AIDS is supposed to have exploded. One would expect to see a statistically significant divergence in the female-to-male ratio for at least some age groups, but none is apparent; the 1996 and 2002 curves faithfully track each other in terms of female-to-male ratio.

The point is perhaps clearer if we move to Groenewald’s breakdown of actual numbers in Table 2, where he estimates that the overall difference between female and male AIDS mortality is approx. 6,000 in the adult category. This means that AIDS mortality has hit the sexes in almost equal measure despite HIV prevalence being much higher among females. How is that possible?

To give us a sense of the lopsided impact of HIV prevalence on women, Groenewald cites the 26.5% prevalence rate at antenatal clinics, a good indicator of HIV prevalence among sexually active females. This is against the background of an overall prevalence rate of 11.4%, which means that male prevalence must be significantly lower than 11.4%. Depending on how we cut it, the ASSA-sanctioned difference between the sexes in adult HIV prevalence comes out at around 2:3 across all sexually active age groups. Assuming that HIV is the cause of AIDS, a 2:3 ratio between males and females means that the mortality figures Groenewald is operating with should split into 72,000 female AIDS deaths and 48,000 male AIDS deaths. In other words, if AIDS mortality really tracked HIV prevalence, the difference between male and female AIDS mortality in Table 2 would be somewhere around 24,000, four times higher than the 6,000 Groenewald arrives at. Thus the discrepancy between gender-specific HIV prevalence and gender-specific mortality alone should have led Groenewald to conclude that HIV is not a major cause of the gender-specific excess death rates in South Africa.
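The back-of-envelope arithmetic above can be checked directly. The 120,000 total is simply the sum implied by the 72,000/48,000 split; the 2:3 male-to-female ratio and the approximately 6,000 observed gap are the figures discussed in the paragraph above.

```python
# Check of the gender-discrepancy arithmetic: if AIDS deaths split in the
# same 2:3 male:female ratio as HIV prevalence, what gap would we expect?

total_adult_aids_deaths = 120_000       # implied by the 72,000 + 48,000 split
male_share, female_share = 2, 3          # assumed 2:3 prevalence ratio

# If mortality tracked prevalence, deaths would split in the same ratio.
male_expected = total_adult_aids_deaths * male_share / (male_share + female_share)
female_expected = total_adult_aids_deaths * female_share / (male_share + female_share)
expected_gap = female_expected - male_expected

observed_gap = 6_000  # approximate female-minus-male gap from Table 2

print(f"expected: {female_expected:.0f} female vs {male_expected:.0f} male")
print(f"expected gap {expected_gap:.0f} vs observed gap {observed_gap}: "
      f"factor of {expected_gap / observed_gap:.0f}")
```

The expected gap of 24,000 against the observed 6,000 is the factor-of-four discrepancy on which the argument turns.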

This problem is not unique to Groenewald: all South African gender-specific HIV prevalence statistics are contradicted by the corresponding gender-specific AIDS mortality, and we hope to present a more comprehensive study in the near future. Here we emphasise that the discrepancy is intrinsic to the assumptions and calculations used to estimate the South African HIV/AIDS epidemic. Groenewald set out specifically to make HIV prevalence and AIDS death estimates match up any way he could. We allowed him all the room to manoeuvre he wanted, and still the numbers conclusively contradict the basic assumption they were meant to prove: that HIV is the cause of AIDS. This is devastating to Chigwedere et al., who reference Groenewald as the best evidence they have that AIDS deaths in South Africa track HIV prevalence.

