
[Sciences/Junk Sciences] Reconsideration of the immunotherapeutic pediatric safe dose levels of aluminum (Lyons-Weiler and Ricketson, J Trace Elem Med Biol 2018)

This is a post I wanted to write a long time ago but kept putting on the back burner for various reasons.
You know what can be the most irritating to read? Anti-science papers in general. You see, if the data were sound and the experimental setup robust, I would consider their arguments valid. The problem with the anti-science papers I have read so far (anti-vaccine, anti-GMO) is that they are usually written by scientists lacking the expertise and credentials (in terms of publication record) to weigh in on the topic, rest on speculation (the experimental data supporting their hypothesis are paper-thin at best), usually lack the rigor and paradigms needed to reach an objective outcome, and often cherry-pick the literature.
Under normal conditions, such papers would not pass a normal peer-review filter and would be rejected outright. Yet they find their way into very low impact factor journals or into predatory journals (which will publish any garbage study, as long as there is a valid payment method).

1. Who are the authors?
This is the case with this manuscript by James Lyons-Weiler and Robert Ricketson. In this study, they claim that the current immunization schedule is dangerous, blaming an extraordinary amount of aluminum and using questionable, speculative pharmacokinetics to support their claims (of course, there is no experimental data behind them, only speculation). A tenet of scientific publication is to assess how credible the authors are in the field; this can be judged from the authors' affiliations and publication records. James Lyons-Weiler has (according to his LinkedIn profile) a PhD in Ecology, Evolution and Conservation Biology and is currently affiliated with the "Institute for Pure and Applied Knowledge". This is not a scientific institute like the Salk Institute, but rather a storefront for quackery posing as a "scientific institute". The second author, Robert Ricketson, is no better. Indeed, he is worse. Dr. Ricketson has a history of medical malpractice as a spine surgeon: he was implicated in a malpractice lawsuit in 2001 for inserting a screwdriver into a patient's spine. At the publication date, Ricketson's affiliation was another "scientific institute" named "Hale O'mana'o Research" in Edmond, OK. A quick check of his LinkedIn profile suggests that these two Ricketsons are one and the same. To summarize: we have two authors with ZERO expertise in pharmacokinetics (including one doctor fined over $5 million for medical malpractice), working at institutes with questionable scientific credentials but an established anti-vaccine stance under the guise of "vaccine safety" (here and here), published in a journal on whose editorial board a notorious anti-vaccine scientist sits. Is it surprising? To me, it is not. Just the classical MO of anti-vaccine scientists.

2. What is the paper about?
You can find the paper here; since it is behind a paywall I cannot legally share it, so I would ask the reader to corroborate my claims by obtaining the full text. In this study, the authors argue that the animal safety studies are incorrect and underestimate the toxicity of aluminum because they are based on animal body weight. And that's where the trouble starts: the authors consider the amount of aluminum injected into animals and patients SOLELY on the basis of body weight.
They ignore the administration route, they ignore the existing literature, and they even question the outcomes and recommendations of the World Health Organization: "We found two important errors in the provenance and derivation of provisional aluminum intake levels from World Health Organization (WHO; Supplementary Material) which, unfortunately, led to overestimation of safe exposure levels." That's a bold statement right from the start, coming from two non-experts in pharmacokinetics and toxicokinetics.
So how did they arrive at such a claim? By using a derived version of Clarke's equation:

Child dose (mg) = Adult dose (mg) × (child body weight (lbs) / adult body weight (lbs))

Clarke's equation is commonly used in therapeutic dosing, since doses are typically expressed in mg/kg: knowing the patient's weight, you can easily calculate the dose to administer. This formula is great... if you already know the target concentration (or average plasma concentration) you are aiming for. That target is usually supported by empirical data and further confirmed by lab tests (you can measure the drug in the patient's plasma and check whether it falls within the therapeutic window). But this equation tells you nothing about the pharmacokinetics of the drug, its bioavailability, or differences between administration routes.
It tells you only one thing: "How many milligrams of X should I administer to obtain a plasma concentration of X within the therapeutic range?" That's it. You assume a dosing regimen (mg/kg), you know your patient's weight (in kg), and you obtain the dose needed (loading dose or maintenance dose). However, the authors manipulated the equation to transpose the minimal risk level (MRL) from adults to children, as follows:

CED (mg/kg) = HED(adult) (mg/kg) × [BW(child) (kg) / BW(adult) (kg)]
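The body-weight-only scaling can be written out explicitly (a minimal sketch; the 70 kg default adult weight is my illustrative assumption, not a figure from the paper). Note that no absorption, distribution or elimination term appears anywhere:

```python
def child_equivalent_dose(adult_hed_mg_per_kg: float,
                          child_weight_kg: float,
                          adult_weight_kg: float = 70.0) -> float:
    """CED (mg/kg) = HED_adult (mg/kg) x [BW_child (kg) / BW_adult (kg)].

    A pure weight-ratio rescaling: it carries no pharmacokinetic
    information (route, bioavailability, clearance) whatsoever.
    """
    return adult_hed_mg_per_kg * (child_weight_kg / adult_weight_kg)

# Hypothetical example: a 1 mg/kg adult exposure level scaled to a 5 kg infant
print(child_equivalent_dose(1.0, 5.0))  # ~0.0714 mg/kg
```

That single multiplication is the entire quantitative machinery behind the paper's headline claim.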

The rest of the paper is based SOLELY on speculation, with no experimental data to support the claim (instead, a post hoc ergo propter hoc fallacy unfolds). If the authors wanted to make their claims valid, they would have provided experimental data (in the form of blood sampling) from 2-month-old babies before immunization (baseline control) and 6-24 hours after immunization, showing that plasma Al levels are significantly altered by the immunization. They never show that data.
Indeed, what they show is a blatant misuse of FDA data and recommendations, and a miscalculation that a 12th grader would not even make.
They compared the dietary MRL to the "JECFA provisional tolerable daily intake from dietary and additive exposures of 140 μg/kg/day and current provisional tolerable daily intake of 290 μg/kg/day per day both before and after the safety factor of 10 is applied" (Fig. 3).
We end up with the classic case of comparing apples to oranges while claiming they are the same. They are not. Yes, both are extravascular routes and follow a similar fate, but at the same time we have to compare the physicochemical properties and the bioavailability of Al for each route. One is administered orally, the other intramuscularly. In both cases the bioavailability falls within the same range, with the oral route at about 0.3% and the IM route between 0.6% (based on Flarend et al., Vaccine 1997) and 0.9% (Yokel and McNamara's estimate, Pharmacol Toxicol 2001).

What the authors show us is basically a graph that assumes the WHOLE amount of injected Al is 100% bioavailable at once (exceeding the MRL adjusted for pediatrics), as seen in Figure 4:
Lyons_Weiler_2018_Fig4

There is one more thing to consider: the ATSDR. The ATSDR sets an MRL of 1 mg/kg/day for ingested aluminum (about 100x lower than the NOAEL and adjusted for bioavailability, as described here: https://www.atsdr.cdc.gov/toxprofiles/tp22-c8.pdf). If we assume a bioavailability of 0.3%, then out of 1 mg/kg/day ingested, we can estimate that about 3 μg (0.003 mg)/kg/day contributes to the total plasma Al burden. The authors' graph would be correct only if 100% of the injected aluminum reached the systemic circulation at ONCE and spiked Al levels significantly. But that is not the case, and the authors blatantly ignored this critical information from previous studies. To compare these two exposures, you have to estimate how much each route contributes to total Al plasma/blood levels.
You cannot just plot the total amount injected (adjusted per kg) and assume it represents the same variable as plasma levels estimated from the MRL. Now, if we assume the injected Al becomes available at a rate of 1% per day, the graph looks more like this:

Lyons_Weiler_2018_Fig4_Adjusted

You see, we have a completely different scenario. If we consider that the aluminum is slowly released into the body at a rate of 1% per day, we are now well under the MRL and within safe levels. Again, we use the MRL of 1 mg (1000 μg)/kg/day. If we apply a 0.3% bioavailability and the difference between the 5th and 95th percentile weights (grey bars), we conclude that the daily absorbed Al burden from the dietary route should not exceed 13.20-18.72 μg/day. Our values match the MRL from Lyons-Weiler; in other words, our assumption is consistent. If we consider an average weight of 5.35 kg (50th percentile) at 2 months and 1 mg as the cumulative dose of the immunizations given that day (a conservative estimate), the amount delivered that day would be 0.187 mg (187 μg) per kg. Assuming a bioavailability of 1% per day via the IM route, the vaccine adds about 1.87 μg of burden each day against a maximum MRL of 16.05 μg/day for the 50th percentile (weight = 5.35 kg). That is about 11.7% of the daily MRL, but within a negligible range for statistical significance (you would need at least 30% for it to be statistically meaningful). This of course has to be confirmed by studies measuring plasma Al levels after injection, but such data already exist in the literature, reported here and here. Both studies concluded there were no changes in total plasma Al levels with respect to vaccination status, including 6-24 hours after immunization.
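The back-of-envelope numbers above can be reproduced in a few lines (all inputs are the assumptions stated in this post: the ATSDR oral MRL, ~0.3% oral bioavailability, a 1%/day release from the injection site, and the 50th percentile weight; none of these are measured values):

```python
MRL_ORAL_UG_PER_KG_DAY = 1000.0  # ATSDR ingestion MRL: 1 mg/kg/day
ORAL_BIOAVAILABILITY = 0.003     # ~0.3% of ingested Al is absorbed
IM_RELEASE_PER_DAY = 0.01        # assumed 1%/day release from the IM depot
WEIGHT_KG = 5.35                 # 50th percentile weight at 2 months

# Daily absorbed Al allowed by the dietary MRL for this infant
dietary_absorbed = MRL_ORAL_UG_PER_KG_DAY * ORAL_BIOAVAILABILITY * WEIGHT_KG
print(f"dietary MRL, absorbed: {dietary_absorbed:.2f} ug/day")  # 16.05

# 1 mg cumulative vaccine Al at the 2-month visit, scaled per kg as in the post
vaccine_dose_ug_per_kg = 1000.0 / WEIGHT_KG        # ~187 ug/kg
vaccine_daily = vaccine_dose_ug_per_kg * IM_RELEASE_PER_DAY
print(f"vaccine burden: {vaccine_daily:.2f} ug/day")  # ~1.87

print(f"fraction of the daily MRL: {100 * vaccine_daily / dietary_absorbed:.1f}%")
```

Changing the assumed release rate or bioavailability shifts these numbers, but under no slow-release scenario does the injected Al come close to the graph the authors plotted.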

3. Concluding remarks

Anti-science types know how to get bang for their buck by sensationalizing claims, knowing that the lay person will not, or cannot, verify them. Most of the time, such claims come from people who are legitimate scientists in their own field but speak completely outside their domain of expertise. This is a common trope when people cite Linus Pauling, Otto Warburg or Luc Montagnier. Each of them made remarkable discoveries in their field and earned a Nobel Prize, but once they spoke outside their expertise they proved to be wrong, or saw their claims manipulated by quack-peddlers. Take Linus Pauling, who was instrumental in modern chemistry by describing the chemical bond, but later claimed cancers could be cured with vitamin C. Coincidentally, he died of prostate cancer in 1994.
The same applies to this paper. We have two authors with ZERO knowledge of pharmacokinetics, yet they have given themselves the mission of demonizing aluminum at all costs, bending and obscuring facts to fit their narrative and their conclusion. This paper is evidence that they are not serious about "vaccine safety". They are staunch anti-vaccine advocates, and they will use their status as scientists to vilify vaccination at all costs, even if it means reaching outside their expertise, making extraordinary claims without extraordinary evidence (they had none for this study) and publishing in a journal that favors their claims and obviously lacked rigor in its review.
Negating the neurotoxic effects of aluminum would not be a correct statement either. Aluminum is neurotoxic, but as with anything in toxicology, it is all about the dose. One parameter critical to assessing aluminum safety is its plasma level, which is driven by how much aluminum reaches the systemic circulation (from IV parenteral nutrition bags, or from extravascular routes such as vaccines or dietary exposure). What matters in the end is the plasma Al level, and the FDA set a limit on that daily exposure (5 μg/kg/day via the IV route). This is a problem encountered by patients with non-functional kidneys (95% of aluminum is cleared via the renal route) and by patients continuously fed by the IV route (total parenteral nutrition).
Both Lyons-Weiler and Ricketson failed to apply basic concepts of pharmacokinetics, ignored the differences between vascular and extravascular routes, and willfully used a calculation method inappropriate for the purpose. Worse, none of their claims is supported by hard data, which makes them even more questionable.
Unfortunately, this "junk paper" fell through the cracks of peer review and has been used repeatedly by anti-vaxxers as supporting evidence. Andrew Wakefield had his second paper retracted after 16 years; how long will it take to retract this one?
I don't know, but the damage is done, and until "vaccine safety" scientists come up with robust and foolproof studies published in highly respected journals, they will be considered by me and others as junk scientists, feeding the literature with garbage studies that should have been wiped out by a rigorous peer-review process.

 


[Sciences/Junk Sciences/Vaccines] HPV vaccine safety – a tale of two studies

“It was the best of times, it was the worst of times” Charles Dickens – A Tale Of Two Cities

If you follow the news about immunization and vaccines at all, you have likely heard about two studies on the effects of HPV vaccines on population safety, in particular the risk of developing complications or chronic conditions.
Interestingly, two studies (to be honest, one study and one comment) weighing the pros and cons of public HPV immunization were published within weeks of each other.
One of them claimed that HPV vaccines increased the risk of cervical cancer in the Swedish population; the other examined the incidence of autoimmune diseases in the Canadian population, in particular in the province of Ontario.
One of them was published in a society journal with a reasonable impact factor (IF ~8, Scimago score 1.7); the other in a journal with a nonexistent impact factor (Scimago score 0.2) that behaves much like a "predatory journal".
One of them was published "in a peer-reviewed general medical journal that publishes original clinical research, commentaries, analyses, and reviews of clinical topics, health news, clinical practice updates and thought-provoking editorials."; the other in one that "is a multi-disciplinary academic journal providing a platform for publication of original material and discussion on all aspects of healthcare ethics and the humanities, relevant to and/or from the perspective of India and other developing countries."
One of them was a full-length study with several authors; the other a rapid communication labelled as a "comment" whose author purposely falsified his name and affiliation (hiding behind an Outlook email address), claiming fear of retaliation from the opposing group.
One of them was a controlled population study comparing two groups (vaccinated versus unvaccinated) with a sample size of 100,000+ each; the other tossed epidemiological data together without further stratification and cherry-picked the information.
One of them concluded that the HPV vaccine is safe, with no increased risk of autoimmunity; the other questioned the safety of the HPV vaccine straight from the abstract: "I discuss the possibility that HPV vaccination could play a role in the increase in the incidence of cervical cancer by causing instead of preventing cervical cancer disease in women previously exposed to HPV. A time relationship exists between the start of vaccination and the increase in the incidence of cervical cancer. The HPV vaccines were approved in 2006 and 2007, respectively and most young girls started to be vaccinated during 2012–2013."
After a media firestorm and the strange support of the editorial board for the author (still considering the data legitimate), the comment was finally retracted. The corresponding author has also been informed that he has four more retractions coming.
Guess which one was the flawed study, published with falsified author information in a journal whose scope lies outside the content of the article?
You can find these publications here and here.


[Vaccines/Junk Sciences] "Murine hypothalamic destruction with vascular cell apoptosis subsequent to combined administration of human papilloma virus vaccine and pertussis toxin" (Aratani et al., Sci Rep 2016 – RETRACTED) Lessons from a paper that went completely off the rails on the scientific method

You may have heard about the recent retraction notice for a study published in Scientific Reports that raised concerns about the safety of Gardasil(R) (the HPV vaccine) and was retracted this week. Another anti-vaccine paper bites the dust. I have to say, I am not surprised at all. Anti-vaccine studies have the very annoying habit of either being proven fraud (remember the latest Shaw paper?), botching the experimental protocol by omitting proper controls (that's the Exley paper I reviewed), or conveniently sweeping under the rug data that do not fit the narrative (that goes for a recent paper by Gherardi).
But this one is interesting on several levels, because there is a bit of blood-brain barrier in it, and it also adds to the list of retractions at Scientific Reports and the recent threats by scientists on the editorial board to resign over an ambiguous and unjustified decision on a flawed paper (Disclaimer: I have authored a study in this journal and have peer-reviewed for them a couple of times).

To be honest, I only heard about this paper last weekend and took some time to read it. I will be honest: I don't see scientific fraud in the sense of data manipulation. What concerns me is how such a botched study could pass through the peer-review process at all. Considering that I self-impose a quality standard on my manuscripts and still get challenged by peer reviewers, seeing such junk studies get a free pass is a bit vexing. I agree that open access may not have the same stringency of peer review, but considering that Scientific Reports is part of the Nature Publishing Group, you would expect the rigor of any Nature-branded journal to apply here too.

But let's go through the paper; it is retracted, but you can still access it here.

What is wrong with this paper?

1. The experimental design in terms of groups

The first problem arises from Figure 1.

Aratani_Fig1

We have six groups: vehicle (PBS), pertussis toxin (PTX), Gardasil(R) (MSD, 4-strain HPV vaccine) (G), G+PTX, EAE and EAE+PTX. Let's first break down these oddities. Why did the author include a PTX group, let alone add PTX on top of Gardasil? There is not much rationale in the text (also, the writing style is odd, very odd; I completely understand that the first author is not a native English speaker, and as an ESL speaker myself, I sympathize). Also, I am not aware of any increased risk of contracting pertussis upon vaccination.
The second oddity is the use of the EAE mouse model. EAE stands for experimental autoimmune encephalomyelitis; it is the "gold standard" mouse model of multiple sclerosis (MS). The idea is to inject a brain protein (myelin basic protein, or MBP), which triggers an immune reaction (as the brain is an immune-privileged organ) and results in symptoms similar to MS. I could understand comparing the effect of Gardasil(R) in EAE mice versus non-EAE mice, but that comparison is never made (they even administer PTX to a sub-group).
So we already start with a wrong experimental design: it just makes no sense. A more rational approach would have been the following:
Vehicle (PBS), Gardasil (G), EAE, EAE+G. That would have saved two groups and the precious lives sacrificed in another useless study.

2. The experimental design in terms of statistics and power of analysis

Another important issue is the blatant dismissal of biostatistics and power analysis in the experimental design. For those unfamiliar with scientific research: you have to ensure your data reach statistical significance, so that the observed effect is real and not simple coincidence. This is especially true when working with vertebrate animals. Any animal experiment has to be approved by the institutional IACUC, which verifies that you have a clear idea of the purpose of your experiments, that the animals will be treated humanely, and that the number of animals is sufficient to achieve statistical significance.
An important rule of thumb for in vivo (animal) studies is a sample size of at least 8 animals per group (n=8). Figure 1a already violates this: only the G and G+PTX groups have enough animals (n=14 and n=21 respectively); all other groups are below the n=8 threshold. In addition, the group sizes differ widely (control n=6, EAE n=5), which weakens the statistical power and complicates common statistical methods such as ANOVA (which is most robust when all groups have equal sample sizes).
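To illustrate why n matters, here is a quick Monte Carlo sketch (the effect size and the ~2.1 critical value are my illustrative assumptions, not numbers from the paper): even for a large simulated effect, n=5 per group detects it noticeably less often than n=8.

```python
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def estimated_power(n, effect=1.5, sims=2000, crit=2.1):
    """Fraction of simulated experiments where |t| exceeds a rough threshold."""
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        if abs(welch_t(control, treated)) > crit:
            hits += 1
    return hits / sims

random.seed(42)
p5 = estimated_power(5)   # EAE-sized group
p8 = estimated_power(8)   # common rule-of-thumb minimum
print(f"estimated power: n=5 -> {p5:.2f}, n=8 -> {p8:.2f}")
```

With smaller, more realistic effect sizes the gap only widens, which is exactly why underpowered groups like these make any "significant" finding suspect.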

3. The experimental procedure and treatment

This is where the firestorm came in: the experimental data. So let's put it on the table: "Groups of 11 week-old female C57BL/6 mice were intramuscularly administrated 100 μl of Gardasil or phosphate-buffered saline (PBS) for a total of five times. Ptx was intraperitoneally administrated 2 and 24 hours after immunization. The Gardasil vaccine or Ptx were administrated at 2-weeks or 4-week intervals".
A key element of any paper is the methods section, and this one utterly fails. Based on this information, I have absolutely no idea why they injected five times (the Gardasil schedule is at most 3 doses), why they injected PTX right after the immunization (at 2 and 24 hours, suggesting a double induction), or when they injected the Gardasil and the PTX (how do they separate the 2-week from the 4-week intervals? When did they start?).
Also, the use of 11-week-old females does not reflect the human scenario. If we approximate 1 human year to about 3.6 mouse-days, you would expect to use young mice (males and females, for a sex-balanced study) about 6 weeks old (~43.2 days), corresponding to about 12 years of age, the age of puberty.
Here we are basically injecting the HPV vaccine into adult females, which is known to provide no additional benefit, as such a population may already have been exposed to HPVs.
The next problem is the injection dose: 100 μL, the equivalent of 0.1 mL. A single human dose of the HPV vaccine is 0.5 mL. Let's ignore allometric scaling and treat a mouse as a miniature human. The 50th percentile weight at age 12 is about 40 kg; let's assume a 6-week-old mouse weighs about 20 g.
If we inject 0.1 mL into a 20 g mouse, the per-weight human equivalent is about 400 doses injected at once! This is a serious issue, because there is absolutely no chance of that happening in a lifetime (at the grand maximum you may receive 3-4 HPV injections, spaced out in time). Also, if we apply the age scaling, the mice should receive their doses within 48 to 96 hours of each other, not 2 to 4 weeks apart.
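Redoing that scaling explicitly (the mouse and human weights are the rough figures above):

```python
MOUSE_DOSE_ML, MOUSE_WEIGHT_KG = 0.1, 0.020  # 0.1 mL into a ~20 g mouse
HUMAN_DOSE_ML, HUMAN_WEIGHT_KG = 0.5, 40.0   # 0.5 mL into a ~40 kg 12-year-old

mouse_ml_per_kg = MOUSE_DOSE_ML / MOUSE_WEIGHT_KG  # 5.0 mL/kg
human_ml_per_kg = HUMAN_DOSE_ML / HUMAN_WEIGHT_KG  # 0.0125 mL/kg

# How many single human doses, per unit body weight, one mouse injection equals
print(f"{mouse_ml_per_kg / human_ml_per_kg:.0f} dose-equivalents")  # 400
```

However you round the weights, the mice received hundreds of per-kilogram human dose-equivalents in a single injection.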
I am not even getting into the rationale for injecting PTX right after immunization, which is utter nonsense and simply scrambles any attempt to separate the effect of the HPV vaccine from the effect of PTX. Another disastrous example of how this paper was flawed from the beginning.

4. Failure to report weight and clinical score

If we want to follow an EAE protocol, it is important to show the evolution of animal weight over time (up to 15-21 days) as well as a clinical score. The clinical score is a well-described scale in which EAE features are scored from mild (flaccid tail) to severe (inability to move the hindlimbs, or complete immobility).
These two graphs are present in almost every EAE paper in the literature.
I assume this is what Figures 1b and 1c were meant to show, but they do so very poorly. Indeed, Figure 1c does not really show anything: we don't know when these data were collected, we have no idea of the time of symptom onset, and we have no indication of statistical differences. This is a waste of data.

5. The constant cherry-picking of the data and incomplete picture

The methods used are honestly laughable: some hematoxylin-eosin staining (a common histological stain that does not tell you much unless there is massive brain damage or a growing brain tumor), Klüver-Barrera staining (for myelin), TUNEL staining (for apoptosis), and a behavioral test relegated to Supplementary Figure S1 (in which the author thinks a P-value of 0.1 has statistical meaning).

Aratani_Fig2

Aratani_FigS1
Where is the Evans blue extravasation to show a disrupted BBB? Where is the GFAP staining to show astrocyte activation? Where are the CD11b and F4/80 stainings to show microglial activation and macrophage infiltration? Nowhere to be seen. We have to content ourselves with some miserable histological stainings in Figures 2 and 3. Also, none of the EAE data is ever shown past Figure 1. Figure 4 is even more laughable, as the author only shows the staining for G+PTX, giving the middle finger to any reader wondering what that staining looks like in the vehicle or G groups.

Aratani_Fig3

How can the author be confident that the Gardasil treatment, and not the PTX treatment (PTX being explicitly mentioned as a BBB-disrupting toxin), was the sole contributor to all of this?

6. Conclusions

Another anti-vaccine study, another case of botched science resulting in a junk paper and in animals sacrificed for a useless experiment. Given its deficiencies, it should never have passed the peer-review filter, yet it went through. If I had reviewed it, I would have been ashamed not to reject it outright for the major flaws in the study. Should we assume the author recommended complacent reviewers for this paper? Or should we question the integrity of an editorial board that accepts papers failing basic scientific integrity? Once again, anti-vaccine studies show that they cannot challenge vaccine safety and can only make fools of themselves by producing junk studies like this one.

The first and senior authors produced a paper so bad that they should feel ashamed to have published it in the first place.

 


[Neurosciences/Junk Sciences] Autopsy of a flawed study of aluminum and brain inflammation (Li et al., J Inorg Biochem 2017)

Note: This is a special blog post coauthored by The Mad Virologist and The Blood-Brain Barrier Scientist (this article will be co-published on both our blogs). Another post has already been published on this paper, but we wanted to take a deeper look at everything that is wrong with this paper.

[UPDATE2] The study in question got retracted according to RetractionWatch:
http://retractionwatch.com/2017/10/09/journal-retract-paper-called-anti-vaccine-pseudoscience/

[UPDATE] I strongly recommend the reader look at the comments on PubPeer about this paper. It is terrifying to think how it percolated through peer review.
https://pubpeer.com/publications/4AEB7C8F30015079E2611157CF8983#undefined

A recent paper by ophthalmologist Chris Shaw was published and immediately touted as positive proof that the aluminum adjuvants found in some vaccines are responsible for causing autism. Before we get into the paper, I have a few choice things to say about Chris Shaw. Despite not being an immunologist, Shaw has ventured into studying how vaccines and vaccine adjuvants cause neurological disorders such as autism. Shaw made headlines in 2016 when a paper he co-authored, claiming to show a link between the HPV vaccine and neurological disorders, was retracted after being accepted by the journal Vaccine. It turned out that the statistics used in the paper were completely inappropriate and that there were undisclosed conflicts of interest for some of the authors, including Shaw. These issues should have prevented the paper from being accepted in the first place, but mistakes do happen and science tends to be self-correcting. More surprising is that Shaw claimed he didn't know why the paper was retracted and that the science was of the highest quality. Shaw's previous work has also been described by the WHO as deeply flawed and rejected by that body. This isn't brought up to dismiss the paper out of hand, but to help illustrate why Shaw's work deserves additional scrutiny. Hopefully, by the end of this post, the logic behind the need for additional scrutiny of anything Shaw publishes will be abundantly clear. We'll begin by examining the methods used by Shaw's research group and point out some of the issues.

Background for experimental design flaws: PK and species issues

One recurrent problem with Shaw is that his "vaccination schedule" treats rodents, such as mice and rats, as miniature humans. It is wrong to assume that rodent and primate species are alike; they are not, and there are notable physiological differences between rodents and non-rodents. For example, a couple of studies by Terasaki and colleagues (http://onlinelibrary.wiley.com/doi/10.1111/j.1471-4159.2011.07208.x/abstract) have shown differences in the expression of solute carriers and drug transporters at the blood-brain barrier. We cannot exclude that such differences bias the outcomes observed in his studies, though this bias applies intrinsically to any in vivo study based on a rodent model.
There is also the issue of brain development and mapping the vaccination schedule onto brain maturation. In this study (as in the previous ones), Shaw and colleagues consider that administering vaccines from post-natal day (PND) 3 to 12 is representative of a human infant vaccination schedule. There are some differences in the literature: earlier studies from Clancy and colleagues mapped PND12 to the 7th gestational month in humans (https://blogs.cornell.edu/bfinlay/files/2015/06/ClancyNeurosci01-17kkli7.pdf), while more recent publications map PND21 to the 6th post-natal month in humans, putting PND12 around the 3rd month of infancy after full-term birth (http://www.sciencedirect.com/science/article/pii/S2352154615001096). You can easily appreciate that under Shaw's flawed experimental design, the total amount of Al administered over a 2-year period was in fact administered within 90 days of birth, whereas the CDC vaccination schedule does not start before the 2nd month of infancy, if we exclude the two Hepatitis B injections at birth and after the first month respectively (https://www.cdc.gov/vaccines/schedules/hcp/imz/child-adolescent.html).

In addition to a flaw in the experimental design, we cannot exclude some differences in the pharmacokinetic profile of Al adjuvants between mice and humans. The data available is fairly limited but a recent study from Kim and colleagues (https://www.ncbi.nlm.nih.gov/pubmed/26437923) failed to show a significant brain uptake of Al compared to controls following the single oral administration of different Al oxide nanoparticles at a concentration of 10mg/kg. Furthermore, the approximation of Shaw in terms of total burden of Al from vaccines (550 microg/kg) is not an accurate metric as we have a dynamic process involving absorption, distribution and elimination to occur simultaneously. A daily burden of Al from vaccines is a much more reliable parameter to consider. Yokel and McNamara (https://www.ncbi.nlm.nih.gov/pubmed/11322172) established it about 1.4-8 microg/day for based on 20 injections spanning over a 6-year period in a 20kgs individual.
If we follow Shaw's calculation, the total burden at age 6 would be 1650 microg/kg, or 33,000 microg for a 20 kg 6-year-old child. That is about 15 microg/day of daily Al burden from vaccines, a value 2- to 10-fold higher than what humans actually receive. This makes the comparison one of apples to oranges: Shaw's experimental paradigm is flawed and not representative of a clinical scenario.
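To make the comparison concrete, here is a quick back-of-the-envelope sketch of the daily-burden arithmetic above (a simple sanity check using the figures quoted in this post, not a pharmacokinetic model):

```python
# Sanity check of the daily Al burden figures quoted above.
# Numbers are those cited in this post (Shaw's 550 microg/kg per 2-year
# block, and Yokel & McNamara's 1.4-8 microg/day estimate); this is
# plain arithmetic, not a pharmacokinetic model.

shaw_burden_ug_per_kg = 550 * 3        # scaled from 2 years to 6 years
body_weight_kg = 20                    # 6-year-old child, as in Yokel & McNamara

total_ug = shaw_burden_ug_per_kg * body_weight_kg   # 33,000 microg
daily_ug = total_ug / (6 * 365)                     # spread over 6 years

print(f"Total burden at age 6: {total_ug} microg")
print(f"Daily burden: {daily_ug:.1f} microg/day")

# Compare against the published 1.4-8 microg/day range
for ref in (1.4, 8.0):
    print(f"  {daily_ug / ref:.1f}-fold higher than {ref} microg/day")
```

Running this reproduces the roughly 15 microg/day figure and the 2- to 10-fold excess over the Yokel and McNamara estimate.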

Selection of genes to measure:

Selecting which genes to measure is a crucial step in a study like this. If care is not taken to ensure that the correct genes are selected, then the study will be a wasted effort. Shaw stated in the paper that they selected genes that were previously published. However, not all of the genes that they measured came from this paper. Only 14 of the genes were from this paper (KLK1, NFKBIB, NFKBIE, SFTPB, C2, CCL2, CEBPB, IFNG, LTB, MMP9, TNFα, SELE, SERPINE1, and STAT4). This leaves 17 genes that were measured but not found in the paper. Two of these can be explained. One gene, ACHE, was mentioned as having been selected because of other work, so it is sourced. The second is the internal control gene beta-actin, a housekeeping gene that is often used as an internal control against which relative expression is measured. This leaves 15 genes unaccounted for. We suspect that these genes were selected because they are involved in the innate immune response, but no reason is stated in the paper.

The way these genes were selected is problematic. Because half of the genes seem to have been selected for uncited reasons, this study is what is known in science as a “fishing expedition.” There's nothing inherently wrong with this type of research, and indeed it can lead to new discoveries that expand our understanding of the natural world (the study that increased the number of sequenced viral genomes by nearly tenfold is a good example of this). But what fishing expeditions can show is limited. These types of studies can lead to other studies, but they do not show causality. Yet Shaw is claiming causality with his fishing expedition here.

There is also the problem that they used old literature to select their gene targets when much more recent research is available. By happenstance, they did measure some of these same genes in their study. However, their results do not match what has been measured in children diagnosed with autism. For example, RANTES was shown to be decreased in children with autism, yet in Shaw's work there was no statistical difference in RANTES expression between mice given the aluminum treatment and those receiving saline. Likewise, MIP1alpha was shown to be decreased in developmentally delayed children but was increased in the aluminum-treated mice. The same goes for IL1b, which was found to be elevated in children with moderate autism, yet showed no statistical difference between the mice receiving the aluminum treatment and those receiving saline. In fact, IL-4 was the only gene to follow an expression pattern similar to what was found in children with severe autism (elevated in both cases). However, there is something odd with the gel in this case. This was the image for figure 4 that was included in the online version of the paper (we have not altered the image in any way). Look closely at the top right panel at the IL-4 samples and the IL-6 samples. You'll notice that the bands for the control and the aluminum-treated mice have different color backgrounds (we enlarged the image to help highlight this but did not adjust the contrast). If these came from the same gel, there would not be a shift in color like this, where the treated bands have a lighter color encircling them. The only way this could happen is if the gel was assembled in Photoshop. The differences could be real; however, since this image was modified we do not know for sure, and this is scientific misconduct. Papers get retracted for this all the time, and people have lost their degrees for doing this in their dissertations.
These gel results cannot be trusted and the paper hinges on them. The Western blots and issues with them will be discussed below.

22016544_10102969159317918_1868648690_n

The unaltered figure 4.

22052798_10102969159312928_192591888_n

A close up of the panel with the regions in question highlighted.

Semi-quantitative RT-PCR:

In order to quantify the gene expression levels of the genes that Shaw’s group selected, they used an older technique called semi-quantitative RT-PCR. This technique uses the exponential increase in PCR products in order to show differences between expression of a gene under different conditions. There’s nothing wrong with the technique provided one understands what the limitations are. Let’s say you have a large number of genes that you want to measure expression of, but you aren’t sure which genes are going to be responsive and you have limited funds. Semi-quantitative RT-PCR is a good method to screen for specific genes to be examined further by more precise techniques, such as Real-Time RT-PCR, but it’s not appropriate to use this technique and then make statements about precise quantification. Where semi-quantitative RT-PCR excels is with genes that are normally not expressed but can be expressed after some sort of stimulus, such as terpene biosynthesis genes that are induced by insect feeding.

To put it bluntly, semi-quantitative RT-PCR was not used properly in the paper by Shaw. The way that it was used implied that it would be quantitative when the technique is not that precise. Without verification by another method, ideally Real-Time PCR which can determine what the exact abundance of a given target is, these results should be taken with a grain of salt. This would still be the case if there weren’t irregularities in the gel images. With those irregularities, this is absolutely essential and should have prevented this paper from being accepted.
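For contrast, here is a minimal sketch of how Real-Time (quantitative) PCR data are usually analyzed, using the standard ΔΔCt (Livak) method. The Ct values are made up for illustration; they are not data from the paper:

```python
# Minimal ΔΔCt (Livak) relative-quantification sketch.
# In Real-Time PCR, a lower Ct (threshold cycle) means more starting
# template; each cycle corresponds to roughly a two-fold difference.
# All Ct values below are hypothetical, for illustration only.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical case: target crosses threshold 2 cycles earlier in the
# treated samples, with an unchanged reference gene.
fold = relative_expression(ct_target_treated=24.0, ct_ref_treated=18.0,
                           ct_target_control=26.0, ct_ref_control=18.0)
print(f"Fold change: {fold:.1f}x")
```

This kind of per-cycle quantification is exactly what semi-quantitative end-point gels cannot provide, which is why verification by Real-Time PCR matters.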

PCR and Western-blot data: the owl is not what it seems
As The Mad Virologist mentioned, semi-quantitative PCR is an old-fashioned RNA quantitation method; Real-Time quantitative PCR (which quantifies the amplification product at each cycle, using a fluorescent dye as an indicator) is the much more accepted method nowadays (see his section for more details). For Western blots, a semi-quantitative approach is more accepted, but it is important that what you show (qualitative) is consistent with what you count (quantitative). In Western-blot analysis, we measure the relative darkness of a protein band (the black lines that you see in papers) between treatments and controls. Because you cannot exclude errors due to the amount of protein loaded, we also measure the band intensity of very abundant proteins, usually referred to as housekeeping proteins (because they play essential functions in cells). In this case, beta-actin (named ACT in the paper) was used.
Once you normalize to beta-actin, you can compare the effect of a treatment by comparing the relative band intensity ratios. In both cases (semi-quantitative PCR and Western blots), “what you see is what you measure”: you have to show a “representative Western blot” alongside the quantitative data to demonstrate that your quantification matches the band densities. The common practice is to use image analysis software like ImageJ to determine band density. Showing a Western blot is nice, but not foolproof. Indeed, Western-blot data (along with fluorescence images) are among the most common data that researchers manipulate or even falsify, and also the most common type of data to spark a paper retraction. Someone notices something fuzzy on a Western blot, questions reach the editors, and access to the full dataset (usually the X-ray film or the original full scan of the blot) is requested. Often, authors who cannot provide such data fall back on excuses like “the dog ate the flash drive” or “the hard drive containing the data crashed”.
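As a toy illustration of that normalization step, here is a sketch with hypothetical ImageJ-style band densities (not values taken from the paper):

```python
# Toy Western-blot densitometry normalization.
# Densities are hypothetical integrated intensities of the kind ImageJ
# reports; none of these numbers come from the paper under discussion.

control = {"target": 1200.0, "actin": 4000.0}
treated = {"target": 1500.0, "actin": 5000.0}

# Normalize each target band to the actin loading control of its own lane
control_ratio = control["target"] / control["actin"]
treated_ratio = treated["target"] / treated["actin"]

# Relative change between treatment and control
fold_change = treated_ratio / control_ratio
print(f"Fold change after normalization: {fold_change:.2f}")
# The raw target densities differ by 25%, but after correcting for
# loading there is no difference at all, which is the whole point of
# normalizing to a housekeeping protein.
```

This is also why any manipulation of the band images propagates directly into the "quantitative" bar graphs built from them.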
There are methods to spot image manipulation on Western blots, including playing with the brightness/contrast, requesting quantitative data in addition to a representative blot, and requiring that all samples come from the same gel (you cannot use a cookie-cutter and build your own perfect gel). There is an excellent article that describes the pitfalls and cases of bad Western-blot data representation, if not outright image manipulation (https://www.elsevier.com/editors-update/story/publishing-ethics/the-art-of-detecting-data-and-image-manipulation). There are, at this time, several issues in both the Western-blot pictures and their subsequent analysis that call into question the reliability of the data presented in this study.

In this post, we have used the full-resolution pictures provided on the journal website (http://www.sciencedirect.com/science/article/pii/S0162013417300417), opened the pictures in ImageJ to convert them into 8-bit format, inverted the lookup tables (LUT), and adjusted the brightness and contrast. We then exported the pictures into PowerPoint to ease annotation and comments. We encourage readers to download the full-resolution images and judge for themselves.

The first concern comes from Figure 1C. First, here is the original Fig.1.

1-s2.0-S0162013417300417-gr1_lrg

Then, this is the close-up analysis for Fig.1C

Slide1

There are several issues. First, some bands appear to be band splicings, in which the authors create a custom blot by assembling different bands from different gels. This is a no-no in Western blotting: all bands shown in a blot should come from the same gel. This is why Western blots are a torture for graduate students and postdocs: you need to show your best blot, with all bands showing the same behavior as your quantitative analysis.
Second, a rectangular grey patch was added on top of the control 3 TNF band. This is possible data manipulation and fraud, as a band is being voluntarily masked and hidden. That's a big red flag on the paper. The third issue with Fig.1C is the consistent impression of bands either cropped onto a grey rectangle, or of what I call “Photoshop brushing”, in which the brush tool is used to erase areas of the gel deemed not good-looking enough. You can clearly see it with actin: there is a clear line between the blurred blot and a sharp, uniform grey in the bottom half of the blot, compared to the wavy top. This is a grey area I am less familiar with for Western blots, but it is a no-no for any immunofluorescence picture: any image manipulation that goes beyond brightness/contrast adjustment and involves altering the acquired picture is considered data manipulation. If you re-analyze the data after correcting for the inconsistencies of Figure 1C, the graph looks quite different and fails to show any difference between Al-treated and control animals, once you refrain from over-normalizing and simply plot the protein/actin band density ratios.

What is also concerning and surprising is the authors' conclusion that males, but not females, show an inflammatory response. Of course, the authors failed to show the corresponding data from female animals and expect us to take their word for it. The problem is that this conclusion is in direct contradiction with the literature. There is solid literature supporting a sexual dimorphism in the inflammatory response, in particular in neuroinflammation and autoimmune disorders such as multiple sclerosis (https://www.ncbi.nlm.nih.gov/pubmed/28647490; https://www.ncbi.nlm.nih.gov/pubmed/27870415). There is also a growing call for the scientific community to provide results for both sexes (males and females alike). Although Shaw reports that the study was performed in both males and females, he gives us only this explanation at the end of section 3.1: “Taken together, a number of changes indicative of the activation of the immune-mediated NF-κB pathway were observed in both male and female mice brains as a result of Al-injection, although females seemed to be less susceptible than males as fewer genes were found altered in female brains.”

Yet the interesting part comes when Shaw tries to compare IκB phosphorylation between males and females following Al injection (Fig.3C). When you analyze the data, concerns arise very quickly. First, we have a possible case of a cookie-cutter band, in which a band that looks nice enough is simply pasted into a blank space. This is very suspicious activity, as it makes fabricating data this easy. Second, there is again this “Photoshop brushing/erasing” taking place in the figure, in which I suspect fraudulent activity. As you can see in the females, it is as if someone tried to mask bands that should not have been there. Remember when he said that males, but not females, showed an inflammatory response? Is he trying to conceal data that contradict his claims?

image

Again, let's bring up Figure 3 at its full resolution.
1-s2.0-S0162013417300417-gr3_lrg

Finally, the same issues persist, and are even more obvious, in Fig.5A. Again, we have a mixture of different Western-blot image manipulations, including band splicing, Photoshop brushing, and cookie-cutter bands.

First, the unedited picture:
1-s2.0-S0162013417300417-gr5_lrg

And below the close up of Fig.5A

Slide3

These are serious concerns that undermine the credibility of this study, and they can only be addressed by providing full-resolution (300 dpi) versions of the original blots (X-ray films or the original picture files generated by the gel acquisition camera). There has been a lot of chatter on PubPeer discussing this paper, and many duplicated bands and other irregularities have been identified by the users there. If anyone is unsure of how accurate the results are, we strongly suggest looking at what has been identified on PubPeer; it suggests that the results are not entirely accurate and that, until the original gels and Western blots are provided, it looks like the results were manufactured in Photoshop.

 

Statistics:
Long-time followers know that I tend to go straight to the statistics used in papers to see whether what they are claiming is reasonable or not. Poor use of statistics has been the downfall of many scientists, even ones making honest mistakes. It's a common problem that scientists have to be wary of. One easy solution is to consult a statistician before submitting a paper for publication; such experts can point out whether the statistical tests that were run are correct or not. The Shaw paper would have benefited from this expertise. They used a Student's t-test for all of their statistics comparing the control to the aluminum-treated group. This is problematic for a couple of reasons: these aren't independent tests, and the data likely do not follow a normal distribution, so a t-test isn't appropriate. Better choices would have been either Hotelling's T-squared test or Tukey's HSD.

Another issue is how the authors used the standard error (SE) instead of the standard deviation (SD). To understand why this matters, it helps to understand what the SE and the SD measure and what these statistics show. The SD measures the variation in the samples: how far the measurements fall from their mean. A smaller SD means there is low variability in the measurements. The SE estimates how far the sample mean is likely to be from the true population mean. Both the SE and SD can be used; however, using the SE is not always appropriate, especially if you are using it as a descriptive statistic (in other words, to summarize data). Simply put, the SE is an estimate that only reflects the variation between the sample mean and the population mean. If you are presenting descriptive statistics, then you need to use the SD. The misuse of SE where the SD should be shown is a common mistake in many research publications.
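To see how much this choice matters, here is a quick sketch with Python's standard library (the measurements are made up for illustration):

```python
import math
import statistics

# Made-up replicate measurements for one group, for illustration only
values = [10.2, 11.8, 9.5, 12.4, 10.9]

mean = statistics.mean(values)
sd = statistics.stdev(values)           # sample standard deviation
se = sd / math.sqrt(len(values))        # standard error of the mean

print(f"mean = {mean:.2f}")
print(f"SD   = {sd:.2f}  (spread of the measurements themselves)")
print(f"SE   = {se:.2f}  (uncertainty of the mean estimate)")
# With n = 5, the SE bar is sqrt(5), about 2.2 times, shorter than the
# SD bar, which is exactly why SE error bars make differences between
# groups look cleaner than they really are.
```

The smaller the error bars look, the more convincing a marginal difference appears, which is the propaganda trick the GraphPad manual jokes about below.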
In fact, this is what the GraphPad manual has to say about when to use the SD and when to use the SE:

“If you want to create persuasive propaganda: If your goal is to emphasize small and unimportant differences in your data, show your error bars as SEM, and hope that your readers think they are SD. If your goal is to cover up large differences, show the error bars as the standard deviations for the groups, and hope that your readers think they are standard errors. This approach was advocated by Steve Simon in his excellent weblog. Of course he meant it as a joke. If you don't understand the joke, review the differences between SD and SEM.”

The bottom line is that there is an appropriate time to use the SE, but not when you are trying to summarize data.

Another issue is the number of animals used in the study. The consensus in published studies is to use the minimal number of animals needed to achieve statistical significance (usually n=8), kept to a minimum to ensure proper welfare and humane treatment of lab animals. In this study, that number is nearly halved (n=5). The authors also create confusion by blurring the line between biological replicates (n=5) and technical replicates (n=3). By definition, biological replicates are different organisms being measured; they are essential for statistical analysis because they are independent of each other. Technical replicates are dependent on each other, as they are repeated measurements of the same biological sample. By treating the latter as statistically relevant, you bias yourself toward mistaking a fluke for a biological phenomenon.
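A small sketch of how technical replicates should be collapsed before any statistics (the numbers are hypothetical; each inner list represents three technical reads of one animal):

```python
import statistics

# Hypothetical data: 5 animals (biological replicates), each measured
# 3 times (technical replicates). Only per-animal means are independent
# observations suitable for a statistical test.
animals = [
    [1.02, 0.98, 1.00],
    [1.10, 1.12, 1.08],
    [0.95, 0.97, 0.96],
    [1.05, 1.03, 1.04],
    [0.99, 1.01, 1.00],
]

# Correct: collapse technical replicates first, giving n = 5
biological_means = [statistics.mean(a) for a in animals]
print("valid n =", len(biological_means))

# Wrong: pooling all 15 reads pretends n = 15, shrinking the standard
# error by sqrt(3) and inflating the apparent significance.
pooled = [x for a in animals for x in a]
print("pooled (invalid) n =", len(pooled))
```

Treating the 15 pooled reads as independent is exactly the kind of pseudoreplication that turns measurement noise into "significant" biology.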

 

Conclusions:
Based on the methods that were used in this paper, Shaw et al. went too far in declaring that aluminum adjuvants cause autism. But there are six other key points that limit what conclusions can be drawn from this paper:
1) They selected genes based on old literature and ignored newer publications.
2) The method for PCR quantification is imprecise and cannot be used as an absolute quantification of expression of the selected genes.
3) They used inappropriate statistical tests that are more prone to giving significant results, which is possibly why they were selected.
4) Their dosing regime for the mice makes assumptions on the development of mice that are not correct.
5) They gave the mice far more aluminum sooner than the vaccine schedule exposes children to.
6) There are irregularities in both the semi-quantitative RT-PCR and Western blot data that strongly suggest these images were fabricated. This is probably the most damning thing about the paper. If the data were manipulated and images fabricated, then the paper needs to be retracted and UBC needs to investigate research misconduct by the Shaw lab.

Maybe there’s a benign explanation for the irregularities that we’ve observed, but until these concerns are addressed this paper cannot be trusted.

Categories
Blood-Brain Barrier Junk Sciences Sciences Uncategorized

[BBB/Junk Sciences] Polysorbate 80 and the BBB or how to put anti-vaxxers into a blowing cognitive dissonance

Here we go again: anti-vaxxers keep moving the goalposts to fit their beliefs instead of adjusting them to the facts. First it was mercury, then formaldehyde, then aluminum; today the “ingredient du jour” is polysorbate 80, and tomorrow they will blame PBS saline solution.

The latest fad I have seen is to blame polysorbate 80 as a source of “vaccine injury”, with the bold claim that it breaks down the blood-brain barrier (BBB). Let's put the facts straight and debunk this once and for all. But what is even better is the “what if” counter-argument: what if polysorbate 80 were actually a useful ingredient? I will come to that later.

Polysorbate 80 (aka Tween 80) is an amphiphilic compound, as you can see from the molecular structure below (source: Wikipedia):
1200px-polysorbate_80

You can see the structure is made of a lipophilic (fat-loving) tail and a series of hydrophilic (water-loving) head groups, loaded with oxygen and hydroxyl groups. This is the typical structure of a detergent: one side mixes well with water, the other mixes very well with fats and oils. The result? You can form microspheres that dissolve well in water and carry fats into water. This is how a detergent works: it helps break down fats into small spheres and dissolve them in the drain water.
Thanks to this property, polysorbate 80 is very good at dissolving drugs and medicines that under normal conditions would barely dissolve in biological fluids. This is why we have it in vaccines, but we also have it in medicines. That's the job of biopharmaceutics: finding formulations that dissolve drugs in the body and allow them to reach a concentration high enough to exert their therapeutic activity.

The use of polysorbate 80 in the delivery of anticancer drugs is probably the first and foremost driver of research into its effect on the BBB. Brain tumors (primary and metastatic alike) remain among the most dreaded and deadliest forms of cancer. For instance, the average life expectancy upon diagnosis of a grade IV glioma (aka glioblastoma multiforme) is grim: 18 months, with less than 5% survival after 5 years. The major issue is being able to deliver drugs and chemotherapy across the BBB. As reported by Prof. William Pardridge (UCLA), the BBB remains the bottleneck in drug development for treating neurological disorders (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC539316/?fref=gc&dti=873247819461536).

The first report investigating the effect of polysorbate 80 on the BBB is probably by Spiegelman and colleagues in 1984 (http://thejns.org/doi/pdf/10.3171/jns.1984.61.4.0674), who examined the effect of the solvent used in etoposide solutions for treating cancer. According to their results, they noted a statistical difference in BBB permeability (using Evans Blue and 99mTc as tracers) following the injection of 1.125 mL/kg. According to their paper, a 5 mL solution contained 400 mg of polysorbate 80, i.e. a concentration of 80 mg/mL. Based on this, we can assume the BBB effect was observed at a dose of 90 mg/kg. That's a huge dose.
If we go back to the manure anti-vaxxers spread, the amount injected via vaccines is supposedly enough to cause a barrier opening. According to the Johns Hopkins University Institute for Vaccine Safety (http://www.vaccinesafety.edu/components-DTaP.htm), the expected amount of polysorbate 80 is less than or equal to 100 mcg (micrograms), i.e. 0.1 mg per dose. If we assume that dose is injected into a newborn (average weight ~3 kg), then the amount injected is about 0.033 mg/kg. That's 2700 times less than what has been reported to induce BBB disruption. And this assumes 100% bioavailability (which is only true for the IV route), making it a very optimistic, worst-case number.
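The arithmetic above can be sketched directly (figures are those quoted in this post):

```python
# Comparing the polysorbate 80 dose reported to disrupt the BBB
# (Spiegelman et al., 1984) with the maximum amount in a vaccine dose
# (Johns Hopkins IVS figure). Numbers are the ones quoted in the text.

bbb_dose_mg_per_kg = 1.125 * 80        # 1.125 mL/kg at 80 mg/mL = 90 mg/kg

vaccine_dose_mg = 0.1                  # <= 100 micrograms per dose
newborn_weight_kg = 3.0
vaccine_dose_mg_per_kg = vaccine_dose_mg / newborn_weight_kg

ratio = bbb_dose_mg_per_kg / vaccine_dose_mg_per_kg
print(f"BBB-disrupting dose: {bbb_dose_mg_per_kg:.0f} mg/kg")
print(f"Vaccine dose:        {vaccine_dose_mg_per_kg:.3f} mg/kg")
print(f"Ratio:               {ratio:.0f}x")
```

And the 2700-fold margin is a worst case, since it assumes every microgram of an intramuscular injection behaves as if given intravenously.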
Now, the interesting twist about polysorbate 80 is its use to enhance drug carriers: it is widely used in the search for novel formulations to enhance the delivery of anticancer drugs across the BBB. You can find a list of publications on PubMed about this aspect (https://www.ncbi.nlm.nih.gov/pubmed/?term=polysorbate+80+blood-brain+barrier). What if polysorbate 80 not only does not injure your brain, but may actually help deliver drugs that help your brain fight disease?

 

Keep in mind that polysorbate 80 is good at dissolving lipids in aqueous solutions, but it is not good at letting charged molecules across the BBB, just in case someone comes up with the claim that it conjugates with aluminum. That's high-school-level chemistry.

 

 

Categories
Sciences

[Sciences] The TB-BCG vaccine, why I love vaccines and how certain vaccines give us immunity for life.

IMG_0660

A couple of days ago, I had to have a physical exam for an administrative process. Among the different steps of the physical exam, I had to have a TB test. TB stands for tuberculosis. It is caused by an infectious agent, Mycobacterium tuberculosis. The Mycobacterium genus contains some nasty germs, including the agent of tuberculosis and, foremost, the one causing leprosy. These are germs usually associated with poor hygiene, and they are highly contagious.
M. tuberculosis is one of those bacteria that are very challenging to grow in culture, and until modern molecular biology techniques arrived, differential staining using the Ziehl-Neelsen technique (carbol fuchsin, with malachite green as a counterstain) was used to identify these germs.
In the pre-antibiotic era, the only approach that worked to some extent was what we called in Europe the “sanatorium”: fresh, high-altitude air and good nutrition to help “natural immunity” fight off the infection, together with strict law enforcement against spitting in public areas. The arrival of streptomycin and the development of an attenuated vaccine called “bacille de Calmette-Guérin” (hence the acronym BCG) resulted in a huge decrease in cases of tuberculosis worldwide, leading to the closure of sanatoria all over the world.
The BCG was a very popular vaccine, and according to my French immunization record, my last shot dates from 1985, with a BCG test performed in 1993. This test, as seen in the picture, consists of an intradermal injection of tuberculin, a protein preparation from M. tuberculosis (virulent and BCG strains alike). These surface proteins act as a “passport” for our immune system. If the immune system decides your passport is not valid, it pulls the foreign agent aside, as if you had failed to present valid documents at a US Customs and Border Protection booth. Its job is to kill and destroy anything that is not holding the right “passport”. This is where vaccines are important: they act as harmless fake intruders and train your immune system to recognize nasty agents. Thus, if a real threat appears, the immune system is able to immediately mount an appropriate response and neutralize it, and it also stores the useful information so that newer generations of “border agents” recognize the threat. These are the memory B-cells and T-cells, which persist for varying lengths of time and are the direct output of the vaccination process.
That brings me back to my injection: my body had not been in contact with this germ or these proteins for a good twenty years, and had never been reminded of them. I got the test injected, and it took just 24 hours for a massive inflammation (a hot, red, swollen and painful spot) to show that my whole immune system had gone on full alert and identified a threat, twenty years after encountering the agent.

I have never had the chance to see a sanatorium in my life, and that tells us something about the impact of vaccines on public health. If such medical institutions and other diseases (like measles, mumps, rubella, whooping cough, polio, shingles…) appear old or harmless, it is because vaccines have done a tremendous job of protecting us against these diseases. Do we still hear about sanatoria or iron lungs? No more, because vaccines worked, and still work, to protect us. And because when everybody is vaccinated, the germs have no safe house in which to propagate and spread; this is what we call “herd immunity”. By protecting ourselves, we also protect those who cannot be protected, because they are too young to get their first shot or because their immune system is compromised for various reasons.
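The herd-immunity idea above can be made concrete with the classic epidemiological threshold 1 - 1/R0 (a textbook approximation; the R0 values below are commonly cited ballpark figures, not data from this post):

```python
# Classic herd-immunity threshold sketch: a germ with basic reproduction
# number R0 stops spreading once more than 1 - 1/R0 of the population is
# immune. R0 values are commonly cited ballpark figures.

def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

for disease, r0 in [("measles", 15.0), ("polio", 6.0), ("influenza", 1.5)]:
    t = herd_immunity_threshold(r0)
    print(f"{disease:10s} R0={r0:>4}: need >{t:.0%} of people immune")
# Measles' very high R0 is why even a small drop in vaccination coverage
# is enough to let outbreaks return.
```

This is why the plummeting coverage discussed below matters: for a highly contagious germ like measles, there is very little margin before the "safe houses" reappear.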
However, the recent anti-vaxxer movement has been spearheaded by a number of celebrities holding no credentials in biomedical sciences or public health, known mostly for their ability to talk out of their gluteal muscles or for a “ho-la-la” career, who associate vaccines with death, autism and other conditions. This has of course resulted in the recent increase in cases of measles and other diseases we considered eradicated (but which were just hiding, kept at bay). It shows that deliberately breaching our protection and our “herd immunity” on the basis of fallacious and pseudoscientific claims can have rapid and damaging effects on global health, including several fatalities in various parts of the world. You have to sit down and think about the fact that some states, like Oregon and Washington, have seen vaccination rates plummet so much that they fell below those of some developing countries.
Vaccines work; vaccines keep us safe from diseases that were so devastating 50 years ago that the vaccines became victims of their own success, wiping those diseases from our collective memory. I love my vaccines, and I remind everyone to check their immunization records and make sure they are up to date. And foremost, vaccinate your damn kids; they will say thank you later in life!

IMG_0656