[Sciences/Junk Sciences] Let’s talk about Ashley Everly and her “Vaccine Guide” binder full of… A focused review of the “Ingredients” section.

One feature of the AV community is its ability to illustrate the Dunning-Kruger effect with real-life examples. If you are not familiar with it, the Dunning-Kruger effect is a cognitive bias first reported by David Dunning and Justin Kruger, in which a person overestimates his or her knowledge of a topic while failing to master it, often living in a belief of superiority, in particular when challenging experts.
The average person under the Dunning-Kruger (DK) effect will make extraordinary claims while denigrating the credentials of experts, often arrogantly claiming to know more than a seasoned expert who spent his or her entire education and professional career on the topic.
Most of the time (>99%), AV “experts” use and abuse credentials they don’t have: AVers claiming to be medical doctors when they hold a doctorate in chiropractic or naturopathy, or claiming to be scientists while harboring at best a Bachelor’s degree. Among them is Ashley Everly.

1. Who is Ashley Everly?

Ashley loves to depict herself as a “toxicologist” and loves to parade on social media as such. However, if you pay a bit more attention to her LinkedIn profile, you will notice that she has a Bachelor’s degree in Environmental Toxicology from UC Davis and an internship at the California EPA from 2007 to 2008. Then an 8-year gap, after which she suddenly poses as a toxicologist for “Health Freedom Idaho”, a notorious anti-vaccine group.
Let me be clear. To claim the title of “toxicologist”, you need at minimum a doctoral degree in toxicology from an accredited institution, a record of publications in peer-reviewed journals that has passed the scrutiny of your peers, and ideally a faculty position in that specialty.
At best, with a BSc you can claim the title of “lab technician”, but nothing more. It also tells us the low standards “Health Freedom Idaho” holds, and the quality of the science they must be using.

[Screenshot of Ashley Everly’s LinkedIn profile]

Ashley loves to parade in the AV community as an expert, like a peacock during mating season, but once facing real experts she quickly caves in and goes into hiding. Let me show you one of my encounters with Ashley on Facebook a couple of months ago:

[Screenshot of my Facebook exchange with Ashley Everly]
The PNAS study in question is this one (http://www.pnas.org/content/111/2/787?fbclid=IwAR1cwoO3monJZbDGvITilGMYBh7VRUFDs2lzyAexEaljdi6_69vhlGYDDpM) and, as of today, Ashley has never been seen discussing that topic again. She quickly flounced and disappeared. So much for a “toxicologist” who refuses to address criticism from a peer and is completely clueless at interpreting a study she herself brought up.
Ashley, if you read this post, I hope these last four months were enough for you to read and comprehend the PNAS paper and answer my question.

2. Vaccine Guide: Ingredients/Excipients/Contaminants

You will find below my rebuttal to the whole section, with the exception of reviews (a review is a collection of studies picked from the literature at the discretion of the author; I treat them as opinion pieces, and I don’t discuss opinions. I discuss data and their interpretation). I maintained the section structure and went through most of the “studies” present in her binder as published in May.

2.1. Aluminum Adjuvants:

Aluminum involvement in neurotoxicity
This study was published in a potentially predatory journal (Hindawi), and you can feel the low quality of the work from the single figure that makes up the paper (Figure 1).
The authors’ main hypothesis is that aluminum is associated with neurological diseases, and they claim that chelation therapy (an IV infusion of EDTA) can help the patients.
The subjects are divided into several categories: healthy controls; neurodegenerative diseases (ND), including multiple sclerosis (MS, mislabeled as SM, an error that should have been caught by reviewers but went through), Alzheimer’s disease (AD), Lou Gehrig’s disease/amyotrophic lateral sclerosis (ALS) and Parkinson’s disease (PD); and non-ND conditions (e.g. a fibromyalgia group).
Here is the first problem: why did the authors not provide blood/plasma levels of Al in these patients? Blood levels would be a much better biomarker than urine, because they would indicate the systemic concentration of aluminum and show whether ND patients carry an abnormal Al load by default.
The second problem is the use of urine. It is interesting, but not as good as plasma samples, especially because urine Al concentration varies strongly with kidney function. Do these diseases affect kidney function? I don’t know, but showing an absence of difference in creatinine clearance between the patient groups would have really helped.
Third is the chelation method. By applying chelation therapy, the authors force Al out of the patients’ bodies, a process that can be highly variable and is an unreliable readout for assessing heavy-metal poisoning. Again, I repeat: blood sampling is the gold standard.
The fourth problem is pooling different NDs together to try to squeeze out some statistics: that’s comparing apples to oranges. And I am not even mentioning the error bars, which are SEM instead of SD. SEM equals SD divided by the square root of the sample size, and on its own it has little meaning. SD is the more informative parameter, as it lets the reader assess the statistics (and verify whether the P-value calculated by the authors is correct). Ideally, you would show the urine Al values for each individual as a dot plot.
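To make the SD/SEM point concrete, here is a minimal sketch with made-up urine Al values (purely illustrative, not from the study):

```python
import math

def sd_and_sem(values):
    """Sample standard deviation and standard error of the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    sem = sd / math.sqrt(n)  # SEM = SD / sqrt(n): shrinks as n grows
    return sd, sem

# Hypothetical urine Al values (arbitrary units), NOT data from the paper
urine_al = [12.0, 15.0, 9.0, 14.0, 11.0, 13.0]
sd, sem = sd_and_sem(urine_al)
print(f"SD = {sd:.2f}, SEM = {sem:.2f}")
```

Error bars drawn as mean ± SEM will always look tighter than mean ± SD by a factor of √n, which is why SD (or, better, individual dot plots) is the honest way to display spread.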
Honestly, a journal that publishes studies built on a single figure raises an important red flag about the quality of the peer review performed, and the major methodological flaws here are unacceptable.

Contact allergy to aluminum induced by commonly used pediatric vaccines
It is interesting to see a comment taken as a “gotcha” moment by Ashley. Yes, the authors reported a series of granulomas in Swedish children, but note how Ashley highlighted only the pieces that fit her narrative, conveniently ignoring the authors’ final words:
“We want to point out that itching granulomas are benign and self-limiting and no cause to refrain from vaccination in consideration of the risk for a serious infectious disease. They are poorly known but easy to recognize once you are aware of them. They should be familiar to all health care staff working with children to avoid mistrust and anxiety in the parents and unnecessary investigations of the child.” The same authors attribute the granulomas to contact allergy, which seems to improve over time as reported in their 2018 paper (https://www.ncbi.nlm.nih.gov/pubmed/29572857).
In conclusion: yes, granulomas can happen in some vaccinated children; they are mostly due to a contact allergy to aluminum and cause only minor effects (itching) that resolve over the years.

Aluminum hydroxide lead to motor deficits and motor neuron degeneration
Introduction: The paper’s central hypothesis addresses the possible cause of Gulf War Syndrome (GWS), a condition described in veterans of the first Gulf War (1990-1991) and characterized by various neurological symptoms. For some reason, Shaw suspects a correlation between GWS and ALS, as some veterans diagnosed with GWS also developed ALS. Shaw speculates that the anthrax vaccine (AVA) given to soldiers during the Gulf War (in anticipation of the use of anthrax-laden weaponry by Saddam Hussein) is the causative agent, and goes as far as pinning the blame on the aluminum adjuvants in vaccines.
Using existing literature documenting high aluminum (Al) levels in mice treated at elephant doses (Table 1), and dubiously claiming these mice showed signs of ALS (spoiler: the cited work just reported Al levels in tissues, with no ALS phenotype), Shaw makes big stretches to claim that the anthrax vaccine caused all of this.
Methods: Shaw used CD-1 male mice, 3 months (about 12 weeks) old. These are young adults, matching the demographics of the military. Sex differences can be discussed, but I would speculate that men made up the large majority of those serving at the time.
The number of animals is about right for a fair statistical power (if you assume a 30% or greater difference following treatment, with 10-15% variability, an optimal number would be around 12 per group). Note there are four groups: saline (PBS), squalene, aluminum hydroxide (AH) and AH+squalene.
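That back-of-the-envelope can be sanity-checked with a quick Monte-Carlo power simulation. This is my own sketch (not anything from the paper), assuming a 30% treatment effect, 15% variability relative to the control mean, n = 12 per group, and a two-sided t-test at roughly alpha = 0.05:

```python
import random

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)

def estimated_power(n=12, effect=0.30, sd=0.15, trials=2000, seed=1):
    """Fraction of simulated experiments where |t| > ~2.07 (alpha ≈ 0.05, df ≈ 22)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ctrl = [rng.gauss(1.0, sd) for _ in range(n)]
        treat = [rng.gauss(1.0 + effect, sd) for _ in range(n)]
        if abs(welch_t(ctrl, treat)) > 2.07:
            hits += 1
    return hits / trials

print(estimated_power())  # comfortably above the conventional 0.8 target
```

Under these assumptions the design is actually generously powered, which supports the point that the group size is not the problem with this study.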
The concerning part is that Shaw mentions the following: “The current study will report only on the aluminum treated and control groups from this experimental series.” Why mention the squalene and AH+squalene groups and then not show the data? Are you hiding something your study showed that contradicts your hypothesis? How did reviewers let that fly in the first place? That’s already a big red flag that went uncaught.
The Al dosing seems adequate, but here is the problem: “In Experiment 1, we performed two injections of a suspension of aluminum hydroxide of (50 μg/kg) in a total volume of 200 μL sterile PBS (0.9%) spaced 2 weeks apart. The mice in this experiment would therefore have received 100 μg/kg versus a probable 68 μg/kg in humans. In Experiment 2, mice received six injections for a total of 300 μg/kg aluminum hydroxide over 2 weeks. Controls in both studies were injected with 200 μL PBS.” So: two injections within two weeks, or six injections within two weeks. What does the military schedule say? Five doses at 0, 4 weeks, 24 weeks (6 months), 52 weeks (1 year) and 76 weeks (18 months), according to this site: https://www.health.mil/Military-Health-Topics/Health-Readiness/Immunization-Healthcare/Vaccine-Preventable-Diseases/Anthrax.
Remember, we are just talking about the anthrax vaccine here. Under what rationale does Shaw consider the fate of aluminum in a mouse body to be so different from a human body? Considering that Al is mostly cleared by the renal route, and that mice physiologically have a lower renal clearance (based on creatinine clearance) than humans, what we have here is basically a deliberate overdosing of mice with aluminum through a completely irrational immunization schedule. He squeezed an injection schedule that spans a year and a half (76 weeks) into a two-week period. A major and unacceptable methodological flaw, and one that was repeated in the sheep studies by Lujan (the studies claiming vaccines caused ASIA in sheep).
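To put a number on that compression, here is a small sketch comparing the average dosing interval of the military AVA schedule quoted above with the mouse experiment (the exact day-spacing of the six-injection experiment is my assumption; the paper only says six injections over two weeks):

```python
def mean_interval(times):
    """Average gap between consecutive dosing times."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

ava_days = [w * 7 for w in [0, 4, 24, 52, 76]]   # human AVA schedule, in days
mouse_exp2 = [0, 2, 4, 7, 9, 11]                 # assumed spacing: 6 doses in 2 weeks

human_gap = mean_interval(ava_days)    # 133 days between doses on average
mouse_gap = mean_interval(mouse_exp2)  # 2.2 days between doses on average
print(human_gap / mouse_gap)           # the schedule is compressed roughly 60-fold
```

Whatever the exact day-spacing, packing doses dozens of times closer together than the human schedule is a pharmacokinetically different experiment.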
How did you miss that, Glenn? You had 24 hours to read that paper! And you were telling us PV how to design an experiment? This experimental design is deeply flawed, yet it flew right under your nose.
I will grant you that it also flew under the nose of the reviewers, but I speculate the reviewers were either friendly to Shaw or had no expertise to review the paper at all.
Results: Let the fun begin with Figure 1. Shaw is looking for signs of neuronal damage and spinal cord injury in his animals by immunocytochemistry: you stain tissue sections with antibodies. In ALS, motoneurons (the neurons controlling skeletal muscles) are the ones affected by the disease and die. Motoneurons are localized in the ventral portion of the spinal cord, in a region called the ventral horn. The motoneurons’ neurotransmitter of choice is acetylcholine, so you can label them with antibodies targeting an enzyme involved in acetylcholine biosynthesis (choline acetyltransferase, or ChAT). Aside from neurons, you have glial cells that support them. The major type is the astrocyte. When astrocytes are injured, or get wind of an infection or injury, they become activated, and one protein indicative of astrocyte activation is glial fibrillary acidic protein (GFAP). Under physiological conditions GFAP is barely detectable; under injury (in my case, stroke injury), GFAP levels go up, in particular in the injured region. Iba-1 is a marker for activated microglia. Microglia are the resident immune cells of the brain, sitting ducks waiting for an injury to happen. When an injury happens, microglial cells go berserk, release tons of pro-inflammatory cytokines and press the RED ALERT button, activating both the astrocytes and the BBB to let immune cells into the brain and induce neuroinflammation.
The first flaw here is that Shaw never shows any representative tissue staining to let us check his bar graphs against the actual staining. That was unacceptable in 2009, and it is still unacceptable in 2019. He also omitted to show all three spinal cord regions (cervical, thoracic and lumbar) for ChAT, GFAP and Iba-1. Indeed, Shaw cherry-picks both the data he shows and the statistics.
For ChAT-positive neurons in the cervical SC, the aluminum group sits about 50-60% higher, yet Shaw conveniently brushes the statistics under the rug. How come he grants us statistical significance (P<0.05, the * symbol) in 1C but not in 1B, despite a similar difference and even smaller error bars? And what about ChAT in the lumbar section?
That is not the only contradiction: how does Shaw explain having more reactive astrocytes in the cervical region but fewer in the thoracic region? And what about GFAP in the lumbar section?
Finally, the same applies to Iba-1. Considering the huge variability observed between regions for ChAT and GFAP, how much credibility does the Iba-1 lumbar section carry? I bet the cervical and thoracic regions showed the opposite pattern and he brushed them under the rug. How convenient it must be to brush off data that do not align with your hypothesis and show only the data that do, don’t you think? That’s called cherry-picking, and that’s a no-no in science.
Figure 2: Shaw tries to show us aluminum in the spinal cord using morin staining. Two things here. First, Shaw shows us a spinal cord staining from control mice with clearly high autofluorescence, yet that does not stop him from claiming there is no staining. Second, he shows us blobs of immunoreactivity. The problem is: how do you sort real staining from garbage? The answer is that you need a counterstain, in particular a nuclear counterstain such as DAPI (which stains DNA), to see whether these blobs are parts of cells and not just debris. You can also make things up by tuning the exposure time. This, together with the lousy quantification (how did he identify cells without DAPI? What is the sampled area in μm² or mm²?), is already sinking this paper’s boat. We are two figures in and have yet to find any nuggets in this garbage dumpster.
Figure 3: He tries to sell us a spinal cord section from his aluminum mice versus a cortical slice (entorhinal cortex) from a human suffering from Alzheimer’s, to claim that aluminum induces Tau hyperphosphorylation. What is that garbage? First, Tau hyperphosphorylation is not associated with ALS; second, Alzheimer’s is not ALS; third, brain cortex is not spinal cord; and finally, humans are not mice. And that isolated immunoreactive cell amid a desert of empty staining screams that he cherry-picked a region of interest to show one single positive cell among thousands of negative ones. What am I supposed to do with that?
Here is what I would have shown: control versus aluminum, spinal cord sections at all three levels (cervical, thoracic and lumbar), at low and high magnification, to show the depletion of motoneurons and possibly the Tau hyperphosphorylation. I would also have shown a Nissl staining to exclude the presence of amyloid plaques and neurofibrillary tangles in the spinal cord versus the human cortex sample, to establish that this Tau hyperphosphorylation was not associated with Alzheimer’s.
Figure 4: This is a statistical joke; you must be kidding me, right? Telling me he got P<0.0001 with SEM bars that big is either a straight-out lie, or he found the one obscure statistical test that makes it so. You cannot run the statistics on the whole dataset and conclude from that; you have to run the test at each timepoint between the groups. Also, why use a two-way ANOVA when there are only two groups? That tells me he is as meticulous in his stats as in his experimental design. The only place where I would consider something to be happening (don’t forget we are overdosing these mice) is Fig. 4G and H.
Figure 5: Finally, a water maze test to assess the mice’s ability to memorize. The idea is that mice are placed in a pool containing a platform they can climb onto for a firm surface to stand on. If you let them swim every day, they get better over time at finding the platform faster. The trick is that the water is made opaque, so they have to remember where in the room and in the pool the platform is located. Also, here Shaw reminds us of something. Remember this?
“In Experiment 1, we performed two injections of a suspension of aluminum hydroxide of (50 μg/kg) in a total volume of 200 μL sterile PBS (0.9%) spaced 2 weeks apart. The mice in this experiment would therefore have received 100 μg/kg versus a probable 68 μg/kg in humans. In Experiment 2, mice received six injections for a total of 300 μg/kg aluminum hydroxide over 2 weeks. Controls in both studies were injected with 200 μL PBS.”
Here Shaw tells us “these are the 6x mice”. What about the 2x mice? What about the previous figures; which experiment were they from, 2x or 6x? Did he pool the 2x and 6x mice into one aluminum group? Sloppy as hell. And you have the audacity to tell me this study is still worth something? Not even a kopeck!

Administration of aluminum to neonatal mice in vaccine-relevant amounts associated with adverse long-term neurological outcomes
Note: Chris Shaw must have a knack for publishing his papers in JIB; it is almost as if the editor-in-chief were complacent about his botched studies.
Introduction: Shaw continues with his hypothesis that aluminum in vaccines is bad and must cause neurotoxic effects in infants’ brains; in particular, he considers that aluminum in vaccines is associated with ASD, self-citing his study also published in JIB in 2011. The aim of this study is therefore to demonstrate that the current immunization schedule is bad because it induces an aluminum overload.
Methodology: Here is the big flaw of the paper, one that would be used and reused by other groups later. Shaw assumes that a newborn mouse (P0) is equivalent to a human newborn. It is not: the literature indicates that a P0 mouse corresponds to roughly a 7-month-gestation fetus, and a human newborn to about a P7 (7-day-old) mouse. Considering that mice reach puberty at about 5-6 weeks of age, Shaw designed an experiment injecting mice every 2-3 days for 10 days, with a final injection at P17. Noteworthy is the apparent shift in the injection schedule of the saline group, which never matches the US schedule (referred to as high Al) or the Scandinavian one (referred to as low Al). This is unacceptable: when you design an experiment, you want your control group aligned with your treatment groups, to remove the maximum number of variables.
The second problem is pharmacokinetic. We can argue that the 17-day schedule may be reflective of a “0-2 years” schedule in humans. But has Shaw (or anyone else) validated that this rapid schedule respects the pharmacokinetic profile of aluminum adjuvants? He has not. Nobody has. That’s a huge mistake. The fate of aluminum adjuvants, based on the Flarend data, follows two phases: a rapid but mild concentration peak (about 2% of the injected dose) within 24 hours of IM injection, followed by a progressive decrease at an estimated rate of 0.6%/day. On a human schedule (2-month intervals), this allows most of the aluminum to be cleared between doses. On a mouse schedule (2-day intervals), you are basically overdosing the mice with cumulative doses of aluminum and will logically reach toxic levels (aluminum is neurotoxic once blood levels reach roughly 3x the normal level). Shaw never bothered to show Al levels in blood/plasma at day 17 to check whether his schedule made sense.
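A toy one-compartment model built on those Flarend-derived numbers (about 2% of each dose absorbed, elimination treated here as first-order at about 0.6%/day) illustrates why a compressed schedule inflates the cumulative burden. The schedules and parameters below are my own illustrative assumptions, not figures from Shaw’s paper:

```python
def residual_burden(injection_days, dose=1.0, horizon=None,
                    absorbed_frac=0.02, daily_elim=0.006):
    """Absorbed burden remaining at `horizon` days, in arbitrary dose units."""
    if horizon is None:
        horizon = max(injection_days) + 1
    burden = 0.0
    for day in range(horizon + 1):
        burden *= 1.0 - daily_elim          # first-order daily elimination
        if day in injection_days:
            burden += absorbed_frac * dose  # ~2% of the dose reaches the blood
    return burden

# Six doses crammed into two weeks (mouse-style) vs the same six doses spaced
# two months apart (human-style), both read out 14 days after the last dose
crammed = residual_burden([0, 2, 4, 7, 9, 11], horizon=25)
spaced = residual_burden([0, 60, 120, 180, 240, 300], horizon=314)
print(crammed, spaced)  # the crammed schedule leaves a clearly higher burden
```

Under these assumptions, the crammed schedule stacks each dose on top of nearly all of the previous ones, which is exactly the overdosing argument made above.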
Results: Figure 1 shows the weights. A decrease in weight is usually a sign of mouse distress. Again, Shaw shows us mean+SEM, making the statistics impossible to assess unless you take him at his word. Interestingly enough, the high-Al group showed a higher weight gain than controls and low Al. I don’t know what to make of it, but maybe the heavy Al load from his botched overdosing schedule is responsible.
Figures 2 and 3 show different behavioral tests, so I will not go into much detail. The high-aluminum group shows the worst outcome at 22 weeks, but surprisingly only in males. Why not females? I would not expect a difference in PK parameters between the sexes, but maybe gonadal hormones provide some neuroprotection in females.
No memory test (swim test, Morris water maze), no social behavior test, no novelty test. These behavioral tests are important and surely more relevant than the ones shown, especially given the hypothesis “aluminum in vaccines causes autism in mice”. Did Shaw skip them out of negligence, or did he hide the results in a drawer because they did not fit the narrative?
That’s it for the results. No assessment of aluminum load in the brains, no cresyl violet staining to show structural brain abnormalities, no immunostaining to show any damage or neuroinflammation. Shaw gets a lofty free pass at JIB and uses it conveniently.
Discussion and acknowledgments: It seems Shaw is not at all embarrassed to oversell the data, in particular as his main benefactor (the Dwoskin foundation,

Biopersistence and brain translocation of aluminum adjuvants of vaccines
This is a review from Romain Gherardi. The problem with a review is similar to an opinion piece: you can make your case with basically anything that fits your narrative, and you often end up citing your own papers. But that does not mean Gherardi is doing great science, and I would gladly debunk one of his actual studies anytime.

Aluminum in childhood vaccines is unsafe
This is a so-called study written by Neil Z. Miller, and a good example of why assessing someone’s credentials is important. Scroll to the end of the paper and read the following:
“Neil Z. Miller is a medical research journalist. Contact:neilzmiller@gmail.com.”
He has no credentials as a scientist, no affiliation with a research institute (on the rare occasions he gives an address, he points to his PO Box in Santa Fe, NM), and absolutely no knowledge of what he is talking about (although he claimed aliens talked to him and told him vaccines are bad). All of this published in JPANDS, the journal of the Association of American Physicians and Surgeons. This is a political (right-wing) organization that is anti-Medicare and anti-government (they give tips to medical practitioners on dodging taxes and reducing financial tracking), with strong anti-abortion and anti-LGBT stances, often dipping into conspiracy theories (HIV denialism being one of them). Just Google Jane Orient (former president of the AAPS) and you will see the nebula of conspiracy theories she promotes.
Neil tries to sell the idea that aluminum is the new mercury, claiming levels went up as thimerosal was phased out. In Figure 2, Miller gives the middle finger to basic pharmacokinetics by claiming that an 18-month-old toddler holds almost 5,000 µg of aluminum in his body from vaccines. The problem is that Neil omitted that most of this amount is gone by that time, omitted a physiological function commonly referred to as “pee-pee” (the renal route represents about 95% of the exit route for aluminum), and also omitted the exposure from food and water during that period. That sounds like an awful lot of cherry-picking that only garbage journals would accept as valid papers. Of course, Neil cites the classic tropes and debunked studies from Shaw, Gherardi and Exley.
To make a valid point, someone would have to show that the current CDC schedule 1) significantly increases Al blood levels during the first 24 hours after injection, and 2) translates into a cumulative increase in infants’ blood levels. Neither of these issues is ever addressed by Neil, because Neil does not have the scientific baggage to answer these questions, and he conveniently ignores the literature that does not fit his misconceptions.

2.2. Aborted Fetal Cell DNA
MRC-5 and WI-38 fibroblast ATCC technical data sheets
These are the technical data sheets for these two cell lines. ATCC stands for the American Type Culture Collection. It is a cell and tissue repository, a sort of giant cell culture catalog: if you are looking for a particular cell line and ATCC has it, they will provide you the cells (for a fee) so you can run your experiments. You can passage cells, in particular immortalized cells, as many times as the cells will tolerate. Some lines like HeLa, as well as these two cell lines, have easily gone through over 1000 passages; that is like 1000 generations of cells since the original isolation.
Ashley’s stupidity in plain sight:
“MRC-5 cell line………by J.P. Jacobs in September of 1966”. In other words, MRC-5 was created in 1966, and the same cell line has been used to make vaccines ever since. That should torpedo the claim that “they use aborted babies to make vaccines”. They used the same cells, isolated from one aborted fetus sometime in 1965-1966, to produce billions of vaccine doses. One fetus.
The WI-38 line comes from the same period, used by Plotkin in 1965. This is the original link to the study: https://jamanetwork.com/journals/jamapediatrics/fullarticle/501537. So what is Ashley trying to accomplish here? Stir the “aborted babies in vaccines” pot to scare people? These cell lines come from two fetuses; they are over 50 years old and have been passaged at least 1000 times since their isolation. They are a far cry from the original material, yet they have helped save the lives of billions of babies worldwide. As Spock would say, “The needs of the many outweigh the needs of the few.”

Spontaneous Integration of human DNA fragments into Host Genome.
That’s a poster authored by Deisher. First, it is a poster; in other words, it has little or no value as a publication. It is nice, but useless until it reaches its final form as a paper in a peer-reviewed journal. A poster is never peer-reviewed; at best the abstract is reviewed by a scientist affiliated with the society holding the meeting (for instance, I regularly review poster abstracts for the American Heart/Stroke Association’s International Stroke Conference), who gives a “yea/nay” for presentation. A poster is usually the easiest form of communication to get accepted; it is rare to get rejected unless your abstract is poorly written or you present something outside the meeting’s field of interest.
Ashley also did a very poor job of cropping the poster, making it difficult to read.
First, the author: Theresa Deisher. Deisher seems to have some experience in cardiac cell biology according to her PubMed profile, but her last relevant paper dates from 2010.
https://www.ncbi.nlm.nih.gov/pubmed/?term=Deisher%20TA%5BAuthor%5D&cauthor=true&cauthor_uid=26103708
Deisher is back in print with three articles, all in Issues in Law & Medicine. That does not sound like a journal where you would publish biomedical science, right? It is also unlikely to have the right experts to peer-review such studies.
The second issue is that Deisher is not affiliated with any public institution. She is affiliated with the Sound Choice Pharmaceutical Institute, located in Seattle, WA. Using the word “institute” is a common strategy of the anti-science crowd to pose as legitimate, but these institutes are often no more than a basement in a suburb or a hut in the mountains. If you are unsure about an institute, just Google-Map it. Here is what this so-called institute looks like:

[Google Maps screenshot of the Sound Choice Pharmaceutical Institute address]

Does it look like the Broad Institute (which I would put at the top of the molecular biology institutes)? https://www.broadinstitute.org

In this “study”, Deisher took some placental DNA (Cot-1), labelled it with a fluorescent dye (Cy3, which fluoresces in the red range upon excitation by a light source).
She lists a few cell lines that took up the dye (3 out of 7) in her Table 2, with some dubious results. First, she does not show all the data (in terms of genomic DNA incorporation), and she has not validated the DNA-uptake method: is she relying on fluorescence alone to determine incorporation? Did she use PCR analysis or DNA sequencing to show the incorporation of that genomic DNA? No, none of these.
The rest is even more laughable if we move to the figures. If you want to show incorporation, you have to show each fluorescence channel on its own, alongside a negative control.
A negative control is important, so people will accept that your fluorescent signal is real and not background noise. A negative control should be either your untreated cells (because some cells have autofluorescence) or cells treated with free fluorescent dye (to rule out non-specific uptake of free dye by the cells, which can produce false positives).
More importantly, you have to show the fluorescence of each channel separately before showing a merged picture (for example DAPI, Cy3 and brightfield separately, then the merged picture).
Figure 1: What am I supposed to see here? Why is there a blue color in brightfield? How am I supposed to conclude that the red blobs I can barely see are genuine signal and not background noise?
Figure 2: That’s the most laughable picture. Here Deisher treated the cells with saponin. Saponin drills holes into your cells and lets anything in. We use saponin to show the internalization of proteins normally located on the cell surface (it allows antibodies to enter the cells and detect those proteins inside). Deisher forced holes into the cells, treated them with her DNA fragments and, lo and behold, sees some red signal. No shit, Sherlock!
In other figures she used LPS instead, of course with no controls. LPS is a lipopolysaccharide found in Gram-negative bacteria. When these bacteria burst during an immune response, LPS acts like a nuclear bomb in the body, inducing the massive immune response commonly seen in the clinic as septic shock.
So what am I supposed to do with data showing DNA uptake in cells that were nuked by LPS, when I have no idea whether such uptake even occurs in untreated cells? And why not use saponin here as well?
Garbage in, garbage out. What you call a disgrace, I call common practice in anti-science papers: produce garbage, sell garbage as science. If I were the M.J. Murdock foundation, I would ask for some explanation of why my grant money was spent on useless junk science.

Characteristics of a new human diploid cell line Walvax2 and its suitability as a candidate for vaccine production.
Ashley, based on her markups, seems to never read anything more than the title and the abstract of a paper. Past the introduction, and you will not find her yellow marker. So much for a “toxicologist” eh? Walvax-2 is a new cell line developed by Chinese scientists and assessing it as a possible alternative to the MRC-5 and WI-38 cell lines. Walvax-2 was isolated from an aborted fetus that was medically aborted. The mother had a uterine scar from a previous C-section. A uterine scar can break during pregnancy and can kill the mother (and the fetus) if ruptured. It was not a complacent abortion; it was a medical abortion and the chance of having this fetus to become a full-term newborn was almost null. Maybe Ashley should go back to school and learn some A&P courses before playing the scare and ignored the uterine scar.
For the lay person, this paper is not that interesting. It is mostly validation and characterization of the cell line at different passages (P6, P14 and P20), benchmarking it against MRC-5 in its ability to be infected with known pathogens.
Yet there is still a long way to go for Walvax-2, as the authors mention: “Nevertheless, more research needs to be done to investigate the susceptibility of Walvax-2 cells to a greater variety of viruses, and to develop fully the potential of Walvax-2 cells as a cell substrate platform for producing viral vaccines for human use in China” (p. 1005).
Those who claim this cell line is already in use in China and producing vaccines anywhere in the world are lying, and they should be called out for spreading nonsense.

Effect of thimerosal, methylmercury and mercuric chloride on Jurkat T Cell Line
This is a paper that looked at the toxicity of different forms of mercury (EtHg, MeHg and HgCl2) in a human T cell line, the Jurkat cell line. Cells were treated at different concentrations for 48 hours. Let’s assume these treatment concentrations are meant to mimic the plasma concentration of mercury in the blood.
First, how much mercury do we receive from a vaccine injection? The closest thing I could come up with is the Burbacher study (Env Health Persp. 2005), which investigated the effect of multiple injections of EtHg versus MeHg (the latter given orally) at 20 microg/kg, four injections spaced one week apart (so it was really aimed at overdosing the baby monkeys). The amount of mercury was chosen to represent the amount found in a vaccine. At the fourth week of injection, the Hg level coming from thimerosal was 16 ng/mL (MeHg via the oral route peaked at 32-35 ng/mL). That’s 16 microg/L. The molecular weight of elemental mercury being about 200 g/mol, that puts us at about 0.08 micromoles/L (0.08 microM).
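The conversion above can be checked with a quick back-of-the-envelope calculation (a sketch; 200.59 g/mol is the standard atomic mass of mercury, slightly more precise than the rounded 200 g/mol used in the text):

```python
# Convert a plasma mercury concentration from ng/mL to micromolar.
HG_MOLAR_MASS = 200.59  # g/mol, standard atomic mass of mercury

def ng_per_ml_to_micromolar(conc_ng_per_ml: float) -> float:
    """ng/mL equals microgram/L; dividing by molar mass (g/mol) gives micromol/L."""
    return conc_ng_per_ml / HG_MOLAR_MASS

# Peak plasma Hg after the 4th thimerosal injection in the Burbacher study:
print(round(ng_per_ml_to_micromolar(16), 3))  # 0.08 microM
```

The same function works for any of the plasma values quoted here (e.g. the 32-35 ng/mL MeHg peak converts to roughly 0.16-0.17 microM).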

[Image: Vaccine_Guide_Ingredients]

It did not seem to bother Ashley at all that the 50 microM she cherry-picked (why not the 1 microM? It reached statistical significance too, with P<0.05) is not a concentration reached even in a vaccine overdose scenario (nobody gets one injection per week for four weeks). We are talking about 312x the maximum amount observed in the overdose scenario. So why should I care about toxicity at a concentration never reached in the body under any condition? Worse, Ashley, despite her claims of being a toxicologist, completely ignored the nature of methylmercury. Unlike thimerosal/ethylmercury, methylmercury has the nefarious feature of bioaccumulating and staying longer in the body. If you look at the Burbacher study, it takes about three times longer to clear MeHg than thimerosal. Yes, methylmercury may be less harmful in acute exposure, but its effects are much worse under prolonged exposure.

Low-dose mercury exposure in early life: relevance of thimerosal to fetuses, newborns and infants
Ashley does not bother to read papers; she will take any abstract aligning with her agenda as valid and claim she read the paper. This is a paper from Dorea. This guy is not known for doing sound science, and he is pretty goofy and even dishonest in how he sells his findings. I will skip this one, as it is a review.

Thimerosal induces DNA breaks and cell death in cultured human neurons.
This paper looks at the effect of thimerosal at different concentrations on DNA breaks and cell death in both fibroblasts and neurons. Again, Ashley took it, read up to the intro and ran away with it.
Figure 1: They exposed neurons to different concentrations for 2, 4 and 6 hours. Since these are neurons, we have to see how much would reach the brain. This brings us back to the Burbacher paper, which measured mercury clearance in the baby monkeys. By the end of the 4-week regimen, Hg levels in the brains of the thimerosal-treated group were 40 ng/g of brain tissue. If we assume the brain is 100% water, that equals 40 microg/L, or 0.2 microM. What is the minimum concentration used in this study? 2 microM, 10x the concentration you would have in the brain if you overdosed on vaccines.
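The brain-tissue conversion can be sketched the same way (a sketch; the tissue density of ~1 g/mL is the text's own "brain is 100% water" shortcut, and 200.59 g/mol is mercury's standard atomic mass):

```python
# Convert a brain mercury burden (ng Hg per g of tissue) to micromolar,
# assuming a tissue density of ~1 g/mL, i.e. ng/g ~= ng/mL = microgram/L.
HG_MOLAR_MASS = 200.59  # g/mol

def brain_ng_per_g_to_micromolar(conc_ng_per_g: float) -> float:
    # microgram/L divided by g/mol gives micromol/L
    return conc_ng_per_g / HG_MOLAR_MASS

brain = brain_ng_per_g_to_micromolar(40)  # ~0.2 microM measured in brain tissue
fold = 2 / brain                          # lowest in-vitro dose (2 microM) vs brain level
print(round(brain, 1), round(fold))       # ~0.2 and ~10-fold
```

This reproduces the 10x gap between the lowest concentration tested in the paper and the brain level reported after the overdose regimen.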
They see an increase in DAPI-positive nuclei at 6 hours at 2 microM (DAPI is a compound that intercalates into DNA and fluoresces blue; it only enters cells that are dead). You would assume this to be problematic, but at the same time we have no idea how many cells (as counted by their nuclei) were in the field. I would have expected the authors to take pictures before/after fixation, or to use dual staining with propidium iodide (red, dead cells) and DAPI (total cells after fixation) to be able to normalize. We cannot exclude that the authors cherry-picked the field to downplay the number of DAPI-positive cells in the control group.
Fibroblasts were more robust and stood up to 10 microM (remember, the Burbacher study reported a 0.08 microM plasma concentration in its monkeys).
In Figure 3, the authors looked at caspase-3 activity in neurons. Caspase-3 is the endpoint of the signaling cascade known as apoptosis: programmed cell death, a cell death comparable to euthanasia. An increase in caspase-3 activity was only measured at 10 microM. That’s about 100x the amount found in the brains of the baby monkeys.
Finally, they performed a TUNEL assay, which highlights damaged DNA (indicative of cells undergoing apoptosis). You can see increased TUNEL activity at 2 microM, or 10x the maximum amount reported by Burbacher.
I don’t think we need to go through the rest of this paper, which concluded a minimum toxicity of 0.5 microM after 24 hours in their neurons.
Again, Ashley brilliantly displayed her credentials one more time by citing papers that absolutely do not support her claims.

Review of neurotoxicity studies of low-dose thimerosal relevant to vaccines
Again, a Dorea review. You cannot do much with a review, especially one coming from Dorea, but I would be more than happy to review a full research article from our friend.

2.3. Polysorbate 80:

The blood-brain barrier: A bottleneck in brain drug development
Another review cited by Ashley, but this time a very good one. It was written by Bill Pardridge back in 2005. Bill is a professor at UCLA and a specialist in drug delivery across the BBB, so this is a very good review if you want to understand the BBB. My problem? Again, Ashley likely put this one in following “research” that may have been no more than a Google search for “polysorbate 80 blood-brain barrier”. She just blindly highlights whatever she thinks looks nice, but obviously never read references 28 to 30. In particular reference 30. This is the JNS study that I used in my infographic about polysorbate 80 and the BBB. Of course she did not read it, otherwise she would have noted two things:
1. The administration route is direct injection into the carotid artery. Since when do we give vaccines by a shot into your carotid artery?
2. The second is the dose: a huge dose, in the mg/kg range. That is almost 3000x the vaccine dose, given at once, straight into the carotid artery… of a mouse. I am not sure it was even shown to occur in humans. May I suggest that 25 years of silence is indicative that it fell short of working in humans?

Specific role of polysorbate 80 coating on the targeting of nanoparticles to the brain.
Ashley is again happy to just share an abstract, but this time with no yellow highlighting; I guess she just threw it in on the fly. This is a paper working on how to optimize the delivery of nanoparticles across the BBB. The authors try different combinations of nanoparticles, from the basic surfactant-free nanoparticles (SFNP) to ones coated with PS80 or other surfactants, as seen in Table 1.
This is particle-bound PS80, not free-roaming PS80.
Overall, there is not much of interest for those not into drug formulation, but I have to say I was not really convinced by the fluorescence pictures.
It seems the microscopy pictures were overexposed, suggesting a weak signal in the brain parenchyma. The only region that seems rich in fluorescence (bright) is what appears to be the vascular region (there is something delineated like a tube). I guess the nanoparticles can enter the BBB but must remain trapped inside it or in the perivascular space.

2.4. Fetal Bovine Serum
The use of fetal bovine serum: ethical or scientific problem
This appears to be a review about the use of FBS in cell culture medium. For those unfamiliar, FBS is an important component of mammalian cell culture. It provides the right amount of proteins to maintain a constant oncotic pressure, as well as cell growth factors that we cannot completely reproduce in the lab due to their complex composition (the number and amount of growth factors present).
This review was written for a journal maintained by FRAME: the Fund for the Replacement of Animals in Medical Experiments. This is a goal every scientist in the biomedical sciences works hard to achieve, commonly referred to as the 3R guidelines: Reduce, Replace, Refine the use of animals in medical research. In vitro models (cells in a dish) can help reduce or replace the need for animals in certain types of research, but they are only a viable option if almost all reagents come from an animal-free origin. You can understand that fetal bovine serum is an ingredient that can be problematic. Usually, FBS is used at a final concentration of 10% (10 mL of FBS per 100 mL of complete medium), often reduced down to 1% (as FBS can interfere with some experiments). Vaccine products (as well as biologics) are not used directly from the crude cell culture medium. There is a series of purification and extraction processes aimed at obtaining a pure product. These processes bring the presence of FBS down to trace levels.

2.5. Propylene Glycol
Hyperosmolarity in small infants due to propylene glycol
Here we have Ashley winning the potato prize for the definition of parenteral!
She defines parenteral as “intravenous, intramuscular, subcutaneous or intradermal administration”. No shit, Ashley! You just listed the major administration routes other than the oral route, and did not even take the time to read the article. So much for your professionalism, Ash! But this is not what parenteral means in this article. Here, parenteral refers to parenteral nutrition, used for administering fluids and nutrients to patients incapable of eating and drinking.
This is a case report on a premature newborn (27 weeks of gestation). She was put on total parenteral nutrition (TPN). TPNs are always administered via the IV route.
The source of propylene glycol was a multivitamin preparation (named MVI-12) given in the final IV nutrition bag (10 mL). A few days after onset, the patient developed complications including hyperosmolarity, which resulted in stopping the parenteral nutrition. This prompted the authors to screen other babies put on the same multivitamins, and they noted hyperosmolarity of variable extent. This led the authors to recommend restricting the use of any medical products containing a high amount of propylene glycol in their formulation.
But the last question we have: was this propylene glycol reaction due to vaccines?

2.6. Glyphosate
Glyphosate pathways in modern diseases VI prions amyloidosis and autoimmune neurological diseases
Oh boy, what do I have to say about this one? It is a turd of a paper written by a computer scientist (Stephanie Seneff) who has ZERO, I repeat ZERO, training in the biological sciences. Imagine me, as a BBB expert, writing an article about the Linux operating system, claiming it is an utter piece of garbage and that its code is so degenerate that even penguins would write better code. I would not sound credible, right? Same here. Seneff makes extraordinary claims with ZERO evidence behind them. Even critics of glyphosate such as Robin Mesnage and Michael Antoniou consider Seneff too kooky for her claims to be credible (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5705608/).

However, one thing here is laughable, and that’s Table 4. Seneff claims she can detect glyphosate at 3 ppb. To reach such a level of glyphosate, you need a very sensitive and quantitative method of the kind used in analytical chemistry. Such methods are usually liquid chromatography (LC)- or gas chromatography (GC)-coupled mass spectrometry (MS). The MS method identifies chemicals by their unique analytical fingerprints. We see such methods used in TV shows such as NCIS or CSI, but also at airports when the TSA suspects the presence of prohibited products (the famous swipe test they perform). Here, they used an ELISA. ELISA stands for Enzyme-Linked Immunosorbent Assay; it is a technique that uses antibody-antigen interactions to identify a chemical. The problem with ELISA? A very high risk of false positives. Do you want proof of the high variability? Check the LOD values given by the two labs: 0.075 ppb and 0.15 ppb. That’s a mean of about 0.11 ppb with a standard deviation of 0.053 ppb, a coefficient of variation of 47%!!!!! The FDA tolerates a CV of 7.5% for compound detection at 10 ppm or less (1 ppm = 1000 ppb).
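The coefficient of variation quoted above can be reproduced from the two reported LODs (a sketch; it uses the sample standard deviation, which matches the 0.053 ppb figure in the text):

```python
import statistics

# Limits of detection reported by the two ELISA labs (ppb)
lods = [0.075, 0.15]

mean = statistics.mean(lods)   # ~0.11 ppb
sd = statistics.stdev(lods)    # sample standard deviation, ~0.053 ppb
cv = 100 * sd / mean           # coefficient of variation, in percent

print(round(cv, 1))  # 47.1 (%), far above the FDA's 7.5% tolerance
```

The point of the calculation is that two labs running the same kit disagree on the detection limit by a factor of two, which is what the 47% CV captures.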
https://www.fda.gov/media/70019/download
If we consider the 0.11 ppb as a reference, that puts us at a 6.17 nanomoles/L, or about 1 microg/L, limit of detection. The LC-MS method developed by McGuire and colleagues put their limit of detection at the same level, and nothing came out significantly higher than that, whether the volunteers had a regular grocery-based diet or an organic-based diet. https://www.ncbi.nlm.nih.gov/pubmed/27030536
Unlike the McGuire study, which is published with a publicly accessible validation methodology, the ELISA is a well-kept secret held by ABRAXIS, which has been very mysterious about how its kit works. I approached them about assessing glyphosate levels in cell culture samples, and the impression left was a shroud of secrecy. Ask any analytical chemist: the gold standard of chemical analysis for organic molecules is the LC/GC-MS method. ELISA involves so many variables and steps (I run them routinely) that you cannot afford such variability in a rigorous analytical method.

3. Concluding remarks:

I focused on this section because it aligned with my expertise, and I will let others deal with the other sections. But here is my final conclusion, as a peer reviewing Ashley’s work: Ashley has ZERO credentials as a scientist or a toxicologist, and her “Vaccine Guide” perfectly illustrates that point. She does not know how to read papers, she does not know how to face criticism, but she does know how to cherry-pick, how to stuff irrelevant papers into a binder, and how to pose as the next Nobel Prize contender.

Ashley is a hack, she is an impostor, and she is dangerous, and therefore she should rightfully be called out on her lies.


[Sciences/Junk Sciences] A counter-letter to Adrien Senecat’s post “Les évidences relatives de la tribune de No Fake Science sur l’information scientifique” (Le Monde – 07/26/2019)

Disclaimer: The following article is an opinion piece serving as a counter-response to an article published as an “opinion piece” by a journalist of the daily “Le Monde”. I am a French scientist who immigrated to the United States, a faculty researcher in pharmacology and neurosciences. Please forgive in advance some typos and the excessive use of Anglo-Saxon terms in place of French ones.

It is rare that I take up my pen to write a blog post in French, for several reasons: living in the United States for 10 years, having a mostly international readership, and above all discussing scientific content. However, an article (or rather an opinion piece) signed by Adrien Senecat, published in the daily “Le Monde” (https://www.lemonde.fr/les-decodeurs/article/2019/07/26/les-evidences-relatives-de-la-tribune-de-no-fake-science-sur-l-information-scientifique_5493749_4355770.html) and shared on social media, sparked a lively discussion between me and an old college friend (for the record, we sat together on the benches of medical school 20 years ago; he passed the P1 competitive exam and became a physician, I failed mine and became a faculty researcher in the United States. Proof that there is life after the P1).
This opinion piece surprised me and, at the same time, gave me food for thought over the weekend. Why do I remain skeptical, and even absolutely unimpressed, by this article? I am skeptical of the arguments put forward by the author and of the journalist’s qualifications, and at the same time I want to express my weariness at seeing science mistreated and caricatured by run-of-the-mill journalism. Coming from a daily like Le Monde, I expect journalistic quality worthy of investigative journalism, not clickbait headlines and a journalistic race to the bottom.

But let’s get to the heart of the matter and go through the list of grievances against this piece.

1. The author in question:

Who is the author of this opinion article, and what merits qualify him to discuss a subject as specialized as science?
Adrien Senecat (according to his LinkedIn profile) holds a journalism degree from EFAP Lyon. After a brief stint at the weekly “Le Pays Roannais” and the radio station “RCF Lyon Fourvière”, he specialized in “high-tech/web” journalism at L’Express from October 2011 to April 2015. He left L’Express for Buzzfeed for a one-year period, before obtaining in April 2016 his current position as a journalist on the “Les Décodeurs” team, under the label “factchecking”. In conclusion, we can assume that his scientific education is limited to what is taught in high school. Considering that people entering a journalism career mostly come from general tracks leading to a Bac L or ES (the French literature or economics tracks), we can reasonably consider his scientific education to be anemic at best, leaving him absolutely unprepared for this subject, let alone at the level required for scientific “factchecking”.

Let me note that this is the first article I have read by Mr. Senecat. I am therefore “blind” here, reading it only with my own background. I cannot judge his quality based on his other “factchecking” pieces, and my counter-letter therefore applies only to this post written for the daily.

2. What is the scientific method, and why is it important when we talk about “Fake Science”?

The scientific method is the foundation of all modern hard sciences and builds on the experimental medicine of Claude Bernard, a French physiologist of the 19th century. The cornerstone of the scientific method is skepticism.
Every new study is combed through to make sure the results live up to scientific rigor. In science, the most robust discoveries are made in the intimacy of a top-flight peer-reviewed journal such as “Science” or “Nature” and presented at major scientific conferences by “keynote speakers”. I like to compare the world of scientific research to the world of metal music. We have our own “rockstars”, we have our own “Hellfest”, and we face the same challenge of living off our work through a patronage system (research grant applications are quite similar to submitting an album demo to a music producer). To succeed in this trade, you must excel in the quality of your work and in scientific innovation. But science is generally discreet, rarely featured in the mainstream media. Scientists like to remain discreet people, detached from the “spotlight” of TV studios. It is a bit like the slogan of a certain frozen-fries brand: those who parade the least on TV screens make the most scientific discoveries. Unfortunately, scientists have neglected to tell the public about their discoveries, leaving the door open to “Fake Science”, which compensates for its scientific incapacity with showmanship on TV and radio shows.
In science, a hypothesis has two possible outcomes: either it is validated by the experimental results (reproduced by other laboratories and verified by converging results obtained through other approaches), or it is not. When the mass and quality of results and studies reach a critical level and refuting these data becomes difficult, we reach a consensus. A consensus is not set in stone and will adapt as new information and discoveries come in.
The op-ed in question, published by the “No Fake Science” collective (https://www.lopinion.fr/edition/politique/science-ne-saurait-avoir-parti-pris-l-appel-250-scientifiques-aux-192812) and signed by more than 250 signatories (physicians, scientists, pharmacists, engineers…), warns about the creeping progression of “Fake Science” in the public sphere.
Personally, I find the use of the term “Fake Science” clumsy. The word “fake” is used in North American English to designate a forgery, a pale copy, a scam, a deception. This term carries legal weight: a study can be completely botched (for example, the retracted study on rats fed GMO corn), yet the laboratory won a defamation lawsuit because the accusation had been made that the results presented in the retracted study were “fake”. In reality, those results, however bad and botched (and therefore of no scientific value), did exist, with physical proof of their existence (laboratory notebooks in physical or electronic form).
Unless it is demonstrated that a study was fabricated out of whole cloth, I prefer the term “Junk Science” when addressing the subject of antiscience.

Antiscience is the flip side of the scientific method. Whether we are talking about anti-vaxxers, alternative medicine (homeopathy, acupuncture, reiki, crystals…), creationists, deniers of anthropogenic climate change, anti-nuclear activists or anti-GMO activists, we often find the same common thread that makes their approach biased and their evidence of low scientific quality (in particular, a thread I see almost every time in any anti-vaccine study):

  1. Start from a predefined conclusion, then run the experiments that confirm that result.
  2. The results obtained generally align poorly with the initial conclusion, so eliminate control groups, exaggerate the administered doses with outlandish quantities, use a compound that is not commonly used, or use an exotic approach relying on a single technique.
  3. If you still do not get the desired result, select the results that align with your conclusion and ignore the others (commonly called “cherry picking”), or cut corners on statistical rigor with “p-hacking” to find statistical significance where there is none.
  4. Publish the whole thing in a low-quality journal (since no quality journal will accept such a rag without rigorous “peer review”), or at worst in a predatory “open-access” journal (which will publish any rag for the tidy sum of $2000-3000 in publication fees).
  5. Present this rag as the ultimate proof of a reliable counter-study questioning the consensus, in the “echo chambers” of social media and through certain journalists with questionable if not dishonest ethics. Play the martyr and “whistleblower” cards on TV shows, denouncing a cabal and a censorship aimed at hiding the “truth®” from the public. The goal is to sow doubt in the public’s mind.
  6. Accuse detractors and critics of being “shills” (agents paid by an interest group), while hiding the financial conflicts of interest of which you are yourself guilty (for example, several anti-vaccine scientists sit on the boards of foundations with an anti-vaccine agenda and benefit from the financial windfall of these same organizations through the funding of their own research).

Antiscience has no chance of defending its erroneous theses in front of its peers at an international scientific congress. Its only chance to disseminate its “junk science” is through “false equivalence”: finding a journalist naive enough (as is the case with this article by Mr. Senecat) to say that there are two sides to every debate, including scientific ones, and that each side has a right to speak without weighing the “facts” each side brings. Which is what our author does with this introductory paragraph:

“The public debate around these themes cannot be considered ‘scientifically closed’, the authors acknowledge. ‘Nevertheless, the specific points chosen as examples are consensual among specialists and must be presented as such’, they insist. On closer inspection, however, not all of these six ‘scientific consensuses’ are consensuses. A detailed review.”

3. Vaccines:

Taken as a whole, this section is scientifically correct:

“It is therefore entirely accurate to speak of a scientific consensus on this point. One can note, however, that such a consensus does not automatically extinguish all questions about vaccination policy. That was admittedly not the point of the No Fake Science op-ed, but questions remain open about the age at which to vaccinate, the number of injections to administer, and the composition of the products used.”

The author rightly questions vaccination policy. But he forgets to specify that vaccination policy, although based on the same scientific literature, remains at the discretion of each government, based on its own scientific corpus and its geography. This is a recurring point I see used foolishly by English-speaking anti-vaxxers, who assume that the vaccination schedule of the CDC (Centers for Disease Control, the US federal public health agency) is in place not only in the United States, but also in Canada, the United Kingdom, Australia or New Zealand!

A first yellow card for the author, for sprinkling a zest of doubt over vaccines in the middle of an exploding measles epidemic (the United States has already surpassed this year its largest case count since measles was eliminated from the territory in 2000, preceding the wave effect of the Wakefield study).

4. Homeopathy:

Homeopathy, or what I call snake oil, is based on principles developed by Samuel Hahnemann 200 years ago.

Its principles violate the laws of biology and chemistry:
* The claim that one can cure “like with like” without any evidence of causality (by what biological mechanism would a duck liver/heart extract be capable of curing a flu-like illness? No answer).
* How to explain that homeopathic products could display pharmacological activity despite a dilution so absurd that it is statistically impossible to find a single molecule of active substance in a preparation diluted for therapeutic use?
* How to explain “water memory”, an argument often used by homeopaths to rebut point #2? It has been more than 200 years since Lavoisier laid the foundations of modern chemistry and 400 years since Paracelsus defined the foundations of pharmacology. 200 years, and still no evidence for Hahnemann’s principles demonstrated by modern science.
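The dilution argument in the second bullet can be made concrete with a back-of-the-envelope calculation (a sketch; the 30C potency and the one-mole starting amount are illustrative assumptions, not figures from the text):

```python
# Expected number of molecules of active substance left after a homeopathic
# "30C" dilution (30 successive 1:100 dilutions), assuming the mother tincture
# starts with a full mole (~6.02e23 molecules) of active compound.
AVOGADRO = 6.022e23
dilution_factor = 100 ** 30          # 30C = a 1e60-fold dilution

molecules_left = AVOGADRO / dilution_factor
print(molecules_left)                # ~6e-37: effectively zero molecules per dose
```

Even starting from an absurdly generous mole of active substance, the expected count after a 30C dilution is about 10^-37 molecules, which is why it is statistically impossible to find a single molecule in the final preparation.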
The problem is that, although these are homeopathic preparations, a key step remains the preparation of a concentrate commonly called the “mother tincture” (Tinctura mater). This preparation (often hydro-alcoholic) is a concentrate of plant-extracted compounds with documented pharmacological activity. A dosing error carries a significant risk of overdose, which can be fatal. That was the case with an Atropa belladonna extract used in “Hyland’s teething tablets”, a painkilling remedy for teething pain in infants and toddlers. The active principle is atropine, a potent antagonist of the muscarinic acetylcholine receptors. At high doses, this compound can kill the patient. The product was withdrawn after the FDA warned against it (https://www.fda.gov/news-events/press-announcements/fda-warns-against-use-homeopathic-teething-tablets-and-gels) following reports of fatal cases (an estimated 10 deaths were linked to the use of the product; as a reminder, supplements on the US market are not regulated by the FDA, which investigates only after serious or fatal adverse events are reported by treating physicians).

“Here too, however, the fact that a scientific consensus exists does not mean that only one public policy is possible.”

If you run an experiment 100 times to validate a hypothesis and you get a 100% failure rate for that hypothesis, it is very likely that the hypothesis is invalid and should be abandoned. The antis are generally stubborn and believe the 101st experiment will confirm what 100 experiments failed to verify. Homeopathy raises two problems, one ethical and one financial:
* Is it ethical for a physician to prescribe an inert remedy just to satisfy a placebo effect in the patient, knowing that the inert remedy has a near-zero probability of treating the patient’s condition?
* In a climate where healthcare spending keeps rising, is it reasonable to divert funds toward a costly product with practically zero therapeutic effect? There is much outcry about hunting down waste; ending the reimbursement of homeopathic products is one way to refocus spending on approaches supported by scientific evidence.
A second yellow card for the author, and you can sense the refrain “I’m not anti-X, but…”, another tactic I see used by anti-vaccine trolls when I debate them on social media.

5. Global warming:

Here, I am in alignment with the author. Climate change is real, and the warming is anthropogenic (due to human activity), driven by the increase of CO2 in the atmosphere, a greenhouse gas known for at least 100 years. The mathematical models developed 20-30 years ago have proven quite close to the experimental values measured in the field. Whether the US government likes it or not, we are facing a serious effect whose first alarming signs can already be felt. On climatology and science communication, I recommend following the work of Michael Mann (https://www.michaelmann.net) and Katharine Hayhoe (http://katharinehayhoe.com/wp2016/), who are both climatologists and excellent science communicators.

6. Glyphosate:

Glyphosate. A subject dear to the French, all the way up to the highest spheres, associated with Monsanto as a bogeyman. But here again there are many errors of judgment, stereotypes and exaggerations of the facts in which the author, although accustomed to “factchecking” by his own account, lets himself be pickled.
“Including glyphosate in a list of subjects that are the object of a scientific consensus is debatable. This herbicide, massively used worldwide, is in reality at the heart of a scientific controversy in which every word matters.”
To be honest, the controversy exists only in the heads of politicians, “bobo ecologists” and other followers of conspiratorial delusions, whom even some scientific critics of glyphosate call on everyone to set aside (so I will not dwell on the claims that, according to Stephanie Seneff, an MIT computer scientist, glyphosate is responsible for the autism spectrum in children, or that glyphosate changes our gut microbiome because a study published in PNAS showed the gut microbiome is strongly altered by the presence of glyphosate at high doses) (https://www.ncbi.nlm.nih.gov/pubmed/29226121).

"Here, the authors of the op-ed mention the 'various bodies responsible for assessing risk.' And it is true that these bodies judge the risks of glyphosate to human health to be 'limited.' The problem is that these health agencies are not the only ones exploring the subject. The International Agency for Research on Cancer (IARC), an agency of the World Health Organization (WHO), classified glyphosate as a 'probable carcinogen' in 2015. This decision has no regulatory force, but it is the product of scientific work. Several serious studies have also pointed to possible risks for farmers who use glyphosate-based products."

Let us dissect the points here, starting with IARC's 2A classification of glyphosate. What Senecat forgot to point out, deliberately or not, is the opaque smokescreen put up by Chris Portier (who chaired an IARC advisory group) and his close ties to law firms well known for launching lawsuits, summarized by Risk-Monger in his investigation "Portier Papers": https://risk-monger.com/2017/10/13/greed-lies-and-glyphosate-the-portier-papers/

This suspicion about his integrity was confirmed by a journalistic investigation by Kate Kelland of Reuters (that's not Buzzfeed, mind you; we are talking top-shelf when it comes to the quality of the reporting). In this article, the journalist showed that certain people with access to the final draft of the IARC monograph modified it in such a way as to make glyphosate appear more carcinogenic than the studies had concluded. https://www.reuters.com/investigates/special-report/who-iarc-glyphosate/

It is surprising that our "fact-checker" ignored these compromising articles in his analysis; I will assume it was an involuntary oversight on our author's part. If I may point this out to the author, this is what we call a "conflict of interest." It is a well-worn technique that other scientists of dubious scientific integrity, such as Andrew Wakefield, have used: a law firm looking for easy money finds a scientist to act as a "mercenary" and publish a study compromising a popular chemical or medical procedure. The scientist publishes a study suggesting a link between a disease and the chemical in question. With such a study in hand, the law firms are ready to launch lawsuits and, in the process, land a hefty jackpot.
Just as surprising is the leaden silence around the isolated position of the IARC in its decision to classify glyphosate as a "probable carcinogen," alone against more than 17 national health safety agencies, summarized in an infographic by Thoughtscapism (an environmental scientist who, like me, blogs in her free time) here (https://thoughtscapism.com/gmos/#jp-carousel-41330).
I am curious to know which studies gave Portier the idea of classifying glyphosate in category 2A (for comparison, alcohol is classified in category 1, something worth remembering the next time a study finds traces of glyphosate in wine, beer, or schnapps).
"These elements explain why a good number of specialists are far less categorical than the authors of the No Fake Science op-ed. 'It is a difficult subject with quite a few uncertainties, and we need to deepen our knowledge,' Robert Barouki, physician, toxicologist and research director at Inserm, recently explained to Le Monde."
I salute the measured language of Dr. Barouki, co-author of a major publication demonstrating the absence of long-term biological effects of GM maize in rats (https://www.ncbi.nlm.nih.gov/pubmed/?term=barouki+R+glyphosate). Unfortunately, this seems to be the only study in which he approached the toxicology of glyphosate, and then only indirectly; I would have preferred to hear from other researchers for whom glyphosate toxicology is their daily bread (with adequate expertise backed by their publications).

Mr. Senecat questions the collective's position on glyphosate: "Beyond human health, the massive use of glyphosate worldwide also poses environmental problems, which are documented by scientific studies. So much so that, by reducing glyphosate to a mere 'improbable' carcinogenic risk for humans, the No Fake Science collective seems to stray from its own recommendation not to 'pick what suits us and leave on the shelf whatever contradicts our opinions.'"

The problem with glyphosate is the same as with any pesticide (and pesticides are used in both conventional and organic agriculture, for the record): weighing the benefits and risks for yield and for human and environmental health. In particular, it must be compared against previous generations of pesticides. Over the past 30 years, glyphosate has been adopted for its low acute-toxicity profile (where 500 mg/kg and above already counts as low toxicity, glyphosate's LD50 is around 5,000-10,000 mg/kg) and its low application rate (one can of concentrate is enough to spray an American football field; we are very far from the false idea that fields are drenched in glyphosate). Its ecotoxicity, while lower than that of older generations, still lingers for quite a while (estimated half-life from a few days to 90 days, source: http://npic.orst.edu/factsheets/archive/glyphotech.html#ecotox).
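The half-life figures above imply simple first-order decay, so one can estimate how much of an application persists over time. A minimal sketch: the 90-day upper bound comes from the NPIC fact sheet cited above, while the 30-day soil half-life used in the example is an illustrative assumption, not a measured value for any particular field.

```python
def residual_fraction(days_elapsed: float, half_life_days: float) -> float:
    """First-order decay: fraction of the initial amount remaining
    after `days_elapsed`, given the half-life in days."""
    return 0.5 ** (days_elapsed / half_life_days)

# With an assumed 30-day soil half-life, 90 days is three half-lives,
# so about 12.5% of the applied glyphosate would remain.
print(residual_fraction(90, 30))   # 0.125

# At the 90-day upper-bound half-life, half would still remain after 90 days.
print(residual_fraction(90, 90))   # 0.5
```

The same one-liner applies to any first-order process, which is why the half-life range (a few days to 90 days) translates into such different persistence scenarios.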

There is also a risk of resistance, which applies to any pesticide and makes each one only a temporary solution until the development of a new generation of more effective and safer pesticides. As with everything in life, 100% efficacy or 0% risk does not exist, because such a parameter is impossible to achieve in reality. So what alternative are we left with? Finding less toxic and more effective pesticides in the future; in the meantime, glyphosate, despite its age, remains the pesticide with the best benefit/risk ratio in the contemporary arsenal.

Third yellow card for Mr. Senecat, so I am calling this a sending-off for three serious journalistic fouls. If such mistakes had been made by a journalist at "l'Écho des Savanes," I would have let them slide. But coming from a journalist who prides himself on "fact-checking," this is unacceptable.

7. GMOs:

The second collective hysteria of the average French population, and a profitable cash flow for any fear merchant. I know what I am talking about: I too was once young, dumb, and anti-GMO. The anti-GMO crowd (like every antiscience movement) is like a certain brand of frozen fries: "those who know the least (about biotechnology) talk about it the most," study in support (https://www.nature.com/articles/s41562-018-0520-3).

To mark his skepticism, the author writes: "'The fact that an organism is genetically modified (GMO) does not, in itself, present a health risk.' Here again, the wording chosen by the authors is debatable. The reference used (a WHO article on frequently asked questions about GMOs) is, in fact, not so categorical."

But what does the WHO actually say? Not much, in fact, and it tosses the hot potato to national authorities: "On the other hand, most national authorities consider that genetically modified foods require specific assessments. Ad hoc systems have been set up to rigorously evaluate genetically modified organisms and foods with respect to human health and the environment. Traditional foods are generally not subject to similar assessments. There is therefore today a significant difference in the evaluation process that precedes the marketing of these two groups of foods."

This sentence neatly illustrates the problem science runs into when it comes to political decisions. Science could not care less about politics; unfortunately, politics often has a problem with science, especially when science derails political slogans aimed at certain electoral populations. Vaccines, abortion, climate change, public health measures, contraception, sexual orientation and gender identity... we often see antagonism from the political class, which refuses to listen to what science has to say.

Unfortunately, by the very nature of health authorities, it can be difficult to recommend an unpopular initiative by stating that GMOs pose no risk, because Madame Michu (the proverbial average citizen) gives more credit on the GMO question to former soixante-huitards who have themselves become members of the "nomenklatura" than to a panel of experts from INRA. GMOs have become such a taboo subject that it took the mobilization of 107 Nobel laureates to denounce Greenpeace's calamitous policy on GMOs (https://www.washingtonpost.com/news/speaking-of-science/wp/2016/06/29/more-than-100-nobel-laureates-take-on-greenpeace-over-gmo-stance/?utm_term=.c154b3d3f8b9).
We (as Homo sapiens sapiens) have genetically modified all of our agriculture and livestock since the Neolithic. We have "played sorcerer" countless times, exploiting the laws of genetics at random by crossing varieties and selecting mutants bearing traits of interest, whether aesthetic, nutritional, or yield-related. We played "blind sorcerer's apprentice" this way for more than 9,800 years, until Gregor Mendel's pea experiments in his cloister.
GMOs remain, to date, a technique that has demonstrated its efficiency and safety in terms of time, money, and targeted traits. Is it not hypocritical to ban a method that allows a genome to be edited surgically (GMOs, including CRISPR/Cas9) in agriculture, while leaving the field wide open to the creation of GMOs in a completely random fashion (forced mutagenesis) because it is considered "natural" (http://www.info-nbt.fr/la-mutagenese-une-nbt-deja-ancienne.html)? And why are the authorities so squeamish about certain types of GMOs (agriculture) while accepting other GMOs without a grumble (pharmaceutical products obtained by genetic engineering)?
The hypocrisy is even worse when the cultivation of transgenic plants is banned while their importation is cheerfully authorized (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5592980/). Here lies the author's failure: he only scratches the surface, ignoring these details, and settles for information that confirms his biases.

In these times of climate change, we hardly have the luxury of waiting 50 years to find a variety resistant to climate hazards or to the arrival of new pathogens in our latitudes.

Yes, GMOs have their problems, but not the problems imagined and fantasized by the public and kept alive by dishonest journalism. There is the issue of access to biotechnology resources for developing countries, so that they can find solutions to their specific problems; there is the funding that lets public research centers develop patents to protect their inventions (yes, intellectual property also exists for scientific discoveries and helps finance new ones; consider the story of warfarin, an anticoagulant developed by Dr. Karl Paul Link at the University of Wisconsin via the Wisconsin Alumni Research Foundation, which funds itself through patent licensing); and there is also the need for a health safety system to ensure the harmlessness of new GMO products.

Apparently, for Senecat, living off one's patents is the ultimate evil:
"Beyond the health questions raised by the op-ed, GMOs nevertheless pose other issues, notably in terms of the patentability of living organisms and the dependence of farmers on the companies that market the seeds. These are political reservations that do not necessarily amount to obscurantism or bad faith."

We readily accept that pirating artistic works (including films, TV series, and music) is a copyright violation, yet at the same time we ask scientists to give up protection for their inventions. The term "patentability of living organisms" is often deployed as a straw-man fallacy to discredit the opposing side by exaggerating the points under discussion. We get a straw man both on "patentability" (I remember my younger years, when people brandished the scarecrow of Monsanto and its GMO seeds carrying the "Terminator" gene) and on the idea of farmers reduced to serfdom under the yoke of multinationals. What Senecat apparently did not learn during his time at Buzzfeed and on social media is what I would call investigative methods.

This idea that Monsanto holds farmers the way I hold my dog on a leash is embodied by the "Bowman v. Monsanto" case, which went all the way up to the US Supreme Court, which ruled in Monsanto's favor (https://www.npr.org/sections/thetwo-way/2013/05/13/183603368/supreme-court-rules-for-monsanto-in-case-against-farmer). Farmers are free to buy their seeds wherever they please and rarely keep seeds for the following year. The reasons are many, but above all it is to guarantee seed quality every year (especially in terms of yield). The Bowman case is based on the purchase of GMO "Roundup Ready" soybean seeds, genetically modified to resist glyphosate. This gives the farmer a certain convenience: a ready-to-use product, with no worries about a loss of yield or the choice of pesticide. As with any product, one ought to read the EULA (usually we click "OK" without reading the contract clauses). In this case, the farmer gave his written agreement to use the seeds for planting during the season and not to reuse them the following year. Whether one agrees with this clause or not is at the customer's discretion. Mr. Bowman, by signing the contract, accepted this clause and agreed to buy Monsanto's product. If Mr. Bowman did not agree with the clause, nothing prevented him from buying his seeds from another supplier. The problem arose when Bowman decided to keep some "RR" soybean seeds for an off-season planting, violating the contract clauses. Monsanto got wind of this violation and initiated legal proceedings.
We are thus quite far from the fantasized image of the poor farmer under Monsanto's yoke that Senecat paints in his piece. I have the right to buy a CD and convert it into AAC audio files on my Mac for personal listening. But I have no right to post those files for free download on the Internet. We have the same problem here.

When vandals mowing down experimental GMO crop fields pass themselves off as "heroes and martyrs" on social media and on French TV, destroying the fruit of several years of work by INRA scientists while simultaneously demanding further studies on the absence of health risks, is that not indicative of the ambient hypocrisy on this subject? Unfortunately, Senecat plays the sorcerer's apprentice, pouring oil on the fire by playing on fear and the appeal to "nature."
What distinguishes a crop vandal from a "hooligan" torching a car on New Year's Eve? In both cases, the destruction and ransacking of someone else's property.

8. Nuclear power:
The last part of the post focuses on nuclear power. In the era of climate change and HBO's "Chernobyl" series, the nuclear discussion is back. And, as usual, the author remains suspicious of the points raised by the "No Fake Science" op-ed:
"But, once again, highlighting a single claim, to the detriment of other essential aspects of the subject, can give the impression that No Fake Science has made its choice in the 'supermarket' of scientific information. 'The point we wanted to put forward was the low CO2 emissions of this means of electricity production, which can contribute to the fight against global warming. The statement was not intended to go beyond that,' the collective replies."
Nuclear power by itself is not a lone miracle solution. Every energy source has its pros and cons. Fossil fuels have dominated from the 19th century to the present at the expense of global warming (heavy CO2 emitters), but also of air pollution (including particulates from diesel engine exhaust and from industry). Renewables are an interesting alternative, but they too have their limits. Much ground remains to be covered in terms of efficiency and of a continuous, stable energy supply. Germany, which was nevertheless the spearhead of the "Gruene" (green in German) movement, could not find an alternative to nuclear plants for clean (CO2-wise) energy other than reopening coal-fired plants (http://www.leparisien.fr/societe/urgence-climatique-l-allemagne-doit-fermer-ses-centrales-a-charbon-28-07-2019-8124954.php).
The short-term urgency remains reducing CO2 production to mitigate global warming. On that point, nuclear energy remains the immediately available alternative. Yes, there is the waste problem, but there are solutions: there are methods to recycle certain wastes, and France has national expertise on this front. The amount of waste remains far smaller than the amount of waste and materials used to manufacture the machines needed to produce alternative energies (solar, wind), as represented in a diagram by Thoughtscapism (https://thoughtscapism.com/2017/11/04/nuclear-waste-ideas-vs-reality/).
The other alternative? We go back to the Stone Age to cut our CO2 production cold, with zero waste production and a pink slip for Senecat: no more electricity, no more Internet, no more social media, no more "buzz," no more "fact-checking," and perhaps we will go back to listening to the wise words of the old village chief.

9. In conclusion:
My conclusion is the same, in line with what "Risk-Monger" calls the "poison of precaution" (https://risk-monger.com/2019/06/11/the-poison-of-precaution-iupac-keynote/). We live in an era in which social media hold a preponderant place in our society. We live by "likes" on Facebook, Instagram, and Twitter. The landscape has changed: suddenly we grant importance to "motivators," "coaches," and self-styled "experts," while looking askance at "classical" experts as outdated, or as being in the pay of interest groups in our conspiratorial delusions.

We live in a world where we question experts who have eight years of higher education and a scientific reputation earned among their peers through the quality of their publications.
Let us come back to my old college classmate and discuss how this mentality can be harmful.

Imagine I go to my ENT for a sore throat. My ENT diagnoses it as a streptococcal infection and prescribes antibiotics.
While waiting my turn to pick up my prescription, I browse the "Crunchy Mommies" Facebook group and mention my visit to the doctor.

Karen, a cashier by day and a "black belt" seller of Pomme Déterre essential oils (EOs) by night (assuring me that her insistence on selling her 5 mL bottles of EO at $50 apiece is not a "Ponzi scheme"), comments on my post: "Don't take those antibiotics, you'll wreck your gut flora! Take a bottle of oregano EO for your strep throat instead! Your doctor knows nothing and he's in the pay of Big Pharma!" And that is how Karen the cashier improvises herself an ENT.

This story may sound far-fetched, but it is quite close to the stories I encounter with "on the fence" moms (hesitant to vaccinate). Social media have paradoxically opened the doors for anyone on the Internet to parade as an "expert" without demonstrating any qualifications or diplomas. If we want to reduce the effect of "Fake Sciences/Junk Sciences," we need an alliance between scientific experts and science journalists who have the intellectual baggage, and ideally basic scientific training, to be able to decode a peer-reviewed article.
Unfortunately, Adrien Senecat is a symptom rather than a solution in the fight against "Fake Sciences." Like it or not, through this opinion piece Senecat has demonstrated his ineptitude and immaturity at "fact-checking" when it comes to scientific questions. Senecat is like one of those journalists in a panel of "Tintin in the Land of the Soviets," to whom a Soviet commissar opulently shows off his "Potemkin village" before the dazzled eyes of Western journalists. Senecat is like a student who thinks he knows more than the professor. This symptom has a name: it is called the "Dunning-Kruger effect," and with this article Senecat has shown us he is squarely in it.

[Garage Sales/Computers] Restoration of the $10 HP All-In-One Elite 8200.

Following the restoration of my "2003 Dream PC," which I rescued from the garbage container in the back alley a couple of months ago (and which now serves as my daily "emulation" computer), I decided to get back into hunting season and prey on deals in online garage sales through Facebook Marketplace, mostly looking for anything no newer than a 2004-2005 computer. But sometimes I make exceptions, like this one.
I was looking online for a decent monitor for my second PC, one with both a VGA and a DVI connector (I am fond of connecting my LCD monitors to my PC via their DVI connector, unless it is a CRT, in which case VGA is the only option), and found that post.


At first I was skeptical because the seller said she had not tested them and did not have any cables. The HP LCD monitor was kind of cool, able to rotate 90º (great for some shoot-em-ups on MAME that require a "portrait" screen). But the second one was even more interesting. You see the sticker there? Those stickers were indicative of a Windows 7 system and an Intel Core i3 processor, which told me that the second monitor was actually a complete all-in-one computer. There was maybe a 50% chance that such screens were DOA, but at $10 it was a low-risk bet.
I purchased them and brought them home. The HP monitor worked like a charm for the first 12 hours, only for me to wake up to a dead screen (likely the backlight or the whole panel died; there were indications of some power still going on inside the screen). The second "monitor" indeed turned out to be the interesting find. It was a complete all-in-one computer from HP as well: an Elite 8200 All-In-One model. It still has its Windows 7 COA sticker, and it appears to be the Pro version of that OS (suggesting this kind of model was aimed at businesses).

The only caveat was that the power supply was missing, and it needed a special kind: the barrel-jack connector type we commonly find on laptops. I therefore needed to purchase a power brick for this puppy in order to fire it up. What is interesting is that this system makes it relatively easy to expand the RAM and hard drive. You just remove the lateral plastic panels on the back: three easy steps and you can change the RAM (DDR3 SO-DIMM format) and the hard drive (3.5″ Serial ATA), and voilà.
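Matching a replacement laptop-style brick boils down to a simple rule of thumb, sketched below. The voltages and amperages in the example are made-up illustrations, not HP's official spec for the Elite 8200; always check the rating label on the machine, plus the connector size and polarity.

```python
def brick_compatible(required_v: float, required_a: float,
                     brick_v: float, brick_a: float,
                     volt_tol: float = 0.05) -> bool:
    """Rule of thumb for DC power bricks: the output voltage must match the
    machine's rated voltage closely; the brick may be rated for MORE amps
    than required (the machine only draws what it needs), but never fewer."""
    voltage_ok = abs(brick_v - required_v) / required_v <= volt_tol
    current_ok = brick_a >= required_a
    return voltage_ok and current_ok

# Illustrative numbers only: a beefier brick at the same voltage is fine...
print(brick_compatible(19.5, 7.7, 19.5, 9.23))   # True
# ...but a brick with the wrong voltage is rejected, whatever its amperage.
print(brick_compatible(19.5, 7.7, 12.0, 10.0))   # False
```

This is why an eBay listing only needs to match the voltage and meet or beat the amperage (and fit the jack) to be a safe candidate.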

The biggest challenge was to find EXACTLY the right power supply brick, with the correct voltage and amperage for this machine. I was able to find one on eBay for $25, shipping included. Once the power supply arrived, I plugged it in... and it turned ON!!! The only issue was the hard drive: it was not detected. I removed the drive (a 250GB WD Blue), replaced it with a 320GB SATA drive sitting in my drawer, and plugged it in. Worked like a charm. Interestingly, the original hard drive worked quite well once put in an enclosure, and it passed the WD diagnostics without problems.
The computer itself seems to have been manufactured sometime in 2011/2012, based on a BIOS that was apparently never updated. It also has a DVD drive on the side, 4 USB ports, and an audio output jack. It has a decent CPU and a good amount of RAM (8GB, suggesting it was upgraded at some point). The folders present on the original drive (which I erased once it was able to mount and pass the hard drive test) showed a last-modification date around 2016, suggesting the computer had been gathering dust since then. Maybe the owner lost the brick during a move and let it sit around all that time. A sleeping beauty that was just waiting to be awakened.

I took a snapshot of the Windows key underneath the base, plugged in my Windows 10 flash drive, and installed 64-bit Windows 10 on it. The nice thing about Windows 10 is that if you have a Windows 7 OEM key, you can use it to get a free upgrade to Windows 10. Some people will ask why not keep Windows 7. The problem is that Windows 7 is headed for obsolescence very soon, and a growing number of programs need a 64-bit OS to run (Fortnite, for example, requires such an OS).
That system came right on time for my daughter's birthday. I set it up on a desk, and the computer is now hers, a nice addition to the cheap $100 ASUS Intel Celeron laptop I bought last Black Friday (you know, the one with the 2GB/32GB MLC drive). For $35, it is a very good desktop for the stuff she is doing, and it can still go for a couple of years until it dies.
As "The 8-Bit Guy" said in one of his YouTube videos, obsolescence is a subjective notion that depends on what your purposes are and which tasks you are aiming for. The PC I salvaged is beyond obsolescence and barely able to connect to the Internet, as most of its Web browsers are outdated and connecting with them will land you on a "DNS error" blank page. But this computer is still doing great at emulating the old stuff I play with, and I can still install some games bought on GOG.com (after downloading them on my current PC) that are labelled XP-compatible. And it still gives me happiness.
Hopefully, this AIO PC will keep my kids happy for several years and enjoy a nice end of life in my home. Maybe it is just me, as a GenX kid, but I attach a certain sentimental value to my computers. There is a feeling of joy and accomplishment when you find these wrecks, put them back in working condition, and fire them up. And if they end up unsalvageable, at least I know their parts will someday serve another computer. But that is another story to be told, and another weekend project in the making...

[Sciences/Orphan Diseases] Summary of the GLUT1 conference in Washington, DC

Last week, the 2019 GLUT1 Deficiency conference took place at the Hilton Crystal City. It was well located near Ronald Reagan National Airport (DCA), making the conference easy to reach, and ideally set up on the 2nd floor of the hotel, with several ballrooms in use. Before going into details, I should disclose that I attended half of the first day, mostly the sessions I judged relevant as a scientist. If someone else wants to write a summary of the other sessions, feel free to contact me; I will gladly add it to this post.

Wednesday afternoon session:
The first session I attended was a research plenary moderated by Dr. Juan Pascual (UT Southwestern), who gave us some information on the ongoing NIH clinical trial of triheptanoin. Nutrigenix was running a parallel clinical trial and, after the first results came in, decided to stop it. Dr. Pascual's first results showed a reduction in seizures in about 7 out of 12 patients treated with triheptanoin. The treatment was set for a defined period (if I remember correctly, 6 months) and then stopped. Some patients continued to improve; others showed a decline in improvement. An interesting feature was the report of EEG changes suggesting a possible interplay between excitatory and inhibitory circuits.

The second session was led by Dr. McKenzie Cervenka (Johns Hopkins University), who surveyed adult patients. An interesting point is that all the children initially diagnosed with GLUT1DS in the 1990s are now adults, and we are learning about the disease as they age. Another interesting observation is that there are more GLUT1DS adults than expected. In particular, a review of medical histories suggests that mild symptoms reported in their childhood may point to GLUT1DS, but went unnoticed for lack of a proper diagnosis or of clear clinical symptoms.
In adults, it seems to present as mild, chronic encephalopathy/cognitive impairment accompanied by infrequent seizures. Variable spasticity and ataxia have been reported, such as paroxysmal exercise-induced dyskinesia. Although the recommended level of ketosis in children is around 5 mmol/L, that level does not seem to be reached in adult patients (probably more around 3 mmol/L).
There is no sign of a sex or gender difference in onset (50%); the most common symptoms are ataxia (63%), cognitive difficulty (66%), and speech difficulties.
About 82% reported certain triggers (excitement, stress or anticipation, heat, hunger, fatigue...) and 67% reported changes in symptoms at puberty.
61% of surveyed patients follow the ketogenic diet (52% the classic treatment and 33% the modified Atkins diet). About 41% of those on the keto diet had had no seizures during childhood.
Thanks to recent awareness, patients are becoming seizure-free at younger and younger ages.
About 46% of patients were taking antiepileptic drugs, notably acetazolamide, levetiracetam, or lamotrigine.
Notably, 91% of respondents found that physical activity reduced symptoms, 100% were able to perform basic daily activities, and 36% were able to drive. Also of note, 19% have started a family and have children with GLUT1DS. This prompted some discussion on how pregnant patients should handle the KD (ketogenic diet), and on the urgent need for more studies evaluating the effect of the KD on pregnancy. There is speculation that such a KD might be protective for these patients and the fetus.
A recent update of the guidelines was published in Epilepsia and is available open access here: https://onlinelibrary.wiley.com/doi/10.1002/epi4.12225

The other recommendations were to monitor the urinary calcium/creatinine ratio, increase hydration, and take into account the risk of carnitine deficiency induced by the use of several antiepileptics. Carnitine levels should be monitored 1 and 6 months after starting the KD.

Finally, there was a presentation by a Sanofi scientist on gene therapy, motivated in particular by the recent success and FDA approval of two gene therapies, for RPE65-related disease and for SMA.

The rest of the day was devoted to a poster session, fairly small (fewer than 10 posters), but it was an enriching experience to have patients and a caretaker probe my scientific knowledge and to explain it to them. It was a challenging but rewarding experience, and some of the data aligned with other studies.

Thursday morning session:
There was a series of different panels, mostly patient-oriented and focused on the ketogenic diet.
The first speaker was Dr. Eric Kossoff (Johns Hopkins University), highlighting what interesting times these are for the keto diet. There are over 3,000 publications to date and 7 randomized controlled trials. It has become interesting enough for the American Epilepsy Society to hold a satellite session (or a session within the plenary) on the keto diet in general.
It would appear that the ketogenic diet alters the gut microbiota in mice.

a recent study cited in Cell: https://www.cell.com/cell/retrieve/pii/S0092867418305208?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867418305208%3Fshowall%3Dtrue
The authors reported a decrease in two types of bacteria (Akkermansia muciniphila and Parabacteroides spp.) in patients with refractory epilepsy. The keto diet was able to alter the microbiota within 4 days in mice. Interestingly, restoring these bacteria restored seizure protection. There are not yet any clinical studies supporting these claims.
There is also no reason to restrict fluids or calories.
All the diets seem valid; you choose the one you can stick to. However, it is recommended to adhere to the KD for <2 years and to the modified Atkins version for >12 years. About 80% of patients showed a greater than 90% reduction in seizures on the KD and the MAD, and 64% no longer need anticonvulsants. The keto diet seems very effective for cognition and dyskinesia.
Although stopping the KD in epilepsy patients can be considered about 2 years after becoming seizure-free, the consensus is to continue it in GLUT1DS patients.
The keto diet has been a real craze over the last two years, with some good information and a lot of bad information. It was recommended to use caution with some products labeled as keto-friendly.
Another discussion focused on the long-term effects of the ketogenic diet. Current data suggest that although elevations in total cholesterol and LDL were observed at 3 months, a normalization in the form of a decrease seems to occur. More longitudinal studies are currently underway.
There is no conclusive risk of birth defects in patients following the KD.
Finally, a quick note on CBD oil. Some patients admitted using CBD oil as an adjuvant and found no additional benefit. The restriction of CBD oil to the setting of clinical trials remains to be documented.

Thursday afternoon session:
I took fewer notes because it was mostly general, but there were some research highlights, including those from Dr. Umrao Momani (Columbia University) on selectively knocking down SLC2A1. It seems that mice with the gene knocked down in infancy (P2) and early life (P28) fared worse on motor performance compared to late-stage silencing. This was also accompanied by worsened seizures and decreased capillary density in the brains of these two groups.
Dr. DeVivo also reported on the issue of patients classified as GLUT1DS-like due to the presence of symptoms encountered in GLUT1DS patients, but showing no known mutations in SLC2A1 nor any known mutations in the SLC2A3 (GLUT3) gene.

These are some of the notes taken at the G1D conference. See you in San Diego in 2021!

[Sciences/Rare Diseases] Summary of the GLUT1 Deficiency Conference 2019 – Washington, DC

Last week, the GLUT1 Deficiency Conference 2019 took place at the Hilton Crystal City. The hotel is nicely located near Ronald Reagan National Airport (DCA), making the conference easy to reach, and the sessions were conveniently held on the 2nd floor, across different ballrooms. Before I go into details, I have to disclose that I attended only half of the first day, and mostly the sessions I considered relevant as a scientist. If anyone else wants to write a summary of the other sessions, please feel free to contact me and I will add it to this post.

Wednesday Afternoon Session:

The first session I attended was a research plenary session hosted by Dr. Juan Pascual (UT Southwestern), giving us an update on the ongoing NIH clinical trial of triheptanoin (TH). Nutrigenix was running a parallel clinical trial and, after initial results came in, decided to discontinue it.

Dr. Pascual's initial results reported a reduction in seizures in about 7 out of 12 patients on TH. The treatment was given for a defined time period (if I remember well, 6 months) and then stopped. Some patients continued to improve, while some showed a drop in their improvement. An interesting feature was the report of EEG changes suggesting a possible interplay between excitatory and inhibitory circuits.

The second session was given by Dr. McKenzie Cervenka (Johns Hopkins University), who has been surveying adult patients. An interesting feature is that all the children initially diagnosed with GLUT1DS in the 1990s are now adults, and we are learning about the disease as they age.

An interesting observation is that we have a glut of GLUT1DS diagnoses occurring in adults; in particular, reviews of medical history suggest that mild symptoms reported in their childhood may have been indicative of GLUT1DS but went under the radar due to the lack of proper diagnosis or clinical symptoms.
In adults, the disease seems to present as a mild, chronic encephalopathy/cognitive impairment with infrequent seizures. Varying degrees of spasticity and ataxia were reported, as was paroxysmal exercise-induced dyskinesia.

Although the recommended level of ketosis in children is about 5 mmol/L, such a level seems unreachable in adult patients.
There are no signs of sex or gender differences when it comes to occurrence (50%); the most common symptoms are ataxia (63%), cognitive difficulty (66%), and speech difficulties.
About 82% reported some triggers (excitement, stress or anticipation, heat, hunger, fatigue…), and 67% reported changes in symptoms when reaching puberty.

61% of the patients surveyed are on the ketogenic diet (52% on the classic version and 33% on the modified Atkins diet). About 41% of those on the keto diet were seizure-free in childhood.
Thanks to the recent awareness, the age of seizure-free patients is getting younger.
About 46% of the patients were on AEDs, in particular acetazolamide, levetiracetam or lamotrigine.
Notably, 91% of respondents found that physical activity reduced symptoms, 100% were capable of basic daily activities, and 36% were able to drive. Notably, 19% started families and have children with GLUT1DS. This brought some discussion about how pregnant patients should handle the KD and the urgent need for more studies to assess the effect of the KD on pregnancy. There is some speculation that the KD may be protective for these patients and the fetus.
A recent update to the guidelines was published in Epilepsia and is available as open access here: https://onlinelibrary.wiley.com/doi/10.1002/epi4.12225

Other recommendations were to follow the calcium/creatinine ratio in urine, increase hydration, and consider the risk of carnitine deficiency induced by the use of multiple AEDs. Carnitine levels should be monitored 1 and 6 months after initiation of the KD.

Finally, there was a presentation from a Sanofi scientist about gene therapy, in particular in light of the recent success and FDA approval of two gene therapies, for the RPE65 and SMA diseases.

The rest of the day was a poster session, pretty succinct (fewer than 10 posters), but it really was an enjoyable experience to have patients and caretakers ask about my science and to explain it to them. It was a challenging but enriching experience, and some of the data aligned with other studies.

Thursday morning session:

There was a series of different panels, mostly aimed at patients and about the ketogenic diet. The first speaker was Dr. Eric Kossoff (Johns Hopkins University), highlighting the interesting times for the keto diet. There are over 3,000 publications as of today, and 7 randomized controlled trials. It has become interesting enough for the American Epilepsy Society to hold a satellite session (or a session within the plenary session) about the keto diet in general. There is some indication that the ketogenic diet alters the gut microbiota in mice, based on a recent study in Cell (https://www.cell.com/cell/retrieve/pii/S0092867418305208?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867418305208%3Fshowall%3Dtrue).

The authors reported a decrease in two types of bacteria (Akkermansia muciniphila and Parabacteroides spp.) in patients with refractory epilepsy. The keto diet was able to alter the microbiota within 4 days in mice. Interestingly, restoring these bacteria restored seizure protection. There are not yet any clinical studies to support these claims. There is also no reason for fluid or calorie restriction.

All the diets seem to be valid; you choose the one you can stick to. However, it is recommended to adhere to the KD for <2 years and to the modified Atkins for >12 years. About 80% of patients had over 90% seizure reduction on the KD and MAD, and 64% no longer required anticonvulsants. The keto diet seems highly effective for cognition and dyskinesia.

Although discontinuation of the KD in epilepsy patients can be considered about 2 years after becoming seizure-free, the consensus is to keep it going in GLUT1DS patients.

The keto diet has really become a craze in the last two years, with some good information and also a lot of bad information. It has been recommended to exercise caution with some products labeled as keto-friendly.

Another discussion was on the long-term effects of the ketogenic diet. The current data suggest that although elevations in total cholesterol and LDL were reported at 3 months, a normalization in the form of a decrease seems to occur. More longitudinal studies are currently being performed.

There is no conclusive risk of birth defects in patients following the KD.
Finally, a quick note on CBD oil. Some patients admitted using CBD oil as an adjuvant and found no additional benefit from it. The restriction of CBD oil to the setting of clinical trials remains to be documented.

 

Thursday afternoon session:

I have fewer notes because it was mostly general, but there were some research highlights, including from Dr. Umrao Momani (Columbia University), who selectively knocked down SLC2A1. It seems that mice with the gene knocked down in infancy (P2) and early life (P28) fared worse on motor function compared to late-stage silencing. This was also accompanied by worsened seizures and decreased capillary density in the brains of these two groups. Dr. DeVivo also reported on the issue of patients classified as GLUT1DS-like due to the presence of symptoms encountered in GLUT1DS patients, but showing no known mutations in SLC2A1 and also no known mutations in the SLC2A3 (GLUT3) gene.

These are some of the notes taken at the G1D conference, see you in San Diego in 2021!

 

[Sciences/Neurosciences] Propionic Acid Induces Gliosis and Neuro-inflammation through Modulation of PTEN/AKT Pathway in Autism Spectrum Disorder (Abdelli et al., Sci. Rep. 2019)

A wise man once said: “Be always wary of scientific studies trumpeted by mainstream news outlets as groundbreaking. Once the smoke settles down, the study in question is rarely groundbreaking, but rather limited, with a lot of caveats.” If I had to summarize this study, it would be along those lines. Despite what news outlets have been selling, this is a study that has its own merits, but its methodological limitations and caveats outweigh its novelty and significance. The fact that the paper appeared in Scientific Reports, the Nature Publishing Group’s response to the open-access model, adds an extra layer of concern, as the prestige and quality of Scientific Reports have suffered major setbacks in the last few years due to papers retracted for blatant scientific misconduct that should have been spotted by reviewers.

About the authors: We have three authors. The first author seems to be a postdoc, as she apparently graduated from the same program in another lab. The second author may be an undergrad, although a faculty member with the same family name is listed. Finally, the senior author is a faculty member with expertise (based on the publication record) in gastrointestinal (GI) tract physiology and pathophysiology. However, none of them seems to have a history of publication in any field of neuroscience. This is an important point, because it explains a lot of methodological flaws that anyone with a background in neural stem cell biology (and brain development) could easily pick up.

1. Introduction and hypothesis: The authors basically build on the rationale of the metabolomic changes observed in ASD patients and reported by several studies. In these studies, there are some indications that certain patients on the spectrum (especially those qualified as severely disabled) display impaired GI function, in particular something similar to inflammatory bowel disease. There are studies suggesting that such a GI condition is associated with changes in the gut microbiota, yet with fairly low resolution (we are able to document changes in a family of bacteria, but not yet able to pinpoint down to the genus/species level). In particular, a study recently published in Cell (a very high impact factor journal) highlighted changes in mouse behavior and in the expression profile of several genes associated with autism following fecal transplant from patients on the spectrum considered severe (https://www.cell.com/cell/fulltext/S0092-8674(19)30502-1).

In this study, the authors speculate that certain metabolites biosynthesized and/or biotransformed by these classes of bacteria are contributing to the symptoms. In particular, the authors consider acetate (AC, CH3-COOH), propionate (PPA, CH3-CH2-COOH), and butyrate (BA, CH3-CH2-CH2-COOH) as potential culprits, citing studies showing elevated levels of these short-chain fatty acids (SCFAs) in fecal cultures of ASD patients compared to control patients. The authors also cite two rare genetic diseases, neonatal propionic acidemia (PA) and propionyl-CoA carboxylase (PCC) deficiency. PA will be very useful to us, because it will help set what we would consider a pathological level of propionic acid (PPA) in blood. Then comes the most speculative, and I would say the “jumping the shark” moment of the paper. The authors assume that since processed foods are rich in PPA, such an amount of PPA can lead to the development of ASD in the fetal brain during pregnancy. That is a lot of speculation with little or no supporting evidence. The authors are trying to make a statement four or five steps too far from the existing literature (and also lacking literature backing them up), for several reasons that can be identified as follows:

1.   Where is the literature providing evidence that these SCFAs cross the GI tract, and to what extent (bioavailability studies)?
2.   What are the plasma/serum levels of PPA (and other SCFAs) in neurotypical individuals versus patients on the spectrum? For patients suffering from PA, I have found an old study reporting serum PPA as high as 0.337-1.35 mM, with a normal level of about 0.00337 mM (https://www.jpeds.com/article/S0022-3476(81)80004-2/pdf).
3.   How much PPA can cross the blood-brain barrier? This is an important question to answer. We can try to build on the analogy of SCFAs to ketone bodies (acetoacetate, beta-hydroxybutyrate), which are formed when someone is fasting or put on a ketogenic diet. Considering that patients suffering from GLUT1 Deficiency Syndrome show improvement when put on a ketogenic diet (with BHB levels around 2-3 mM), we can speculate that these compounds cross the BBB readily.
4.   How much PPA can cross the placental barrier? I don’t have any clue either.

These talking points are important because they determine whether the experimental design of a study is sound or deeply flawed, which ultimately sets the quality of the paper and the robustness of its conclusions. Something I should mention by now is that both the authors and the news outlets have been very fond of superlatives, trying to sell this paper at a much higher level than it deserves. Not only is it scientifically inadequate to draw extraordinary conclusions without highly robust data to support them, it is nowadays dangerous to do so, as such studies will be used by scammers, charlatans, and other snake-oil salesmen to promote supplements claimed to “cure autism” and to claim their products are supported by science.

2. Materials and methods: The authors used neural stem cells (NSCs) derived from fetal tissue (obtained from Life Technologies/Thermo Fisher) and maintained in a classical medium formulation aimed at keeping these NSCs in their multipotent state. The cells were passaged no more than three times, according to the authors. This is important, as passaging NSCs/NPCs over time will coax them towards the astrocyte lineage at the expense of neurons (in terms of development, neurons appear and mature earlier than astrocytes).
Now there is something intriguing. The authors claim they looked at PPA and BA at concentrations of 0.1 mM (that would be physiological), 0.5, 1, and 2 mM. Considering such treatment is meant to reproduce the fetal brain, we have to factor in what amount is capable of crossing the three barriers: the GI barrier of the mother, the placental barrier, and finally the BBB. However, the authors never show what happened at these concentrations except at 2 mM, which is borderline lethal (remember? this is the level detected in newborns carrying the rare genetic mutation). Are the authors trying to model the effect of PPA in such diseases, or are they assuming that the amount of PPA in processed food will be high enough to put a pregnant woman into severe metabolic acidosis? I don’t know, but that looks like a red flag.
The differentiation of these NSCs into neurons/astrocytes was left to occur in a fairly random fashion, as the authors used the same medium used for NSC maintenance but without growth factors. I think there is an important issue here, as we may have significant variability in yield between passages, in particular in the neuron/astrocyte ratio (personal communication with Clive Svendsen).
Otherwise, nothing else really fancy: classical techniques found in any neuroscience study, such as immunocytochemistry, neurite outgrowth, and qPCR.

3. Results:
3.1. Figure 1: I am a bit perplexed by what I see versus what I get when it comes to the quantifications. The first issue I have is the lack of a scale bar. A scale bar tells you how many pixels equal a given length. For example, a 512×512-pixel image may indicate that 100 pixels equal 50 micrometers. Here, you have to trust that the experimenter did not fudge the data or crop the pictures, and is really showing you a 10× magnification.
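As a quick illustration of why that matters, here is a minimal sketch (my own, not from the paper) of the pixel-to-micrometer conversion, using the hypothetical 100 px = 50 µm calibration above; without a scale bar, any cropping or rescaling silently changes the result:

```python
# Convert a measured pixel length to micrometers using a scale-bar calibration.
# Calibration values are the hypothetical example from the text (100 px = 50 um).

def px_to_um(length_px, scalebar_px=100.0, scalebar_um=50.0):
    """Return a length in micrometers given a measurement in pixels."""
    return length_px * (scalebar_um / scalebar_px)

# A neurosphere measured at 240 px across:
diameter_um = px_to_um(240)   # 240 px * 0.5 um/px = 120.0 um

# If the image is cropped/rescaled by 2x without updating the calibration,
# the same object now spans 480 px and the naive conversion doubles:
wrong_um = px_to_um(480)      # 240.0 um -- off by a factor of 2
```

This is exactly why a missing scale bar makes the reported diameters unverifiable.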
For simplicity, I will focus on Day 10. My concern is about the health of the neurospheres in some of the groups, in particular the BHB-treated group. In controls, you can see nice rosette-shaped neurospheres with a dark core reflecting a dense mass of cells. In contrast, look at the BHB-treated group: these neurospheres are small, frail, and lack the morphology observed in controls. I wonder if BHB at this concentration is showing signs of toxicity. If so, the authors were not concerned at all by this issue, and that is worrying. If BHB is neurotoxic, how can we draw any conclusion about BHB as an inhibitor? There are ways to show the viability of these neurospheres: Hoechst staining, propidium iodide staining, Fluoro-Jade staining… Because of this important issue, I will not consider the BHB treatment valid.

Slide1

If you compare the data shown in Fig. 1B and 1C to the quantification made from Fig. 1A (shown at the bottom), you can see a certain discrepancy. I will skip over the issue in the y-axis labeling (the correct symbol for micrometers is µ (mu), not n), but compare the 10-day timepoint to the data we actually obtain from Fig. 1A. In scientific publications, you have to make sure that your representative blot/micrograph matches your quantitative analysis. In other words, what I see in the micrographs in Fig. 1A should be reflected in Fig. 1B and 1C.
Then explain to me why the differences in diameter I measured (using ImageJ’s built-in functions) do not match those displayed. How do the authors justify the use of SEM instead of SD, other than to make the graph look nicer (the data actually suggest much more variability, which would likely undermine the statistical difference)? How do the authors explain the 3× difference in neurosphere counts between my measurements and theirs? Did they crop the pictures? If so, they should have accounted for it; this highlights the importance of having a scale bar in micrographs and of normalizing such data to a surface area (e.g. pixels², µm², mm²…).
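On the SEM-versus-SD point: since SEM = SD/√n, error bars drawn with SEM shrink as n grows, no matter how variable the underlying measurements are. A small sketch with made-up neurosphere diameters (illustrative numbers only, not the paper’s data) shows the difference:

```python
import statistics as st

# Hypothetical neurosphere diameters in micrometers -- illustrative only.
diameters = [95, 120, 140, 80, 165, 110, 150, 100]

n = len(diameters)
sd = st.stdev(diameters)   # sample standard deviation: spread of the data
sem = sd / n ** 0.5        # standard error of the mean: sqrt(n) times smaller

# SD tells the reader how variable the neurospheres are; SEM only tells how
# precisely the mean is estimated, and always looks tighter on a graph.
print(f"mean = {st.mean(diameters):.1f} um, SD = {sd:.1f} um, SEM = {sem:.1f} um")
```

With these numbers, SD is roughly three times the SEM, which is why SEM bars can mask biological variability.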

3.2. Figure 2: These are immunofluorescence pictures of plated neurospheres allowed to differentiate on their own on Matrigel-coated plates (Cultrex). The pictures are okay, although not very convincing for some, and certainly not suitable for quantification. Flow cytometry is definitely the go-to when it comes to assessing cell populations.

Slide2

We also have mixed results here, and important cell markers are missing. First, the authors should have performed nestin staining, as nestin is one of the markers present in NSCs/NPCs. Second, the use of GFAP as an astrocyte marker has to be taken with a big grain of salt; I am not sure experts in the field would have let this fly with just one marker, as GFAP can be expressed by NSCs and NPCs. Showing at least two markers per cell lineage (NeuN/bIII tubulin for neurons, S100B/GLAST1 for astrocytes) would have been much more convincing. The bIII tubulin antibody (in particular the one used in this paper) is known to show very strong non-specific staining. A good bIII tubulin stain would show nice neurites. I have attached a picture of an iPSC line developed by Sigma-Aldrich; you can use it to compare what good bIII tubulin and GFAP staining should look like in NPC-derived astrocytes. Here we just have some blobs (indicating possible non-specific immunoreactivity) that dangerously overlap with GFAP. Technically, you cannot have a neuron that expresses both GFAP and bIII tubulin; it is one or the other, but not both. R&D Systems has a nice interactive map showing the different cell markers expressed by the neural lineage as it differentiates into neurons and astrocytes: https://www.rndsystems.com/pathways/neural-stem-cell-differentiation-pathways-lineage-specific-markers

What am I supposed to do with that?

3.3. Figure 3: The authors looked at both GFAP and bIII tubulin at the mRNA level (by PCR) and the protein level (by ELISA). I would personally have put the PCR data first, followed by the ELISA data. The PCR data were normalized to GAPDH and the DeltaDeltaCt method was used, which is good; the authors also represented the apparent bIII tubulin- or GFAP-to-GAPDH ratio, which is good. However, I am more skeptical of the ELISA data. Why? The data are represented as micrograms of protein per microliter of cell extract. I wonder why the authors did not run a Western blot analysis for these two proteins, since you would expect them to be abundantly expressed. The authors also forgot to mention whether they diluted the samples or just added the crude extract as-is. This is important because you can easily blunt the accuracy of your ELISA. LSBio is honestly a cheapskate when it comes to showing a standard curve, unlike more established ELISA kit manufacturers such as R&D Systems or Abcam, which will show you their standard curves and tell you the coefficient of variation. The maximum concentration of the standard curve is 1000 pg/mL (1 ng/mL), with a detection range of 15.63-1000 pg/mL and a sensitivity of 9.38 pg/mL. The concentrations the authors reported for GFAP were 0.8-3 pg/mL according to their graphs. Something is wrong here, and at least two reviewers completely missed it. Are the authors telling me that they were able to detect GFAP and bIII tubulin below the sensitivity level (9.38 and 313 pg/mL, respectively)? Give me a break! I would also have advised the authors to normalize their concentrations to something meaningful, like mg of total protein. It is easy to take a fraction of the cell lysate and measure the total protein concentration by BCA. I ask my students, whenever they use an ELISA to quantify a cellular protein, to normalize the amount detected (pg/mL) to the total protein concentration (mg/mL), which allows us to normalize the data.
Failing to do this normalization is like showing a Western blot without a proper loading control (e.g. actin, GAPDH…).
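Here is what that normalization looks like in practice, as a minimal sketch (all numbers are illustrative; only the 15.63-1000 pg/mL kit range is taken from the text above):

```python
# Normalize an ELISA readout (pg/mL of target) to total protein (mg/mL, by BCA),
# and reject values falling outside the kit's validated range. Numbers are
# illustrative, not from the paper.

KIT_RANGE_PG_ML = (15.63, 1000.0)  # detection range of the standard curve

def normalize(target_pg_ml, total_protein_mg_ml, dilution_factor=1.0):
    """Return target abundance in pg per mg of total protein."""
    lo, hi = KIT_RANGE_PG_ML
    if not lo <= target_pg_ml <= hi:
        raise ValueError(
            f"{target_pg_ml} pg/mL is outside the kit range {KIT_RANGE_PG_ML}; "
            "the value cannot be trusted (dilute or concentrate the sample)."
        )
    # Undo the dilution, then express per mg of total protein.
    return target_pg_ml * dilution_factor / total_protein_mg_ml

# A valid reading: 250 pg/mL GFAP in a lysate containing 2 mg/mL total protein.
print(normalize(250.0, 2.0))  # 125.0 pg per mg protein

# Readings in the paper's reported ~0.8-3 pg/mL range would be rejected here:
try:
    normalize(0.8, 2.0)
except ValueError as err:
    print(err)
```

Reporting pg of target per mg of total protein makes wells with different cell densities comparable, which raw pg/mL does not.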

3.4. Figure 4: In this figure, the authors try to show the expression of GPR41 (aka free fatty acid receptor 3, or FFAR3) in these cells. Honestly, this is my breaking point. First, the authors engaged in some cherry-picking of the data, showing only the PPA treatment in astrocytes (where is the BA treatment? Where is BHB?) and only the BA treatment in neurons (where is PPA? Where is BHB?).
I am also very skeptical that what the authors call astrocytes really look like astrocytes. What we see in Fig. 4A looks very similar to 4B: very thin cytoplasmic projections resembling neurites. Only neurons form neurites in culture; astrocytes have a flatter, sometimes somewhat fusiform, shape. Again, GPR41 protein expression only really goes up when tons of PPA are given (mM and above). How did this get through peer review unabated, with at least two reviewers failing to notice this gross conundrum in the data?

4. Rest of the figures and conclusion: I could go on with this paper. It looked very interesting and promising, but the authors’ lack of expertise quickly percolated into loose and inconclusive data. This is the kind of paper for which you wish the authors had sought feedback from across the street, from faculty with a neuroscience background, who would give them honest feedback to make the paper good and scientifically sound. What we have instead is a half-baked study, served as the next big thing since sliced bread. Not only are the data far from supporting the claims made by the authors (I would probably accept it as a possible model of PA or PCC deficiency), but this paper IS ABSOLUTELY NOT SHOWING THAT PPA CAUSES AUTISM, for several reasons listed below:

1.   It does not account for the PPA levels found in healthy individuals, let alone provide a study showing PPA levels in people eating processed foods (if such a dietary habit even leads to such an outcome).
2.   It does not consider that, to be valid, the authors would have to show 100% bioavailability of PPA across the GI barrier, the placental barrier, and the BBB, none of which is reported or cited in any credible form.
3.   It does not account for the fact that the levels used are so ridiculously high that a pregnant mother would be dealing with potentially lethal metabolic acidosis.
4.   It also ignores that BHB was showing signs of neurotoxicity.
5.   There is a worrisome pattern of data cherry-picking, with groups popping in and out intermittently, sometimes even in a complacent manner. This is a no-no, unacceptable behavior that has no place in any respected peer-reviewed journal. Why did the reviewers overlook this issue?
6.   There are several inconsistencies in the data, whether the axis labels are botched or the authors really reported measurements that were normally impossible to reach (below the detection limit).

This paper should at least have received a “major revision” to fill the gaps. Yet it went through at least two reviewers, and none of them were able to see the obvious methodological flaws. As an occasional reviewer for Sci Rep, I am very concerned about the quality of review provided by the journal in recent years, especially in light of the series of retractions. Conclusions? The news outlets have been trying to sell an overhyped paper that does not hold up under scrutiny. This is just “same old, same old” when it comes to journalistic reporting on science (trying to pass it off as groundbreaking), but it also sets a dangerous precedent. I will bet that within 12 months, some quack doctors and snake-oil salesmen will claim they can cure autism by selling you supplements aimed at reducing PPA, or a dietary-fad book claiming it will cure your child’s autism through dietary restriction. I guess the keto diet will soon join the casein-free/gluten-free diet as yet another fad served as dietary torture to children on the spectrum.

[Computer/PC] Someone’s trash is someone else’s treasure. Rescuing a 2002/2003 dream PC from the dumpster.

“Someone’s trash is someone’s treasure.” That’s the kind of sentence I like. I am not a hoarder, but I am an avid (and budget-minded) retro gaming enthusiast with a knack for old computers.
You could say that I grew up right as the home computer and video game market grew. I was born with a 2600 joystick in my hand (well, to be honest, I was 3 years old when my uncle brought an Atari 2600 home and I got to touch the iconic joystick), and I developed an interest in computers and gaming early on. I still remember my first contact with a computer (a Thomson MO-5 in my elementary school, with the light pen and the “turtle” sitting in the room, never to be used, unfortunately). Over the years, a few computers made it home, given by someone or found God knows where, but they never worked (I remember that we had an Alice 90 at some point, and a Yashica YC-64 MSX-based computer, but both lacked their cables and were therefore unusable).
I still have fond memories of my first computer that actually worked, an Amstrad CPC6128. It came with all the bells and whistles as a super-duper bundle pack: the Amstrad 6128 keyboard, the dedicated color monitor with its TV tuner base (sitting right under the monitor, with SCART and RF inputs and small speakers, no less!), a 15-game pack (including Rambo, Galivan, Cauldron, Sorcery…), and a computer desk!
The Amstrad CPC6128 unfortunately did not last more than a year, thanks to the infamous 3″ floppy drive, and was quickly replaced by an Atari 520ST. That one lasted over 5 years before dying… from a defective floppy drive (and from having the joystick/mouse port desoldered). That was the beginning of a hiatus between me and computers. I played on my friends’ computers for a while, but it took until 1997 for me to finally end the hiatus and get back to PCs. I bought my first PC back in 1997 with the help of my big brother’s friend. Because he was in Paris, he could get used components very cheap, and he built my first PC (a Pentium 120 with 16MB of RAM on a generic Taiwanese Intel 430FX-chipset motherboard, an S3 Trio, a 1.2GB WD Caviar, a 12x CD-ROM drive, an ESS AudioDrive, and a VGA screen that could only do 640×480@60Hz by default). That was my first PC, and since then I have been mounting and building my own, gathering parts and buying stuff on the cheap, one component at a time.
To be honest, I dropped fixing computers for fun over 15 years ago, when I switched to a Mac as I was starting grad school. Since then, I have mostly been using Macs and have almost never had to put my hands inside a PC tower.
But about a year ago, the PC-building flame somehow reignited. It was maybe triggered by watching YouTube videos on various channels, including Linus Tech Tips, Lazy Game Reviews, The 8-Bit Guy, Metal Jesus Rocks, or Nostalgia Nerd. Let’s say that seeing computer builds, and especially old PCs being restored to their former glory, really got to me.
A couple of months ago, as I was dumping the trash in the dumpster lining the back alley, I noticed that one of the neighbors had apparently been cleaning out his garage/man cave as they closed on selling their house and prepared their move. Not one but three computers, plus lots of computer stuff, had been dumped in the dumpster. Most of it was junk (old cables and adapters, incomplete PC games…). One computer was a Dell Pentium III, but DOA. However, there were two Antec cases in that dumpster. One was just an empty case (a big grey tower), but the second was an almost complete PC. All it needed was a hard drive, a mouse/keyboard, and a screen. I tried my luck at “finders keepers” and rescued it, along with some cables (80-conductor IDE cables, yum! Pretty useful if you are modding OG Xboxes), and kept it in the garage until I had time to spend effort on it (let’s say April/May were crazy months on my professional side, sucking up my free time over the weekends).
Lo and behold, this computer was almost a time capsule of my college years, somewhere in the 2002–2004 period. The PC was an Athlon XP 1700+, mounted on an ASUS A7N8X Deluxe (no less!) built around an NVIDIA nForce2 chipset and harboring on-board SATA RAID (those were the early days of SATA), 1GB of DDR RAM (2×256MB and 1×512MB), 2 Ethernet connectors (yikes!), and 6 USB connectors (including 4 USB 2.0).
Accompanying this beefy beast was a BFG GeForce 6600 card (my first time encountering a graphics card needing a Molex connector; I remember seeing them on the 3DFX Voodoo 4 but never had a graphics card needing additional wattage) and two optical drives (a DVD burner and a CD burner). The whole thing was mounted in an Antec case. Back in the day, Antec cases were fairly high-end and pricey, much more so than the generic Taiwanese cases. But you have to give them credit: the design is pretty good, mostly screwless and really accessible, making it easy to swap components. This was probably quite the beast during my college years. My own last PC before switching to a Mac was an Athlon XP 1800+ with 256MB of RAM, a GeForce 2 GTS (a Leadtek, if I remember well), and 40GB and 100GB WD Caviar IDE drives (one for the OS, the other for my personal data).
After taking it apart piece by piece and giving it the TLC it greatly needed (the dust of West Texas can be very unforgiving to electronics), I reassembled it and rewired the cables (Antec nicely designed a space behind the drive bays that serves as a corridor to route cables through).

Now the beast was restored to all its glory: cleaned up, and rewired to improve the airflow as much as I could. I also took the opportunity to replace the 1700+ with a 2400+ processor bought on eBay for a couple of bucks, applied some Arctic Silver 5 thermal paste between the CPU and the cooler (the original paste must have long dried out), replaced the DDR RAM with 2×512MB sticks (gutted from computers in my stock) to enjoy the dual-channel feature of the motherboard (you can identify the paired slots by their blue color; using them, however, means you cannot utilize the third memory bank), and added a Seagate 320GB SATA drive I had lying around.

Considering the case is from 2002 (QC stamp on the bottom), the design is pretty cool, with an air funnel located just over the CPU fan. Removing the 5.25″ drives is sleek and effortless, with no screws needed. These are things we might laugh about in 2019, with all the fancy cases available, but 20 years ago such options were simply non-existent on the average beige PC.
The only issue I had was the front USB ports, which gave me hell: they simply did not recognize anything plugged in. I gave some TLC to the front plastic and cleaned the small USB board that provides the front connectors and leads to the motherboard via a pin header. I also tweaked the jumper setting the voltage on the USB connector (after reading the manual). Lo and behold, it worked! It recognized a flash drive that had some drivers on it. That is great, because it gives me easy access to plug in USB controllers like a generic GameStop Xbox 360-compatible wired controller.
For the display, I also went cheap and acquired a Dell E197FP monitor on Facebook Marketplace for $10. I wish I could have found a monitor with the USB hub, DVI connector, and underneath speaker unit (really handy when you want to keep a compact setup), but this one will do the job. The nice thing is its old 4:3 aspect ratio, topping out at 1280×1024 on a 19″ panel. That is about the same as my good old Iiyama 17″ screen from back then, and most importantly it matches the resolution you want for emulation, especially for arcade cabinets using MAME.
For the choice of OS, I was divided between Windows XP and Windows Vista, since I had found a copy of Windows Vista Ultimate Edition in a thrift store a couple of years ago for a buck. The advantage of Vista over XP would be a better chance with end-of-life support. The concern was of course performance and CPU overhead (Vista came out in 2006, about 5 years after this generation of machines; it would be like trying to install Windows XP on a Pentium II. Yes, you can, but the CPU will remain a performance bottleneck). Vista gave it a 3.2 overall Experience Index score, which seems okay. But wanting to stick to the era won out and led me to wipe the drive and install Windows XP on that machine.
The biggest issue was dealing with the SATA driver. The SATA chipset on this board is a Silicon Image 3112 RAID controller. Usually, the driver would come on a 3.5″ floppy that you insert during XP installation. But I have no floppy drive for this puppy and don’t really need one (I am not planning to do early MS-DOS gaming on this one, as the motherboard has no ISA slot for a Sound Blaster, and PCI Sound Blaster emulation is not great. Honestly, I am better off with DOSBox for that purpose).
I found an XP ISO image on the Net that was already patched to SP3 (very important, because getting hold of the last service pack for XP is not easy these days) and slipstreamed with a bunch of SATA drivers. Although the first phase went well (XP detected the drive, formatted it, and copied the first batch of files), it would irremediably fail on the first reboot with a 0x0000007B BSOD. This error is commonly reported with XP/Vista/7 and comes down to how the OS deals with the SATA interface (on some machines, you can ask the BIOS to treat SATA as legacy ATA instead of AHCI; unfortunately, not on mine. In addition, the lack of support for booting from external USB drives put my USB floppy drive out of the game). The solution I found was to pair a more recent SATA driver with the latest BIOS (the motherboard got its last official BIOS update in 09/2004). Using nLite, I created a new ISO that included the latest XP driver I could find (Silicon Image is long gone, and I had to search the Internet for it), and it booted flawlessly.
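For the curious, what nLite does under the hood when you integrate a storage driver in “textmode” is roughly what the old F6 floppy method did: it registers the driver with XP’s text-mode setup via a `txtsetup.oem` file that maps the controller’s PCI hardware ID to the driver files. The sketch below shows the general shape of such a file; the exact file names and section labels here are illustrative (from memory of the format, not copied from the real Silicon Image package), so treat it as an assumption rather than the actual driver contents:

```
; txtsetup.oem — rough sketch of a textmode SATA driver definition
[Disks]
; "d1" names the driver disk and the directory holding the files
d1 = "Silicon Image SiI 3112 SATA RAID driver", \si3112r.sys, \

[Defaults]
scsi = si3112r

[scsi]
si3112r = "Silicon Image SiI 3112 SATA RAID Controller"

[Files.scsi.si3112r]
; driver binary and INF that setup copies during the text phase
driver = d1, si3112r.sys, si3112r
inf    = d1, si3112r.inf

[HardwareIds.scsi.si3112r]
; PCI vendor/device ID that tells setup which controller this driver owns
id = "PCI\VEN_1095&DEV_3112", "si3112r"
```

When the ID in the `[HardwareIds.scsi.*]` section matches the controller setup finds on the PCI bus, XP loads the driver before touching the disk, which is exactly what avoids the 0x0000007B stop error at the first reboot.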
Finally! I got the best of both worlds: Windows XP and a SATA hard drive. The machine has been running great so far, although with fairly noisy fans running at full speed all the time (despite being plugged in to benefit from the Q-Fan feature of the ASUS motherboard). It is almost like having a hair dryer, a stark contrast to the Dell OptiPlex 7010 I revamped earlier this year (those are pretty good potato PCs that, for less than $200, give you a decent Windows 10 machine; my son uses one to play Fortnite).
As of now, I have just finished installing the OS and its patches, as well as the drivers (NVIDIA supported this GeForce for almost 10 years, with the latest WHQL driver dated 2013). My main goal is to somewhat recreate the experience I would have had with this machine 16 years ago. I must have some archived ISOs of Office 2003 somewhere; I installed Winamp to recreate my MP3 experience, PowerDVD as a DVD player (I am looking for options to region-hack the DVD-ROM drive, as I have a ton of Region 2 DVDs I brought from France when I emigrated to the US), and emulators for anything 8- to 16-bit (the SEGA Genesis/SNES was the best we could emulate back then, with only burgeoning attempts at the SEGA Saturn/Sony PlayStation or Nintendo 64).
One of the niceties is that GOG.com gives you standalone executable installers for the games you purchase on their website, allowing me to run them on Windows XP. That is exactly what I hate about Steam: Steam dropped XP support last year. That means you cannot play your games on such a machine anymore, and Steam does not seem to let you keep an offline copy. So much for buying Doom 3 for a couple of bucks yesterday, a game only compatible with XP/Vista-era machines (to play it on Windows 7 and later, you have to buy the BFG Edition, a remastered version designed for newer machines).
That would basically let me run on this machine the games I played from 1994 up to 2004. I also have a bunch of educational and game CDs I bought for the kids back in the day through garage sales, all stored in a Napster CD sleeve (yep, apparently Napster licensed its name to some CD sleeves).
This also gives me an excuse to roam Goodwills and other thrift stores to find old games that mostly run on XP. Ideally, I would like to find games from that era (or a bit earlier), as well as a PS/2 keyboard (they seem rare to find nowadays) to plug into that puppy. I also need to find a small table, ideally a compact computer desk just for it. I remember IKEA having a very nice and compact computer desk (its name was MIKAEL, if I remember) that was discreet yet well conceived for desktop PCs.
For some people, it was an obsolete machine gathering dust, but for me it is a nice time capsule, like an old and rusty oil lamp with something magical about it. Someone’s trash is someone else’s treasure.