
Category: Analysis

Losing the battle against coal

Public opinion in most countries seems to favor renewables as the future source of carbon-free power. Nuclear power is often regarded as a thing of the past, an option that is “too expensive” or “too risky” to replace coal. In reality, when we put our money into renewables, we snatch defeat from the jaws of victory.

Renewables don’t get us far enough

These perceptions are one of the main reasons that the EIA International Energy Outlook 2013 has global electricity production doubling by 2040, with the mix of power sources virtually unchanged. That is, everything doubles, including coal and gas:

[Figure lost_battle_eia: EIA International Energy Outlook 2013 projection of global electricity generation by source]

This prognosis puts the hopes of the renewables’ crowd to shame. Nuclear power remains the only proven option to combat fossil generation, but its growth is severely hampered by the ever-increasing and largely unnecessary regulatory burdens.

Pace of technology adoption

In the graph below, with data mostly from the BP Statistical Review of World Energy 2013, the rapid pace of nuclear penetration in pioneer countries is obvious. When you get off the starting blocks with nuclear, 50% or more can be reached in a mere decade. Just as obvious is the comparatively slow pace of wind and photovoltaic adoption.

[Figure lost_battle_penetration: pace of nuclear, wind and photovoltaic penetration in pioneer countries, data mostly from the BP Statistical Review of World Energy 2013]

Denmark does have a fairly steep wind curve recently, but we should remember that Denmark is a very small country that relies heavily on its neighbors’ power grids for balancing. Larger countries and areas such as Germany don’t have larger neighbors and cannot easily integrate that much wind power.

Germany: The Black Sheep of Europe

Germany, the major industrial power of Europe, is the prime example of energy policy gone totally wrong. Last year it actually increased its coal generation by 3.9%, from 76.0 million tonnes of oil equivalent (Mtoe) to 79.2 Mtoe. Half of Germany’s electricity comes from coal, but it could have been rid of all of it already had it pursued nuclear power instead of wind and solar. Here’s a table of German investment costs:

[Figure lost_battle_costs: table of German investment costs in renewables]

As can be seen, electricity investments in renewables amounted to a whopping 60 billion euros from 2000 until 2008. That money could have bought some 15 nuclear reactors, with an output of some 140 TWh/year, already completed today. The investments from 2009-2012, i.e. 76 billion euros, would suffice for an additional 19 reactors with an output of some 180 TWh/year, to be completed in the coming few years. Instead of these 140+180 = 320 TWh/year of electricity over a 60-year lifetime, Germany now has 60 TWh/year of wind and photovoltaics with 20 years of life. Germany’s coal generation is some 280 TWh/year, so had the money been put into nuclear, coal would soon be history.
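
As a rough sanity check, the arithmetic above can be condensed into a few lines. The per-reactor figures of about 4 billion euros and 9-10 TWh/year are implied by the numbers in the text, not official cost estimates, so treat this as a back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the reactor arithmetic above.
# Assumed unit figures, implied by the text rather than official data:
# ~4 billion euros per reactor, ~9.5 TWh produced per reactor and year.
COST_PER_REACTOR_BEUR = 4.0
TWH_PER_REACTOR_YEAR = 9.5

def reactors_from_budget(budget_beur):
    """Roughly how many reactors a budget (in billion euros) would buy."""
    n = int(budget_beur / COST_PER_REACTOR_BEUR)
    return n, n * TWH_PER_REACTOR_YEAR

for label, budget in [("2000-2008", 60.0), ("2009-2012", 76.0)]:
    n, twh = reactors_from_budget(budget)
    print(f"{label}: {budget:.0f} bn EUR -> ~{n} reactors, ~{twh:.0f} TWh/year")

# For comparison: ~280 TWh/year of German coal generation, and ~60 TWh/year
# actually obtained from wind and photovoltaics for the same money.
```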

Consequences of failed policy

What’s more, had Germany pursued the nuclear path, it wouldn’t have locked in a FiT surcharge of 5 euro cents or more per kWh for many years to come, and unnecessary cancer deaths would be down by thousands each year! Note: This does not include costs external to the intermittent power sources themselves, for strengthened grids and for industries to create power backup solutions to handle frequency fluctuations. And, of course, hundreds of millions of tonnes of CO2 wouldn’t have been released into the atmosphere.

The Future

Renewables investment in Germany, for electricity only, from 2010-2050 is put in the table above at a staggering 149+407 = 556 billion euros (there are higher estimates, up to a trillion euros). This excludes transmission upgrades, backup power plants, demand-side management costs, industry consequences and more, yet it would suffice to build some 140 nuclear reactors producing 1300 TWh/year – probably more, considering economies of scale. Germany today consumes 600 TWh/year and wants to cut that by 25% in order to reach its goals.
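
Applying the same assumed per-reactor figures as in the sketch above (again, implied by the text rather than official estimates):

```python
# Same rough arithmetic for the projected 2010-2050 renewables spending.
COST_PER_REACTOR_BEUR = 4.0   # assumed, as in the sketch above
TWH_PER_REACTOR_YEAR = 9.5    # assumed, as in the sketch above

budget_beur = 149 + 407                          # 556 billion euros
reactors = int(budget_beur / COST_PER_REACTOR_BEUR)
output_twh = reactors * TWH_PER_REACTOR_YEAR

consumption_twh = 600                            # Germany today
target_twh = consumption_twh * (1 - 0.25)        # after the planned 25% cut

print(f"{budget_beur} bn EUR -> ~{reactors} reactors, ~{output_twh:.0f} TWh/year")
print(f"Consumption today: {consumption_twh} TWh/year, target: {target_twh:.0f} TWh/year")
```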

While there are signs that Germany’s consumers, voters and politicians don’t have the stamina to continue on this painful and polluting path for long, some of it, as the EIA’s projection shows, will be replicated in the rest of the world. The world could get rid of coal in short order if it puts its money on the right horse. Will public opinion and regulatory regimes allow that? You decide!


Joseph Mangano never stops, and he never gets it right

Joseph Mangano has once again puffed too hard on the alarmist pipe, now with a new article in the August 15 edition of the political newsletter Counterpunch. We recognize the pattern from before: first spread a bit of scaremongering disguised as research in some fringe media. You mix the alarmist message with some caution in order to cover your back in case somebody puts you to task, knowing that the alarmist part will advertise itself, get inflated and spread through the internet and possibly some news media. Then some time later you publish an extended study with a similar message in a scientific journal with a low quality threshold.

Joseph Mangano seems happily surprised that people once again are falling for his junk science.

This time the title of the Counterpunch article starts with the rather cynical Let the Counting Begin followed by Fukushima’s Nuclear Casualties. It is just a calculation exercise for Joe, and it could have been an interesting one if it weren’t for the fact that:

  • he is counting dead people in Japan during 2011, claiming that the cause of death for 38,700 of them is unexplained, with the implication that radioactivity from Fukushima is the cause, and
  • a closer scrutiny shows that once again he is handling the data in a very irresponsible way in order to push his own anti-nuclear agenda.


Friday arithmetic, the end of cheap energy?

It’s Friday, a thunderstorm is raging outside and I am on vacation, so there is not much else to do than some simple calculations. On a Swedish blog I made a comment regarding how everyone is ignoring the uranium elephant when it comes to energy discussions. Pessimism abounds regarding the reserves of fossil fuels, and doomsday is commonly predicted. But I have long held the view that nobody can seriously claim humanity will ever run out of energy (though that doesn’t exclude bumps on the road). Many might find that a naive idea, but some very simple arithmetic proves my point.


The first WHO and UNSCEAR reports on the health consequences of Fukushima

are on the way… Nature has an article about it; here are some highlights.

The risk to the roughly 140,000 civilians who had been living within a few tens of kilometres of the plant seems even lower. Because detailed radiation measurements were unavailable at the time of the accident, the WHO estimated doses to the public, including radiation exposure from inhalation, ingestion and fallout. The agency concludes that most residents of Fukushima and neighbouring Japanese prefectures received a dose below 10 mSv. Residents of Namie town and Iitate village, two areas that were not evacuated until months after the accident, received 10–50 mSv. The government aims to keep public exposure from the accident below 20 mSv per year, but in the longer term it wants to decontaminate the region so that residents will receive no more than 1 mSv per year from the accident.

The WHO’s calculations are consistent with several health surveys conducted by Japanese scientists, which found civilian doses at or below the 1–15-mSv range, even among people living near the plant. One worrying exception is that infants in Namie town may have been exposed to enough iodine-131 to receive an estimated thyroid dose of 100–200 mSv, raising their risk of thyroid cancer. But data collected from 1,080 children in the region found that none had received a thyroid dose greater than 50 mSv. Chernobyl’s main cancer legacy in children was thyroid cancer.

But most important is this:

A far greater health risk may come from the psychological stress created by the earthquake, tsunami and nuclear disaster. After Chernobyl, evacuees were more likely to experience post-traumatic stress disorder (PTSD) than the population as a whole, according to Evelyn Bromet, a psychiatric epidemiologist at the State University of New York, Stony Brook. The risk may be even greater at Fukushima. “I’ve never seen PTSD questionnaires like this,” she says of a survey being conducted by Fukushima Medical University. People are “utterly fearful and deeply angry. There’s nobody that they trust any more for information.”

Too bad that people like Sherman and Mangano, Gundersen, Busby, Caldicott, Matsumura and a host of other people and their fan clubs within the “environmental movement” are doing everything they can to spread excessive and scientifically unfounded fear of radiation.


Chris Busby and the Fallujah sex ratio – Part 2 (incompetence)

In the previous post, it was noted that Busby’s claims about a deviating sex ratio in Fallujah (first article here, second article here) may not be as significant a finding as he makes it sound, and that Busby is well aware of this but doesn’t change his approach. There are weaknesses in the study, both methodological and due to the difficult circumstances under which it was performed. So, in the absence of other data, the results of the study may be of interest, and if properly designed the survey may give much better results than other kinds of surveys. But with the weaknesses in mind it would be reasonable to expect a more humble approach from Busby and co-authors about the conclusions, if they are serious about it, that is.

One of Busby’s most significant findings, according to himself, is the deviating sex ratio for the children born in the years after 2004, the year of the battle of Fallujah. The first study shows a decrease in the number of boys with respect to the number of girls, 18% below the normal level (860 boys to 1000 girls instead of the expected 1055 boys to 1000 girls). According to Busby this must be due to mutagenic stress induced by radioactivity from uranium. To support this theory he cites studies about lower sex ratios when the parents have been exposed to uranium in mines, medical radiation treatment, and the Hiroshima bomb. So if we ignore the weaknesses of the study, we may agree with Busby that an 18% reduction in the number of boys born is interesting.

The problem is that he consistently ignores all other possibilities. The Wikipedia page on human sex ratio gives a number of environmental and sociological reasons for deviations in the sex ratio. Busby does not mention a single one of them. Considering the heavy fighting in the city, there may also be further reasons for deviations, including stress and the simple fact that maybe people do not put priority on making babies when their homes and a good fraction of the city (and the country) have been smashed into rubble. Whether uranium-based weapons were used at all in Fallujah is an open question; there are opposing views on the issue (they have surely been used in other parts of Iraq). If we assume that uranium-based weapons were used, then we would expect Busby to at least mention the known chemical toxicity of uranium. Instead he puts all the emphasis on the radioactivity of uranium; his theories about radioactivity are the only thing that matters for this self-proclaimed international expert on radiation.

Considering how many attempts Busby has made with epidemiological studies (and failed badly with some of them) it is quite remarkable that he still has not learnt to be cautious with the most important parameter: low statistics. Furthermore, with all the possible causes that he excludes, he never asks the question: Is a deviation in sex ratio always due to mutagenic stress from radioactivity? Is it due to mutagenic stress at all? Let’s find out.

As noted in an earlier blog post, Busby is quite upset with the Swedish Health Authorities (Socialstyrelsen) for not letting him use their cancer statistics database, apart from the data that are publicly available and that he already mistreated last year. I have some good news for him: he can play with another database, the one from Statistics Sweden (Statistiska Centralbyrån, SCB), which has a lot of interesting data on the Swedish population. Let us use this database to check the sex ratio for a few cases. Let us start by checking the sex ratio for the entire population of Sweden, i.e. the number of boys born every year divided by the number of girls born every year. As in Busby’s article we normalize to 1000 born girls and expect the sex ratio to be around 1055.

 

Sex ratio for Sweden during the time period 1968-2010. Source: www.scb.se

We see that the sex ratio is indeed very close to 1055; the average is 1058. And it fluctuates very little, staying well within the span 1040-1070, with minor statistical deviations. But then we have lots of statistics when we use the number of children born in all of Sweden. So let us look at the same situation for a medium-sized city in Sweden, for instance Avesta, the city where I was born. We use a blue line for the sex ratio for each year, and a red line for the 5-year average:

Annual sex ratios, and 5-year averages for the Swedish city Avesta (population 21507).

Interesting fluctuations indeed! Busby has made a lot of fuss about the level of 860 boys to 1000 girls, this in a total cohort of 4843 persons. The population of Avesta is more than 21 000, so we should have more than enough statistics to make a fair comparison and even err on the side of caution. We see that the sex ratio (blue line) fluctuates year by year around the expected value of 1055 boys to 1000 girls, though in some years the sex ratio is down to 800 boys to 1000 girls. But in the Fallujah study the data were shown as cohorts of 5-year averages. The red line shows the 5-year averages for Avesta. This curve does not fluctuate as drastically as the blue line; the extreme values cancel out. In spite of this we see a drastic decrease in the sex ratio over the last five years, going down to slightly less than 900. This is not as low as the value of 860 in Fallujah. But it is based on better statistics, and from a trustworthy source that probably has the numbers correct down to each individual child. And to my knowledge Avesta has not been bombarded with uranium-based weapons recently.
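
For anyone who wants to reproduce the curves, the calculation is trivial once the SCB birth counts are downloaded. The sketch below assumes the data sit in a CSV file with made-up column names (year, boys, girls) and uses a centered 5-year average; the exact windowing is a minor detail:

```python
# Minimal sketch of how the annual sex ratio (blue line) and its
# 5-year average (red line) can be computed from SCB birth counts.
# Assumes a CSV with hypothetical columns: year, boys, girls.
import csv

def sex_ratios(filename):
    years, ratios = [], []
    with open(filename, newline="") as f:
        for row in csv.DictReader(f):
            years.append(int(row["year"]))
            # boys per 1000 girls, as in Busby's article
            ratios.append(1000 * int(row["boys"]) / int(row["girls"]))
    return years, ratios

def five_year_average(values):
    # centered 5-year moving average; the first and last two years are left out
    return [sum(values[i - 2:i + 3]) / 5 if 2 <= i <= len(values) - 3 else None
            for i in range(len(values))]

years, ratios = sex_ratios("avesta_births.csv")  # hypothetical file name
smoothed = five_year_average(ratios)
for year, ratio, avg in zip(years, ratios, smoothed):
    print(year, round(ratio), "-" if avg is None else round(avg))
```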

Now, can we find any city in Sweden where the 5-year average of the sex ratio at some time in the period 1972-2010 is lower than 860? In order to be fair we should set a constraint that the city should not be too small. The population pyramid in Sweden is very different from the one in Iraq, so in order to have enough children born for a fair comparison we set, arbitrarily, that we want to find a city with a population of at least 10 000, where the 5-year average of the sex ratio at some time has been below 860. Well, look:

Sex ratio for 8 Swedish cities with populations larger than 10 000 people. The Fallujah data are inserted as a thick grey line. Click on the picture for a link to a larger version.

The figure is a bit messy, but the main point is to look at the extreme values for each city. We find eight cities that fulfill the requirement of a 5-year average sex ratio that at some time is below 860. The cities are: Trosa (pop 11 492), Åtvidaberg (11 474), Mörbylånga (14 152), Burlöv (16 825), Strömstad (11 965), Filipstad (10 506), Nora (10 462) and Hedemora (15 141). The data from Fallujah are shown as a broad grey line.

Wait a minute, you may say. These cities are probably from the same region and share some common environmental effect. Hardly; the map below shows their locations in Sweden. Furthermore, the fluctuations for the different cities do not agree in time with each other. It is therefore very unlikely that there is a common cause. Except low statistics, just as in the sample from Fallujah. If we include Swedish cities with populations below 10 000 then we find more than 30 with 5-year average sex ratios below 860, some of them well below 750. The reason is, again, low statistics.

Map of Sweden, with the location of the 8 cities shown.
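
To see just how easily pure chance produces dips like these, here is a small simulation sketch. It assumes roughly 120 births per year (a made-up figure of the right order for a small Swedish town), draws boys and girls at the expected 1055:1000 ratio, and checks how often a 5-year cohort average falls below 860 by luck alone:

```python
# Simulation sketch: how often does pure binomial chance push a
# 5-year average sex ratio below 860 boys per 1000 girls?
import random

P_BOY = 1055 / 2055          # expected share of boys (1055 boys : 1000 girls)
BIRTHS_PER_YEAR = 120        # assumed, roughly a small Swedish town
YEARS = 40                   # 8 disjoint 5-year cohorts
TRIALS = 2000                # number of simulated towns

random.seed(1)
towns_with_dip = 0
for _ in range(TRIALS):
    boys = [sum(random.random() < P_BOY for _ in range(BIRTHS_PER_YEAR))
            for _ in range(YEARS)]
    girls = [BIRTHS_PER_YEAR - b for b in boys]
    # sex ratio per 5-year cohort, as in the Fallujah study
    ratios = [1000 * sum(boys[i:i + 5]) / sum(girls[i:i + 5])
              for i in range(0, YEARS, 5)]
    if min(ratios) < 860:
        towns_with_dip += 1

print(f"Simulated towns with at least one 5-year cohort below 860: "
      f"{100 * towns_with_dip / TRIALS:.1f}%")
```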

But what about the drastic decrease in the number of children born in Fallujah, where the number of boys born went down by 50%? This must be due to an environmental effect, right? Well, not necessarily. There can be many different reasons for a decrease in the number of children born, not least after an intense battle in the city. But let’s take a look at the 8 Swedish cities again, this time at the number of boys born:

 

The number of boys born in the 8 cities that we are looking at.

Once again we see that the dramatic variations in Fallujah are not extreme when compared with some of the Swedish cities. Each Swedish city has its own behaviour, mostly depending on local variations in the population. But most of them share a common drop from a peak value around the year 1992. For Hedemora the number of boys born is reduced to almost half over a ten-year period, with a 35% decrease as the most dramatic drop over a 5-year period. But why is there such a decrease in most of the studied cities? Well, let’s look at the number of children born in all of Sweden:

 

Number of born children in Sweden, 5-year averages, 1968-2010. Source: www.scb.se

We see that there was a peak around 1992 followed by a quite drastic decrease; in 1997 the 5-year average number of boys born was 26% lower than in 1992. Is it due to the bad Swedish economy at the time? Or was it less “popular” to have children for a couple of years? There are surely studies available regarding likely causes. Whatever the reason, we can easily exclude uranium-based weapons. This does not disprove any hypothesis about uranium being the cause in Fallujah. Furthermore, I have only looked at the sex ratio while the article deals with a number of health effects, but whoever argues for uranium being the cause has a lot to explain before talking about significant findings the way Chris Busby does.

 

A few conclusions:

  • All these cities have drastic variations in the sex ratio, even for the 5-year averages, and reach values lower than 860 boys to 1000 girls.
  • For several of the cities the 5-year averages vary dramatically, similar to Fallujah.
  • The cities are distributed in different parts of Sweden with different geographical/environmental conditions.
  • The periods of low sex ratio for the 8 cities occur at different times, no common cause can be seen.
  • None of the 8 cities have suffered from war during the last 200 years, and during the last 40 years Sweden has been among the top ranked countries in the world when it comes to health status of the population.
  • The variations are as large as, or larger than, in Fallujah, and are based on much more reliable data and equal or better statistics.
  • Chris Busby should give up all attempts of epidemiology. This is not the first time he fails in this discipline, he just can’t do it right.
  • We do not learn anything about the causes of the health effects in Fallujah by listening to self-proclaimed experts like Chris Busby.

There are indeed more things to say about Busby’s studies on Fallujah. When time permits they will be brought up on this blog.

 

Mattias Lantz – member of the independent network Nuclear Power Yes Please

 

Related blog posts:

Chris Busby and the Fallujah sex ratio – Part 1 (dishonesty)

Bad science – Chris Busby and his articles on Fallujah

 

 

 

 


Chris Busby and the Fallujah sex ratio – Part 1 (dishonesty)

The city of Fallujah in Iraq suffered through intense fighting during 2004, when US troops bombarded the city heavily. The US military has admitted to the use of white phosphorus, which is quite toxic though not necessarily carcinogenic. Whether depleted uranium (DU) weapons were used or not is still an open question; there are a number of statements in both directions from many different sources.

During the last few years there have been news reports about an alarming number of children born with deformities, and other serious health effects among the Fallujah population. In July 2010 a study by Chris Busby and coworkers was published in the International Journal of Environmental Research and Public Health (here). The title of the article is the rather alarming Cancer, Infant Mortality and Birth Sex-Ratio in Fallujah, Iraq 2005–2009, and it reports on the results of a survey done in Fallujah that reveals drastic increases in various forms of cancer and birth defects.

The details of the study can be found directly in the paper (here), or from Busby’s presentation about it in Stockholm in August 2010 that is available on Youtube (here). A transcript of what he says in the presentation and the discussion after is given here.

There are many things that can be said about the survey and its quality. Considering the difficulties of performing the survey, and the limitations of this kind of survey (knocking on doors and asking about the health status of the people living there), one has to be very careful and consider all the weaknesses before drawing any conclusions. Busby and coworkers cover many of these concerns in section 2.3 of the article, Strength and Weaknesses. It says, among other things:

One weaknesses of this type of study is population leakage due to migration. Although ten years is used on the questionnaire, from analysis in earlier studies of this kind [7] it has become clear that there is leakage of cases (due to deaths and subsequent population movements) and so the recent five year period is employed.

In other words, if the survey gives the result that 100 people in a population of 1000 suffer from a certain disease, giving a rate of 10%, it means that the actual rate can be different due to the fact that some of the people suffering from the disease may have died or moved away before the survey was done. This makes sense, but then there is a strange passage:

However, as a consequence of such a population leakage it is clear that the result will show the minimum cancer rates existing in the study group. In earlier studies this effect was especially found for lung cancer which has a high mortality to incidence ratio.

This part is not so obvious. Of course, if the people who died or moved away suffered from the same disease, the rate would be higher had they still been alive and participating in the survey. But it could also be the opposite: the people who died or moved away did not suffer from the disease, and if they had still been alive and participated in the survey the rate would be lower. So, if the disease we are looking at has a high mortality rate, as in the case of lung cancer, then the assumption may be reasonable, depending on how many people have moved away or died of other causes. Clear it is certainly not.

Another interesting thing is that, while Busby and co-authors are very careful in the article not to state that uranium is the cause of the health effects, Busby has no qualms about spelling it out in other places. For instance, in the Green Audit report where he claimed that 10 tonnes of enriched uranium had leaked from the Hinkley Point nuclear power plant, he puts the two together without further explanation:

Most recently, alarming increases in breast cancer, leukaemia, childhood cancer and congenital malformation/infant mortality increases were found in Fallujah, Iraq, a city where uranium weapons were employed and uranium particles will have been inhaled (Busby et al 2010).

So even though the original paper does not show any connection between uranium and the health effects, he makes it sound like there is an obvious connection when he refers to the paper in other works.

Well, let’s move on. In section 2.5 of the article the sex ratio is defined in one line as:

The population data in 5-year age groups was used to examine the sex ratio in 5-year birth cohorts.

In section 3, which covers the results of the survey, we read the following regarding the sex ratio:

The responses show that there is an anomalous sex ratio in the 0–4 age group. There are 860 males to 1000 females, a significant 18% reduction in the male births from the normal expected value of 1,055 (267 boys expected, 234 observed; p < 0.01)
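
As an aside, the quoted significance is easy to sanity-check with an exact binomial test. The cohort size below is back-calculated from the quoted 860:1000 ratio and the 234 observed boys, so it is only approximate; whether the p-value lands just above or just below 0.01 depends on the exact counts:

```python
# Rough sanity check of the quoted "p < 0.01": a one-sided exact
# binomial test for observing this few boys in the 0-4 cohort.
from math import comb

boys_observed = 234
girls_observed = round(boys_observed * 1000 / 860)   # ~272, back-calculated
n = boys_observed + girls_observed                   # ~506 children in the cohort
p_boy = 1055 / 2055                                  # expected share of boys

# probability of observing this few boys (or fewer) if the true
# ratio were the normal 1055:1000
p_value = sum(comb(n, k) * p_boy**k * (1 - p_boy)**(n - k)
              for k in range(boys_observed + 1))
print(f"cohort size n = {n}, one-sided p = {p_value:.3f}")
```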

860 boys to 1000 girls after the 2004 battle; this does indeed sound serious if the normal ratio is 1055. To use Busby’s own words from the BSRRW meeting:

“It is absolutely standard, and very rarely diverse at all, that number, unless there is some problem.”

 

Chris Busby explaining the significance of the results with deviating sex ratio in Fallujah, at the BSRRW meeting in Stockholm, 10 August 2010.

But what possible reasons could there be for it? Section 3 of the paper continues:

Perturbation of the sex ratio is a well known consequence of exposure of mutagenic stress and results from the sensitivity of the male sex chromosome complement to damage (the females have two X chromosomes whereas the males have only one).

So according to Busby and co-authors, mutagenic stress is the reason why fewer boys than girls are born. The text continues with an explanation of what can cause mutagenic stress (emphasis is mine):

A number of studies have examined sex-ratio and radiation exposure of mothers and fathers. Of relevance is the study of Muller et al. [10] of the offspring of 716 exposed fathers who were Uranium miners. There was a significant reduction in the birth sex ratio (fewer boys). Lejeune et al. (1960) [11,12] examined the offspring of fathers who had been treated with pelvic irradiation; at high doses there was an increase in the sex-ratio, but this reversed in the low doses (around 200 mSv). Schull et al. 1966 [13] found a reduction in the sex ratio in A-Bomb survivor fathers (mothers “unexposed”) for children born 1956–1962, a reversal of an earlier finding by Schull and Neel 1958 [14] of a positive effect in the 1948–1955 births. It should be noted that there were external and internal irradiation effects in these groups, with the internal effects predominating in the later years. Yoshimoto et al. 1991 [15] found an overall reduction in the sex ratio for A-Bomb survivors for children born 1946–1984. Thus the evidence suggests that exposure to ionising radiation at low doses and specifically exposure to Uranium may cause a reduction in the sex ratio.

The quoted references deal with uranium miners, medical radiation treatment, and radiation from the Hiroshima bomb. So, exposure to radioactivity among the parents may cause mutagenic stress, which leads to a reduction in the number of boys born. At least there were scientific reports about it during the 1950s and 1960s; five of the six references are quite old. One would expect such a world-famous radiation expert as Chris Busby to be able to back his reasoning with references that cover the development of the field up to the present, instead of only what happened more than 45 years ago. There is nothing wrong with referring to old articles, but if that is all you do, and you ignore later developments (if they exist), then your line of reasoning may be very weak.

Regarding the low sex ratio in the case of Fallujah, could there be other reasons than uranium-based weapons? Please note that Busby and co-authors do not mention any of the studies that show a connection between deviating sex ratios and exposure to chemicals, heavy metals, smoking and other environmental effects (see for instance the Wikipedia entry on sex ratio). Busby does not even acknowledge the chemical toxicity of uranium; instead it has to be the radioactivity of uranium, if uranium is the cause. The authors mention depleted uranium several times, but are very cautious about drawing any conclusion regarding the cause. That is a wise approach considering all the uncertainties related to a study like this, and the fact that there are a number of other possible causes.

It is less wise, however, to emphasize uranium as a likely cause, or to claim that the 18% reduction of the sex ratio is significant, when you are not even sure about what you have measured. During the talk at the BSRRW meeting in Stockholm in August 2010, a person in the audience, Dr. Eckerman, wanted a clarification of what the data really showed. After some confusion it turned out that the sex ratio was not derived from the number of children born, but from the number of children alive at the time of the survey.

So, to repeat the quote from Busby’s presentation:

“It is absolutely standard, and very rarely diverse at all, that number, unless there is some problem.”

As it turned out during the BSRRW meeting, there was indeed a problem. Not only did Busby abandon his earlier, so cautious, approach when he claimed that the deviation was significant, he also based the sex ratio on the wrong assumptions about the group of children.

Somehow this classic picture seems appropriate...

You may wonder what the big fuss is about. Well, if the sex ratio is to be trusted it has to be derived from the number of children born in the study group. If you instead only have data on the number of children living there at the time you make the survey, then you are missing the children who may have died or moved away. Busby disqualified the method himself when I asked about the age group 5-9 years old, which seems to deviate in sex ratio in the opposite direction, i.e. there are significantly (13%) more boys than expected. To be fair to Busby, for the age group 5-9 there are more children that may have died or moved away (or moved to the city) than for the age group 0-4 years; there have been 5 more years in which things can happen. But even with that in mind, it is very irresponsible of Busby to claim that this is such a significant finding when he ignores all the weak points in his reasoning.

So all along Busby has known that the reported sex ratio is based not on the number of children born, but on the number of children present at the time of the survey. The message from Dr. Eckerman is quite clear: one should not speculate about the causes of the deviating sex ratio without keeping the limitations of the study in mind. And if you are honest in your approach, you make a clear statement about this the next time you present the study. In spite of this, when Busby a few weeks later gives a presentation about his study at the Human Rights Council in Geneva (22 September 2010), he repeats the same thing without any caveats. In fact he says:

Then most important, we found the sex ratio… […] This is the most important result that we had here.

It seems as if he has forgotten Eckerman’s objections. To be fair, he does say that there are some structural problems with the study, and that those concerns are brought up in the paper. But he gives no details about these structural problems during the talk. Instead he goes on with all sorts of explanations about the causes (including some ludicrous speculation about cold fusion based weapons!), as if they were clearly established facts. A year later all concerns about weaknesses in the study seem to be forgotten. The new article about Fallujah (Alaani et al., Conflict and Health 5:15 (2011)) starts off with the following statement in the second sentence (emphasis is mine):

In addition to the increased cancer rates and infant deaths, the epidemiological study [1] showed that there was a sudden significant drop in the sex ratio (an indicator of genetic stress) in the cohort born in 2005, one year after the battles which occurred in the city, suggesting that the cause of all these effects is related to the time of the US led invasion of the city in 2004.

I could buy the argument if it was phrased something like “the epidemiological study [1] gave an indication, although with large uncertainties, of a reduction in the sex ratio…” But as we have seen before it is not in the interest of Chris Busby to be clear about the details, at least not when the details make the case weaker. Instead he never misses a chance to bring it up, for instance in the RT interview from 26 October:

http://rt.com/news/uranium-birth-defects-fallujah-729/

or in the LLRC press release:

http://llrc.org/du/subtopic/fallujah20oct2011.htm

 

Busby has to push the line that there has been a significant change in the sex ratio, and without stating it clearly he and his co-authors do everything short of saying straight out that it must be due to uranium.

So we have now seen how Busby, in writing, is very careful not to state too clearly that there is a connection between the deviating sex ratio and some sort of uranium-based weapon. In talks and interviews, however, he clearly gives a different message. And he consistently ignores all other possible explanations, just as if they didn’t even exist.

Now the question is, is the deviating sex ratio in Fallujah even relevant? We will look at this issue in part 2. Stay tuned.

 

Mattias Lantz – member of the independent network Nuclear Power Yes Please

 

Related blog posts:

Chris Busby and the Fallujah sex ratio – Part 2 (incompetence)

Bad science – Chris Busby and his articles on Fallujah


 


Sherman & Mangano admit errors – or do they?

Warning: The following text may contain personal attacks and wild speculations about certain people. This will not make any future attempts of dialogue with them any easier, but in my humble opinion they have had their chances.

Here we go again…

On 25 June, 2011, Janette Sherman and Joe Mangano (from now on I will refer to them as S&M) published a new article in San Francisco Bay View. We just noticed it, almost a month after it was written. It has the title: Question marks, the elephant in the room and the refusal of nuclear power defenders to consider what has happened to people and the environment since Fukushima and Chernobyl. After browsing it I rub my eyes, think for a few minutes (ok, I try to think, head hurts so much…), and then I read it again. Let’s take a closer look at what they write.

First a short introduction to set the stage:

By concentrating only on the CDC (Centers for Disease Control) data – incomplete at best – and ignoring the on-going radioactive releases from Fukushima, it is apparent that the pro-nuclear forces are alive and active.

Hey, wait a minute. It was not the pro-nuclear forces (whatever that is, but let’s embrace the term with a jolly “Fooorward!”) who started tampering with the CDC data in a way that would flunk any undergraduate student in Statistics 101; it was you, remember? This does not mean that we ignore the rest of the issue, but we do take offence when anti-nuclear forces find the actual information from Fukushima insufficient for their purposes and have to cook up alarmist results in order to make the situation look worse than it is. This is indeed very remarkable: aren’t the actual events in Fukushima bad enough for you?!?

If I had the mindset of S&M, I would write

By mis-treating the CDC data – incomplete at best – and ignoring all knowledge about radiation effects, and actual radiation levels in the US due to Fukushima, it is apparent that the anti-nuclear scaremongers are alive and active.

This is clearly not a way forward, at least not if you hope for a dialogue and an improvement of the nuclear debate. Oh well, let’s move on.

The second section explains that the titles of the previous articles (there are two versions, one in CounterPunch and one in San Francisco Bay View) include question marks in order to

stimulate interest and prompt demand for governments – Japan and the U. S. at least – to provide definitive and timely data about the levels of radioactivity in food, air and water.

Hm, Janette and Joe. May I kindly ask: Wouldn’t it be better to try to stimulate this interest in a way that does not include cherry-picking, unfounded alarmism that scares the heck out of millions of parents of infants, and throwing random pieces of data around in a way that reduces whatever credibility you might have had before to a new low point?

Next section:

We received many responses, some in support of our concerns and some critical about how we used CDC data, including outright ad hominid attacks accusing us of scaremongering and deliberate fraud.

Oh really? I guess that I personally have to plead guilty to this charge, but they link to the blog entry by Michael Moyer in Scientific American as an example of an ad hominid attack. I re-read Moyer’s scrutiny of the CounterPunch article, but fail to see any personal attacks there. Ok, he uses words like “scaremongering”, “froth up”, “data fixing”, “critically flawed – if not deliberate mistruths”. Still, Moyer attacks S&M’s actions, not their personal traits.

Or do S&M really mean ad hominid attacks? My first assumption was that they mixed up “ad hominem” and “ad hominid“, but maybe they do know the difference? I bring the rest of us up to date by quoting the text on this link: “The former [ad hominem] is a criticism of a particular person; the latter [ad hominid] is a commentary upon a species.” So, have Moyer, myself, or anybody else involved in this issue, referred to S&M as neanderthals, platypus, or similar? Not that I can see, but it could be an interesting path to digress upon. Anyhow: Sensitive bunch, those scaremongers…

Now it becomes interesting:

Given the fallibility of humankind, we may have erred, and if so, will admit it. Given the delay in collecting data and the incompleteness of the collection, the criticism may be valid. MMWR (CDC’s Morbidity and Mortality Weekly Report) death reports have certain limits – representing only 30 percent of all U.S. deaths. They list deaths by place of occurrence, while final statistics are place of residence and deaths by the week the report is filed to the local health department, rather than date of death. Finally, some cities do not submit reports for all weeks. The CDC data are available at http://www.cdc.gov/mmwr/mmwr_wk/wk_cvol.html.

So, they admit that they may have erred, or do they say that they may admit it if it is proven to be the case? Or…? I am not sure, but this is probably as close to admitting anything as they will ever get. It is of course not their fault; it is the limitations of the CDC data that we should put the blame on, with not a second thought given to their method.

My first interpretation from reading the section is that they admit that the statement in the first articles about a statistically significant 35% increase in infant deaths may not be correct (no matter who is to blame). Unfortunately, it turns out that I am severely mistaken in my interpretation:

Since the article was originally published, we have had the chance to further analyze the CDC data. Historically, the change in infant deaths for the previous six years in eight Pacific Northwest cities from weeks 8-11 (pre-Fukushima) to weeks 12-21 (post-Fukushima) is about 6 percent – never above 11 percent. But in 2011, the change was 35 percent, far above anything ever experienced.

The same eight cities, the same comparison – four weeks 8-11 vs. 10 weeks 12-21 infant deaths:

  • 2005 +4.1 percent
  • 2006 +10.0 percent
  • 2007 +5.1 percent
  • 2008 +5.5 percent
  • 2009 +2.8 percent
  • 2010 +10.9 percent

The average for 2005-2010 is + 6.1 percent for a total of 1,249 infant deaths.

  • 2011 +35.1 percent (162 infant deaths)

Before 2005, there were missing data. But the years 2005 to 2010 are about 98 percent complete.

Argh! Now I really do want to turn ad hominid on these people, whatever species (sparsis timoris?) they may belong to! First they say that maybe they were wrong and if so they maybe will admit it. Then they go on and make more “analysis” from the CDC data base, using the same lousy way of handling the data!

There has been plenty of text here, so let’s lighten up with a few plots showing the CDC data that S&M are mistreating. The plots from my previous posts about S&M should be enough to show that this is rubbish, but one more round with the CDC database will not hurt, in case somebody still believes in S&M’s fables. We start by plotting the change in infant deaths between weeks 8-11 and weeks 12-21 for the years 2005-2011, i.e. the ones that S&M now claim to have analyzed more carefully.

 

Change in infant deaths between weeks 8-11 and weeks 12-21 for the years 2005-2011

The blue squares show the data as given by S&M. Nothing is wrong with the data; this is what you get when you apply the same treatment as S&M: weeks 12-21 give a weekly infant mortality rate a few percent higher than weeks 8-11, every year. Except for 2011, where the 35% increase looks really alarming. But we know from before that they have cherry-picked the weeks by only using four weeks before Fukushima and 10 weeks after. If we also plot the ratio of weeks 12-21 over weeks 1-7, shown as red diamonds, we get the following trend:

 

Change in infant deaths, blue squares as in previous figure, red diamonds for weeks 1-7 and weeks 12-21

Quite interesting that there is a decreasing trend, and that the ratio is at its lowest level this year, a 20% reduction, while before it usually was a slight increase (the 30% for 2005 is almost as much as the 35% that S&M have made so much noise about). In other words, if we compare the weeks after Fukushima with weeks 1-7, there seems to be a very beneficial effect on child mortality. Could it be that hot particles are beneficial for infants? This is rubbish, of course, but so is the question asked by S&M.
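
For anyone who wants to reproduce the point, the whole exercise boils down to a few sums over the weekly CDC counts and a choice of baseline window. The sketch below uses made-up placeholder numbers rather than the real CDC figures, just to show how sensitive the “percent change” is to which baseline weeks you pick:

```python
# Sketch of the baseline-window sensitivity: compare the average weekly
# number of infant deaths in weeks 12-21 against two different baselines.
# 'weekly_deaths' maps year -> list of weekly counts (week 1 first);
# the values below are placeholders, not real CDC figures.
weekly_deaths = {
    2010: [14, 12, 15, 11, 13, 12, 14, 10, 11, 13, 12, 13, 14, 12, 11,
           13, 12, 14, 13, 12, 11],
    2011: [15, 13, 14, 12, 11, 13, 12, 9, 10, 8, 9, 12, 13, 11, 12,
           13, 12, 11, 12, 13, 12],
}

def weekly_rate(counts, first_week, last_week):
    """Average deaths per week over an inclusive range of week numbers."""
    window = counts[first_week - 1:last_week]
    return sum(window) / len(window)

for year, counts in weekly_deaths.items():
    after = weekly_rate(counts, 12, 21)
    for first, last in [(8, 11), (1, 7)]:   # S&M's baseline vs a longer one
        before = weekly_rate(counts, first, last)
        change = 100 * (after / before - 1)
        print(f"{year}: weeks {first}-{last} baseline -> {change:+.1f}% change")
```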

Still, why the drastic increase this year if they only used four weeks for the years 2005-2010 as well? And why do the results look completely opposite if we instead compare with weeks 1-7 rather than weeks 8-11? The answer is: statistical variations. We are watching random noise, not real trends due to a single cause that can be easily deduced. For this we need to look at longer time spans (for a start; we probably need to do a lot more as well, but let us not confuse the S&M fans by introducing too much real scientific reasoning). But in order to understand the discrepancy, we have to remember that they present the ratio between the two time periods, not the actual numbers. In the original articles S&M talk about increased infant mortality (you may remember that the articles start with “U.S. babies are dying at an increased rate.”). Let’s take a look at this.

 

Number of infant deaths for weeks 8-11 and weeks 12-21

Now we see something interesting. Not only is the number of infant deaths for weeks 8-11 at an all-time low during 2011, the number of infant deaths for weeks 12-21 is also at a very low level! U.S. babies are dying at an increased rate? I think not! S&M are shamelessly (if they really believe in their own conclusions then they are indeed very incompetent) showing relative numbers, not absolute numbers. That is perfectly ok if you are honest with how you handle the data, can refrain from cherry-picking the weeks and the areas, and do not suffer from an alarmist version of Tourette’s syndrome. But S&M prefer to show a relative increase of 35% for this year, while the absolute numbers show a decrease compared with earlier years.

Still not convinced? Let us include weeks 1-7, weeks 1-21, and weeks 1-52 for each year. Please note that the value for 2011 in the weeks 1-52 curve (black line) only includes the data for weeks 1-25, so it can change somewhat depending on the number of infant deaths that occur during weeks 26-52 (S&M would probably be able to predict the future; I will not make any such attempt).

 

Number of infant deaths for weeks 1-7, 8-11, 12-21, 1-21 and 1-52

If anybody wants to pursue the idea that there is a drastic increase of infant deaths in the northwestern U.S. after Fukushima, that person will have many things to explain, for instance the drastic increase in infant mortality this year for weeks 1-7. The data do not support the idea of increased infant mortality due to Fukushima, no matter what S&M say.

One more quote from the article by S&M:

We acknowledge that many factors can cause infant deaths, but the critics who ignore Japanese fallout as possible contributing factors are acting irresponsibly.

So, cheating with the CDC data is a more responsible way to act? Nobody is ignoring the fallout. We are just fed up with false claims wrapped in an alarmist package. Therefore I will, like many others, once again ignore everything else that is written in the S&M article: a mixture of some valid concerns heavily diluted with half-truths, advertisement for the Chernobyl book (edited by Sherman) and other reports (written by Mangano), scaremongering, claims about lies and cover-ups, and some more nonsense claims. It would be interesting to discuss the valid concerns that are addressed, but if we have to filter them out of a sea of myths every time, then we would rather spend our time elsewhere.

Janette Sherman and Joseph Mangano: We do care about the consequences from Fukushima, many of them will surely be serious. Many of us are also annoyed by the lack of interest from the media to write properly about the present status. But we do not subscribe to your way of portraying it, as long as you show yourself unable to stick to the truth. Concerned citizens have nothing to learn from you. We do see the elephant in the room while you try to make it into a mammoth. May your methods and your dishonest claims suffer the same fate as this extinct species.

One final quote. In the beginning of the article there is a picture of a pretty baby, with the following figure caption:

Not only is it unthinkable to put our babies at risk by continued use of nuclear power plants, but infant mortality is an indication of an entire population’s health. When an unusual number of babies are dying, we are all at risk and must take a stand.

S&M must have plotted all the data from 2005 until now. If not, here I have done it for you:

 

Number of infant deaths per week for the years 2005-2011

Ok, I have plotted all the years on top of each other, which is a bit messy. A better alternative would be to plot the years one after another, as done in this figure by Alexey Goldin in his entertaining statistical analysis of the S&M hoax. One may also try to see seasonal trends by plotting mean values for a number of years. Either way, S&M still owe us an explanation for their claim of an unusual number of babies dying. Can we close this chapter now, please?

Ad hominem/hominid attacks

I am starting to ask myself, are there Cliffs Notes for epidemiology? Is this how Mangano passed his courses for the MPH degree? Since my first encounter with Cliffs Notes about 20 years ago as a foreign exchange student in the U.S., the company seems to have expanded its activities substantially, and its products are also available on the internet. At that time I was only aware of Cliffs Notes covering novels that you are expected to read in school. If you are too lazy to read the novel, the Cliffs Notes give a summary of the book, typical issues and questions that are likely to be on the exam (or good to be aware of if you want to pretend that you read the book and do not want to get expelled from the book-reading club), and a few other shortcuts for the illiterate who still want to graduate from high school. I find no Cliffs Notes for epidemiology, but there are indeed Cliffs Notes on statistics! Well Joe, if this is how you made it through college, please go back and read the following part:

It is important to realize that statistical significance and substantive, or practical, significance are not the same thing. A small, but important, real-world difference may fail to reach significance in a statistical test. Conversely, a statistically significant finding may have no practical consequence.

In my first post on this subject the title was Shame on you, S&M! I should probably reconsider this title. If they had done this wilfully then I would stand my ground, but after reading their latest article I become more and more convinced that it is just incompetence. They truly believe in what they are doing, and because they are victims of the Dunning-Kruger effect there is no way to make them understand that somewhere along the road they lost contact with reality. And just like Helen Caldicott, who in her debate with George Monbiot said “doctors can’t lie”, they are convinced that they speak from a higher moral ground.

So to tell somebody to be ashamed because they are incompetent in their field is about as useful as telling my 3-year-old daughter to be ashamed for not knowing Bulgarian grammar. The difference is that my daughter has a good chance of picking it up in a couple of years, if she would like to. For S&M, who have claimed expertise in their field for a long time, I see no hope at all. Maybe, just maybe, if they read Fooled by Randomness by Nassim Nicholas Taleb, or some similar literature. They could really pick up some good lessons from a few chapters there. But it wouldn’t work; as far as I know, Taleb’s books are not available in Cliffs Notes format.

 

/Mattias Lantz – member of the independent network Nuclear Power Yes Please

 

Further comments

In the first post about S&M (here) I was criticized by a commenter for stating that Mangano has a track record of not handling data in an honest way, but I had not given any reference or link to back up this statement. That has been corrected and I have put two links in that post. But from now on it will be much easier. Three nonsense articles by Joe Mangano in slightly over two weeks, all three based on cherry picking. That is a track record as good (hrm, bad…) as any. To make it worse, good ol’ Joe has proudly put them on the RPHP web page:

http://www.radiation.org/

http://www.radiation.org/press/pressrelease110607PacificNWdata.html

http://www.radiation.org/press/pressarticle110610CounterPunch.html

http://www.radiation.org/press/pressrelease110607PacificNWReport.html

http://www.radiation.org/press/pressrelease110603PhiladelphiaResults.html

And still no reaction from CounterPunch regarding their strange analysis

I have written twice to Alexander Cockburn, but have received no response. Instead there are a number of new articles that are critical of nuclear power of all kinds. Fine with me, but the lack of interest in getting the strange re-analysis by Pierre Sprey corrected makes me wonder about the statement “Ours is muckraking with a radical attitude”. CounterPunch is certainly full of articles with a lot of attitude, but the muckraking seems to be missing.

 

Earlier blog entries about S&M

17 June 2011: http://nuclearpoweryesplease.org/blog/2011/06/17/shame-on-you-janette-sherman-and-joseph-mangano/

19 June 2011: http://nuclearpoweryesplease.org/blog/2011/06/19/more-bullshit-from-joseph-mangano-take-2/

21 June 2011: http://nuclearpoweryesplease.org/blog/2011/06/21/counterpunch-verifies-infant-mortality-fraud-but-seems-to-create-one-themselves/

 


CounterPunch verifies infant mortality was alarmism but seems keen to create more of it

Muckraking is a journalistic activity with a proud history that, since the days of Ida M. Tarbell and Jacob Riis, has exposed cases of fraud, social injustice, conspiracies, environmental pollution, and other inconvenient truths to the public. On a number of occasions it has led to changed laws and policies, and to the end of political careers when somebody’s darker sides have been exposed. One important aspect of this activity is to check the facts carefully in order to get them right. Otherwise the muckraking turns into cheap sensationalism aimed at selling a few extra issues. Every country has its own collection of this less honourable tradition, which stems from the yellow press days of William Randolph Hearst and Joseph Pulitzer. However, the people behind the bi-weekly newsletter CounterPunch proudly refer to themselves in the following way:

Ours is muckraking with a radical attitude and nothing makes us happier than when CounterPunch readers write in to say how useful they’ve found our newsletter in their battles against the war machine, big business and the rapers of nature.

In a follow-up editorial on the Sherman-Mangano study (link to the original article here), CounterPunch editor Alexander Cockburn explains that they received plenty of criticism after publishing the article; several readers suspected cherry-picking. So they had their “statistical consultant”, Pierre Sprey, go through the data. And indeed, he found that there were no grounds for the claims by Janette Sherman and Joe Mangano when one includes a longer time span for the period before Fukushima. By increasing the period from the four weeks, which happen to fall in a dip, to ten weeks, the relative increase in infant mortality after Fukushima disappears. So far so good: their check of the data verifies the conclusion that I and others reached independently of each other (my version here). Now comes the interesting part. Cockburn says it best himself:

But then Sprey went further and looked at the Sherman/Mangano selection of eight cities from the 122 reporting to CDC: the eight were Berkeley, Portland, Sacramento, San Francisco, San Jose, Santa Cruz, Seattle and Boisie. Apparently, they selected Pacific Coast cities that were more or less within 500 miles of the coast and north of Santa Cruz. However their selection did not include all CDC cities within this categorization, because they left out Tacoma and Spokane, thus leaving themselves open to suspicions of cherry-picking cities. So Sprey included Tacoma and Spokane in the data set he reviewed in order to be geographically complete. When Sherman and Mangano’s overall selection of cities failed to produce a significant result for ten weeks before and ten weeks after March 11, 2011 (as well for the ten equivalent weeks in 2010 as compared with the same weeks in 2011) Sprey elected to look at smaller, geographically consistent groupings of cities. The results were striking. Simply by moving the boundary line northward from Santa Cruz Sprey found that the four northernmost Pacific Northwest cities in the CDC sample – Portland, Tacoma, Seattle and Spokane – show remarkably significant results – a larger infant mortality increase than the original Sherman-Mangano results. During the ten weeks before March 11 those four cities suffered 55 deaths among infants less than one year old. In the ten weeks after Fukushima 78 infants died, a 42 per cent increase and one that is statistically significant. To confirm once again that these results were not due to seasonality Sprey compared these infant deaths in the ten weeks after Fukushima to the deaths in the equivalent ten weeks a year earlier. The results were almost identical with the ten weeks before Fukushima in 2011. Within the equivalent ten weeks of 2010 53 infants died in these four cities.

Imagine my surprise. I had been playing with the data a bit, and had also checked what happens if you include Spokane and Tacoma (see the forum post for details). My conclusion was that Spokane and Tacoma did not matter, while Sprey’s re-analysis shows a 42% increase! I must have made some error, right? There were several steps when I copied the information into a spreadsheet, and it was quite tedious to get it into the format that I wanted, so there were many possibilities for mistakes. So I made a few random double checks and could not find any error, but it would be very time-consuming to go through the data for every week again. I also checked the published errata on the CDC pages in order to see if I had missed some vital correction. But then I found a quicker way on the CDC web pages: it turns out that one can also download the data for individual cities or regions directly for the entire year (here). So I extracted the data for the four cities from this link; in this way I would do an independent check of my earlier results. Anyone can check it themselves on the link above. Here is my table:

The table includes weeks 1-23 for 2011. If I understand what Cockburn writes correctly, Sprey has used weeks 2-11 for the time period before Fukushima (compared with the four weeks used by Sherman and Mangano) and weeks 12-21 for the time period after Fukushima (i.e. the same as Sherman and Mangano). So, with the use of a very complicated mathematical operation, called addition, I get 59 for the period before Fukushima, and 53 for the period after. This is identical to my earlier count. As a final check I asked another member of NPYP to make an independent extraction of the data from the CDC database and then perform the mathematical operation mentioned above. Once again identical results. Sprey got 55 and 78, close enough on the first one, but the second…
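
For completeness, the “very complicated mathematical operation” amounts to nothing more than this; the weekly numbers below are placeholders rather than the real CDC counts, since anyone can pull the actual values from the link above:

```python
# The check of Sprey's numbers is just two sums over week ranges.
# 'city_weekly' maps city -> weekly infant deaths for 2011 (week 1 first);
# the values are placeholders for whatever one extracts from the CDC pages.
city_weekly = {
    "Portland": [2, 1, 0, 2, 1, 1, 2, 1, 0, 2, 1, 1, 2, 1, 0, 1, 2, 1, 1, 0, 2],
    "Tacoma":   [1, 0, 1, 1, 0, 2, 1, 0, 1, 0, 1, 1, 0, 1, 2, 0, 1, 1, 0, 1, 0],
    "Seattle":  [1, 2, 1, 0, 1, 1, 2, 1, 1, 2, 0, 1, 1, 2, 1, 1, 0, 1, 2, 1, 1],
    "Spokane":  [0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1],
}

def total(first_week, last_week):
    """Total deaths across all four cities over an inclusive week range."""
    return sum(sum(weeks[first_week - 1:last_week]) for weeks in city_weekly.values())

before = total(2, 11)    # ten weeks before Fukushima, as Sprey used
after = total(12, 21)    # ten weeks after, same as Sherman and Mangano
print(f"weeks 2-11: {before} deaths, weeks 12-21: {after} deaths")
```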

Unless I am missing something vital, the numbers in the table above speak for themselves: there is no dramatic increase for these four cities, so something must have gone seriously wrong in Sprey’s re-analysis. There are more strange things: Cockburn writes that for the 8 cities the new analysis gave “an increase of infant deaths of only 2.4 per cent” after Fukushima, while my analysis gave a 14% decrease (it is not statistically significant, but if I were pro-nuclear in the same way that Sherman and Mangano are anti-nuclear, I would of course argue that Fukushima had caused a reduction in infant deaths in the northwestern U.S., and that I had the numbers to show it). Could it be the addition part that failed, or does wishful thinking of the style “there must be an increase in infant mortality somewhere due to Fukushima” play a part? Whatever the cause, Cockburn goes on to explain the significance of some details in Sprey’s data:

Looking a little more closely at the time trend of the infant deaths after Fukushima, Sprey found that the most dramatic increases in deaths were in the two weeks right after the March 11 disaster. Those two weeks saw a near tripling of weekly deaths, followed by a period of somewhat elevated weekly deaths lasting for about five weeks – roughly 25 per cent over the pre-March 11 rate, then settling down close to the average pre-Fukushima death rate for the last three weeks of the ten week period post-disaster. These results are necessarily approximate because the weekly sample of deaths is too small for precise statistical conclusions.

This part is of course nonsense when we look at the numbers. Let’s plot it as well:
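For those who want to reproduce this kind of plot from their own extraction, a minimal matplotlib sketch could look like the following: the combined weekly deaths as bars, with the pre- and post-Fukushima means drawn as horizontal lines. The weekly totals here are placeholders, not the real numbers.

```python
# Sketch of the plot: weekly infant deaths for the four cities, with the mean
# before Fukushima (blue) and after (orange) drawn as horizontal lines.
# The counts are placeholders -- replace them with your own CDC extraction.
import matplotlib.pyplot as plt

weeks = list(range(1, 24))
deaths = [0] * 23                 # combined weekly deaths, weeks 1-23 of 2011

before = deaths[1:11]             # weeks 2-11
after = deaths[11:21]             # weeks 12-21

plt.bar(weeks, deaths, color="gray")
plt.hlines(sum(before) / len(before), 2, 11, colors="blue", label="mean, weeks 2-11")
plt.hlines(sum(after) / len(after), 12, 21, colors="orange", label="mean, weeks 12-21")
plt.axvline(11.5, linestyle="--", color="black")  # split between weeks 11 and 12
plt.xlabel("Week of 2011")
plt.ylabel("Infant deaths (under 1 year)")
plt.legend()
plt.show()
```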

Well, as Cockburn says, there is a “dramatic” increase immediately after Fukushima, by a factor of 3. Now that we have the numbers ourselves, we can conclude that a factor of 3 means a jump from 3 deaths in week 11 to 9 in week 12. We have a similar increase between weeks 5 and 6, but Cockburn does not describe that one as dramatic; in fact, he does not mention it at all. In his defense, the numbers he is looking at are not the same as mine – but where do they come from? Whatever the cause of the error, the dramatic 42% increase now looks more like a… decrease; the mean values are shown as horizontal lines (blue before Fukushima, orange after) in the plot above. Let’s plot the data again to make sure. We can do it in the same way as Mangano does, to make it clear for everyone:

This is, of course, a very dishonest way of plotting things if you want to show the whole picture, but if Joseph Mangano can do it, then why shouldn’t we? The main point is clear anyhow: there is no increase in infant mortality in the United States due to Fukushima. Got it? And if there is, we will not find it through sloppy analysis by charlatans like Sherman and Mangano – and, as it seems, not through the statistics consultants that the muck-raking journal CounterPunch is using. It will take careful analysis by serious researchers to find out if there is any real effect. If they would even bother to start looking. But Mangano said in the Fox News interview:

this is a red flag to raise for more studies to be done

Actually, he is right. It is a red flag to raise – for careful scrutiny of all the earlier work by Sherman, Mangano, and their alarmist friends. Some of their earlier studies have become “common knowledge” in anti-nuclear groups. Last year a member of the Green Party in Sweden stood up in the parliament during a debate about nuclear power and referred to studies of increased childhood leukemia authored by Sherman and Mangano, and of course to the Chernobyl book edited by Sherman. Maybe we can all move on with life and more pressing issues now, ok? Those who want to hang on to everything bad about Fukushima still have plenty of material to work with. But stick to the facts, please. Ok? Two questions remain:

  1. Why is Alexander Cockburn’s editorial so forgiving towards Sherman and Mangano? If I were the editor of a journal, I would be furious if it turned out that some authors had made fools of themselves, and of me as an editor, by cheating with data, and I would make sure that they never published anything in my journal again. Ever. Especially when it comes to such an important issue, one that worries millions of people, not least parents of small children. But Cockburn is so happy to have found, through Sprey’s re-analysis, that there was indeed an increase: Sherman and Mangano were right even if they cheated, so no shadow should fall upon them. After all, CounterPunch is a muck-raking newsletter with a radical attitude, so there must be some muck to find, and indeed they found it. But now that it turns out that not even this was right, what will he write in the next editorial? “We follow proudly in the sensationalist footsteps of William Randolph Hearst!” – or what? We have already established that Janette Sherman and Joseph Mangano should be very ashamed of themselves. If I were Alexander Cockburn, I would at least be quite embarrassed.
  2. What went wrong in Sprey’s re-analysis?

An email has been sent to Alexander Cockburn, requesting that CounterPunch do a muck-raking investigation of the skills of its own statistical consultant. Furthermore, CounterPunch now has a great opportunity to recover its lost credibility: how about a couple of articles with in-depth investigations of the earlier works of Janette Sherman and Joseph Mangano? This could be the starting point for a long series of muck-raking articles in which all the controversial statements from the anti-nuclear icons are carefully scrutinized. Could it be that there are more “common truths” out there based on the same weak evidence (i.e. none) as in the present case? Alexander Cockburn, are you up to the task?

Mattias Lantz – member of the independent network Nuclear Power Yes Please

P.S. What about Seattle? Some observant readers may have noticed from the table above that Seattle does show an increase in infant mortality by a factor of 3 between week 11 (2 deaths) and week 12 (6 deaths). Surely at least Seattle has an effect due to Fukushima, pretty please? Well, I’ll give you the plot for Seattle:

 

Infant mortality for Seattle, spring 2011, for stubborn people who want to find increased infant mortality after Fukushima

6 cases the week that the radioactivity reached Seattle, compared to 2 the week before – that must be significant! If you still insist on this kind of reasoning, it means that you have somehow ignored the plots I show here and here. Still not convinced? Then go to the CDC data base and pick out the numbers for yourself, and please do not forget to check the pattern for Seattle week by week in earlier years. Here are the links to the CDC data base; instructions for some of them appear if you place the pointer over the link. I recommend starting with the last one, as it is the easiest to handle (a quick back-of-the-envelope check of the Seattle jump follows below the links):

http://www.cdc.gov/

http://www.cdc.gov/mmwr/mmwr_wk/wk_cvol.html

http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6010md.htm?s_cid=mm6010md_w#tab3

http://wonder.cdc.gov/mmwr/mmwrmort.asp
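And here is the back-of-the-envelope check promised above. It is a simulation, not a formal test, and the baseline rate is an assumed placeholder rather than a number taken from the CDC tables; the point is simply to show how often week-to-week jumps of the 2-to-6 kind appear in small Poisson counts by chance alone.

```python
# Back-of-the-envelope simulation: if weekly infant deaths in a single city hover
# around some modest baseline, how often does chance alone produce a week with
# 6+ deaths that is at least 3x the previous week, somewhere in a 23-week stretch?
# MU is an assumed placeholder baseline, not a number from the CDC tables.
import numpy as np

rng = np.random.default_rng(0)
MU = 3.0             # assumed mean weekly deaths (placeholder)
N_WEEKS = 23         # weeks 1-23, as in the table above
N_PERIODS = 100_000  # number of simulated 23-week periods

counts = rng.poisson(MU, size=(N_PERIODS, N_WEEKS))
prev, nxt = counts[:, :-1], counts[:, 1:]
seattle_like = ((nxt >= 6) & (nxt >= 3 * np.maximum(prev, 1))).any(axis=1)
print(f"fraction of periods with such a jump by chance: {seattle_like.mean():.2f}")
```

With a baseline of a few deaths per week, a sizeable fraction of the simulated 23-week periods contain at least one such jump purely by chance – which is why a single week-to-week step proves nothing.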

 
