Sound and fury, signifying nothing

Estimated reading time: 5 minutes
[caption id="attachment_7722" align="alignnone" width="880"]Australian wind farm, picture by David Clarke on Flickr[/caption]

On January 21st, 2015, The Australian published an article on the front page of the paper reporting on a ‘groundbreaking study’. This study purportedly showed that ‘people living near wind farms face a greater risk of suffering health complaints caused by the low-frequency noise generated by turbines’. The wording of that sentence is quite important – the article specifically states that wind turbines cause a greater risk of health complaints. That was the first sentence of a front-page article in Australia’s leading paper. Given The Australian’s reputation, it is reasonable to expect that, at the very least, stories that make the front page are not completely false. Unfortunately, in this case, the story was unfounded.

This gets to the heart of how our media report on science.

The Australian’s article was based on a study commissioned by Pacific Hydro, a company operating wind farms in Victoria. The study, conducted by acoustic engineer Steven Cooper, asked six people who lived near wind farms to keep a diary of their health effects. These six residents had previously complained to Pacific Hydro about health effects of the wind farms. Pacific Hydro contacted the residents at times during the study to tell them when the wind farms were being turned off, on, up and down. In addition, the residents could see the wind farms from many parts of their house. Cooper then correlated incidences of health complaints in the participants’ diaries with times at which wind turbine activity changed. He found that health complaints were not made when the wind turbines were off, and were made when they were on.

What’s wrong with this? Well, quite a lot.

  1. The number of people in the study was very small (the six people were in fact living in only three separate households). This means that any conclusions in the paper would be very difficult to generalise to the overall population

  2. There was no control group in the study. A control group is a group who aren’t exposed to the potential changing variable but are monitored with the same level of rigour. In this case, a control group could have been people who don’t live near wind farms, but who are in other respects identical to the people who do

  3. The participants in the study were not randomly selected. They were selected because they had already complained about health effects from wind farms, giving them a motivation (subconscious or otherwise) to report findings that supported their point of view

  4. Related to point 3, the participants weren’t blinded. That is, they knew what the study was testing for. Furthermore, they often knew when the conditions changed in the study, as Pacific Hydro told them when the turbine activity changed and the turbines themselves were visible


These methodological flaws mean that there is simply no way the study could have established a causal link between wind turbines and health effects. Nor is there any way to conclude that residents living near wind farms face greater risks of health complaints than others. To be fair, Cooper himself was open about these facts with other news outlets, pointing out that the main aim of the study was to investigate the acoustic effects of wind farms, not the health effects. In that respect, the Cooper report may well be a perfectly sound piece of acoustic research. This does not change its lack of contribution to the debate surrounding wind turbines and health impacts.

Yet The Australian declared on their front page that this report represented proof that living near wind farms increases the risk of health complaints which are caused by the wind farms.

The Australian has some form on this particular issue. In April 2012 The Australian published another front-page story, this one about a dead eagle found near a wind farm. The story was complete with a picture of the eagle in question, a concerned farmer and a wind turbine looming menacingly in the crepuscular background. It contained no references to scientific articles of any kind, but still drew links between wind farms, bird deaths and human health. In February 2015, The Australian followed up its earlier coverage of the Cooper report with another article, again claiming that a ‘cause and effect’ had been found between living near wind farms and adverse health effects. Other media outlets also covered the story, although with less zeal than The Australian.

It is concerning that misleading reporting of science can repeatedly find its way on to the front page of Australia’s major daily paper. It is even more concerning that there are no real guidelines for the reporting of science and scientific studies. Let me suggest two policies that might go some way to improving this situation. I suggest that both of these policies should become part of the guidelines set out by both the Australian Press Council, which monitors print media, and the Australian Communications and Media Authority, which monitors broadcast media.

First, I propose that there should be a standard checklist of information that must be included in any news article reporting on a scientific study. This should include who funded the study, how many participants there were, how they were selected, whether there was a control group, and whether participants were blind to the study’s intent and variables. The media outlet should report on where the study fits in the broader context of findings, including whether it confirms or challenges existing research. For the benefit of lay readers, the outlet should also comment on how methodologically sound the study is, based on the criteria above. All of this could be accomplished in a short paragraph or two and, importantly, it does not infringe on freedom of the press. Simply requiring that certain information be included in a story does not in any way preclude the outlet from running the story.

Secondly, and perhaps more troublesome, newspapers and other media outlets should be compelled to retract stories which breach these guidelines – in the same place, and in as many words, as the original story. In the case of the January 2015 article, this would have meant a front-page retraction of close to 1000 words. Compulsion to retract in such a way would, I hope, lead to slightly less editorialising through poor coverage of science.

Yannick Spencer is an MPP student at the Blavatnik School of Government.