## Saturday, 22 March 2014

### Media vs. Science, and the importance of a balanced information diet

A little over a week ago I attended the latest session of the Lost Creatures: Big Questions series of discussions at the Queensland Museum here in Brisbane, titled 'Does the media help or hinder science? Passion vs Accuracy'. The session has since been made available online on the ABC website, but if you don't want to use RealPlayer, this is the direct link to the MP3.

The panel:

- Paul Barclay, ABC Big Ideas
- Joel Gilmore, ROAM Consulting
- Susannah Eliott, Australian Science and Media Centre
- Anthony Funnell, ABC Future Tense
- Suzanne Miller, Queensland Museum Network

It was an interesting talk with some good points made by the speakers, and I recommend you give it a listen. My takeaways were twofold. First, scientists generally aren't good communicators (not true of all scientists, of course): they struggle to deliver the short, clear, and accurate messages that journalists need to create interesting, provocative stories digestible by the broader public. Second, the media severely lacks journalists with a scientific background who understand the process and realities of science well enough to report on it accurately, while also needing to make their pieces provocative and attention-grabbing to as broad an audience as possible. Together, these failings of the scientific community and the media result in a lot of inaccurate, ill-explained, sensationalist, or over-hyped stories about the capabilities, promises, and dangers of science, which in turn breeds disenchantment and mistrust of science among the general public.

Seemingly the discussion was balanced by placing blame in the courts of both science and the media, but I feel it was missing a third important element: you. The flow of information doesn't start with the scientist and end with the journalist. There's an audience that consumes that information, and it needs to take responsibility for its information consumption. In Australia we're afflicted with a binge-drinking culture, and there's a strong public message about the importance of drinking responsibly. But what about thinking responsibly?

This is not to say that there isn't room for improvement on the sides of science and the media, but at some point we have to say that we've done enough, and it becomes the responsibility of the individual to verify information, make their own judgements, and carefully filter and consider the information they accept. This raises much bigger questions, though. How do we create a population that consumes information responsibly? Can we realistically expect the broader public to be scientifically literate? Where do we draw the line on pre-filtering information before it reaches the public without crossing into the territory of a censored society? I don't have the answers to these questions, but I do have a few opinions.

Indulge me in an analogy that I think conveys the idea well enough, but will crumble if you stretch the logic between the two concepts too far. Our brain is like our gut, but instead of consuming food the brain consumes information. Just as with our food diet, not all information is good for us. There's plenty of junk information that is convenient: at the tap of a finger we can find information that makes us feel good, supports our prejudices, and requires almost no effort to consume and process. Modern media, such as the Internet, is prone to a deluge of click-bait, snap-shot sentiment, over-simplified (and sometimes simply wrong) infographics, and memes... memes everywhere.

You can't make people eat healthy foods, and you also can't make people consume healthy information. There's an element of personal responsibility that must be considered in this discussion. I'm no saint in my information diet; I've spent many hours looking at endless pages of memes, reading only headlines or articles that provide a dot point list summarising their content, and watching some of the most inane and tinselled TV shows ('If You Are the One' is a regular guilty pleasure) because they are convenient, entertaining, and require little mental effort on my part.

This issue follows the rule of supply and demand, where the media supplies what the public responds to. There's more demand for "sexy" stories that make us feel good or support our views, rather than stories that make us think and challenge our ideas. Perhaps the problem of an unhealthy information diet lies in the public's motivation to "eat" healthier information. Why shouldn't we want what's easy and convenient? Why should I seek out something that is harder for me to consume and doesn't provide entertainment value?

But I'm not telling the whole story, because there is a demand for wholesome information consumed by people who want a well-rounded information diet, just like there are people who look after their bodies by eating well and exercising. Why do these people exist? How did they obtain a yearning to learn beyond glossy headlines?
> “If you want to build a ship, don’t drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea.”
> ― Antoine de Saint-Exupéry
Simply put, they were taught to want it. To make fundamental and enduring change in the public's relationship with science, we have to start with kids and education. More important than being taught facts, kids need to be taught the need to seek out facts for themselves. They need critical thinking skills and the knowledge of how to apply them to their own thoughts and others' ideas. None of this is a secret to the education system, of course, yet we still find ourselves in the current predicament of science being misrepresented to appeal to the demands of the masses. If the education system understands the importance of these learning and critical thinking skills, why do we still have this problem?

I believe it's because most education occurs at home, intentionally or not. If parents are not interested in imparting that yearnin' for learnin', they raise kids who are not curious about how the world works, and those kids raise new families similarly, repeating the cycle. How do we break that cycle? I don't have an answer, but it is perhaps the most significant problem our society can tackle. Some of our greatest social issues could be mitigated or resolved if all kids grew into adults who maintain an interest in the world around them, tempered with the ability to make informed judgements. We need to immunise our kids' minds against provocative and entertaining junk information with simple yet powerful mental tools of critical thought, and it is the responsibility of parents to do so.

## Thursday, 20 March 2014

### Probability continued

In my previous post I gave a simple overview of discrete random variables with binomial distributions and Poisson distributions. Whereas discrete random variables take values that are distinct and separate, continuous random variables can take an infinite number of values within a range, i.e. the range is infinitely divisible. As with any probability distribution, the total area under the curve of a continuous random variable's probability density function must equal 1.

One of the most well-known continuous distributions is the normal distribution. You will recognise it as the bell curve, where the majority of results cluster around a common central point at the peak of the curve, then quickly drop off and flatten out towards the extremities in both directions.

That central peak is the mean ($\mu$) of the distribution, and the bell curve is symmetrical about it. As the mean of a normal distribution changes, the distribution shifts along the x-axis. The other important descriptor of a normal distribution is the standard deviation ($\sigma$), which indicates the dispersion of values around the mean. The Empirical Rule of statistics, also known as the Three Sigma Rule, states that for a normal distribution about 68% of the data falls within 1 standard deviation of the mean, about 95% falls within 2 standard deviations, and almost all of the data (99.7%) falls within 3 standard deviations.

 Wikipedia - Probability density function of normal distributions with different means and standard deviations
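The Empirical Rule is easy to check numerically. The examples later in this post use R's pnorm, but the same cumulative probabilities can be computed from the error function in Python's standard library (a sketch for illustration, not part of the original post):

```python
from math import erf, sqrt

def normal_cdf(x, mean=0.0, sd=1.0):
    """Cumulative probability P(X <= x) for a normal distribution,
    equivalent to R's pnorm(x, mean, sd)."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# Probability mass within 1, 2, and 3 standard deviations of the mean:
for k in (1, 2, 3):
    p = normal_cdf(k) - normal_cdf(-k)
    print(f"within {k} sd: {p:.4f}")
# within 1 sd: 0.6827, within 2 sd: 0.9545, within 3 sd: 0.9973
```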

We can use these numerical descriptors to make predictions about events drawn from the distribution. For example, let's say that the mean of a probability distribution is 70 and the standard deviation is 13. From those we can calculate the probability that a random selection from the sample will be less than or equal to 60. In R we can use the pnorm(x, m, sd) function, which takes the value x, the mean of the distribution, and the standard deviation, and returns the cumulative probability of x, e.g. pnorm(60, 70, 13) = 0.2208782. If you wanted the probability that a random selection is larger than 60, then because the function gives the cumulative probability at or below x, you simply subtract that probability from 1, e.g. 1 - 0.2208782 = 0.7791218. Remember that the total area under the curve equals 1, so if you subtract the area to the left of x, all that you're left with is everything above that value of x.
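If you don't have R at hand, the same calculation can be reproduced with Python's standard-library error function (an illustrative sketch; the post itself uses R):

```python
from math import erf, sqrt

def pnorm(x, mean=0.0, sd=1.0):
    """Cumulative probability P(X <= x), mirroring R's pnorm(x, mean, sd)."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

p_below = pnorm(60, 70, 13)   # P(X <= 60) ≈ 0.2208782
p_above = 1 - p_below         # P(X > 60)  ≈ 0.7791218
print(p_below, p_above)
```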

Normal distributions come in various shapes depending on their mean and standard deviation (see figure above), however it is often useful to standardise the normal distribution. This is done by centring the bell curve on a mean of 0 with a standard deviation of 1. The result is called the standard normal distribution, as shown in the figure below.

 University of Virginia - Standard normal distribution

Notice the formula on the right side. It says that given a value x, we can determine z by subtracting the mean from x and dividing the result by the standard deviation: $z = \frac{x - \mu}{\sigma}$. This transforms the value of x into a z-score, which fits within the standard normal curve. Using the example from earlier this would be:
$$z = \frac{60 - 70}{13} = -0.7692307$$

With x transformed to z, and remembering that the mean is now 0 and the standard deviation is 1, we can provide these values to R as pnorm(-0.7692307, 0, 1) = 0.2208782. Notice that the result is the same as pnorm(60, 70, 13) from earlier. This method of transforming and standardising data can be useful for comparing results from different datasets.
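The equivalence is easy to verify: standardising x and evaluating the standard normal CDF gives the same probability as evaluating the original distribution's CDF directly (a Python sketch using the standard library, standing in for the post's R calls):

```python
from math import erf, sqrt

def pnorm(x, mean=0.0, sd=1.0):
    """Cumulative probability P(X <= x), mirroring R's pnorm."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

x, mean, sd = 60, 70, 13
z = (x - mean) / sd                # z-score: (60 - 70) / 13 ≈ -0.7692308
p_standard = pnorm(z)              # pnorm(z, 0, 1) on the standard normal
p_original = pnorm(x, mean, sd)    # pnorm(60, 70, 13)
assert abs(p_standard - p_original) < 1e-12  # same probability either way
```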

If you don't have the luxury of a computer running R to calculate the probability of a z-score, you can determine it using a Z Table. You use a Z Table by looking down the left-most column for the row matching the first two digits of the z-score (the digit either side of the decimal point), and then within that row finding the column whose header in the top row matches the second digit after the decimal point. The cell at those two coordinates is the probability for the given value of z, as shown in the figure below.

You will notice that the value isn't exactly the same, but it's a decent approximation. I could've found a more accurate probability by rounding the z-score to -0.77 to get a probability of 0.2206, which is closer to the exact value of 0.2208782.

One final thing to note is that if you have a negative z-score you don't necessarily need a Z table with negative values to find the probability. You can use a positive value Z table as below...

... and then subtract your probability from 1 e.g. 1 - 0.7794 = 0.2206.
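This works because the standard normal distribution is symmetric about 0, so P(Z ≤ -z) = 1 - P(Z ≤ z). A quick check in Python's standard library (the original post uses R and printed tables):

```python
from math import erf, sqrt

def pnorm(x):
    """Standard normal cumulative probability, like R's pnorm(x, 0, 1)."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# The table lookup for z = 0.77 gives 0.7794; by symmetry the
# probability for z = -0.77 is its complement:
print(1 - pnorm(0.77))   # ≈ 0.2206
assert abs(pnorm(-0.77) - (1 - pnorm(0.77))) < 1e-12
```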

Since that nicely covers normal distributions, I'll end this post here. I have more to cover on continuous random variables with t, Chi-squared, and F-distributions in another post.

To be continued...

## Monday, 17 March 2014

### The plot thickens...

Two new posts in the works, one on continuous random variables, and the other a discussion of the role of media in science.