
Evaluating Scientific Claims
For today's assignment, you will evaluate news articles that have been posted and re-posted on social media. The first step is to read the article assigned to you: find the article associated with the first letter of your last name. For example, my last name is Iniguez, so my assigned article is “Microwaved Water Kills Plants”.

Article Assignments

Topic: Aquaman Crystal will allow humans to breathe underwater
Complete the assignment in a Word or PDF document and upload it to Canvas.

Requirements: 250-word minimum
After you read your article, write a summary that includes the article's main claims.
Pick any two of the 15 tips for evaluating claims and use them to point out good science, or to point out problems with the science (or lack of science) used in the article.
In your conclusion, give your thoughts on the legitimacy of the article. Is the evidence convincing, or is there something that seems troubling to you?
How to Evaluate Scientific Claims

This list is designed to help non-scientists determine if a scientific claim is legitimate or simply media hype. In everyday life, people are bombarded with information: the safety of GMOs (genetically modified organisms), the success of a new diet pill, a fruit that fights cancer, or a lotion that erases stretch marks. How do we navigate this information and determine what is true and what isn't? The following information is adapted from the Nature article “Policy: Twenty tips for interpreting scientific claims”.

15 Tips for Evaluating a Scientific Claim

Bias: The design of an experiment or the way in which data is collected may produce atypical results. For example, determining voting behavior by asking people on the street, at home, or through the Internet will sample different proportions of the population and lead to different results. Because studies that report statistically significant results are more likely to be published, the scientific literature tends to give an exaggerated picture of the magnitude of problems or the effectiveness of solutions. An experiment might be biased by expectations: participants provided with a treatment might think they will experience a difference and so will act differently or report an effect. Researchers collecting results can also be influenced by knowing who received the treatment and who did not. The ideal experiment is double-blind: neither the participants nor the researchers know who received what.
No measurement is exact. Practically all measurements have some error. If the measurement process were repeated, one might record a different result. Sometimes the measurement error is large and sometimes it is small; a small measurement error is always better.
The bigger the sample size, the better.  A large sample size is usually more representative of a population than a small sample size. Participants are naturally different from one another. For example, in a study designed to determine the effectiveness of a new drug, each person will react slightly differently to the drug. If we sample only three people and they each happen to have no reaction to the drug, that may not be representative of the entire population.
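For readers comfortable with a little Python, the sample-size point can be seen in a toy simulation (the 50% response rate and the sample sizes here are made-up values, not from the article): with only three participants, it is common to see everyone respond, or no one, purely by chance.

```python
import random

random.seed(42)

# Toy assumption: exactly 50% of people respond to the drug.
def estimated_response_rate(sample_size):
    responses = [random.random() < 0.5 for _ in range(sample_size)]
    return sum(responses) / sample_size

# Repeat each study 1,000 times to see how much the estimate varies.
small = [estimated_response_rate(3) for _ in range(1000)]
large = [estimated_response_rate(300) for _ in range(1000)]

# With 3 people, a study often reports 0% or 100% response by chance;
# with 300 people, that essentially never happens.
print("small studies reporting 0% or 100%:", sum(e in (0.0, 1.0) for e in small))
print("large studies reporting 0% or 100%:", sum(e in (0.0, 1.0) for e in large))
```

Each tiny study of three people has a 1-in-4 chance of reporting "all responded" or "none responded", which says nothing about the true 50% rate.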
Correlation does not imply causation. Just because two things are correlated does not mean that one causes the other. It may be a coincidence, or both patterns may be caused by a third, unknown variable. For example, there is a correlation between the number of murders in a city and the amount of ice cream sold. How can this be? Are more murders causing more ice cream sales? The answer is no. It has been shown that there are more murders in warm weather, and when the weather is warm lots of people also buy ice cream.
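The ice cream example can be simulated in a few lines of Python. All the numbers below (temperatures, sales, rates) are invented for illustration: temperature drives both quantities, the two never influence each other, and yet a strong correlation appears.

```python
import random

random.seed(0)

# Invented toy data: daily temperature drives both ice cream sales
# and the number of violent incidents; neither causes the other.
days = 365
temps = [random.uniform(0, 35) for _ in range(days)]          # degrees C
ice_cream = [10 * t + random.gauss(0, 30) for t in temps]     # daily sales
incidents = [0.2 * t + random.gauss(0, 1) for t in temps]     # daily incidents

def correlation(xs, ys):
    # Pearson correlation coefficient, computed by hand.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong positive correlation, despite no causal link between the two.
print("correlation:", round(correlation(ice_cream, incidents), 2))
```

The correlation comes entirely from the shared cause (temperature); remove it, and the two series would be unrelated.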
Extrapolating beyond the data is risky. Patterns found in a given range don't necessarily apply outside of that range. For example, it is very difficult to determine the future effects of climate change on an ecosystem. This is because the rate of climate change is faster than has been experienced in the past, and the weather extremes are entirely new.
Controls are important. A control group is treated exactly the same way as an experimental group, except that the treatment is not applied. Without a control, it is difficult to determine if a given treatment really had an effect. For example, if you have 1,000 participants: 500 will receive a drug (experimental group) and 500 will receive a sugar pill (control group). At the end of the study, you can compare the groups to see if the experimental group had a different outcome than the control group.
Randomization avoids bias. Experiments should choose participants randomly and assign those participants randomly to groups. This avoids bias. For example, studying the educational achievement of children from wealthy families only will suffer from bias. Children should be randomly selected from many different populations.
Seek replication, not pseudoreplication. Results that are consistent across different studies, replicated on independent populations, are more likely to be solid. Applying a treatment to a class of children might be misleading because the children will have many features in common other than the treatment. The researchers might make the mistake of ‘pseudoreplication’ if they generalize from these children to a wider population that does not share the same features. Pseudoreplication leads to unwarranted faith in the results. Pseudoreplication of studies on the abundance of cod in the Grand Banks in Newfoundland, Canada, contributed to the collapse of what was once the largest cod fishery in the world.
Scientists are human. Scientists have a vested interest in promoting their work for status, research funding, or even financial gain. This can lead to selective reporting of results and occasionally, exaggeration. It is important to look at multiple independent sources of evidence.
Significance is significant. Expressed as P, statistical significance measures how likely a result at least as large as the one observed would be if only chance were at work. Thus P = 0.01 means that, if the treatment truly had no effect, a result this large would appear by chance only about 1 time in 100.
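A P value can be estimated by brute force, which makes its meaning concrete. The scenario below is invented for illustration: a coin lands heads 60 times in 100 flips, and we ask how often a fair coin would do at least that well by chance.

```python
import random

random.seed(1)

# Invented example: we observed 60 heads in 100 flips.
observed_heads = 60
trials = 10000

# Simulate many 100-flip experiments with a genuinely fair coin,
# and count how often chance alone matches or beats our observation.
at_least_as_extreme = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(100))
    if heads >= observed_heads:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / trials
print("estimated P =", p_value)
```

The estimate comes out near 0.03: a fair coin produces 60 or more heads only about 3 times in 100, so the result is unusual under pure chance, though not impossible.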
No effect and not-significant are not the same thing. A non-significant result does not mean there was no underlying effect: it means that no effect was detected. A small study may not have the power to detect a real difference. For example, tests of cotton and potato crops that were genetically modified to produce a toxin to protect them from damaging insects suggested that there were no negative effects on beneficial insects such as pollinators. Yet none of the experiments had large enough sample sizes to detect impacts on beneficial species had there been any.
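This point can also be made with a toy simulation (the response rates, sample sizes, and the crude two-standard-error detection rule below are all assumptions, not from the article): even when a treatment really works, a tiny study usually fails to "see" it.

```python
import random

random.seed(7)

# Toy assumption: the treatment truly raises the response rate
# from 50% to 60%.
def study_finds_effect(n):
    control = sum(random.random() < 0.5 for _ in range(n))
    treated = sum(random.random() < 0.6 for _ in range(n))
    gap = treated / n - control / n
    # Crude rule of thumb: call the effect "detected" if the observed
    # gap exceeds twice its approximate standard error.
    se = (0.25 / n + 0.24 / n) ** 0.5
    return gap > 2 * se

# Run 2,000 small studies and 2,000 large ones.
small_hits = sum(study_finds_effect(20) for _ in range(2000))
large_hits = sum(study_finds_effect(2000) for _ in range(2000))
print("small studies (n=20) detecting the real effect:", small_hits, "of 2000")
print("large studies (n=2000) detecting the real effect:", large_hits, "of 2000")
```

Most of the small studies report "no significant effect" even though the effect is real, which is exactly why a non-significant result is not proof of no effect.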
Limit generalizations. The relevance of a study depends on how much the conditions of the study resemble the conditions of the issue under consideration. For example, there are limits to the generalizations that one can make from animal or laboratory experiments to humans. The conditions of lab rats in an experiment are very different from the conditions of people in daily life, and it is therefore difficult to generalize a study done on rats in a laboratory to people.
Interrelated events. It is possible to calculate the consequences of individual events such as an extreme tide, heavy rainfall, and key workers being absent. However, if the events are interrelated (a storm causes a high tide, or heavy rain prevents workers from accessing the site), then the probability of them occurring together is much higher than might be expected from their individual probabilities.
Data can be cherry picked. Evidence can be arranged to support one point of view.
Extreme measurements can mislead. When looking at variation between two things (for example, the effectiveness of two schools), it is important to remember that many things can be responsible for their differences including: innate ability (teacher competence), sampling (children from one school might be an atypical sample), bias (one school may be in an area where students are unusually healthy and never miss class), and measurement error (how data about the effectiveness of a school was measured).