
Shocking: Study finds Fox News viewers LEAST INFORMED

Really? But...but...but... others such as MSNBC only report what they want you to hear OR NOT hear:

http://www.newsmax.com/US/obama-martin-zimmerman-shooting/2012/05/24/id/440177

MSNBC viewers performed better than people who watch no news at all, and better than Fox News viewers. This is the second study to reach the same conclusion about your trusty Faux News! You people constantly prove how uninformed you truly are, and two studies now back up the fact that you people are wackos and not bright.
HAAHAHAHHAHAHAHAHAHAHAAHHAHAHAAHAHAHAHAHAHAHAA
 




With very few exceptions, all one has to do is read the threads and posts from the cons here on CP to see that the results of the study are 101% accurate.
 
I don't know what's funnier - the study itself or the frenzied rantings of SPN, Rockification, Phoebe, and MFAS, who are all calling the study "biased".

You and BN live in a world of total delusion.

Sorry, but the study's results are unbelievable in light of all the facts.

It's easy to get a study to say what you want. I would hope that pharma reps would understand that.

As usual, you've got nothing.
 




You and BN live in a world of total delusion.

Sorry, but the study's results are unbelievable in light of all the facts.

It's easy to get a study to say what you want. I would hope that pharma reps would understand that.

As usual, you've got nothing.

Thanks for the laugh. I posted this and the hate group classification on another board - you don't like it, so you make excuses. Hilarious!
 




I don't know what's funnier - the study itself or the frenzied rantings of SPN, Rockification, Phoebe, and MFAS, who are all calling the study "biased".

It is rather ironic how they show up on various threads and posts here and confirm the results of the study with their 'hair on fire' rants.
 




A statistician replies to the survey. All things considered, this survey really has no substance. Read on:

I've been following your "False Equivalence" series and have generally enjoyed and agreed with your insights, but I fear you may have jumped to a possibly unfounded conclusion on this one. I'm a statistician by trade, have worked with various US government statistics departments in the past, and currently work for an international organization. Though I find these results entertaining from a media-frenzy point of view, a number of alarm bells go off right away when I see this survey. In ascending order of what bothered me most (with the relevant survey disclaimer quotes in quotation marks):

1. It was conducted as a telephone survey. "Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, non-response (eligible individuals refusing to be interviewed)..." With caller ID these days, what are the chances that randomly chosen people would pick up for an unknown number? And of those who pick up, how many are likely to agree to talk on the phone for 10 minutes to complete a survey such as this? I would surmise that the response rate was quite low (I didn't see any documentation in the report). A low response rate raises the possibility of nonresponse bias - the possibility that certain demographic types would be undersampled. The report states that responses were reweighted to account for discrepancies in race, age, and gender proportions as compared to the national average, but presumably there are other factors that go into nonresponse bias.

2. Only 8 questions were asked. "Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, ... question wording, the order in which questions are asked, and variations among interviewers." This is a structural bias issue. For example, what if Fox News reported particularly poorly on one or more of the topics included in the survey, but reported much better on some other topics not included? While I don't see any inherent bias in the questions, that doesn't mean there isn't any. How were the questions selected? Did liberals, conservatives, and centrists all screen them for bias? And how well do the results of 8 random news questions relate to "what you know" anyway?

3. The deep breakdown of data in the survey. 1,185 people sounds like a lot, but when it is broken down to such a low level the sample size dwindles. The graph that you use in your post shows the average number of questions answered correctly by respondents who reported getting their news from just that source in the past week. So of the 1,185, how many watched Fox News and not any of the other sources listed? MSNBC? I would think that most people get their news from multiple sources (local news AND Fox News, for example). These people are apparently excluded from the analysis. Presumably, the remaining sample could be quite small. Which leads to possibly the most important issue:

4. Lack of standard errors on the correct-answers statistic. "The margin of error for a sample of 1185 randomly selected respondents is +/- 3 percentage points. The margin of error for subgroups is larger and varies by the size of that subgroup." The sizes of the subgroups on which the graph is based are not mentioned. Also, +/- 3 percentage points does not apply to the number of questions answered correctly. I do not see evidence of statistical testing to show that there are significant differences between respondents who report getting their news from different sources (though I suppose there's a chance it may just not have been mentioned in the report).

While I'm not sure that the team at Fairleigh Dickinson could have done a much better job than they did with their resources, I think this type of survey does not rise to the level of "news" (nor do most soft surveys like this). It is extremely easy to jump to conclusions based on a graph that agrees with one's inklings about news sources even when the data behind it may not lend itself to clear-cut conclusions. Another thing that should be noted is the issue of causality. You note in your post "that NPR aspires actually to be a news organization and provide 'information', versus fitting a stream of facts into the desired political narrative." While this could be true, it is also possible that, even if the survey results were correct, there may be a bit of self-selection when choosing news networks. In that case, ignorance could be the viewer's fault rather than the fault of Fox News.
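
For anyone who wants to sanity-check points 3 and 4 above, here is a rough back-of-the-envelope sketch in Python. The subgroup size (150 "Fox News only" respondents) and the spread of scores (about 1.7 questions) are assumptions made purely for illustration - the FDU report does not publish those numbers.

import math

# Rough sanity check of points 3 and 4 above - not numbers from the FDU report.
# The subgroup size (150) and score spread (1.7) are illustrative assumptions.

def moe_proportion(n, p=0.5, z=1.96):
    # 95% margin of error for an estimated proportion (simple random sample)
    return z * math.sqrt(p * (1 - p) / n)

print(f"full sample, n=1185: +/- {moe_proportion(1185):.1%}")  # ~2.8%, i.e. the reported +/- 3 points
print(f"subgroup,    n=150:  +/- {moe_proportion(150):.1%}")   # ~8.0% - far wider

def moe_mean(sd, n, z=1.96):
    # 95% margin of error for a sample mean (here, a score out of 8 questions)
    return z * sd / math.sqrt(n)

print(f"mean score, n=150, sd=1.7: +/- {moe_mean(1.7, 150):.2f} questions")  # ~ +/- 0.27

Under those assumptions, differences between outlets of roughly a quarter of a question could be within the noise, which is exactly why the missing subgroup sizes and standard errors matter.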
 




A statistician replies to the survey. All things considered, this survey really has no substance. Read on:

I've been following your "False Equivalence" series and have generally enjoyed and agreed with your insights, but I fear you may have jumped to a possibly unfounded conclusion on this one. I'm a statistician by trade, have worked with various US government statistics departments in the past, and currently work for an international organization. Though I find these results entertaining from a media-frenzy point of view, a number of alarm bells go off right away when I see this survey. In ascending order of what bothered me most (with the relevant survey disclaimer quotes in quotation marks):

1. It was conducted as a telephone survey. "Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, non-response (eligible individuals refusing to be interviewed)..." With caller ID these days, what are the chances that randomly chosen people would pick up for an unknown number? And of those who pick up, how many are likely to agree to talk on the phone for 10 minutes to complete a survey such as this? I would surmise that the response rate was quite low (I didn't see any documentation in the report). A low response rate raises the possibility of nonresponse bias - the possibility that certain demographic types would be undersampled. The report states that responses were reweighted to account for discrepancies in race, age, and gender proportions as compared to the national average, but presumably there are other factors that go into nonresponse bias.

2. Only 8 questions were asked. "Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, ... question wording, the order in which questions are asked, and variations among interviewers." This is a structural bias issue. For example, what if Fox News reported particularly poorly on one or more of the topics included in the survey, but reported much better on some other topics not included? While I don't see any inherent bias in the questions, that doesn't mean there isn't any. How were the questions selected? Did liberals, conservatives, and centrists all screen them for bias? And how well do the results of 8 random news questions relate to "what you know" anyway?

3. The deep breakdown of data in the survey. 1,185 people sounds like a lot, but when it is broken down to such a low level the sample size dwindles. The graph that you use in your post shows the average number of questions answered correctly by respondents who reported getting their news from just that source in the past week. So of the 1,185, how many watched Fox News and not any of the other sources listed? MSNBC? I would think that most people get their news from multiple sources (local news AND Fox News, for example). These people are apparently excluded from the analysis. Presumably, the remaining sample could be quite small. Which leads to possibly the most important issue:

4. Lack of standard errors on the correct-answers statistic. "The margin of error for a sample of 1185 randomly selected respondents is +/- 3 percentage points. The margin of error for subgroups is larger and varies by the size of that subgroup." The sizes of the subgroups on which the graph is based are not mentioned. Also, +/- 3 percentage points does not apply to the number of questions answered correctly. I do not see evidence of statistical testing to show that there are significant differences between respondents who report getting their news from different sources (though I suppose there's a chance it may just not have been mentioned in the report).

While I'm not sure that the team at Fairleigh Dickinson could have done a much better job than they did with their resources, I think this type of survey does not rise to the level of "news" (nor do most soft surveys like this). It is extremely easy to jump to conclusions based on a graph that agrees with one's inklings about news sources even when the data behind it may not lend itself to clear-cut conclusions. Another thing that should be noted is the issue of causality. You note in your post "that NPR aspires actually to be a news organization and provide 'information', versus fitting a stream of facts into the desired political narrative." While this could be true, it is also possible that, even if the survey results were correct, there may be a bit of self-selection when choosing news networks. In that case, ignorance could be the viewer's fault rather than the fault of Fox News.

So Rasmussen and other telephone surveys are bogus? Who knew?
 
A statistician replies to the survey. All things considered, this survey really has no substance. Read on:

I've been following your "False Equivalence" series and have generally enjoyed and agreed with your insights, but I fear you may have jumped to a possibly unfounded conclusion on this one. I'm a statistician by trade, have worked with various US government statistics departments in the past, and currently work for an international organization. Though I find these results entertaining from a media-frenzy point of view, a number of alarm bells go off right away when I see this survey. In ascending order of what bothered me most (with the relevant survey disclaimer quotes in quotation marks):

1. It was conducted as a telephone survey. "Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, non-response (eligible individuals refusing to be interviewed)..." With caller ID these days, what are the chances that randomly chosen people would pick up for an unknown number? And of those who pick up, how many are likely to agree to talk on the phone for 10 minutes to complete a survey such as this? I would surmise that the response rate was quite low (I didn't see any documentation in the report). A low response rate raises the possibility of nonresponse bias - the possibility that certain demographic types would be undersampled. The report states that responses were reweighted to account for discrepancies in race, age, and gender proportions as compared to the national average, but presumably there are other factors that go into nonresponse bias.

2. Only 8 questions were asked. "Survey results are also subject to non-sampling error. This kind of error, which cannot be measured, arises from a number of factors including, but not limited to, ... question wording, the order in which questions are asked, and variations among interviewers." This is a structural bias issue. For example, what if Fox News reported particularly poorly on one or more of the topics included in the survey, but reported much better on some other topics not included? While I don't see any inherent bias in the questions, that doesn't mean there isn't any. How were the questions selected? Did liberals, conservatives, and centrists all screen them for bias? And how well do the results of 8 random news questions relate to "what you know" anyway?

3. The deep breakdown of data in the survey. 1,185 people sounds like a lot, but when it is broken down to such a low level the sample size dwindles. The graph that you use in your post shows the average number of questions answered correctly by respondents who reported getting their news from just that source in the past week. So of the 1,185, how many watched Fox News and not any of the other sources listed? MSNBC? I would think that most people get their news from multiple sources (local news AND Fox News, for example). These people are apparently excluded from the analysis. Presumably, the remaining sample could be quite small. Which leads to possibly the most important issue:

4. Lack of standard errors on the correct-answers statistic. "The margin of error for a sample of 1185 randomly selected respondents is +/- 3 percentage points. The margin of error for subgroups is larger and varies by the size of that subgroup." The sizes of the subgroups on which the graph is based are not mentioned. Also, +/- 3 percentage points does not apply to the number of questions answered correctly. I do not see evidence of statistical testing to show that there are significant differences between respondents who report getting their news from different sources (though I suppose there's a chance it may just not have been mentioned in the report).

While I'm not sure that the team at Fairleigh Dickinson could have done a much better job than they did with their resources, I think this type of survey does not rise to the level of "news" (nor do most soft surveys like this). It is extremely easy to jump to conclusions based on a graph that agrees with one's inklings about news sources even when the data behind it may not lend itself to clear-cut conclusions. Another thing that should be noted is the issue of causality. You note in your post "that NPR aspires actually to be a news organization and provide 'information', versus fitting a stream of facts into the desired political narrative." While this could be true, it is also possible that, even if the survey results were correct, there may be a bit of self-selection when choosing news networks. In that case, ignorance could be the viewer's fault rather than the fault of Fox News.

Wow, nice novel. Lighten up, it's a funny study! If this had shown MSNBC trading places with Fox, you people would have undoubtedly pounced.
 




Wow, nice novel. Lighten up, it's a funny study! If this had shown MSNBC trading places with Fox, you people would have undoubtedly pounced.

Another one of your self-serving lies. No - if the methodology was as shoddy as this one's, then we wouldn't have.

You claim to be a science major and a pharma rep, and you run with such trash even after it's been debunked? PATHETIC.
 




Another one of your self-serving lies. No - if the methodology was as shoddy as this one's, then we wouldn't have.

You claim to be a science major and a pharma rep, and you run with such trash even after it's been debunked? PATHETIC.

You just don't like the results and you are shooting the messenger. It hasn't been debunked. Just like saying that judges' decisions you don't like come from 'activist' judges, you've decided you don't like this conclusion either, so you make up excuses that fall flat. Thanks for the laugh, MFAS - they always come unintended from you, which makes it even funnier.
 




You just don't like the results and you are shooting the messenger. It hasn't been debunked. Just like saying that judges' decisions you don't like come from 'activist' judges, you've decided you don't like this conclusion either, so you make up excuses that fall flat. Thanks for the laugh, MFAS - they always come unintended from you, which makes it even funnier.

The only thing laughable is your posting.

The study was thoroughly exposed as methodologically pathetic. You are the one who likes it because of what it says.

Run along now, kid. As usual, you are out of your depth.