0:00 hey everybody welcome back it seems like
0:01 people these days are super skeptical of
0:03 opinion polls especially if they don't
0:05 like the results is this mistrust
0:07 Justified smash that like button and
0:09 let's find
0:13 out remember this unit is called
0:15 American political ideologies and
0:17 beliefs these lessons are all about how
0:19 political scientists try to measure
0:21 people's beliefs the primary method is
0:23 through scientifically valid polls so
0:25 let's start by talking about polling
0:27 methodology and elements of scientific
0:29 polls first the sample must be random in
0:32 other words everybody in the population
0:34 must have an equal chance of being
0:36 selected think of like a random number
0:38 generator or something like that this
0:40 means that you can't allow people to
0:41 select themselves to participate as
0:43 that's not random that's a good start
0:45 but the sample also needs to be
0:47 stratified meaning that the population
0:49 is divided into subgroups and weighted
0:51 based on the demographics of the
0:53 population okay big words but what does
0:56 that mean let's say you want to measure
0:57 opinion of students at your school if
1:00 seniors are 22% of the school population
1:02 then they should be 22% of your sample
1:05 your poll shouldn't over or under sample
1:07 certain groups otherwise this will
1:09 negatively impact the validity of the
1:11 poll we're thinking here of things like
1:13 race gender party affiliation education
1:16 level income age location they should
1:18 each be the same proportion of the
1:20 sample that they are of the population
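The two ideas above — random selection, within a sample that mirrors the population's demographics — can be combined in a short sketch. This is a minimal illustration, assuming a made-up school of 1,000 students grouped only by class year (real polls stratify on many traits at once, like race, gender, and age, and often weight responses afterward):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Invented population: 1,000 students tagged with their class year.
population = (
    ["freshman"] * 280 + ["sophomore"] * 260 +
    ["junior"] * 240 + ["senior"] * 220  # seniors are 22% of the school
)

def stratified_sample(pop, size):
    """Draw randomly WITHIN each subgroup so each group's share of
    the sample matches its share of the population."""
    sample = []
    for group in sorted(set(pop)):
        members = [p for p in pop if p == group]
        share = len(members) / len(pop)   # e.g., 0.22 for seniors
        quota = round(size * share)       # seniors get 22 of 100 slots
        sample.extend(random.sample(members, quota))  # random within group
    return sample

poll = stratified_sample(population, 100)
print(poll.count("senior"))  # 22 — same proportion as the population
```

Everyone in each subgroup still has an equal chance of selection; the quotas just stop any group from being over- or under-sampled.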
1:22 now even if you've done everything
1:24 properly your poll will still contain a
1:26 sampling error that's because you only
1:28 polled a sample of the population you
1:31 didn't ask everybody so some of your
1:33 results may be the result of chance
1:35 variation meaning that if you did
1:38 everything the exact same way again the
1:40 results might be slightly different a
1:42 sample size of a thousand people is
1:44 considered scientifically valid and a
1:46 sampling error of plus or minus 3% is
1:49 generally acceptable but this means that
1:51 the results of the poll could be 3
1:52 percentage points higher or lower than
1:55 what's indicated any scientific poll
1:57 will report a sampling error this is
1:59 something that contributes to its
2:01 validity additionally keep in mind that
2:03 the wording of the question must be
2:04 neutral clear and unbiased as any effort
2:08 to push respondents towards a specific
2:10 response makes the poll invalid and if
2:12 people don't understand the question the
2:14 results aren't valid either Additionally
2:16 the question needs to be framed
2:17 neutrally which isn't just the wording
2:20 of the question it can be about the
2:22 surrounding questions on the survey for
2:24 example let's say that a pollster wants
2:26 to find out if people support drilling
2:27 for oil in a wildlife refuge
2:30 even if that question is neutral
2:32 pollsters could manipulate respondents
2:35 for example if the first 10 questions
2:37 were about the environment and animals
2:39 they've primed the respondent to answer
2:42 in opposition to drilling on the other
2:44 hand if the first 10 questions were
2:46 about high gas prices and economic
2:48 problems they've primed respondents to
2:50 answer in support of drilling the type
2:52 and format of questions and answer
2:54 choices matters for example if the
2:57 question is open-ended or multiple
2:58 choice or ranked choice etc all these things
3:01 will affect the results lastly the
3:03 results must be accurately reported
3:05 should be clear to understand and should
3:07 only draw conclusions that can be
3:09 supported by the data this is typically
3:12 a bigger problem with media sources than
3:14 with pollsters themselves it's very easy
3:16 to report poll results and make stronger
3:18 assertions about public opinion than the
3:21 data support okay so let's switch gears
3:23 and discuss four types of scientific
3:25 polls each of these can be placed in the
3:27 broad category of mass surveys
3:30 we're talking about interviewing or
3:32 polling a large sample of the population
3:35 typically contacting them via phone or
3:37 sometimes internet these days first up
3:39 opinion polls which are delightfully
3:41 self-defining since they are used to
3:42 measure you guessed it opinion on some
3:45 issue let's say we wanted to know how
3:47 people feel about legalizing marijuana
3:50 this is the poll that we would conduct
3:51 by the way remember that people's
3:53 opinions change over time so a poll
3:56 showing that people opposed marijuana
3:58 legalization way back when dinosaurs
4:00 roamed the Earth in 1970 isn't relevant to
4:03 policy discussions today a tracking poll
4:06 is a continuous poll used to chart
4:07 changes in opinion over time it asks the
4:10 same question every time such as about
4:12 presidential approval or perhaps who you
4:14 plan to vote for and it may even contact
4:17 the same people and then it just tracks
4:19 the changes in the response to that
4:20 question over time when you hear that
4:22 the president's approval rating is up or
4:24 down four points since last month this
4:26 is the kind of poll being discussed
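The "plus or minus 3%" figure from earlier, and why it matters for reading a tracking poll, can be sketched with the standard 95%-confidence approximation for a proportion. The sample size of 1,000 and the four-point swing are just the numbers from this lesson, not data from any real poll:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Standard 95%-confidence margin of error for a poll proportion.
    p = 0.5 is the worst case, giving the widest margin."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)
print(f"{moe:.1%}")  # 3.1% — the 'plus or minus 3%' for a 1,000-person poll

# A four-point swing in a tracking poll exceeds the ~3% sampling error,
# but a two-point swing could be nothing more than chance variation.
print(abs(0.48 - 0.44) > moe)  # True
```

This is also why bigger samples shrink the sampling error: the margin falls with the square root of the sample size, so quadrupling the sample only halves the error.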
4:28 benchmark polls are typically conducted
4:30 by a candidate before they've officially
4:32 announced their candidacy this is one of
4:35 the first things that they might do to
4:37 find out where they stand with the
4:38 public before any campaigning they can
4:41 gather information such as the strengths
4:43 and weaknesses of a candidate find out
4:45 if people have even heard of them what
4:47 they associate with that person which
4:50 demographic groups have more favorable
4:52 attitudes towards them that kind of
4:54 stuff whereas the first three are most
4:56 likely contacting people via the phone
4:58 or Internet exit polls are done in
5:00 person interviewing people as they exit
5:03 the polling place on Election Day
5:05 they're trying to gain insight into
5:06 voting behavior to help campaigns and
5:08 media organizations predict the outcome
5:11 of the election before the votes have
5:13 been counted they're trying to figure
5:15 out which demographic groups showed up
5:16 and voted and what the key factors
5:18 affecting voter Choice were in the
5:20 election and these insights helped them
5:23 project the result before all the votes
5:25 have been counted in contrast to these
5:27 types of polls or mass surveys campaigns
5:29 sometimes use focus groups to gather a
5:32 small number of voters and lead an
5:34 in-depth discussion about a candidate to
5:37 determine how people feel about her it's
5:39 like a brainstorming session on steroids
5:42 where participants openly discuss their
5:44 thoughts and feelings about a candidate
5:46 or an issue super detailed insights
5:49 absolutely scientifically valid not so
5:52 much topic 4.6 is called evaluation of
5:54 public opinion data and to be honest
5:57 this section is less about learning
5:58 content and more about you feeling
6:01 comfortable analyzing polling data in
6:03 charts graphs and infographic form check
6:06 out the ultimate review packet for great
6:08 practice questions the other main idea
6:09 of this section is asking can we trust
6:12 polls are they reliable polling methods
6:14 have changed significantly since 2016
6:17 pollsters are far less reliant on live
6:19 phone polling since many households only
6:21 have cell phones and either block or
6:23 simply don't answer calls from pollsters
6:25 online opt-in polls probability-based panels
6:28 and even texting have grown
6:30 significantly and most pollsters now use
6:32 more than one contact method lastly
6:34 let's talk about reliability and
6:36 veracity reliability ensures we get
6:38 consistent repeatable results if the
6:41 poll was properly done we should get
6:43 very similar results if we immediately
6:45 redid the poll all other things equal
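That "redo the poll, get very similar results" idea can be simulated. This is a sketch with an invented population of 100,000 people where true approval is exactly 52%; each rerun of the identical poll differs only by chance variation:

```python
import random

random.seed(7)  # fixed seed so the simulation is reproducible

# Invented population: 100,000 people, 52% of whom truly "approve" (1).
population = [1] * 52_000 + [0] * 48_000

def run_poll(n=1000):
    """One poll: a fresh random sample of n respondents, no repeats."""
    return sum(random.sample(population, n)) / n

# Redo the identical poll five times: the results differ slightly,
# typically landing within a few points of the true 52%.
results = [run_poll() for _ in range(5)]
print([f"{r:.1%}" for r in results])
```

A reliable method produces this tight cluster of repeatable results; veracity is the separate question of whether the cluster sits near the true value.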
6:48 and veracity refers to the accuracy of
6:50 the data it's not enough for the results
6:52 to be consistent we want them to be
6:54 accurate for example do they accurately
6:57 predict election outcomes or not all
6:59 right well it's a wrap for this one
7:00 until next time this has been a money
7:04 production thanks again for watching I
7:06 appreciate you so much you the real MVP
7:10 if you can help me out hit that like button
7:11 or tell friends about my channel I
7:13 appreciate it check out the ultimate
7:15 review packet and I will see you in the
7:17 next video