2019 US National Public Opinion Survey Of Global Strategic Partnerships and Education Diplomacy

Emerson College Polling, under the supervision of Assistant Professor Spencer Kimball,
is pleased to present the Association of Marshall Scholars with the findings from a
survey of American attitudes toward strategic alliances, international partnerships, and
overseas learning. All respondents interviewed in this study were part of a fully
representative sample using an area probabilistic sampling method of N = 1,600 (sample



Survey of Registered Voters' Attitudes on Local Issues: City of Cambridge, Massachusetts

A survey of 400 registered voters in the city of Cambridge, MA, finds that a plurality of residents (47%) believe affordable housing is the most pressing issue facing the city. Traffic follows at 13% and education at 8%, while bikes and bike lanes, crime, drugs, and opioids each received 5% or less. Another 13% of residents name an unspecified other issue as most important.

When asked about a potential Affordable Housing Overlay District in Cambridge, 58% of residents were aware of the proposal. 38% of residents support the proposal, 32% oppose it, and 30% are unsure.

Residents were also asked about the success of efforts to revitalize the city's squares. 30% of residents say there have been meaningful improvements in Central Square, 35% say there have not, and 35% are unsure. Regarding Inman Square, only 16% of residents see meaningful improvements, 33% do not, and a majority of 51% are unsure.

Overall approval of the Cambridge City Council among residents is mixed: 34% approve, 26% disapprove, 30% are unsure, and 11% have no opinion.

Residents were also asked about the favorability of individual City Council members:

Marc McGovern: 43% favorable, 19% unfavorable, 24% unsure, 15% no opinion

Jan Devereux: 31% favorable, 17% unfavorable, 29% unsure, 23% no opinion

Dennis Carlone: 26% favorable, 17% unfavorable, 33% unsure, 25% no opinion

Craig Kelley: 31% favorable, 18% unfavorable, 28% unsure, 24% no opinion

Alanna Mallon: 26% favorable, 19% unfavorable, 33% unsure, 22% no opinion

Sumbul Siddiqui: 23% favorable, 20% unfavorable, 31% unsure, 26% no opinion

Denise Simmons: 41% favorable, 25% unfavorable, 18% unsure, 16% no opinion

Timothy Toomey: 32% favorable, 23% unfavorable, 27% unsure, 18% no opinion

Quinton Zondervan: 22% favorable, 17% unfavorable, 36% unsure, 25% no opinion


Full Results

Survey Instrument and Report

Allocating Undecided Voters in Pre-election Polling

Spencer Kimball, Esq., J.D., M.S., M.A.

Liudmila Yudina, M.A.

Research Assistant: Cole Mootz, Emerson College

Abstract: Is there a way to make pre-election polls more accurate? This paper tests some of the most popular methods of allocating 'undecided' voters, based on the underlying theory that allocating undecided voters will bring poll results closer to election results and strengthen a pollster's claims of accuracy. The polling literature identifies two popular methods for incorporating undecided voters: asking a "leaner" question that follows the ballot test question, or allocating the undecided proportionally to expressed vote preference. Both methods were used in this study, along with a third option, an even allocation, which is essentially no allocation of undecided voters. The study incorporates n = 54 pre-election polls conducted in 20 different states between October 26 and November 4, 2018, which were used to compare the three allocation methods on three measures: an Absolute Error test (deviation between poll results and election results; Mosteller et al., 1949), a Statistical Accuracy test (absolute error compared with the poll's margin of error; Kimball, 2017), and a Predictive Accuracy test (did the poll predict the actual election winner?). The study found no significant difference in accuracy between polls that allocated undecided voters and those that did not (χ2(2, N = 161) = .200, p = .905), suggesting that allocating undecided voters neither detracts from nor adds to the reliability and validity of a pre-election poll.
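The allocation methods and the absolute-error measure described above can be sketched in a few lines. This is a minimal illustration, not the paper's code; the function names and the example poll numbers are invented for demonstration.

```python
# Two of the undecided-allocation methods compared in the paper, plus the
# Mosteller Method 5 absolute-error measure (|poll margin - actual margin|).

def proportional_allocation(dem, rep, undecided):
    """Split undecided voters in proportion to each candidate's decided share."""
    decided = dem + rep
    return (dem + undecided * dem / decided,
            rep + undecided * rep / decided)

def even_allocation(dem, rep, undecided):
    """Split undecided voters evenly (equivalent to no net allocation,
    since the margin between the candidates is unchanged)."""
    return dem + undecided / 2, rep + undecided / 2

def absolute_error(poll_dem, poll_rep, actual_dem, actual_rep):
    """Mosteller Method 5: absolute difference between poll margin and actual margin."""
    return abs((poll_dem - poll_rep) - (actual_dem - actual_rep))

# Hypothetical poll: 46% D, 44% R, 10% undecided; actual result 52% D, 48% R.
d_prop, r_prop = proportional_allocation(46, 44, 10)
d_even, r_even = even_allocation(46, 44, 10)
print("proportional error:", absolute_error(d_prop, r_prop, 52, 48))
print("even error:", absolute_error(d_even, r_even, 52, 48))
```

Note that the even allocation leaves the poll's margin untouched, which is why the abstract treats it as "essentially no allocation."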

Read More

Journal of Business Diversity: Survey Data Collection: Online Panel Efficacy. A Comparative Study of Amazon MTurk and Research Now SSI / Survey Monkey / Opinion Access

Spencer H. Kimball

Emerson College

This study examined how the demographic composition of panels provided by Amazon MTurk compared with that of other prominent online panel firms (Research Now SSI (Dynata), Survey Monkey, and Opinion Access). Demographics used for comparison were gender, party affiliation, race, age, education, and regional distribution within each state. Nine polls were used: eight from different U.S. states and one national U.S. poll, conducted between June 25, 2018 and August 13, 2018. Of the 54 chi-square tests conducted to compare the panels, n = 18 found a significant difference; age had the strongest relationship, with a difference found in 7 of 9 polls.
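A panel-to-panel demographic comparison of the kind the study ran 54 times reduces to a chi-square test on a contingency table of respondent counts. The sketch below is illustrative only: the age brackets and counts are invented, and the critical value comes from a standard chi-square table, not from the study.

```python
# Pearson chi-square test of whether two online panels differ in age
# composition, written in pure Python with no dependencies.

def chi_square_stat(observed):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows: two panels; columns: four age brackets (hypothetical counts).
observed = [
    [120, 180, 150, 50],   # Panel A (e.g., MTurk)
    [80, 140, 160, 120],   # Panel B (e.g., a comparison panel)
]
stat = chi_square_stat(observed)
dof = (len(observed) - 1) * (len(observed[0]) - 1)  # = 3
print(f"chi2 = {stat:.2f} on {dof} df")
# Critical value for alpha = .05 at 3 df is 7.815 (standard chi-square table).
print("significant" if stat > 7.815 else "not significant")
```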

Read Article

AAPOR: Comparing Online Panels and IVR Samples

View Professor Spencer Kimball’s 2019 AAPOR presentation on three studies:

Study 1: Polls and Mode of Data Collection

Study 2: Online Panel “House Effects”

Study 3: Mix Mode and Midterms 2018

Full Presentation



2016 Presidential Statewide Polling — A Substandard Performance: A Proposal and Application for Evaluating Pre-election Poll Accuracy

Spencer Kimball, Esq., J.D., M.S., M.A.

American Behavioral Scientist

Abstract: This study implements a statistical accuracy (SA) measurement for assessing pre-election poll accuracy by comparing Mosteller (1949) Method 5 (the absolute difference between poll results and election results) with the poll's margin of error (MOE) or credibility interval. The expectation is that 95% of poll results would be statistically accurate, with the actual margin of victory falling within the poll's margin of error or credibility interval. The new measurement is described and then applied to the statewide pre-election polls from the 2012 Presidential (n = 331) and 2016 Presidential (n = 539) races, conducted by n = 182 polling organizations in the last 21 days of each election cycle. The analysis finds statewide pre-election polling in 2012 had a 94% SA rate, not statistically different from the expected 95%, while statewide polling in 2016 had a 77% SA rate, a distribution a binomial test found to differ significantly from the expected 95%. There is a significant difference in SA between the two election cycles, χ2(1, N = 870) = 45.24, p < .001. In 2012, biased polls favored the Republican candidate 68% of the time; however, a binomial test found this distribution did not differ significantly from the expected 50/50 distribution (p = .167, two-tailed), suggesting random error. In 2016, biased polls favored the Democratic candidate 90% of the time, and a binomial test indicated this proportion was significantly higher than the expected .50 (p < .001, two-tailed), suggesting a systematic bias.
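The SA test the abstract describes can be sketched as a simple predicate: a poll counts as statistically accurate when its Method 5 absolute error falls within its margin of error. This is a minimal illustration under that reading of the abstract (whether the MOE should be doubled when comparing candidate margins is a methodological choice the sketch does not take a position on); the poll numbers are hypothetical.

```python
# Statistical Accuracy (SA) test sketch: a poll passes when its absolute
# error (|poll margin - actual margin|) is within the poll's MOE.

def is_statistically_accurate(poll_margin, actual_margin, moe):
    """True if the poll's absolute error is within its margin of error."""
    return abs(poll_margin - actual_margin) <= moe

# Hypothetical polls: (poll margin, actual margin, MOE), all in points.
# Negative margins indicate the other candidate led.
polls = [(4.0, 2.5, 3.0), (6.0, 1.0, 3.5), (-2.0, -4.5, 4.0)]
sa_rate = sum(is_statistically_accurate(*p) for p in polls) / len(polls)
print(f"SA rate: {sa_rate:.0%}")
```

Applying this predicate across every poll in a cycle yields the SA rate (94% in 2012, 77% in 2016) that the abstract then compares against the expected 95%.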

Read More
