

2020 College Free Speech Rankings Q&A: How we calculated our rankings

“2020 College Free Speech Rankings: What’s the Climate for Free Speech on America’s College Campuses?” features the opinions of the roughly 20,000 students surveyed by the Foundation for Individual Rights in Education, College Pulse, and RealClearEducation.

This post is the third in a series that answers questions about FIRE's 2020 College Free Speech Rankings (see part I and part II). These rankings are based on the largest survey on college student attitudes toward free speech and expression ever conducted — a survey of almost 20,000 undergraduates enrolled at 55 colleges in the United States.

In this post, I address questions about our survey items and about how we calculated the campus free speech index that determined our rankings. 

How were the survey items chosen?

In a recent So to Speak podcast, Nico Perrino, FIRE's vice president of communications, noted that our first conversation about the Free Speech Rankings project was in early May of 2019. In that meeting, we identified a number of constructs relevant to an assessment of the climate for free expression on a campus:

  • Why people support or oppose censorship;
  • Why people engage in self-censorship;
  • Why people support or oppose civil liberties;
  • What factors underlie political tolerance.

FIRE then took the lead on drafting an initial round of questions assessing these dimensions and worked with College Pulse and RealClearEducation to revise them. A pilot test of the survey questions was conducted in early March 2020 on a sample of 500 college students before the full survey launched on April 1.

Why did you not include all of the survey items in your computation of the College Free Speech Rankings?

To compute our campus free speech index (rankings method), we first performed an exploratory factor analysis (EFA) of all variables. This was followed by a series of confirmatory factor analyses (CFA). 

In our survey, a total of 24 questions were presented to every student. Each one of these 24 questions is considered a single observed variable, so there are 24 total observed variables in the dataset. 

The basic assumption of an EFA is that there are one or more underlying variables, referred to as factors, that can explain the associations between certain survey questions, or observed variables. In other words, an EFA helps identify questions that have similar patterns of responses and “hang together” in a way that suggests they are measuring the same underlying construct; typically with survey data, two or more questions measuring the same construct will be significantly correlated with each other.

To take another survey as an example, scholars have created a measure of an individual’s willingness to self-censor. This survey consists of eight different questions that all, in one way or another, ask about willingness to express one’s opinion in social settings. An EFA can reveal if all eight of these items “hang together” well enough (i.e., have strong enough associations with each other) that the entire eight-question survey can be considered a measure of a single underlying construct, in this case, willingness to self-censor. If more than one factor is identified, however, then the survey may be measuring multiple constructs.
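
As a rough illustration of this kind of check, here is a minimal Python sketch that runs an EFA on simulated responses using the factor_analyzer package. The item names, sample size, and one-factor data-generating process are invented for illustration; this is not FIRE's actual analysis pipeline.

```python
# Minimal EFA sketch on simulated data (hypothetical items and values).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)

# Simulate 500 respondents whose answers to eight items are all driven
# by a single latent trait plus noise, i.e., a one-factor structure.
latent = rng.normal(size=500)
df = pd.DataFrame(
    {f"item_{i}": latent + rng.normal(scale=0.8, size=500) for i in range(1, 9)}
)

# Eigenvalues of the correlation matrix: one dominant eigenvalue is a
# common signal that the items "hang together" as a single construct.
fa_check = FactorAnalyzer(rotation=None)
fa_check.fit(df)
eigenvalues, _ = fa_check.get_eigenvalues()
print("Eigenvalues:", np.round(eigenvalues, 2))

# Fit a one-factor EFA; uniformly strong loadings support treating the
# eight items as a single scale (e.g., willingness to self-censor).
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(df)
print(pd.Series(fa.loadings_.ravel(), index=df.columns).round(2))
```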

In an EFA, the data determine the underlying factor structure, not the analyst, and the number of factors identified is typically smaller than the number of observed variables. While similar conceptually, in a CFA the analyst predetermines the factor structure and then assesses how well that predetermined factor structure accounts for (or “fits”) the correlations within the data.
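
To make the contrast concrete, below is a hedged sketch of a CFA, again on simulated data with invented item names, using factor_analyzer's confirmatory module: the analyst supplies the factor structure up front, and the model estimates loadings under that constraint.

```python
# Minimal CFA sketch: the factor structure is specified by the analyst,
# not discovered from the data. Items, groupings, and data are hypothetical.
import numpy as np
import pandas as pd
from factor_analyzer import ConfirmatoryFactorAnalyzer, ModelSpecificationParser

rng = np.random.default_rng(1)
n = 500
openness = rng.normal(size=n)                    # latent factor 1
tolerance = 0.3 * openness + rng.normal(size=n)  # latent factor 2, mildly correlated

df = pd.DataFrame(
    {
        **{f"open_{i}": openness + rng.normal(scale=0.7, size=n) for i in range(1, 5)},
        **{f"tol_{i}": tolerance + rng.normal(scale=0.7, size=n) for i in range(1, 4)},
    }
)

# The analyst predetermines which items load on which factor.
spec = ModelSpecificationParser.parse_model_specification_from_dict(
    df,
    {
        "Openness": [f"open_{i}" for i in range(1, 5)],
        "Tolerance": [f"tol_{i}" for i in range(1, 4)],
    },
)
cfa = ConfirmatoryFactorAnalyzer(spec, disp=False)
cfa.fit(df.values)
print(np.round(cfa.loadings_, 2))  # estimated loadings under the fixed structure
```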

Our EFA revealed two factors that explained the majority of the variance in student survey responses. This means that knowing how a student responded to the questions assessing openness to discuss controversial topics and also how they responded to the questions assessing tolerance of controversial speakers is highly predictive of how they answered the other questions of the survey. We labeled these two factors Openness and Tolerance. 

Openness consisted of the eight topics that students could identify as difficult to have an open and honest conversation about on campus. The Tolerance factor consisted of six questions asking about support for or opposition to controversial speakers on campus. Three additional factors were also identified: one consisting of the two items assessing Administrative Support, one consisting of the item assessing Self-expression, and one consisting of the FIRE Spotlight ratings.

Using CFA, we then tested the fit of three different models to the data: a “two-factor” model that included only Openness and Tolerance as factors, a “four-factor” model that also included Administrative Support and Self-expression, and a “five-factor” model that also included the FIRE Spotlight rating. The goal of this analysis was to identify which of the three models “fits” the data best and offers the most explanatory power for student responses.
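
As a rough sketch of that kind of comparison, the example below fits two competing factor structures with the semopy package and compares standard fit indices (CFI and RMSEA). The items, groupings, and data are simplified and hypothetical; they stand in for the two-, four-, and five-factor models described above rather than reproducing them.

```python
# Hypothetical model-comparison sketch using semopy (lavaan-style syntax).
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(2)
n = 1000
open_f = rng.normal(size=n)                # latent "Openness"
tol_f = 0.3 * open_f + rng.normal(size=n)  # latent "Tolerance"
admin_f = rng.normal(size=n)               # latent "Administrative Support"

data = pd.DataFrame(
    {
        **{f"open_{i}": open_f + rng.normal(scale=0.7, size=n) for i in range(1, 4)},
        **{f"tol_{i}": tol_f + rng.normal(scale=0.7, size=n) for i in range(1, 4)},
        **{f"admin_{i}": admin_f + rng.normal(scale=0.7, size=n) for i in range(1, 4)},
    }
)

models = {
    "two-factor": """
        Openness  =~ open_1 + open_2 + open_3
        Tolerance =~ tol_1 + tol_2 + tol_3
    """,
    "three-factor": """
        Openness  =~ open_1 + open_2 + open_3
        Tolerance =~ tol_1 + tol_2 + tol_3
        AdminSupport =~ admin_1 + admin_2 + admin_3
    """,
}

# Fit each predetermined structure and compare fit indices; a model with
# comparable fit but broader coverage can justify keeping more factors.
for name, desc in models.items():
    model = semopy.Model(desc)
    model.fit(data)
    stats = semopy.calc_stats(model)
    print(name, stats[["CFI", "RMSEA"]].round(3).to_dict("records")[0])
```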

These analyses revealed that although Openness and Tolerance remained the most important factors, a solution that included all five dimensions was appropriate for two reasons: 1) the five-factor model fit the data just as well as the two-factor model; and 2) it explained more of the variance in student responses. Our weighting scheme for how each factor was incorporated into the index that determined the rankings was also based on these analyses.
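
For intuition only, here is a toy sketch of how factor sub-scores might be combined into a single ranked index with a weighting scheme. The colleges, sub-scores, and weights below are entirely made up; FIRE's actual weights were derived from the factor analyses described above and are not reproduced here.

```python
# Toy example: combining factor sub-scores into a weighted index and ranking.
# All college names, scores, and weights are hypothetical.
import pandas as pd

scores = pd.DataFrame(
    {
        "openness":        [0.62, 0.48, 0.55],
        "tolerance":       [0.51, 0.66, 0.43],
        "admin_support":   [0.70, 0.52, 0.61],
        "self_expression": [0.58, 0.47, 0.50],
        "spotlight":       [1.00, 0.50, 0.00],  # e.g., a speech-code rating scaled to 0-1
    },
    index=["College A", "College B", "College C"],
)

# Hypothetical weights, heavier on the two dominant factors.
weights = {
    "openness": 0.35,
    "tolerance": 0.35,
    "admin_support": 0.10,
    "self_expression": 0.10,
    "spotlight": 0.10,
}

scores["index"] = sum(scores[col] * w for col, w in weights.items())
print(scores["index"].sort_values(ascending=False).round(3))
```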

There were other questions included in the survey that were never intended to be part of the index; rather, they were tied to timely topics and administered for informational purposes, such as the questions about dating a supporter or opponent of President Donald Trump.

Conclusions

We would not be surprised if there are additional constructs relevant to measuring the expression climate on campus that we did not consider and may want to measure in the future. We also think there are likely ways to improve the survey questions used to measure the constructs we have already identified. In other words, we welcome conversations about our rankings in an effort to improve the methodology for future rankings efforts.
