When survey results change drastically from one period to the next, the change is often chalked up to an actual shift within the population the survey was intended to measure. However, not every reported change corresponds to a true change in the underlying population. Pollsters rarely survey the same respondents from period to period. If the sample of respondents is too small to be representative, or if the respondents differ systematically over time, then the results from one year to the next may not be (i) comparable or (ii) representative of the larger population of interest.
An example will illustrate this point. Suppose a survey of law school graduates reports that 10% were unemployed upon graduation in 2012 and that 20% were unemployed upon graduation in 2013. This enormous change could result from three possibilities (or some combination of them):
- Law school graduates actually were less likely to be employed upon graduation in 2013 than in 2012.
- Law school graduates surveyed in 2013 were systematically different from those surveyed in 2012. For example, suppose the 2012 survey included only respondents from Yale, Harvard, and Stanford, while the 2013 survey included only graduates from bottom-tier schools. In that case, differences in employment rates may simply reflect differences in the demand for those pedigrees.
- The sample of graduates surveyed was too small for one or both years' results to be reliable. In the extreme, suppose only ten students were surveyed each year. With such a small sample, the difference in unemployment rates hinges on a single response: 1 unemployed student in 2012 versus 2 unemployed students in 2013.
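The small-sample point can be made concrete. As a minimal sketch (the `wilson_ci` helper below is our own illustration, not anything from the surveys discussed here), we can compute 95% Wilson confidence intervals for a 1-in-10 and a 2-in-10 unemployment rate. The two intervals overlap almost entirely, so the jump from 10% to 20% is well within ordinary sampling noise:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    halfwidth = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - halfwidth, center + halfwidth

lo_2012, hi_2012 = wilson_ci(1, 10)  # 1 of 10 unemployed in 2012
lo_2013, hi_2013 = wilson_ci(2, 10)  # 2 of 10 unemployed in 2013

# Each year's observed rate falls inside the other year's interval:
print(f"2012: ({lo_2012:.3f}, {hi_2012:.3f})")  # roughly (0.018, 0.404)
print(f"2013: ({lo_2013:.3f}, {hi_2013:.3f})")  # roughly (0.057, 0.510)
```

The Wilson interval is used rather than the simpler normal approximation because the latter behaves poorly at sample sizes this small; either way, the qualitative conclusion is the same.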
With these possibilities in mind, before comparing survey results we need to confirm that they are both reliable and comparable. If large changes have no apparent explanation, we should consider whether possibilities #2 and #3 above account for the apparent period-to-period difference.
A recently reported business school survey's results are highly suspect for the reasons outlined above. The survey purports to measure the average MBA student's expectations about post-graduation salaries. These expectations are broken out for nearly two dozen countries, ranging from the US and Canada to Russia and Kazakhstan, and the study compares results from 2012 and 2013 for each country. Average responses differ dramatically between the two years, with the largest swing coming from Kazakhstan ($137,000 in 2012 versus $79,000 in 2013, a drop of more than 40%). No obvious explanation exists for such drastic changes. In this case, it's no coincidence that the countries with the greatest year-over-year differences (e.g., Italy, South Korea, South Africa, Kazakhstan) had relatively few survey respondents. Anyone relying on this particular survey would be wise to question whether the dramatic changes reported are a function of flaws in the survey methodology.
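Percent differences between two survey waves depend on which year is used as the base, which is worth keeping straight whenever a reported figure looks implausible. A quick sketch using the Kazakhstan dollar figures reported above (the `pct_change` helper is ours, for illustration only):

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from `old` to `new`, using `old` as the base."""
    return (new - old) / old * 100

# Kazakhstan average expected salary: $137,000 (2012) -> $79,000 (2013)
drop = pct_change(137_000, 79_000)   # about -42.3%: 2013 relative to 2012
rise = pct_change(79_000, 137_000)   # about +73.4%: 2012 relative to 2013
print(f"{drop:.1f}% / {rise:+.1f}%")
```

Neither base yields a change anywhere near stable year to year, underscoring how volatile these small-sample country averages are.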