Michael Hicks is the George and Frances Ball distinguished professor of economics and the director of the Center for Business and Economic Research at Ball State University. His column appears in Indiana newspapers.

Over the past 30 years, surveys have asked American households why they chose their homes. In the 1980s, only about one in 10 recent movers reported choosing their current home because of schools; today, seven out of 10 do. That shift has enormous implications for matters ranging from economic development policy to school quality measures. This column will focus on measuring schools.

There are lots of ways to measure school quality. The easiest are athletic performance or other competitive measures, such as band competitions. If you are more concerned with academics, the issue is far murkier.

Test scores, such as ISTEP+ and NWEA scores, tell us fairly well how much children have absorbed. Other standardized tests, like the ACT or SAT, do the same, but with a smaller sample of kids. These tests are imperfect, but for all the criticism they receive, they measure this domain effectively. The problem is that they do not measure how good schools are, but how well students perform. That is not the same thing, because much of an individual student’s performance depends not on the school, but on the family.

Like several states, Indiana has adopted a growth model of school performance. This approach assigns each student to a cohort of kids that are statistically similar, and then compares their progress over a year. These scores are then averaged across a particular school or school corporation, yielding a growth score. Done well, this would be a great way to compare the impact of a particular school because it tells us how much the average student learned in a year relative to other kids with similar backgrounds. In practice, the model is very difficult to assess because it is proprietary to the consulting firm that performs the assessment.
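
To make the idea concrete, here is a minimal sketch of how a growth score of this kind could be computed. This is not Indiana’s proprietary model; the data, peer groupings, and column names are hypothetical, and a real model would define “statistically similar” far more carefully.

```python
import pandas as pd

# Hypothetical student-level data: prior- and current-year test scores,
# plus a label for the statistically similar peer group each student belongs to.
students = pd.DataFrame({
    "school":        ["A", "A", "B", "B", "B"],
    "peer_group":    ["g1", "g2", "g1", "g2", "g2"],
    "score_prior":   [450, 520, 440, 515, 530],
    "score_current": [480, 540, 455, 550, 545],
})

# Each student's raw gain over the year.
students["gain"] = students["score_current"] - students["score_prior"]

# Expected gain = the average gain among statistically similar students.
students["expected_gain"] = students.groupby("peer_group")["gain"].transform("mean")

# Growth relative to similar peers, averaged by school, yields a growth score.
students["relative_growth"] = students["gain"] - students["expected_gain"]
growth_scores = students.groupby("school")["relative_growth"].mean()
print(growth_scores)
```

A positive score suggests the school’s students gained more than comparable kids elsewhere; a negative score suggests the opposite.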

A third way of measuring schools is to statistically estimate what their raw test scores should be given their community demographics, and then compare these predictions with the actual test scores. The difference, plus or minus, may be interpreted as the “value added” of the school. This is the most common external evaluation of schools and is often called the “Adjusted Performance Measure.” This is probably the best measure of the actual difference a school makes on a child's performance. This is also the most politically sensitive measure because it measures what many schools don’t really want measured.
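
As a rough illustration of that idea, one could regress raw test scores on community demographics and treat the residual (actual minus predicted score) as the school’s value added. The sketch below uses made-up school-level data and hypothetical demographic variables; an actual Adjusted Performance Measure would use a richer specification.

```python
import numpy as np
import pandas as pd

# Hypothetical school-level data: raw pass rates plus two community demographics
# (share of students in poverty, share of adults with a college degree).
schools = pd.DataFrame({
    "school":        ["A", "B", "C", "D", "E"],
    "pass_rate":     [72.0, 65.0, 88.0, 58.0, 80.0],
    "poverty_share": [0.30, 0.45, 0.10, 0.55, 0.20],
    "college_share": [0.25, 0.18, 0.40, 0.12, 0.35],
})

# Fit a simple linear model: predicted pass rate given demographics alone.
X = np.column_stack([np.ones(len(schools)),
                     schools["poverty_share"],
                     schools["college_share"]])
y = schools["pass_rate"].to_numpy()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Value added = actual score minus the score demographics alone would predict.
schools["predicted"] = X @ coef
schools["value_added"] = schools["pass_rate"] - schools["predicted"]
print(schools[["school", "value_added"]])
```

Schools with positive residuals outperform what their demographics would predict; those with negative residuals underperform.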

There are also measures that are less quantitative but still useful to parents. The number and pass rate of AP tests, or the process for mentoring new students, also matter. Indiana uses these types of categorical measures, as do many federal and media rankings.

There is no single “best” way to measure school quality. This is especially true because there is such a strong incentive to mislead prospective parents; that’s why every community’s website touts its great schools. So, the least imperfect way to measure schools is to compare multiple rankings, perhaps over multiple years. Indiana would be well served to choose that path over chasing the impossibility of a perfect measure.