This was something I had been thinking about for a while (as I was avoiding grading). But, over the past few days, I've had some discussions with various colleagues and students that have made me think more about this issue. So, here you go. I'm going to talk about both pre-college education and college education.
Primary/Secondary Education
First, for those who don't know, I have lots of teachers in my family. My mom is trained as a teacher (and currently substitute teaches). My sister is a teacher. Both my mother-in-law and father-in-law are retired teachers. My uncle is a retired teacher/principal. My grandmother was a teacher. Teaching is, in a sense, "in my blood".
Yet, our education system is all screwy nowadays. Just as an example, this article recently made the rounds. To make a long story short: a school board member with a bachelor's degree, two master's degrees, and 15 hours toward a doctorate (personally, I don't think 15 hours toward a doctorate is worth mentioning - it's such a small percentage of what's required for one...) decided to take one of the standardized tests that 10th graders are required to take. He didn't do very well: he failed the math section (scoring just 16%) and got a 62% on the Reading section.
Being someone who is so closely connected to so many teachers, I occasionally hear complaints about standardized tests and how they create the phenomenon of "teaching the test". But, this should raise an obvious question: why is it that we have these standardized tests in the first place?
Typically, they are put in place so that we can evaluate school systems (especially public school systems). But, tell me, isn't this process a bit weird?
People often don't like it when I do this - but I'm going to anyway.
In what field outside education do we judge success by standardized tests? I can't think of any - though that might just be because I'm not thinking very well at 7:45 AM. But, think about it. If I buy a pair of shoes, I don't judge them by running them through a battery of tests to find out how they will perform under various conditions. I just wear them. If they serve my purposes, then my shoes were a success. The same goes for any other product I buy. When I go to physical therapy for neck/back pain, I judge it based on whether it helps me get the pain under control. When I buy furniture, I judge it by whether it fits in my house and fulfills its purpose there.
Summing up: we judge every single other product based on its subjective usefulness to the consumer. No one can objectively tell me whether my shoes are "good" or not - because they don't know why I bought them (comfort v. style v. price, say). This, in turn, means that the consumer is the sole judge of whether the product is "successful" or not. If enough customers like the product well enough (regardless of the reasons!), it will be profitable to produce, and production will continue. When customers stop finding it useful, they stop buying it and the producer goes out of business - so that we don't continue to waste resources on a product that no one wants.
Education - especially in its public variety - has some odd traits here, though. First, students are required to attend school of some variety (public, private, home). The compulsory nature of education immediately affects consumer behavior. Add to this that public education is tax-funded and is offered at a very low (because heavily subsidized) cost to the consumer (typically, just a handful of school fees). End result: the consumer-level cost-benefit analysis doesn't reflect the true costs and benefits. Public education is perceived as being basically "free" when compared to private education - despite the fact that many private schools have tuition that is lower than the cost-per-pupil at some public schools. (Example: tuition - before financial aid - at the Cuyahoga Valley Christian Academy is under $9,000 per student. Cost-per-pupil at Cuyahoga Falls City Schools is over $10,000 per student. Tuition at Heritage Christian School in Canton is under $6,000 per student. At Canton City Schools, it is over $11,000.)
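To make the distortion concrete, here's a minimal Python sketch. The tuition and cost-per-pupil figures are rounded versions of the examples above; the school-fee number is purely hypothetical, supplied for illustration. The key point: taxes get paid whether or not the kid enrolls, so the parent only "sees" the fees.

```python
# Sketch of the distorted cost-benefit comparison. Tuition and
# cost-per-pupil echo the examples above; the fees are hypothetical.

public_cost_per_pupil = 10_000  # true resource cost, paid through taxes
public_fees = 300               # out-of-pocket cost the parent actually sees
private_tuition = 9_000         # full price, paid directly by the parent

# The comparison a parent faces at decision time (taxes are sunk either way):
perceived_gap = private_tuition - public_fees
print(f"Perceived extra cost of private school: ${perceived_gap:,}")  # $8,700

# The comparison society faces:
true_gap = private_tuition - public_cost_per_pupil
print(f"True extra cost of private school: ${true_gap:,}")  # -$1,000: it's cheaper!
```

On these (made-up but not crazy) numbers, the private school is actually the cheaper option in real resource terms - yet it looks nearly $9,000 more expensive to the parent making the choice.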
Now, we can talk all day about whether the tax-funded system for education is "good" in some moral sense. We can talk about whether public education fulfills some important social purpose. That's not the point I'm making. The point I'm making is that the "free" (tax-paid), "compulsory" (you can only get out of it by establishing that you're getting an education some other way) nature of public education throws off the cost-benefit analysis when people are making their decisions about education. The end result is no surprise: we end up with a system that is costly and provides questionable benefits. (Questionable here just means "unclear", not "nonexistent".)
That last part is the key to understanding why we have standardized testing: so that we can put together "School Report Cards" to evaluate public schools. (Fun fact: there are no school report cards for private schools - at least in Ohio.) The benefits are questionable. Few people actually "choose" to attend (or have their kids attend) the public school in any meaningful sense. It's far less personal expense than private school. And, once you account for foregone wages, far less personal expense than homeschooling, too (for most people, anyway). It's the (seemingly) "cheapest" option by far - so the benefit doesn't have to be that great for it to be the best choice from a parent's perspective.
In the end, then, we're left with a system that educates the vast majority of children and that has questionable benefits - and that's a problem, especially as education is becoming a more central part of our lives. ("Back in the day", as long as the kid learned readin', 'ritin', and 'rithmetic, the school had done its job. Nowadays, we expect schools to provide some personalized combination of life preparation, vocational preparation, and college preparation.) So, now we want proof that the system "works" or doesn't. But, we don't have the most basic market signal - profit - telling us whether the system is working or not. So, we have to make one up.
So, we did. We made one up that seems, on the face of it, to be sensible. We want schools to teach students stuff. The way we find out if students are learning stuff is by testing them. So, we test the students in a school on various stuff, and that tells us whether students learned that particular stuff. Problem solved, right?
Well, no.
The more I'm in education, the more I'm convinced of something... Tests work for students who don't realize that tests are really what matter.
See, the purpose of any course of instruction isn't to make sure that the student can pass a test. It's to make sure that the student learns some material with some target level of "depth". The test is just a way of finding out whether the student is learning what they should be learning. But, by their nature, tests have to be limited in how much they actually cover. You can't test a student on absolutely everything you want them to learn. So, you take a random sample of the things you want the student to learn, and test them on that. Statistics then tells us that the score on the (limited) test should be a reasonable approximation of what the score on a more comprehensive test would be.
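As a toy illustration of that statistical logic (a simulation with made-up numbers, not real test data): suppose a student has mastered some fraction of 200 topics, and the exam samples 20 of them at random.

```python
import random

random.seed(1)

N_TOPICS = 200      # everything we want the student to learn
N_QUESTIONS = 20    # what one test can actually cover
TRUE_MASTERY = 0.7  # the student knows about 70% of the material

# The student's actual knowledge: which topics they have mastered.
knowledge = [random.random() < TRUE_MASTERY for _ in range(N_TOPICS)]

# The test is a random sample of topics; its score estimates true mastery.
test_topics = random.sample(range(N_TOPICS), N_QUESTIONS)
score = sum(knowledge[t] for t in test_topics) / N_QUESTIONS

print(f"True mastery: {sum(knowledge) / N_TOPICS:.0%}")
print(f"Test score:   {score:.0%}")  # close to true mastery, on average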
And this system works.
As long as the students can't game it.
For example, once students start being able to figure out what will and will not be on the test, the test stops working, because the statistics get thrown off. If the questions are random from the students' perspective, then they have to study everything, and the random questions are a reasonably good reflection of their overall ability to handle the material. If the questions, however, can be foreseen by the students, then the students will focus their energy on just learning the material that's on the test. (Thus, I often get my most reviled question from students... "Will this be on the exam?") Result: the test overstates the true understanding of the material by students. (Makes me wonder how much grade inflation comes from the fact that we're writing more "gameable" exams... Multiple choice is much more gameable than essays are... Hmm...)
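Continuing the toy simulation from above (again, all hypothetical numbers): here's the same setup with a student who has a fixed study budget, first facing an unpredictable test and then a foreseeable one.

```python
import random

random.seed(1)

N_TOPICS = 200
N_QUESTIONS = 20
STUDY_BUDGET = 100  # the student only has time to master 100 of 200 topics

# Unpredictable test: the student studies a random half of the material,
# so the score reflects true mastery (about 50%).
studied = set(random.sample(range(N_TOPICS), STUDY_BUDGET))
test = random.sample(range(N_TOPICS), N_QUESTIONS)
honest_score = sum(t in studied for t in test) / N_QUESTIONS

# Foreseeable test: the student learns the test topics first, then spends
# whatever budget remains elsewhere. Score: 100%. Mastery: still ~50%.
test = random.sample(range(N_TOPICS), N_QUESTIONS)
studied = set(test) | set(random.sample(range(N_TOPICS), STUDY_BUDGET - N_QUESTIONS))
gamed_score = sum(t in studied for t in test) / N_QUESTIONS

print(f"True mastery in both cases: ~{STUDY_BUDGET / N_TOPICS:.0%}")
print(f"Score on unpredictable test: {honest_score:.0%}")  # roughly 50%
print(f"Score on foreseeable test:   {gamed_score:.0%}")   # 100%
```

Same knowledge, same effort - but the foreseeable test reports perfect mastery. That's the overstatement I'm talking about.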
Now, it's probably true that elementary school students aren't yet sophisticated test-takers who are gaming the system. However, once we start evaluating schools and teachers on the basis of tests, we create an incentive for schools and teachers to start gaming the system. Do schools and teachers know how to game the system? Of course. We write tests. We're veritable experts at gaming them. So, what do teachers do? Insofar as possible, they "teach the test" - and teach students how to game tests in the process. The goal of education shifts from being... well, education... to raising some of the best test-takers in the county.
Which, I would suggest, is a lower-value skill in real life than, say, algebra is. I mean, I can use algebra to figure out how much money I can spend on Christmas gifts. But being able to fill in the right bubble on an exam? Not a useful skill once you finally get out of school.
End result: we have schools that are emphasizing skills that have little-to-no value once school is over. Why? Because that's how we're evaluating the schools - and the tests are now "high stakes", so even principled teachers and administrators have a strong incentive to game the system as best they can.
What's the solution? True cost-benefit analysis. I'm a fan of privatizing schools entirely. (Note: I went to public schools. I liked my public schools. I think I got a good education from public schools. I am in no way saying that the quality of public schools in our current system is "worse" than the quality of private schools. I AM saying that we'd have a better school system as a whole if it were fully privatized. You can't judge system-wide changes based on differences that exist or do not exist within a current system. Example: the fact that the college-educated earn more on average than the high school educated does not prove that if everyone were college-educated then everyone would earn more. More on that later, though...) Then, people have to weigh the benefit and the true cost of each school, and make the best decision they can, based on what they care about. I'd guess that people care far more about things like job and college placement than standardized test scores.
But, even if we consider full privatization to be impossible, we do have other options that can introduce real school choice: vouchers and charter schools. On the whole, I'm not a fan of vouchers, as I tend to believe that government strings are attached to government money. However, I do admit that vouchers would probably introduce more school choice, and would deal with some of the "but the poor can't afford private school" problems. (It's interesting to me that in virtually every voucher discussion public school teachers start protesting that vouchers would "kill" the public school systems. Do they not realize that what they are really saying there is that they believe they are offering inferior education that people would run from if given the chance? If that's how public school teachers feel about the education that they themselves provide, then our system is in serious, serious trouble!) So, while it is true that we'd still end up with "overeducation" (unless the voucher is offered as a cash payment...), at least the different educational options would be compared based on their real differences.
Summing up: I think the problems in our primary/secondary educational system (in public schools, anyway) stem primarily from the fact that evaluation of public schools doesn't happen at the subjective-benefit level, since so many of the costs are hidden. Therefore, we feel a need to introduce objective evaluation - but high-stakes objective evaluation leads to gaming the system, which ends up, on the whole, diminishing the quality of real education.
Higher Education
Higher education has a different problem. While it is true that subsidization does mask the true costs of a college education at many large institutions (my own among them), that distortion is actually relatively small - compared to the nearly completely subsidized education available at primary and secondary public schools.
I think higher education's big problem is that we have a system inherited from roughly 1,000 years ago, and as a result we're not doing some frankly very sensible things. Instead, we're doing stupid things. Two significant ones have come to mind recently.
(1) The "Teaching/Research/Service" trifecta.
One of the most basic economic principles is the "division of labor". The division of labor is a remarkably simple thing with profound consequences. At its most basic level, it's this: "We're more productive on the whole if we each specialize in one thing that we're relatively good at." That's it. That's why job descriptions for most positions tend to be a relatively small set of tightly connected tasks. Typically, if you're asked to do something less related on your job, it's only because your employer has run out of more closely related things for you to do. Yet, in higher education, we persist in this notion that professors should have to do teaching AND research AND service (read: administration). Why not divide these up into separate positions? Why not have some people fully specialize in teaching, others in research, and others in service? (Of course, we do to some degree - but that's not what is "standard".)
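To put some (entirely made-up) numbers on that principle, here's the stock textbook arithmetic: two workers, two tasks, and more total output when each specializes in what they're relatively good at.

```python
# Stock division-of-labor arithmetic, with made-up productivity numbers.
# Units produced per day if a worker spends the whole day on one task:
alice = {"teaching": 10, "research": 2}
bob   = {"teaching": 6,  "research": 4}

# No specialization: each splits the day 50/50 between the two tasks.
split_teaching = 0.5 * alice["teaching"] + 0.5 * bob["teaching"]  # 8.0
split_research = 0.5 * alice["research"] + 0.5 * bob["research"]  # 3.0

# Specialization: Alice (relatively better at teaching) teaches all day;
# Bob (relatively better at research) researches all day.
spec_teaching = alice["teaching"]  # 10
spec_research = bob["research"]    # 4

print(f"Split days:  {split_teaching} teaching, {split_research} research")
print(f"Specialized: {spec_teaching} teaching, {spec_research} research")
# The same labor yields more of BOTH outputs under specialization.
```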
Now, there are some answers out there - but I'm convinced they're not very good. I have yet to run across even one non-bogus argument for why faculty are supposed to do service at all. The best answer I have is an outrageous claim from the national faculty union that more faculty governance can prevent Penn State-style child molestation. Sorry, but I don't believe my union on that one. Apart from that, I've not heard an actual argument for why it's good to have a "faculty voice" in the administration of the modern University.
Now, there are a number of possible reasons for University professors to do research (including the idea that research makes professors WORSE teachers, and that that's a good thing for the University). But, I don't see that any of these reasons are actually "good". The best one I can find is that doing research keeps professors at the "cutting edge" of their field - ensuring that the subject matter they teach is up-to-date. But, one would think you could do the same thing much more efficiently by, say, having professors attend seminars run by professional research specialists about the up-to-date research going on in their field.
So, what do we have instead? We have a number of people teaching in Universities who are splitting their time between three at-best vaguely related activities. So, they can't do any one of them as well as they could if they were allowed to devote all their time to it.
Why do we do what we do, then? My theory: we have inherited the system from medieval Europe. At the time, there were so few educated people that it was a practical necessity to have the same people doing teaching and research. At the same time, information dispersion was slow and expensive. So, keeping updated on what others were doing was difficult. Really, the only way to stay at the cutting edge was to do the cutting-edge research yourself. But times have changed. With a click, I can get all the cutting-edge research I want. For less than $100 a year, I'm a member of the American Economic Association - and receive a half-dozen journals every 3 months as part of my membership fee. I'm drowning in cutting-edge research that other people have done. At this point, I'd suggest that doing my own research actually distracts from my ability to keep up with most cutting-edge research in my field, simply because doing my own research is far more time consuming than reading others' research is. So, when I do research, I write a couple papers a year, and read (honestly, mostly skim) only papers that are closely related to those I'm writing. If instead of writing those two papers I just read others', I'm sure I could read far more - and more broadly - than I do. Times have changed, and they've changed in a way where I suspect that doing research and teaching are not as complementary as they used to be.
(2) The "Tenure" System
In particular, I think the so-called "Tenure Track" is silly. (I say this as someone ON that track.) It's on the tenure track that you get the "publish or perish" system. And, that system is basically nonsense, even if we accept that publishing should be central to a professor's job.
Seriously, in what other system do they tell you "You have 4 to 7 years (depending on the school) to prove that you're an awesome employee. If you do, you get a promotion and have a job for life. If you don't, you're fired!"
It's a weird system, and it creates absolutely bizarre incentives. Once tenured, professors have little motivation to continue doing a good job, and some go into semi-retirement mode. Universities have to hold people to extremely high standards during tenure review since it's such a high-stakes commitment for them to grant someone tenure. Though, offsetting that, the University doesn't want the bar to be TOO high, or they'll end up getting rid of employees who are "very good, but not quite excellent". As a result, from a faculty member's perspective, the process seems almost random. And that's very annoying BECAUSE it is so high-stakes. My experience as an educator tells me an important thing about human nature: people are willing to tolerate vagueness if the stakes are sufficiently low. But, as the stakes rise, people want more and more clarity and objectivity. If you're going to either give me a job for life or fire me, I want to know PRECISELY what I need to do to end up on the right side of the line. If, on the other hand, the stakes are just "a 2% raise or a 5% raise" based on a performance assessment, then I'm not going to complain as much about the assessment being vague.
Which, of course, is what most businesses do. Every year (half-year, quarter, whatever) the employee goes up for a performance review. They are, for some vague reason, declared to be of some quality, and are given a corresponding raise. If they're going to lose their job, it's going to be for a very good reason - some serious failure to do what was expected. If a worker gets laid off, it says far more about the company's failures than the worker's. Performance reviews are very rarely high-stakes like tenure review is. If one of the possibilities is "promotion and a job for life", the other possibility should certainly NOT be "lose your job". The same person being up for either of these - and only either of these - is just bizarre.
This system cannot last.
I'm not saying that tenure should never exist. (Though I think it should be very rare.) I'm saying that the "tenure v. losing your job" setup is just ridiculous. It's not a sensible decision to force a University to make.
Summing up: the current University system is antiquated and in need of updating. Its HR practices are disconnected from the rest of the world, and it ignores the division of labor. The question in my mind is just how long the system can last in its current form. After all, tradition can carry institutions long past their usefulness...