Americans have devoted an enormous amount of effort over the centuries to devising useful baseball statistics. In recent years, Americans have talked a lot about devising useful educational statistics.
For example, I've pointed out a million times over the last decade that it doesn't make much sense to judge teachers, schools, or colleges by their students' test scores. Most of the time, all you are doing is determining which kids were smarter to start with. Logically, it makes more sense to judge their "value added" by comparing how the students score now to how they scored in the past before the people or institutions being measured got their mitts on the students.
Over the last few years, everybody who is anybody in education -- Bill Gates, Arne Duncan, you name it -- has come around to this perspective (although they won't use the word "smarter").
A big problem, however, is that this value added idea remains almost wholly theoretical because almost none of the prominent educational statistics are published in value added form.
In contrast, when Bill James was pointing out 30 years ago that Batting Average, traditionally the most prestigious hitting statistic (the guy with the highest BA was crowned "Batting Champion"), wasn't as good a measure of hitting contribution as Slugging Average plus On-Base Percentage, he could show you what he meant using real numbers that were available to everybody, even if you had to calculate them yourself from other, more widely published statistics.
Readers would say, "Yeah, he's right. For example, Matty Alou (career batting average .307, but slugging average .381 and on-base percentage .345) wasn't anywhere near as good as Mickey Mantle (career batting average only .298, but slugging average .557 and on-base percentage .421). If you add on-base percentage and slugging average together to get 'OPS,' then Mickey had a .977 while Matty only had a .726. And that sounds about right. Mickey was awesome, but it didn't always show up in his traditional statistics. Now, we've finally got a statistic that matches up with what we all could see from watching lots of Yankee games."
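James's arithmetic is trivial to reproduce. A quick sketch using the career figures quoted above (note that Mantle's officially listed OPS is .977 rather than .978 because the published component stats are themselves rounded):

```python
def ops(on_base_pct: float, slugging_avg: float) -> float:
    """OPS (On-base Plus Slugging) is just the sum of the two rate stats."""
    return on_base_pct + slugging_avg

# Career figures quoted above:
matty = ops(0.345, 0.381)
mickey = ops(0.421, 0.557)

print(f"Matty Alou:    {matty:.3f}")   # 0.726
print(f"Mickey Mantle: {mickey:.3f}")  # 0.978 (officially .977, due to
                                       # rounding in the published components)
```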
On the other hand, other innovative baseball statistics from that era have faded because they didn't seem to work as well in practice as in theory. Readers would be rightly skeptical that Glenn Hubbard and Roy Smalley Jr. really were all time greats, as these complicated formulas said they were.
A couple of years ago, Audacious Epigone and I stumbled upon a potentially promising fluke in the federal National Assessment of Educational Progress test scores by state. Since these tests are given every two years to representative samples of fourth and eighth graders, you ought to be able to roughly estimate how much value the public schools in each state have added from 4th grade to 8th grade by comparing, say, a state's 2009 8th grade scores to that state's 2005 4th grade scores.
Granted, people move in and out of states, but if you just look at the scores for non-Hispanic whites, you can cut down the effect of demographic change to what might be a manageable level.
So, how to display this data in a semi-usable form? In the following table, I've put the Rank of each state. For example, in NAEP 4th Grade Reading scores in 2005, white public school students in Alabama ranked 48th (out of 52 -- the 50 states plus D.C. and the Department of Defense schools for the children of military personnel). By 2009, this cohort of Alabamans was up to 47th in 8th Grade Reading. That's a Change in Rank of +1. Woo-hoo!
In contrast, in Math, Alabama's 4th Graders were 50th in 2005 and the state's 8th Graders were 50th in 2009, so that's a Change in Rank for Math of zero.
There are measures that are better for some purposes than Rank, but, admit it, ranking all the states is more interesting than using standard deviations or whatever.
A new idea is embodied in the last column, which reports the Difference in Rank between Math and Reading scores for 8th Graders in 2009. Because Alabama was 47th in Reading in 2009, but only 50th in Math in 2009, it gets a Difference in Rank of -3. Boo-hoo ...
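In code, the two derived columns reduce to simple subtractions on the rank columns. A minimal sketch, using Alabama's ranks from above:

```python
def change_in_rank(rank_4th_2005: int, rank_8th_2009: int) -> int:
    # Positive: the cohort climbed the state rankings between the two tests.
    return rank_4th_2005 - rank_8th_2009

def dif_in_rank(reading_8th_rank: int, math_8th_rank: int) -> int:
    # Positive: the state ranks better (a lower number) in Math than Reading.
    return reading_8th_rank - math_8th_rank

# Alabama:
print(change_in_rank(48, 47))  # 1, i.e. +1 in Reading
print(change_in_rank(50, 50))  # 0 in Math
print(dif_in_rank(47, 50))     # -3
```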
What's the point of this last measure?
There's a fair amount of evidence that schools have more impact on Math performance than Reading performance. For example, math scores on a variety of tests have gone up some since hitting rock bottom during the Seventies (in most of America outside of Berkeley, the Seventies were when the Sixties actually happened). In contrast, reading and verbal scores have staggered around despite a huge amount of effort to raise them.
Why have math scores proven more open to improvement by schools than reading scores? One reason is probably that kids spend only about 1/5th of their waking hours in school, and almost nobody does math outside of school, but some kids read outside of school. So, if you, say, double the amount of time spent in school on math, then you are increasing the total amount of time kids spend doing math by about 98%. But if you double the amount of time spent on reading in school, there are some rotten stinker kids who read for fun in their free time, and thus you aren't doing much for them in terms of total hours devoted to reading.
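The back-of-the-envelope arithmetic works like this: if a share s of all the hours kids spend on a subject happens in school, doubling the school hours raises total hours on the subject by s × 100 percent. A toy illustration (the 98% and 60% shares are assumptions for the sake of the example, not measured figures):

```python
def total_time_increase_pct(in_school_share: float) -> float:
    # Doubling in-school hours adds (in_school_share * total) extra hours,
    # so total time on the subject grows by that share, expressed as a percent.
    return in_school_share * 100.0

math_share = 0.98     # assume nearly all math time happens at school
reading_share = 0.60  # assume bookish kids do 40% of their reading at home

print(total_time_increase_pct(math_share))     # about 98% more total math time
print(total_time_increase_pct(reading_share))  # about 60% more total reading time
```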
Not surprisingly, a decade of the No Child Left Behind Act, which tells states to hammer on math and reading and not to worry about arty stuff like history and science, has seen continued slow improvement in math, but not much in reading -- except at the bottom (i.e., among the kids who don't read outside school).
So, by 8th grade, Reading scores are likely a rough measure of IQ crossed with bookishness (personality and culture). In contrast, 8th Grade Math scores are more amenable to alteration by schools, since kids aren't waiting in line to buy Harry Potter and the Lowest Common Denominator. So, the idea behind the final column is to compare rank on 8th Grade Math to rank on 8th Grade Reading. A positive number means your state has a better (lower) rank on Math than on Reading, which might reflect relatively well on your public schools given the raw materials they have to work with relative to other states.
For example, on the NAEP, Texas ranks 11th among white 8th graders in Reading, which is pretty good for such a huge state. But, it ranks a very impressive 4th among white 8th graders in Math, for a Difference in Ranking score of +7. This suggests Texas is doing something with math that's worth checking into. Maybe they are just teaching to the test, but this is the NAEP, which isn't a high-stakes test. And there are worse things than teaching to the test. (Whatever they are doing, they are starting young, because Texas ranks 2nd in Math for white 4th Graders.)
So, here is this huge table:
NAEP, Public School Whites | Read 4th 2005 Rank | Read 8th 2009 Rank | Read Chg in Rank 09-05 | Math 4th 2005 Rank | Math 8th 2009 Rank | Math Chg in Rank 09-05 | Math-Read 8th 2009 Dif in Rank
Alabama | 48 | 47 | +1 | 50 | 50 | 0 | -3
Alaska | 37 | 31 | +6 | 31 | 21 | +10 | +10
Arizona | 41 | 29 | +12 | 36 | 27 | +9 | +2
Arkansas | 34 | 46 | -12 | 37 | 44 | -7 | +2
California | 32 | 33 | -1 | 25 | 36 | -11 | -3
Colorado | 9 | 9 | 0 | 13 | 6 | +7 | +3
Connecticut | 4 | 2 | +2 | 8 | 7 | +1 | -5
Delaware | 3 | 14 | -11 | 11 | 17 | -6 | -3
DC | | 1 | +1 | | 1 | +1 | 0
DoDEA | 8 | 5 | +3 | 21 | 16 | +5 | -11
Florida | 16 | 21 | -5 | 14 | 37 | -23 | -16
Georgia | 27 | 38 | -11 | 33 | 34 | -1 | +4
Hawaii | 40 | 45 | -5 | 40 | 48 | -8 | -3
Idaho | 30 | 35 | -5 | 29 | 26 | +3 | +9
Illinois | 13 | 10 | +3 | 28 | 18 | +10 | -8
Indiana | 43 | 34 | +9 | 26 | 29 | -3 | +5
Iowa | 42 | 41 | +1 | 39 | 41 | -2 | 0
Kansas | 33 | 19 | +14 | 10 | 15 | -5 | +4
Kentucky | 46 | 37 | +9 | 51 | 49 | +2 | -12
Louisiana | 45 | 51 | -6 | 41 | 45 | -4 | +6
Maine | 36 | 39 | -3 | 42 | 39 | +3 | 0
Maryland | 7 | 3 | +4 | 7 | 2 | +5 | +1
Massachusetts | 2 | 4 | -2 | 3 | 1 | +2 | +3
Michigan | 28 | 40 | -12 | 22 | 42 | -20 | -2
Minnesota | 12 | 7 | +5 | 4 | 5 | -1 | +2
Mississippi | 49 | 48 | +1 | 48 | 51 | -3 | -3
Missouri | 26 | 27 | -1 | 45 | 32 | +13 | -5
Montana | 21 | 16 | +5 | 35 | 10 | +25 | +6
Nebraska | 18 | 20 | -2 | 30 | 28 | +2 | -8
Nevada | 51 | 49 | +2 | 44 | 40 | +4 | +9
New Hampshire | 19 | 24 | -5 | 20 | 23 | -3 | +1
New Jersey | 6 | 1 | +5 | 5 | 3 | +2 | -2
New Mexico | 35 | 25 | +10 | 49 | 38 | +11 | -13
New York | 10 | 8 | +2 | 16 | 19 | -3 | -11
North Carolina | 22 | 28 | -6 | 6 | 8 | -2 | +20
North Dakota | 20 | 22 | -2 | 24 | 9 | +15 | +13
Ohio | 14 | 12 | +2 | 12 | 30 | -18 | -18
Oklahoma | 50 | 50 | 0 | 46 | 46 | 0 | +4
Oregon | 44 | 36 | +8 | 34 | 31 | +3 | +5
Pennsylvania | 15 | 6 | +9 | 17 | 14 | +3 | -8
Rhode Island | 39 | 43 | -4 | 43 | 43 | 0 | 0
South Carolina | 38 | 44 | -6 | 9 | 24 | -15 | +20
South Dakota | 29 | 13 | +16 | 23 | 12 | +11 | +1
Tennessee | 47 | 42 | +5 | 47 | 47 | 0 | -5
Texas | 11 | 11 | 0 | 2 | 4 | -2 | +7
Utah | 31 | 30 | +1 | 38 | 33 | +5 | -3
Vermont | 24 | 18 | +6 | 32 | 22 | +10 | -4
Virginia | 5 | 17 | -12 | 15 | 20 | -5 | -3
Washington | 17 | 15 | +2 | 19 | 11 | +8 | +4
West Virginia | 52 | 52 | 0 | 52 | 52 | 0 | 0
Wisconsin | 23 | 26 | -3 | 18 | 13 | +5 | +13
Wyoming | 25 | 32 | -7 | 27 | 35 | -8 | -3
As J.K. Simmons asks at the end of Burn After Reading, "What did we learn?"
I'm not terribly sure, either. Who knows enough about what goes on within the educational establishments of all the states to know whether these numbers make sense?
But, at least we have some value added numbers and aren't just still talking about how valuable they'd be if we ever got around to getting any.
My published articles are archived at iSteve.com -- Steve Sailer
Hunting moose is good for children.
Though, to be fair, nothing could be finer than to be in Carolina.
Are these just the scores for non-Hispanic whites? If so, this doesn't argue in favor of greater neighborhood homogeneity, considering the low scores of such traditionally segregated places as Alabama, Mississippi, Louisiana and Tennessee.
Here's one thing to consider: Among school kids who learn to read well, there are usually two things going on. At school they are doing the reading groups, the phonics lessons, and whatever the current fad is, while at home the parents are reading to the kids, helping them sound out words, and answering their questions about words. However, if you take away the home stuff the kids will likely never learn to read well; take away the school stuff and it doesn't hurt at all. While schools can probably teach basic literacy, thinking that kids learn to read well because of the actions of teachers is like a primitive tribe thinking that the annual virgin sacrifice is what keeps the sun rising.
The Blackadder Says:
The following states are positive for all three categories: Alaska, Arizona, Kansas, Maryland, Minnesota, Montana, Nevada, Oregon, South Dakota.
The following states are negative for all three categories: California, Delaware, Florida, Hawaii, Michigan, Virginia, Wyoming.
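Lists like the commenter's are easy to derive mechanically from the table. A hedged sketch using a three-state excerpt (tuples are the Read Chg, Math Chg, and Math-Read Dif columns, copied from the table above):

```python
# Three derived columns per state:
# (reading change in rank, math change in rank, math-minus-reading difference)
derived = {
    "Alaska":  (6, 10, 10),
    "Texas":   (0, -2, 7),     # mixed signs, so it lands on neither list
    "Florida": (-5, -23, -16),
}

all_positive = [s for s, v in derived.items() if all(x > 0 for x in v)]
all_negative = [s for s, v in derived.items() if all(x < 0 for x in v)]

print(all_positive)  # ['Alaska']
print(all_negative)  # ['Florida']
```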
It's interesting to see where the two alternative measures, M4-M8 and R8-M8, disagree most strongly. The biggest differences between the two in each direction are MT and SC, I think.
According to math improvement, MT scores an awesome +25. According to math/reading difference, it scores a good +6. So, does MT have a good school system or the best school system in the country?
According to math improvement, SC scores a crappy -15. But on math/reading difference, it scores a great +20. So, are SC's schools really bad or among the best in the country?
I have no opinion one way or the other, but these are the two states I would look at first if I were interested in finding out which of these two measures is better.
look at that huge -20 math drop for michigan between 4th grade and 8th grade. could that be due to white flight out of the state following the collapse of the auto industry there?
north carolina is the smartest state in the south. probably due to lots of yankees moving there for the research triangle and the banking industry in charlotte. virginia was probably the previous leader in southern brainpower.
texas has a LOT of engineers for the energy industry. they make engineer kids. i've posted before about how significantly this field is overlooked in HBD land. engineers merely make the world work, and are not awarded nobel prizes, nor do they have loud voices in the media or government. their visibility is low - almost on purpose. everything they build has to be seamless background technology that you use, not notice or think about.
Just ever so slightly off-topic, but in addition to being saddled with substandard teachers, it looks like the losers of life's lottery are about to get substandard doctors, as well:
Tech offers quick MD program
Wednesday, March 24, 2010
By Sarah Nightingale | AVALANCHE-JOURNAL
lubbockonline.com
...The university unveiled Tuesday a three-year medical degree to help address a shortage of primary care physicians in West Texas and across the nation. The new program will allow medical students to complete their degree in three years, rather than the typical four. They'll also receive a $13,000 scholarship to cover tuition and fees during their first year, Tech Chancellor Kent Hance said.
"They get a scholarship in the first year, and don't have to take the fourth year, so the cost is half," Hance said at a Tuesday press conference. "This is really innovative, and to see we're the first ones doing it makes me really proud"...
Meanwhile, back in Gotham City:
Jackson's doctor collected drug vials before calling 911, witness tells authorities
By Andrew Blankstein
March 22, 2010
articles.latimes.com
A security guard for Michael Jackson told Los Angeles police investigators that Dr. Conrad Murray collected vials of medication from the singer's bedroom before the guard called authorities the day Jackson died, a source familiar with the case told The Times on Monday.
The guard's statements mark the latest allegations involving the final moments of Jackson's life and the actions of Murray, his personal physician, who has been charged with involuntary manslaughter in the pop star's death...
The coroner's office had previously said Jackson died from "acute propofol intoxication" in combination with the use of sedatives.
The documents obtained by Associated Press described the level of anesthetic as enough to render a patient unconscious for "major surgery."
Murray told investigators that Jackson, 50, was a chronic insomniac who had depended for years on propofol -- a white liquid that the singer called "milk" -- to sleep, according to police affidavits filed in court.
But an anesthesiologist consulted by the coroner's office wrote in the report that she knew of "no reports of its use for insomnia relief"...
States certainly have education policies that affect their classrooms, but the real problem with measuring at the state level would seem to be the huge variations district-to-district and classroom-to-classroom.
It would be easy (well, reasonably easy) to measure the effectiveness of Texas education vs. Mass education if all the grade level 5th grade math classes in Texas were doing exactly the same thing on the same day. But they're not. Hell, Mrs. Brown is probably doing something fairly different than Mrs. White, who is teaching a theoretically identical class next door.
by the way, i read an amazing number yesterday. in 1982, only 4 people scored a 1600 SAT. i don't recall the exact figure, but i remember something like 23 people scored 1600 in 1994. so there was already a lot of improvement at the top in only 12 years.
this jibes with my academic experience. the smartest kids are getting smarter, but the average kid is getting dumber. both trends are running concurrently.
You may have found some numbers, but they don't seem very useful. Given that the public schools all over the country are being taught by the same sorts of people using the same methods and under the same constraints of public law, it doesn't seem likely that any major differences would show up between states.
One thing which might be showing up in the statistics is the general economic health of a state. States in economic decline during the four years in question (the rust belt) might be suffering from a brain drain as the more educated classes seek greener pastures. States where the economy is growing might be benefiting from an influx of such people.
One reason that Texas might be doing better is the robust health of the private school sector. (I assume that private schools are included in the data.) Most private schools in Texas are not elite schools geared toward academics, but religious schools. Even those with high academic standards do not seem to insist on rigid admission requirements. It seems to me that most parents when shopping for a suitable private school are pretty canny about selecting a school where their child will fit in both academically and socially. It is this parental selection that allows some grouping by ability which the public schools forbid. This gives me hope for the future. If political correctness destroys the public schools completely, a voucher program might quickly put things right.
Steve,
Interesting ideas. However, the obvious problem with using the first of your metrics (the change in reading or math at the 4th and 8th grade level) as a value-added measurement is that a state which goes up in the ranking might just do an especially poor job with the 4th graders and an adequate job with the 8th graders. Sure, it still works for an intra-state comparison (the middle school teachers in X state are doing a better job than elementary teachers in X state where the rank rises significantly). Still, it's a starting point.
Your other metric (comparing the difference between math and reading ranking) suffers from a similar potential problem. A big difference between math and reading rank might merely be an artifact of especially poor reading programs (though reading ability is probably less movable than math ability, educational quality still makes a difference). Thus a big difference in this rank could have more to do with poor reading programs compared to math programs rather than a state doing something particularly right with education in general.
Excellent post though.
Wouldn't it be vastly more useful to compare changes not in rank but in, say, mean state scores as measured in SD from the national mean? In certain regions of the curve, a significant change in rank might be a trivial change in underlying state mean as measured in SDs from the national mean. Likewise, in certain regions of the curve, an appreciable change in SD might not be reflected (or not much) in a change in rank.
Alternatively, you might use changes in state mean percentiles.
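The suggestion above is straightforward to sketch: standardize each state mean against the distribution of state means, which preserves the magnitude information that rank throws away. The four state means below are invented purely for illustration, not NAEP figures:

```python
from statistics import mean, stdev

def z_scores(state_means: dict) -> dict:
    """Express each state's mean in SD units from the mean of state means."""
    mu = mean(state_means.values())
    sigma = stdev(state_means.values())
    return {state: (m - mu) / sigma for state, m in state_means.items()}

# Hypothetical state means on some test:
example = {"A": 270.0, "B": 268.0, "C": 262.0, "D": 260.0}
for state, z in sorted(z_scores(example).items()):
    # A-B and B-C are each one rank apart, but B-C is three times
    # as large a gap in actual score terms -- rank hides that.
    print(state, round(z, 2))
```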
Right, the table looks at only non-Hispanic whites to get a more apples-to-apples comparison across states.
The race of students so completely dominates most educational achievement statistics that you have to filter out the effects of race somehow to get a chance of seeing the wispier effects of schools.
"Interesting ideas. However, the obvious problem with using the first of your metrics (the change in reading or math at the 4th and 8th grade level) as a value-added measurement is that a state which goes up in the ranking might just do an especially poor job with the 4th graders and an adequate job with the 8th graders."
Right, which is a problem in general with the now-fashionable "value-added" approach.
If I'm a school teacher and my salary is dependent upon my raising test scores for my 4th graders over what they scored in 3rd grade, do I want to get assigned the kids who had a good third grade teacher or a bad 3rd grade teacher? I don't know. If they had a bad 3rd grade teacher who depressed their 3rd grade scores below what they would have scored under an average 3rd grade teacher, I could make a lot of money just from regression toward the mean as they naturally catch up to where they would have been without the bad 3rd grade teacher. Or maybe a bad 3rd grade teacher has a long-term effect that depresses their scores in 4th grade too. I could imagine how it could go either way.
Anyway, that's why I put real numbers out there, because they stimulate more incisive thinking, like this comment, than just theorizing without actual data.
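The bounce-back scenario in the comment above is easy to simulate. A toy sketch with entirely invented numbers: scores are true ability plus a teacher effect plus noise, and a "bad" 3rd grade teacher knocks 10 points off the 3rd grade score only:

```python
import random

random.seed(42)

def mean_gain(bad_third_grade_teacher: bool, n: int = 10000) -> float:
    """Average 3rd-to-4th grade score gain under a toy model where a bad
    3rd grade teacher depresses 3rd grade scores by 10 points."""
    gains = []
    for _ in range(n):
        ability = random.gauss(100, 15)
        teacher_effect = -10 if bad_third_grade_teacher else 0
        score_3rd = ability + teacher_effect + random.gauss(0, 5)
        score_4th = ability + random.gauss(0, 5)  # average 4th grade teacher
        gains.append(score_4th - score_3rd)
    return sum(gains) / len(gains)

# The 4th grade teacher "adds" about 10 points of value purely because
# the artificially depressed 3rd grade scores bounce back:
print(round(mean_gain(True)))   # about 10
print(round(mean_gain(False)))  # about 0
```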
Smart parents read to their kids, and have a fair number of quality books on the book shelves.
Really smart parents have games and books that improve math scores, as well.
Anonymous 1 says:
"It would be easy (well, reasonably easy) to measure the effectiveness of Texas education vs. Mass education if all the grade level 5th grade math classes in Texas were doing exactly the same thing on the same day. But they're not. Hell, Mrs. Brown is probably doing something fairly different than Mrs. White, who is teaching a theoretically identical class next door."
Anonymous 2 says:
" You may have found some numbers, but they don't seem very useful. Given that the public schools all over the country are being taught by the same sorts of people using the same methods and under the same constraints of public law, it doesn't seem likely that any major differences would show up between states."
Good points! Fundamentally contradictory, but good points!
That's why it's useful to have real numbers, even if we don't know what they mean yet. They generate hypotheses.
Instead, the Obama Administration is handing out $5 billion in Race to the Top money to help stimulate the use of Value Added statistics, but almost nobody has ever seen Value Added statistics, so few have much experience thinking about them or noticing their complexities and imponderabilities.
liberalbiorealism says:
"Wouldn't it be vastly more useful to compare changes not in rank but in, say, mean state scores as measured in SD from the national mean? In certain regions of the curve, a significant change in rank might be a trivial change in underlying state mean as measured in SDs from the national mean. Likewise, in certain regions of the curve, an appreciable change in SD might not be reflected (or not much) in a change in rank."
Right. In the middle of the range, it's easy to hurdle a lot of states in the rankings with modest changes in absolute scores. At the left and right edges of the bell curve of states, it's harder to change rank. For example, you'd have to work pretty hard at being bad to dislodge West Virginia from last place among whites.
On the other hand, state rank is a "grabbier" statistic than standard deviations, and I wanted the meaning of the Change and Difference columns to be readily apparent from the raw ranking columns.
If you just want to measure the effectiveness of the schools, you should find the mean student IQ and compare it to the mean achievement. Whoever has the most favorable comparison of IQ to achievement is doing the best they can with the hand they have been dealt.
This is how they do the scoring in bridge tournaments. Each hand is evaluated and your score is a measure of how well you do with what you are given.
I know my (Nation) State isn't adding any value. Heck, it's even deducting value from me.
JT
The reason that math scores have changed and narrowed over the past couple of decades is that the tests were changed. The old tests were aimed at abstract reasoning. Girls and blacks don't do so well. The new tests are aimed at computation. Girls do better at rote computation than abstract reasoning. See, that's not so hard.
"Girls do better at rote computation than abstract reasoning." Thank you, Mr. Summers.
ReplyDeleteThe reason that math scores have changed and narrowed over the past couple of decades is that the tests were changed. The old tests were aimed at abstract reasoning. Girls and blacks don't do so well. The new tests are aimed a computation. Girls do better at rote computation than abstract reasoning. See that's not so hard.
TZZZSSSSSSS...
That's the sound of the air being let out of the tires on this bus.
au contraire, mon frère?
ReplyDeleteI'm assuming that these statistics are for public schools only. Could different states have different levels of private school enrollment due to the presence of coloreds?
If the smarter Whites in a state check out of the public school system, wouldn't that drag down a state's ranking?
Since these scores are for non-Hispanic Whites, shouldn't we post the nHW IQ score for each state next to the value added scores to see what kind of mental material the schools are working with when they "add value"?
"The reason that math scores have changed and narrowed over the past couple of decades is that the tests were changed. The old tests were aimed at abstract reasoning. Girls and blacks don't do so well. The new tests are aimed at computation. Girls do better at rote computation than abstract reasoning. See, that's not so hard."
Why would you lump (white) girls and blacks in the same category? The differences in "abstract reasoning" or math-related abilities between males and females of the same race are rather small and inconsequential compared with the differences between races. Females tend to score better in verbal categories than males, but so what? And there are more males concentrated at both the lower and higher ends of the bell curve too. The difference between the average white male and the average white female isn't really that staggering. In fact, at my high school (I graduated two years ago), 16 students in my graduating class scored above a 30 on the ACT; of those 16, only 5 were male (although, to be fair, the two highest scores, 36 and 35, were both from guys). And don't claim that the girls only did better because of the reading/English sections; to receive that high a score (at least a 30), one has to do pretty decently (at least 26 and up) in the combined math/science sections too, even if those scores may be slightly lower than the reading/English scores.
No, there's not much difference between white boys and girls, but the feminists have bitched about it for years. The PC test givers solved that problem by changing the test. That was my only point. The blacks come out a standard deviation below whites; whites, slightly below East Asians. The blacks are a hopeless mental underclass and always will be. Do you really think busing and Head Start made any difference? Liberal wishful thinking.
"Smart parents read to their kids, and have a fair number of quality books on the book shelves.
Really smart parents have games and books that improve math scores, as well."
In addition to the fad Everyday Math curriculum my kid gets at school, we have her doing Singapore Math at home 40 minutes a night. Although Michelle Malkin seems to hate EM because of its PC cachet, I think it's actually good at getting kids to use their brains. The times tables and basics I reinforce with flashcards just in case. There is no one ideal math curriculum -- they all sorta work if your kid is average smart or above. But in the early years, parents should sit down with their kids every night and keep restating the basic material in every way they can think of until their child responds.
But that's half the fun of being a parent -- getting a do-over of elementary school and figuring out how you should have studied.
Why is DC broken out in these studies?
It makes no sense to compare one of the wealthiest white urban enclaves in the world with entire states.
These state metrics are pretty meaningless too, given the gross inequality in school quality and student-body talent within states. People send their kids to specific schools in particular school districts whose nature is rarely captured by state-wide metrics.
I'd be more interested in a more fine-grained and meaningful analysis.
"These state metrics are pretty meaningless too, given the gross inequality in school quality and student-body talent within states. People send their kids to specific schools in particular school districts whose nature is rarely captured by state-wide metrics."
Yes and no. If quality is determined at the level of districts or individual schools, there's still no necessary reason for that finer-grained quality to be evenly and randomly distributed among the states. Perhaps state laws, regulations, and politics set the range of variance among schools. State rankings at least give us a place to start looking.
"I'd be more interested in a more fine-grained and meaningful analysis."
I, too, demand more and better free ice cream! Really you're right, but the commentariat is hardly providing Steve with a salary and a staff to go do one.
I've been staring at that table looking for an original insight and I don't have any yet. But what I'm thinking about, between that and the comments, is whether public schooling is even salvageable. You've got to have an education degree to teach, and education faculties have their own orthodoxy. Teachers' unions have a monopoly on employment. Setting aside test scores and graduation rates, the really important measure is the percentage of attendees who achieve functional literacy and numeracy. If literacy and numeracy are lower in the public schools than in open-admission parochial schools and among home-schoolers, then the only sure reform is to opt out personally, and encourage others to opt out generally.
jody said...
look at that huge -20 math drop for michigan between 4th grade and 8th grade. could that be due to white flight out of the state following the collapse of the auto industry there?
----------- Not necessarily. What was the pattern in Michigan before this drop? Between 2000 and 2005 for example Michigan's white population only dropped by about 2%, hardly a dramatic example of "white flight."
http://www.census.gov/popest/states/asrh/tables/SC-EST2005-03-26.csv
Today Michigan still remains about 80% white. Interestingly enough, a large proportion of those are the much-touted "Nordic" whites from Germany, Ireland, and Britain. Yet overall these "pace setters" have produced unimpressive performance according to your data above.
Jody:
by the way, i read an amazing number yesterday. in 1982, only 4 people scored a 1600 SAT. i don't recall the exact figure, but i remember something like 23 people scored 1600 in 1994. so there was already a lot of improvement at the top in only 12 years. this jives with my academic experience. the smartest kids are getting smarter, but the average kid is getting dumber. Both trends are running concurrently.
-------- Doesn't necessarily jive with your theory. If the top scores went up so much between 1982 and 1994, then you have an upward trend, and far from the smarter getting smarter, it could just as well mean that more average kids were getting into the top ranks. The low end has shown some increases too over the same rough time period. For example, Black students' Scholastic Aptitude Test (SAT) scores climbed 17 points from 1987-1996, according to the College Board. You have to demonstrate that AVERAGE kids showed a DROP between 1982 and 1994 in SATs for your theory to make sense; if not, well, it is just "jive."
[Jody said] by the way, i read an amazing number yesterday. in 1982, only 4 people scored a 1600 SAT. i don't recall the exact figure, but i remember something like 23 people scored 1600 in 1994. so there was already a lot of improvement at the top in only 12 years. this jives with my academic experience. the smartest kids are getting smarter, but the average kid is getting dumber. Both trends are running concurrently.
Jody, the SAT was dumbed down several times over the past 30 years, including the 1982-1994 period. That's why there are more perfect scores now. You now don't have to answer every question correctly on either the math or verbal tests to get the 800 score, whereas previously you did.
Anecdotally, however, I agree with your overall point. Stratification of the social classes is more pronounced today; when I was a high school student in the late 1960s, social classes appeared to be converging as widespread quality education pulled kids up. Nowadays it appears that schools pull nobody up except for elite kids going to elite schools.
This study is bogus. The standard error due to sampling is too large to permit ranking the states on NAEP scores. For a discussion see:
http://pareonline.net/genpare.asp?wh=4&abt=stoneberg
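The sampling-error objection is easy to see with a toy simulation (this is an illustration of the general point, not the linked paper's analysis): when two states' true means differ by less than a couple of standard errors, their observed ranks swap in a large share of samples. The state names, means, and standard error below are all invented.

```python
# Toy demonstration: with a 1-point true gap and a sampling standard error
# of 1.2 points on each state mean, how often does the observed ranking of
# two states reverse? All numbers are hypothetical.
import random

random.seed(0)

true_means = {"State X": 230.0, "State Y": 229.0}  # true 1-point gap
std_error = 1.2  # assumed sampling standard error of each state's mean

trials = 10_000
swaps = 0
for _ in range(trials):
    # Draw one simulated sample mean per state.
    sampled = {s: random.gauss(m, std_error) for s, m in true_means.items()}
    if sampled["State Y"] > sampled["State X"]:  # observed rank reverses
        swaps += 1

print(f"Rank reversed in {swaps / trials:.0%} of simulated samples")
```

With these assumed numbers the reversal rate comes out well over a quarter of the time, which is the paper's basic point: adjacent rankings built on means with large standard errors are mostly noise, even though the extreme ends of the table stay put.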