
In CEO’s report on racial disparities in UW admissions, they highlight an extremely misleading statistical concept — that of “odds ratios” — to leave the false impression that black and Latino applicants to UW are hundreds of times more likely to win acceptance than whites. They also dump more than a thousand students of color out of their applicant sample, inflating admissions percentages for blacks and Latinos by excluding weak and unqualified applicants from that pool and distorting statistics on Asians by excluding all applicants of Southeast Asian origin from their study.
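To see why an odds ratio can leave that impression, here's a small sketch with invented admission rates (these are not UW's actual figures): when admission probabilities are high, the odds ratio between two groups runs far ahead of the plain-language "times more likely" comparison.

```python
# Invented admission probabilities, for illustration only (not UW data).
p_a = 0.90  # group A admitted 90% of the time
p_b = 0.30  # group B admitted 30% of the time

# Plain-language comparison: how many times more likely is A to be admitted?
relative_risk = p_a / p_b                # 3x

# The odds-ratio version of the same comparison:
odds_a = p_a / (1 - p_a)                 # 9 to 1
odds_b = p_b / (1 - p_b)                 # about 0.43 to 1
odds_ratio = odds_a / odds_b             # 21

print(f"{relative_risk:.0f}x more likely, but an odds ratio of {odds_ratio:.0f}")
```

Reporting the 21 rather than the 3 is technically defensible, but to a lay reader "odds ratio of 21" sounds like "21 times more likely," and that gap is the sleight-of-hand at issue.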

In addition to all that, they engage in a variety of petty manipulations of data, as when they scale their admissions rates chart to begin at 50% rather than 0%, thus dramatically enhancing the visual impact of the graph at the expense of accuracy and readability.

Strangely missing in all this statistical sleight-of-hand is any straightforward statement of the magnitude of the supposed advantage that black and Latino applicants have over whites. At no point in the report do they compare — for instance — the chances of admission of two students, each at the midpoint of the applicant pool, one white, one black. (Neither do they directly compare the chances of admissions of students by criteria other than race under which white applicants have a structural advantage — those of legacy admits vs. non-legacies, for instance.)

At one point they inch toward such a comparison, with a chart listing the number of students of various races rejected with SAT or ACT scores and class rank higher than the median black admittee’s, but since that chart fails to list how many students in that category were accepted from each race, it’s impossible to translate the chart into actual comparative data.

In fact, there is only one section of their report in which they offer a direct comparison of the chances of admission of two groups of students, and it’s a comparison whose terms have been cherry-picked to provide the impression that they are hoping to leave.

In the report’s section on “Probabilities of Admission” they provide a chart comparing the chances of admission for groups of white, black, Latino, and Asian students — one chart each for in-state and out-of-state applicants. So far so good.

But each chart compares only a small sliver of the actual applicant pool. Beyond the exclusions I mentioned in previous posts, these charts leave out female applicants, who represent well over half of total applicants. They leave out the substantial fraction who took the SAT rather than the ACT. They leave out all legacies, a mostly white group with significant advantages in the admissions process. And as in the previous chart they set the bar for comparison at the median ACT score for black admittees.

There’s a basic principle in statistics that the farther away from the middle you get, the weirder your numbers are going to turn out. If you compare the chances of two students near the middle of the pack, you’re going to get stats on their odds of admission that reflect the fact that they’re similarly situated. But if you go looking for outliers, things start to get wacky.

To understand how this works, let’s do a thought experiment. Imagine that only one student whose first and last names both begin with the letter Z was admitted to Wisconsin in a particular year, and that this student happened, by chance, to have the second-worst grades and test scores of the entire entering class. Of all those students whose numbers were worse, only one was admitted, while 2000 were turned down. And among those 2000, by coincidence, there was a second student with a ZZ name.

Among ZZ-named students with grades and test scores as bad as or worse than our admittee, then, one out of two was admitted, for an admission rate of one in two, or 50%. Among non-ZZ students with similar grades and test scores, only one in 2000 was admitted, for an admission rate of 0.05%. ZZ-named students at that grade/score level, in other words, were one thousand times more likely to be admitted than non-ZZs.

And what does this tell us? Pretty much nothing. If that ZZ student happened to be 100th from the bottom rather than second, the exact same formula would show that ZZs had odds twenty times better than non-ZZs, instead of a thousand times better. One-hundredth from the bottom and second are damn near identical in terms of actual numbers, but we’re so far out on the statistical distribution tail that even a slight change in real-world data produces huge swings in the reported odds.
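The swing the thought experiment describes is easy to check with a few lines of arithmetic. The counts below are the post's hypothetical figures; the second call shows how modest a change in the tail (here, 50 admits among the 2000 low scorers instead of one) collapses the apparent "advantage" from a thousandfold to twentyfold.

```python
# Sketch of the ZZ thought experiment: rate ratios computed in a sparse
# tail swing wildly with tiny changes in the underlying counts.
# All counts are hypothetical, taken loosely from the post.

def rate_ratio(group_admits, group_pool, other_admits, other_pool):
    """How many times higher the first group's admission rate is."""
    return (group_admits / group_pool) / (other_admits / other_pool)

# One of two ZZ students admitted (50%) vs. one of 2000 others (0.05%):
print(round(rate_ratio(1, 2, 1, 2000)))   # 1000

# Nudge the tail slightly and the "advantage" collapses:
print(round(rate_ratio(1, 2, 50, 2000)))  # 20
```

Nothing about the ZZ students changed between the two calls; only the sparse denominator on the other side did. That is the instability you buy when you set your comparison point far out in the tail.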

The folks at CEO understand this. They understand that because the vast majority of UW’s applicants are white, and because black applicants tend to have somewhat lower test scores, choosing the black admittees’ median as your starting point will produce more dramatic contrasts than using the median of all applicants. They also understand that the smaller you make the pool, the more random variation you get. And so they made the pool small and unrepresentative.

To be clear, I don’t know what the numbers would look like if CEO were to crunch the data in a useful way. I don’t know how many times more likely to gain admission a black or Latino applicant with an application at the middle of the total pool would be than a white student with identical numbers. I suspect that such a student would have a considerable advantage.

But here’s the thing. CEO does know the answer to this question. They do have the data. They know what admissions rates look like if you compare students of different races from the middle of the pack, just as they know what the plain-language version of their misleading “odds ratio” claim would be.

They know all this stuff. They’re just choosing not to share.

Huffington Post and Time magazine released stories this week with near-identical headlines: College Plagiarism Reaches All Time High: Pew Study (HuffPo) and Survey: College Plagiarism Is at an All-Time High (Time). But neither the study the two articles cite nor the press release that accompanies it makes that claim.

What the study does say is that fifty-five percent of American college and university presidents, when asked, estimated that plagiarism had risen in the last decade. (Forty percent said it had stayed the same, two percent said it had fallen, and three percent had no opinion.) They weren’t asked, and they didn’t offer, their opinions on how this generation of students compares to earlier ones.

A 55-42 split is nothing huge, by the way. And there’s also reason to be skeptical about how informed college presidents are about rates of plagiarism. Even if reports of cheating have risen — and again, we don’t know that they have — that could reflect changes in professors’ tolerance, advances in policing of the practice, or simply the ease with which clumsily cut-and-pasted passages from online sources can be detected.

If you ask a group of senior faculty and administrators whether students are better (smarter, more committed, more ethical, whatever) than they were in years gone by, you’re rarely going to get a positive answer. So this survey is, in the absence of actual supporting data, pretty close to meaningless. But even setting that aside, the survey and its coverage bear almost no relationship to each other.

Which leads one to an uncomfortable question. If the survey made no reference to plagiarism reaching an “all-time high,” and two different headline-writers at two different news organizations both used that same phrase to characterize it …

Is someone at Time or HuffPo plagiarizing stories about plagiarism?

Update | Time’s story went up yesterday, the Huffington Post’s this afternoon, so if there’s any plagiarism going on here, it would appear that Time isn’t the culprit.

What say you, HuffPo?

This year, like every year since 1998, a couple of profs at Beloit College have released a “Mindset List” describing the world that the new crop of incoming first-years grew up in. Here are a few things they left out:

The average first-year college student in the United States this fall was born in 1993. For them…

College presidents have never been expected to stay in their positions for long, and have always had onerous fundraising responsibilities.

Pell Grant funding has always been under attack.

Colleges have always been required to keep public statistics on campus crime, and have always evaded those requirements with impunity.

Grad students have always been boosting enrollment with jokey-sounding course names.

Conservative commentators have always been appalled.

The presence of significant numbers of students of color on campus has always been treated as a new development.

NCAA rules violations have always been a headline-grabbing crisis.

College athletes at high-ranking Division I schools have always been pampered and cynically exploited.

The connection between the above two realities has always been the subject of hand-wringing op-eds.

Which have never translated into serious reform.

Tenured professors who came of age in the late sixties have always been exaggerating their own activist exploits, and deriding contemporary student organizing.

The drinking age has always been 21.

Binge drinking by under-21s has always been epidemic.

Returning students have always been a growing campus demographic.

And have always been ignored in lists like this.

Remediation has always been a handy cudgel for enemies of open enrollment.

Middle-aged people who spent their youth desperate for sexual gratification have always been decrying the rise of hook-up culture.

The proportion of state budgets devoted to higher education has always been plummeting.

The extent of rape in the dorms and at frat parties has always been the subject of whispered rumor.

Adjunct hiring has always been growing.

Adjunct pay has always been unsustainable.

Free public higher education has always been a distant memory.

Faculty and administrators have always been inexplicably surprised to discover that the new incoming class is roughly a year younger than the previous one.

Charlie Webster, the state chair of the Maine Republican party, has produced documents claiming to show that over two hundred of the state’s college students have committed fraud by voting in Maine while paying out-of-state tuition.

This is a lie. It’s an evil lie. It’s just … jeez.

Here’s the deal. If you move to Maine for college, you have to pay out-of-state tuition your first year. And your second. And your third. And your fourth. And your fifth. You have to pay out-of-state tuition forever, in fact, until you demonstrate that you have “established a Maine domicile for other than educational purposes.”

And as long as you’re attending college full-time, you’ll be “presumed to be in Maine for educational purposes and not to establish a domicile.” Again: Forever.

You can arrive in Maine fresh out of high school, move into your own place, live there 365 days a year. Work there, spend summers there, get married there. Finish your undergraduate degree, go on to grad school. But as long as you’re still a student, you’re “presumed to be in Maine for educational purposes and not to establish a domicile,” and the burden of proof is on you to show otherwise. (“No one factor can be used to establish domicile,” by the way. “All factors and circumstances must be considered on a case-by-case basis.”)

Paying out-of-state tuition isn’t evidence that you don’t live in Maine, in other words. It’s not evidence of anything at all. Out-of-state tuition is a revenue stream for the university and the state, and as such, it’s designed to put every possible burden on the student who’s looking to get out from under it.

Which brings us back to Charlie Webster.

What Webster is doing here is deploying a state regulation designed to deprive Maine’s college students of their money as a mechanism to deprive them of their votes. There’s no other way to describe it. Take their money, take their votes. Justice, fairness, and the Supreme Court of the United States be damned.

It’s really that simple.

When the brouhaha over the Psychology Today “Why Black Women Are Less Physically Attractive Than Other Women” article broke, I wrote a quick blogpost pointing out some of author Satoshi Kanazawa’s most ludicrous, obvious mistakes. But now someone with a bit more competence has gone back to look at the actual data Kanazawa used, and discovered that the problems with his “study” go much deeper.

Much, much deeper.

Basically, Kanazawa completely misrepresented the data. His source material just flatly doesn’t say what he says it says.

Here’s the deal. Kanazawa drew his conclusions on the relative attractiveness of black women from the “Add Health” study, a long-term survey of American adolescents. He claimed that the study showed — proved — that black women were less attractive than women of other races. But that’s not the case.

The attractiveness “data” is itself suspect, for one thing. It consists of the subjective judgments of interviewers who were asked to rate their interviewees’ appearance. There’s no effort in the numbers to control for the interviewers’ (unstated) ethnicity, no protocol for their judgments, no reason to believe that their conclusions are in any way representative. It’s just their opinion, and different interviewers reached dramatically different conclusions about the same interviewees’ attractiveness.

Let me underscore that last bit. According to a review of the original data, most of the difference in attractiveness between individuals in the study can be explained by different interviewers “grading” the same interviewee differently.
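A claim like that, that rater disagreement accounts for most of the spread, is the kind of thing a quick simulation can make concrete. Everything below is invented for illustration only; it simply shows what ratings data dominated by interviewer effects looks like when you split the variance into its parts.

```python
import random

random.seed(42)

# Invented setup: small real differences between interviewees, large
# differences in how harshly individual raters grade.
n_subjects, n_raters = 500, 8
true_looks = [random.gauss(0, 0.3) for _ in range(n_subjects)]  # subject effect
harshness = [random.gauss(0, 2.0) for _ in range(n_raters)]     # rater effect
ratings = [[s + h + random.gauss(0, 0.5) for h in harshness] for s in true_looks]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Disagreement among raters scoring the SAME interviewee:
within = sum(var(row) for row in ratings) / n_subjects
# Differences between interviewees once rater noise is averaged away:
between = var([sum(row) / n_raters for row in ratings])

print(within > between)  # True: the raters, not the subjects, drive the spread
```

In data shaped like this, ranking groups of interviewees by raw ratings mostly ranks them by which raters happened to score them, which is exactly why uncontrolled interviewer judgments make such shaky "attractiveness" data.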

But it gets worse.

This study is, as I noted above, a study of American adolescents, tracked through early adulthood. And though Kanazawa portrayed his article as a study of the attractiveness of adults, the samples he used included children as young as twelve. He based the majority of his conclusions on data on the youngest two groups, who had an average age of just sixteen.

Still with me? It gets even worse.

Kanazawa admitted that the supposed difference in attractiveness was less in “Wave III” than in “Wave I” and “Wave II,” though he actively concealed the fact that Waves I and II weren’t adults at all. (He labeled the relevant charts “Wave I: Men,” “Wave II: Men,” “Wave I: Women,” and “Wave II: Women,” even though the vast majority of those subjects were teenagers and pre-teens.)

What he didn’t admit was that there’s a Wave IV.

Wave IV, it turns out, is the only wave composed entirely of adults. And an analysis of the Wave IV data shows that it doesn’t support Kanazawa’s thesis.

At all.

In Wave IV there is no difference between the perceived attractiveness of the black women and that of the other ethnic groups examined.

At all.

And again, I want to underscore something. Wave IV is composed of the same interviewees as the previous waves. So what the data really shows is that some (presumptively white) interviewers thought that the black adolescent girls in the study were a little less cute than the white, Asian, or Native American girls.

But when interviewers went back and spoke to the same women as adults, that “attractiveness gap” disappeared. Completely.

This isn’t just shoddy statistics. This isn’t just crap reporting. This isn’t just incompetence. It’s scholarly malfeasance.

It’s fraud.

About This Blog

This blog is the work of Angus Johnston, a historian and advocate of American student organizing.

To contact Angus, click here.