Archive for the ‘education’ Category

A Limited Defense of Affirmative Action

May 29, 2011

I am a beneficiary of affirmative action. These days, so they say, I should be ashamed to admit it. It implies, after all, that I was not otherwise qualified for some benefit I obtained only because of being some kind of “minority.”

I have actually benefited from affirmative action on two different counts–as a woman, and as a Hispanic. Every now and then that gives me a slight edge on the competition. That doesn’t bother me particularly. I’ve been discriminated against as a woman more times than I can remember (or, probably, than I have ever known) beginning at least with my first permanent job, which I obtained only by giving the right answer to the employment agency’s question about what method of birth control I used. (For those too young to remember that era, the right answer was not “none of your damn business.” It was “the Pill.”) On another job, I was sexually harassed before there was even a word for it, much less a cause of action. So I figure any benefit I get from the double X chromosome is just a matter of restitution.

I have also been discriminated against, I’m pretty sure, for being Jewish. This, of course, gets me no affirmative action points, but that kind of makes up for the fact that I do get points for being a Hispanic (both my parents were born in Cuba, and my family is essentially bicultural) even though I have never been discriminated against for that fact. (As a matter of fact, since I am a natural blonde and speak English without an accent, nobody knows I am Hispanic unless I choose to tell them, and I normally do that only where I will get extra points for it. Which is generally in jobs where my ability to speak Spanish really is a plus.) And most recently, I have probably been discriminated against for my age, which is illegal, but for which I get no affirmative action points. So I will take those points where I can get them, without embarrassment and without feeling that my competence is in any way in question.

I went to a good college and made Dean’s List my last two years. I scored in the 98th percentile on my LSATs. But when I applied to law school, I was admitted to a school in which 45% of my class was female, in the mid-’70s, and rejected by another school which had a far lower percentage of female students in that year. The evidence seems clear: I was almost certainly admitted to the former because of my gender, and rejected by the latter for the same reason. My objective qualifications were equally irrelevant to both schools. Probably all those qualifications got me at the second school was a rejection further along in the process than some of my less-qualified sisters (and my totally unqualified brothers).

Realistically, of course, nobody ever challenged my academic competence, or that of any other woman I know who has been accepted into any academic program under an affirmative action program. Even the most neanderthal of male supremacists will grant that women on the average do better in school, except in mathematics and the hard sciences, than men. The reason women have historically been discriminated against in academic admissions is that we are not expected to be able to do much of anything useful with our knowledge and academic credentials after we get them.

So the affirmative action issue really only gets raised, where women are concerned, when one of us is promoted to a position of power, beyond the glass ceiling. Then the innuendoes fly–quotas, sleeping with the boss, the supervisor is a leg man, somebody’s sister, somebody’s daughter, somebody’s wife. Most of us, however, would still rather live with the humiliation of possibly having been promoted because of our gender than with the equally potent and much less remunerative humiliation of not having been promoted for the same reason.

Stephen L. Carter’s misgivings

Which is why I have trouble with people like Stephen L. Carter. His Reflections of an Affirmative Action Baby is a thoughtful and well-written book with a good sense of the complexities of inter-ethnic relations in the United States of the 1990s. But I have a few problems with its basic premises. Don’t expect the Establishment to make special standards for you, he tells young African Americans. It’s humiliating that we should think we need that. Meet their standards, beat their standards, and demand to be accepted on their terms. For Blacks and Hispanics, who are popularly expected to be less competent in academic achievement, it may actually be a source of humiliation to be admitted to a respectable school under an affirmative action program because of their ethnicity. However, most of the “affirmative action babies” I know would say that it is no more humiliating than being rejected because of that same ethnicity, and pays a lot better.

Carter’s advice takes the Establishment’s claims of devotion to meritocratic standards at face value. Which gives a lot more credit than it deserves to an Establishment that has never really believed in those standards, and has espoused them only when doing so would serve the purpose of keeping a particular group of outsiders outside.

The reason Carter has not seen this hypocrisy is that he is looking at the experience of only one group of outsiders. If he were to consider that of three others–women, Asians, and Jews–whose ability to meet meritocratic standards has rarely been questioned by anybody, he would discover that the Establishment has never had any difficulty excluding them, or severely limiting their upward mobility, on some other grounds.

The merit system: now you see it, now you don’t

For instance, in the 1930s, Harvard Medical School discovered that, if academic qualifications were to be the only criterion for admission, its entire entering class would be Jewish. Indeed, they would have had to double the size of the entering class to get in more than a few token gentiles. So they suddenly discovered that there was more to being a physician than “mere” academic excellence. Arbitrarily, they set a quota of 22% for Jewish applicants, a quota which remained in effect until the ’60s, when, like the Jewish quotas in many other educational institutions, it was replaced with a larger and slightly less transparent quota on students from large cities, especially New York City, under the rubric of “geographical distribution.” Those quotas still exist today in many schools.

The experience of women is in some ways even more blatant. When my classmates and I graduated from college in the early ’60s, we frequently looked for jobs in the public sector, before and between stints in graduate school. We took the civil service exams, scored at or near the top, and were repeatedly beaten out for the actual jobs by men who had scored a good deal lower, even before their veterans’ preference points were added.

When I was at college, in the late ’50s and early ’60s, it was a truism, repeated to us regularly by faculty and admissions honchos, that men scored higher than women on the math section of the SAT, but women scored higher than men on the verbal section. It didn’t, of course, get us much. There were fewer places available for women at good colleges (or any other colleges, actually) than for men, and less scholarship money available for us. So nobody thought much about it. But twenty years later, when the various controversies about the biases of the SAT arose, I was startled to hear everybody, on all sides of the dispute, saying that women scored lower than men on both sections of the SAT. Even the American Association of University Women, in its otherwise beautifully researched study of discrimination against women in education, could only conjecture about what happened, by the end of high school, to the clear lead in reading and verbal skills that girls have over boys in elementary school. What had happened–a couple of very well-hidden and quickly forgotten news stories revealed–was that in the middle ’60s, ETS changed the verbal section of the SAT, substituting scientific essays for one or two of the fiction selections in the reading comprehension test. Female scores promptly dropped to their “proper” place–visibly below those of their male classmates–and have stayed there ever since.

Asians are the most recent victims of similar policies. Several West Coast schools, most notably the University of California at Berkeley, have experimented with ceilings on the number of Asian students within the last 10 years. A university, the administration proclaims, has the right to put “diversity” above “mere” academic excellence.

In short, the history of other groups of outsiders suggests strongly that if an entire generation of African American young people followed Carter’s advice to meet meritocratic standards and beat them, the Establishment would have no trouble finding some other pretext to exclude all but the most presentable tokens among them from the precincts and perquisites of power–either by changing those standards, or suddenly discovering the greater importance of some other factor.

That does not, of course, invalidate Carter’s advice. It does make one wish Carter were a little more careful about truth in advertising, however. I tend to prefer Malcolm X’s more honest approach, when he advised his followers to read anything they could get their hands on and get all the education they could, even if all it got them was the proud position of best-educated person in the unemployment line.

Was there ever a merit system?

Before the phrase “affirmative action” ever found its way into our vocabulary, the reality of affirmative action was already as American as apple pie. After all, what else is veterans’ preference, if not an affirmative action program for (in the post-World War II era in which it was born) men? What else is seniority, if not an affirmative action program for older workers? I have never known a veteran, or an experienced union man, who was in the least ashamed to have benefited by those affirmative action programs.

Nor should they be. Before the rise of the meritocratic mythology of the ’70s, any American old enough to have held a job at all knew that nobody gets a job solely by virtue of being the most qualified candidate for it. In an economy which has never even aspired to full employment, most available jobs have several well-qualified candidates on hand. Most employment discrimination does not involve hiring an unqualified person in preference to a qualified one, but rather choosing between more-or-less equally qualified candidates on the basis of factors unrelated to the job.

The Jewish Establishment’s position

Many established Jewish community organizations, like many other high-minded, principled opponents of affirmative action, really believe that they are espousing a pure meritocracy as against a system of arbitrary choice. To take that position, they have to presume that, before the 1964 Civil Rights Act, all male Jews had the jobs they were entitled to, by reason of their meritocratic qualifications. They also have to presume that all Jews are male, white, Anglo, and middle-class and have nothing whatever to gain from affirmative action. They have to, in fact, ignore the experience of considerably more than 53% of the Jewish community. They even have to advocate giving back to the same academic and professional Establishment that subjected Jewish males to explicit, exclusive quotas until the early ’60s, the power to do it again.

Two cheers for affirmative action

Most supporters of affirmative action see it as a lesser evil. But, unlike its opponents, they recognize the realistic alternative as a greater evil. Affirmative action is not a matter of substituting for a pure meritocracy a system of choices among qualified candidates according to standards unrelated to job or scholastic requirements. It is a substitution of one set of arbitrary choices for another.

The alternative to affirmative action in real life is the divinely-ordained and legally-protected right of the employer or supervisor to hire people who remind him [sic] of his best friend, or people who fit his stereotyped image of the “proper” telephone operator or waitress or whatever. We know that most people who get jobs get them for reasons only distantly related to their ability to perform. In fact, the most serious downside of affirmative action, so far as I can tell, is that it denies future generations a really useful index of professional excellence. When I meet a doctor, or a lawyer, or a CPA, who is female or non-white (or better still, both) and who got his or her professional credential before 1970, I know I am dealing with a superlatively qualified professional, because only the best women and non-whites were able to survive the discriminatory professional screening processes in those days. For professional women and non-whites with more recent qualifications, alas, I have to take my chances, just as I would with a white male of any age.

So we sincerely hope that the people into whose hands we put our lives, our fortunes, and our sacred honor are in fact qualified to do their jobs. But as a practical matter, we know that we are at least as much at risk from incompetents who were hired or promoted for being the boss’s brother, or being tall, or not being Hispanic, or having an officious-sounding British accent, as from those hired or promoted for being female, Black, or Hispanic–quite possibly more, since the latter are usually watched more closely. In fact, these days I am beginning to suspect that American-born doctors can no longer be presumed to be as competent as doctors with foreign accents, since the latter are subjected to much tougher screening standards.

Well, maybe two and a half

We may see ourselves as winners or losers, and we may attribute our situation to other people or to our own deserts. Human beings generally have never had any trouble taking credit for their own good fortune or blaming others for their misfortunes. More recently, “new age” thinking has led many of us to take the rap for our own misfortunes, often in truly preposterous ways (“How have I created this reality?” the cancer patient asks.) But it is difficult for any of us to admit that our good fortune may be the result of some totally unearned “break” from the outside world–being white, for instance, or male. That is the real threat of affirmative action–that it requires us to consider the possibility that (even if, as is likely, we aren’t as well off as we would like to be) we haven’t “earned” even the few goodies we have. For those of us raised in the Jewish tradition, which teaches us that the Land promised to us by the Holy One is ours only on loan, and that we were not chosen to receive it because of any particular merit on our part, that shouldn’t be too much of a leap. It should make us more willing to grant similar unearned goodies to other people. “Use every man after his desert,” says Hamlet, “and who should ’scape whipping?” Or unemployment, as the case may be. Even us, the few, the proud, the overqualified.

Red Emma


A New Look at Child Labor

May 12, 2011

I googled “child labor” recently, and all I could find was stuff on how much of it there still is in the world, and why it was so bad. Nobody seems to be looking at, or even for, an upside. Okay, maybe this is kind of like looking for the upside of the Third Reich (the Volkswagen?) or the reign of Caligula (no bright ideas at all here). But I think there are actually a few good things to be said about child labor, at least within proper limits.

Depending on what you mean by “labor.” If what you mean by “labor” is doing something that will wear you out and use you up within 20 years or less, then it’s a bad idea no matter what age you start doing it at. Like coal mining, for instance. It was bad when 9-year-old kids were pulling coal carts in 19th-century England, and it’s just about as bad today when 45-year-old men die of Black Lung after 20 years of it, in Kentucky. The use of child labor instead of adult labor has all kinds of nasty side effects, such as lowering the general wage rate (under the odd misimpression that it’s okay to pay for the same work at lower rates when a smaller person does the work) and increasing the unemployment rate among adults. And working employees too hard and too long to allow them any kind of personal life or education is bad, whether you do it to kids or adults. Paying them so little that they have to supplement their wages with the only kind of “moonlighting” they have the time and energy for, namely prostitution—whether you do it to women or children—is as immoral as it gets. For further information, read Dickens.

In short, I’m not sure there is any way in which the bad side of child labor for the child is any worse than the bad side of adult labor for the adult worker. Since adults make the laws, and since one of the bad sides of child labor for adult workers is lowering wages and increasing unemployment, that didn’t really matter much once the groundswell against child labor started to grow. Progressivism and New Deal trade unionism both leaned strongly in the direction of getting people other than prime-working-age adult white males out of the workforce, using whatever rationale happened to be handy at the moment. Which was often good for families, good for adult workers, and good for The Economy.

But societies that have banned child labor (not to be confused with societies that have actually eliminated it) have created problems of their own. The most notable is that, in such societies, children are an economic liability to their parents, and may suffer abuse or neglect from them as a result. In places like China, if you can’t sell your child’s labor, you may end up selling the child instead. In places where nobody’s buying, you may simply abandon the child, either in some exposed place or in some “orphanage.” Either way, the child may die young or never develop its full mental and physical potential.

But as long as poverty persists among families, banning child labor is unlikely to completely eliminate it. Child labor persists in the US in fast food joints, on farms, and most notably in criminal enterprises, where the fact that a juvenile will get no more than a nominal punishment for conduct that could put an adult away for a long time makes “shorties” really desirable employees for look-out and courier duty.

Oddly enough, most families affluent enough not to need to put their children into the legal, semi-legal, or illegal workforce, tend not to expect much labor from them at home either. My mother, who was #5 of 8 children, once told me that her mother told her, “Once your oldest daughter is 8 years old, it doesn’t matter how many more you have; one of the older ones will always be able to take care of the younger ones.” Both because such large families are rare today, and because middle-class Americans disapprove of anybody under 12 doing any kind of child care or major domestic chores, this doesn’t happen any more. Child development “experts” generally believe that children should be expected to help out around the house and clean up after themselves, and should not get their allowance as “wages” for these tasks, but they get listened to only slightly more on this subject than on the topic of corporal punishment, which isn’t much.

Okay, that’s pretty much the adult side of the issue. What about the kids? The advantage of writing about children, of course, is that even if you’ve never raised one, you and every other person on the planet have been one. (Original sin consists of having been born with parents, which is why Adam and Eve escaped it.) Do y’all remember the time in your early teens and the years just before that when you really really really wanted to do something real and significant and useful and necessary? There are long stages of child development in which the child’s play consists of nothing but imitating (to the best of her knowledge and ability) the adult’s work. Sometimes that knowledge and ability can be pretty impressive. The computer skills of people we usually regard as “kids” can be downright amazing, and sometimes even remunerative. The Wired Daughter, between ages 15 and 18, got herself a job in a social service agency working with runaway youth, doing all kinds of statistical correlation and record-keeping, much more skillfully and assiduously than most adults I have known doing the same kind of work. Because it was a non-profit, nobody worried much about child labor laws, least of all our daughter, who was having the time of her life. Once she turned 18 she turned (temporarily, thank heaven) into a slacker. But not letting her do the work of her choice before that would have been a real injustice to her. When my nephew was the same age, he worked until well after the official closing time in a local restaurant, and found it both enjoyable and liberating. When I was the same age, I was learning to sew, and type, and cook, and write. All of us, of course, were also going to school and doing pretty well at it. None of us were dependent on earnings from such work. Which gets rid of most of the downside of child labor. I think that’s just a stage of development kids go through, with or without compensation, and it’s a good thing for all of us that they do.

On the other hand…

As more and more “middle-class” families in the US find themselves sliding out of the bourgeoisie, the question of child labor in such families will become more and more difficult. Most middle-class and even working-class families today do not expect their children to contribute to the household income, even by paying rent when they are working full-time and living with their parents. Most middle-class parents are really uncomfortable sharing the financial realities of their lives with their children (often, even after the said “children” have long since reached adulthood). The whole point of being “middle-class” in this culture’s families is that the parents never have to admit to their children that they can’t “make it” in this economy, or even seriously discuss what it would mean not to “make it.” No doubt it’s comfortable for a child to believe that the parents will always be able to “manage,” just as it’s comfortable for the child to believe that Daddy can beat up any other guy on the block. Until recently, the majority of American kids had no reason to disbelieve either proposition. Now, child development “experts” are taking on these issues, with varying degrees of success. It would probably help them, and the parents they advise, and the children who do or don’t benefit from that advice, if we could start talking more explicitly about what children can do to help their families in a bad economy, and why letting them do it isn’t unthinkable.

Red Emma

Kids Having Kids, Grannies Raising Kids; or Leapfrog Parenting in Our Future?

February 7, 2011

The River City Syndrome

“Friends, we got trouble
Right here in River City,
And that starts with T, and that rhymes with P
And that stands for….Pregnancy?”

Everybody talks about teen pregnancy, but nobody can figure out what to do about it. Newt Gingrich had it figured out 17 years ago or so—take the babies away from their mothers and raise them in orphanages. Then he looked at the price tag. Modern standards for what we now call group homes would turn his plan into a bigger entitlement program than Social Security or Medicare. Forget that.

They have it figured out in continental Europe. Teens there actually have more sex than American teens. But they are diligent about contraception, and have no problem resorting to abortion as a back-up if necessary. So their teen pregnancy rate is much lower than ours. This is not to be confused with their out-of-wedlock pregnancy rate, which is really high in Scandinavia, but not among teenagers. Middle-class American girls operate pretty much the same way.

They had it figured out in the 1950s in the US. I remember that system very well. It was the reason I didn’t go to the local public high school. The year before I would have started there, half the girls in the graduating class were pregnant. Most of them got married, very quietly, and then lied about the date. The young men involved all got the satisfaction of having done the honorable thing. The girls got the wedding ring. The babies got their legitimacy. There may have been a couple of girls whose partners did not do the honorable thing, so instead they took a six-month vacation with an aunt in some other state. Most of the girls in question hadn’t planned on college anyway.

The Maternity Dress with the Blue Collar

So far as I know, almost nobody operates that way any more. Blue-collar girls, regardless of race, creed, or color, just stay home (and stay in school as long as it isn’t too much trouble) and have the baby. What has made the difference? Two things, as nearly as I can tell. One is that nobody approves of “shotgun weddings” any more. Even the Catholic Church is reluctant to perform marriages where the bride is pregnant. The statistics on such marriages are discouraging. Both abuse and divorce are much more likely than in the general population of married couples. So the young man in question is under absolutely no pressure to marry the girl. It is no longer considered “the honorable thing.”

The second thing, counter-intuitively enough, is Roe v. Wade. Yes, I know blue-collar girls are very unlikely even to consider abortion. (This is not necessarily because of parental pressure. Indeed, sometimes it is despite parental pressure. When I worked at juvenile court, I once represented a girl whose father had thrown her out of the house for refusing to get an abortion.) But the fact that, in spite of the legality and availability of abortion, they don’t get one, marks them as “good girls,” in their own eyes and those of their peers, in spite of having gotten pregnant. It gives them some moral leverage they would not otherwise have. The Catholic Church recognizes, with a surprising degree of rationality, that anything that makes unmarried pregnancy more difficult makes abortion more likely. So Catholic schools go out of their way to make life easy for pregnant students. Public schools do too, though for different reasons—they just really want to keep the girls in school as long as possible. See http://www.city-journal.org/2011/21_1_teen-pregnancy.html. A pregnant teen who finishes high school is in a much more solid situation than one who drops out. Many of the bad things that happen to single mothers and their children are less likely to happen when the mother finishes high school, or better still, goes on to college, at least for a year or two.

All in the Family Way

Most of the pregnant teens who manage this do so only with the help of major parental (mostly maternal) support. If mother and daughter can remain on good terms for the duration (which is not always easy for either one), the baby will have the benefit of two adults caring for her, and often, of two incomes supporting her, just like the child of a properly married couple. I know of no source for statistics on the prevalence of split-ups between mother and daughter in this situation, compared with the stats on divorce after a shotgun marriage, but my guess is that it is somewhat less frequent.

According to AARP, one in every twelve children in the US is being raised in a household with one or more grandparents. These statistics do not distinguish between households in which the child’s mother is also residing and caring for the child, and households in which the mother is for some reason absent (death, incarceration, drug addiction, general flakiness, military service, or single-minded pursuit of education and career goals.) Nor do they provide any information on the increasing number of children being raised by their great-grandparents. But they do suggest a solution to some of the problems besetting the modern family.

The Murphy Brown Syndrome

In blue-collar families, pregnancy happens “too early”, all too often. By “too early,” we mean before socioeconomic maturity, often before finishing school, or even instead of finishing school. In white-collar families (regardless of race, by the way—professionally-educated African-American women have the lowest birth rate in the country), pregnancy often happens “too late.” By “too late”, we mean after socioeconomic maturity, after finishing one’s education and getting established in a career, and after the height of female fertility in the late teens and early twenties. Often, we mean after the precipitous decline of female fertility in the mid- or late thirties. In which case, “too late” may mean not at all. But even if it doesn’t, it often means having children who will be starting college just as the parents would otherwise be starting to think about retirement.

New Supporters of Early Marriage

Early marriage by choice rather than because of an unplanned pregnancy is occasionally discussed among religious groups that frown upon premarital sex (see http://www.christianitytoday.com/ct/2009/august/16.22.html?start=1), and presumed among others such as the Amish who discourage post-high school education anyway, as well as among some immigrant groups. For the rest of us, it seems to further complicate what is already the most complicated period of most people’s life, from age 12 through 25.

Alternative #1: Leapfrog Parenting

But there are a couple of alternatives worth considering. The obvious one, already discussed above, is for women to bear their children early, raise them with the assistance of their mothers, complete their education, start their careers, and then marry. This could even be organized so that grandmother, having finished raising her daughter’s children, would be able to retire just as the daughter is ready to start raising her daughter’s children. Think of it as “leapfrog parenting.” Biologically, we are told, the best age for women to conceive is from 18 to 25. Socioeconomically, the best age for a person, male or female, to raise children is from 35 to 55. The numbers point to one ideal conclusion: bear your own children at 18, and start raising your daughter’s children at 36.

Make Room for Daddy

What place does this scheme leave for the fathers of all these children? I am tempted to say, whatever place the particular man in question wants, since that seems to be what happens anyway. Not being forced (sometimes at gunpoint) to do “the honorable thing” is probably an improvement in our ideas about family life. Not knowing quite what to do when one’s girlfriend gets pregnant definitely isn’t. Fortunately, the country is rife these days with all kinds of projects and programs for, and studies of, teenage fathers. Lots of us are looking for answers to this question, and with any luck, we may find one.

You Go To Work With the Workforce You Have

September 30, 2010

Not, as Donald Rumsfeld would have said, the one you’d like to have. (Or, as the little boy asked his mother after first seeing a classical ballet, “Why don’t they just get taller girls?”) We keep seeing all sorts of opining about how, in order to reduce unemployment, we need to have a smarter, more flexible workforce to participate in the new information marketplace. And admittedly, the diminishing proportion of American-born graduates from US science and engineering programs is scary, and betokens a deplorable trend toward laziness and anti-intellectualism in our younger generations (see http://www.freerepublic.com/focus/f-news/1933011/posts).

But we cannot educate our way to full employment. Yes, more employers are requiring college degrees, or at least “some college” for new hires. Often the jobs for which such higher education is required are pretty much the same ones our parents got on the strength of a high school diploma and on-the-job training. But there are still plenty of jobs out there with just such requirements (see http://www.insidehighered.com/news/2009/02/23/stimulus). A lot of them get filled by immigrants, often at substandard wages. And a lot of the “information marketplace” jobs are also getting filled either by immigrants, or by foreigners with graduate degrees telecommuting from their home countries.

These uncomfortable facts tell us that the American employer’s current reluctance to “create jobs” for American workers does not just reflect the inadequate educational system that produces them. American business isn’t looking for better-educated American workers. It also isn’t looking for competent blue-collar American workers. In fact, it isn’t looking for American workers. Or at any rate, it isn’t looking for workers demanding a living wage in the American economy. It isn’t looking for workers who demand the American Dream: home ownership, two cars, health insurance, retirement benefits, paid vacations and sick time, and enough money to send the kids to college, all in the USA. The nice thing about hiring immigrants is that they will accept a lower standard of living. The nice thing about hiring foreigners in non-European countries is that they often have a lower cost of living.

The oncoming economic paradigm is that of the Arabian petroplutocracies, countries that have succeeded in pretty much abolishing poverty among their own citizens through oil subsidies. All this means is that they have had to import a whole population of poor people from other countries (mostly Asia) to do poor people’s jobs.

Even assuming that all American workers were both able and willing to undergo the education necessary to fit them for the “information marketplace,” doing so would not eliminate poverty. It would just put us in the position of Saudi Arabia, having to import poor people to care for our children, elders, and invalids, collect the garbage, deliver the mail, and clean the streets.

Aside from which, trying to educate our way out of unemployment requires too many unsustainable assumptions to be worth the trouble. Assume we had the resources to run the educational system that could produce universal scientific and technical literacy. Yeah, right. Assume that, even if we did, all American workers, young and old, were willing and able to achieve such literacy. Yeah, sure. The one assumption no one is even trying to suggest is: assume that American business were willing to pay their workers enough to live like middle-class Americans. In your dreams. We’ve already given up on the “family wage”—one which will enable one person to support an entire family in the style to which the American Dream once made us accustomed. Now we are expected to forfeit the “demi-family wage”—one which would enable two adults to provide for their family at that level.

Trying to turn the American workforce into what the “information marketplace” is looking for, assuming it were possible, would just give us a higher class of unemployed people. Malcolm X, of all people, once proclaimed in a somewhat different context that being the best-educated person in the unemployment line was a goal worth striving for, no matter how bad the job market might be. And there is considerable moral and philosophical validity to that approach. It may be what our children and grandchildren will have to settle for. But at the very least, we need to tell them that, however much education and competence will improve their daily lives, it cannot be counted on to raise their income, or even provide them with one in the first place.

CynThesis

Socrates and the Pig

September 2, 2010

I am about to say something scandalously heretical. If anybody reads it, I may be ostracized from polite society, or branded on the forehead with the letter C, for crank. Here goes: Education is a private act between consenting adults. It is nobody’s business except that of the student and the teacher involved.

Let’s begin at the end: “adults.” I am not saying that the teaching of reading, writing, arithmetic, and table manners ought to be available only to those aged 18 or over. On the contrary, these subjects, as well as at least one foreign language, ought to be taught to children as soon as the latter are neurologically capable of learning. They may never again be so capable. But that is not education, it is pedagogy. Regardless of who does the teaching and who pays the bill, it is a public activity, because it is the way we make children into citizens and rulers of a democracy. I do not endorse the way it is done now, but I do endorse doing it as a public enterprise, accountable to the children themselves, their parents, and all the other citizens of the community within which these children will ultimately function as co-citizens.

That enterprise is entirely different, and should be entirely separate, from what we now call “liberal education” or “college education.” The point of that enterprise is to turn citizens into thinkers. Not everyone wants or needs to be thus transformed. Not everyone is capable of it.

Citizens who have undergone that transformation are not thereby rendered any more useful to church, state, family, profession, or employer. On the contrary, they may become totally subversive of all of those institutions in their current condition. We have some abstract faith that such people may be useful in the long term to our culture as a whole or one of its basic institutions. So far, by and large, over the very long term, that faith has been borne out.

But that usefulness must not be confused with what an employer wants in a prospective employee. Usually, that boils down to (a) training in technical skills, and (b) the discipline to perform difficult and often boring work designed and assigned by someone else to fulfill someone else’s purpose. A good liberal arts education not only will not inculcate such skill and discipline, it may well deprive the student of any inclination or ability to acquire them.

At any rate, I am quite convinced that, except for the occasional child prodigy, no one should be admitted to college before age 21. This would, in the first place, dispose forever of the idea of a university being in loco parentis, and being required to police the sex lives and alcohol consumption of its students. Actual criminal behavior among college students should be dealt with like any other kind of crime. The university should be neither expected nor allowed to mediate with law enforcement authorities on behalf of its students. Rape is rape, drug-dealing is drug-dealing, theft is theft. Period. The other side of this coin, of course, is that matriculation at a university should not be conditioned on compliance with civil and criminal law, except to the extent that the university itself or individual students, faculty, or staff are victims of violation of that law.

In the second place, an adults-only admissions policy would almost certainly result in a student body whose members already had some experience of paid employment and some sense of a life and career path. They would have a context to plug their education into, and would know what they want or need to learn and why. Which should go far toward deterring cheating. What, after all, is the point of fooling a teacher into thinking you have mastered the material, if you are still going to have to learn it anyway?

So much for “adults”–as for “consenting”, no one should be required to attend college to get a decently-paid job. If an employer wants employees with a certain kind of training and discipline, that goal is best achieved by apprenticeship and on-the-job training, both conducted by the employer. Over the last century, business and industry discovered that they could shift the burden of providing such training onto colleges and universities, and the burden of paying for it onto the individual student or the taxpayer. Only very recently have they discovered that this system often produces untrained or badly-trained employees, and that sooner or later the employer will still have to pay pretty much the same cost to train them. They are now experimenting with various “partnerships” with local public and private universities, and with tuition reimbursement for their employees. Clearly, none of this is liberal education, though all of it may be useful to employer, employee, and educational institutions. I would be comfortable with this arrangement, if the institutions participating in it did not call themselves colleges and universities. Why not “Institute of Vocational Education” or “Generic Apprenticeship”?

Which brings us to “private”: anything an institution–whether or not it calls itself a university or a college–does to prepare its students for employment outside academia, is not liberal education. Anything an institution does for which it has to account to the student’s employer is not liberal education. Any course that shows up on a transcript which is sent to any non-academic institution is not liberal education.

One of the more interesting corollaries of this rule is financial in nature. Virtually all of the inflation in the cost of “higher education,” like most of the money students pay in tuition, goes for things that are not liberal education. Liberal education, by itself, is pretty inexpensive. I found this out at first hand some years ago, when I took up “auditing” university courses that interested me. I discovered that, in most universities, a student can attend all the classes and labs in a course, can even take the tests and write the papers, for a minuscule fee, or sometimes none at all, so long as s/he does not expect the university to verify that s/he has engaged in these activities. A little simple arithmetic will demonstrate by extension that an enterprising autodidact can acquire all the knowledge and thinking habits of a college graduate for slightly less than the cost of an elderly used car, so long as s/he does not demand any credential verifying for third parties that s/he has done so. It follows that most of what the student pays for in a college “education” isn’t the education, it’s the credential. Liberal education, without that credential, is probably one of the best–and best-hidden–bargains around.

It is this credentialing function that provides most of the university’s funding and forges most of its ties with the Powers That Be. Jacques Barzun has stated that the student uprisings in the ’60s against the draft and the Vietnam War were misdirected, in that the evils they opposed had nothing to do with the university. But those universities were regularly providing the Selective Service System with information about the enrollment and progress of all of their male students, in order to verify their eligibility for student deferments from the draft. A real “institution of higher learning” would never have considered that information to be anyone else’s business. It might have agreed–on the specific request of an individual student–to forward such information. It would never have made a universal and regular practice of doing so without even asking the student. Today, of course, there is no student deferment, and no active draft. But male students must still register for the draft to be eligible for financial aid from the federal government, and, often, even from a university’s “private” resources. Of course, most of the need for that financial aid is generated by the university’s credentialing function anyway. But no real “institution of higher learning” would condition its scholarships (an admittedly anachronistic locution) on compliance with a federal regulation totally unrelated to the purposes of education.

Similarly, academicians who like to think of themselves as old-fashioned ivy-covered curmudgeons decry organized student opposition to military and CIA recruitment on campus as “politicizing academia.” But no real “institution of higher learning” would allow such recruiters–or any other would-be employers of its graduates–onto its campus in the first place. That is the real politicizing–and commercializing–of academia.

Other aspects of a university’s existence are necessarily political and economic, just like parallel elements of the existence of any organization, group, or individual. Those elements may be larger than they would be without the university’s credentialing function, but even the purest liberal arts institution would still have to own or rent property, invest its funds, and deal with local government like any other resident. All of those decisions have political implications–it is no less political to invest funds in a bank which invests in South Africa than to pull one’s funds out of such a bank. Probably a properly-sized liberal arts institution wouldn’t have enough money to be worth investing on a larger scale than in a local savings and loan or credit union. But that too is a political decision, as far as it goes.

Similarly, it is the university’s credentialing function which has made the grading system what it is today. If we assume that education is a private matter between consenting adults, the student certainly has a right to feedback from the teacher on how well s/he has mastered the material. But no one else has a right to that information unless the student chooses to communicate it. This, of course, eliminates all incentives to cheating on the student’s part and grade inflation on the teacher’s, other than aberrant individual psychodynamics.

The person who has completed a real liberal arts course of study has a right to certification of this achievement by the teachers who have facilitated it. This certification should be mostly for the student’s benefit, and partly for the benefit of similar institutions which may wish to retain him/her as a teacher. Aside from that, such certification should be totally useless, and possibly even counterproductive, in the “real world” (depending on the current state of the “real world.”) The liberal arts graduate might even feel obliged to conceal his/her achievement like a secret vice. (To a certain extent, this already happens. Our college “business departments,” like our law offices, are full of closet poets and novelists who take business and law degrees for protective coloration.)

Obviously, a system of real liberal arts education could not promise lucrative employment or opulent working conditions to its teachers or its graduates. It would, in fact, be a throwback to its monastic origins, vow of poverty and all. The current system at least allows a few students and teachers to sneak some of “the best that has been thought and said” into their generic apprenticeship, while still making a living wage. On the other hand, the business of training employees for business and industry, freed of any pretense to academic detachment, would turn into a commercial West Point, as devoted to indoctrination as to information. (Yes, I know that’s not fair to West Point, whose liberal arts faculty is pretty good by even the strictest standards, which is saying a lot for what started out as an engineering school.) Probably the pressure-cooker business schools of the Far East have already laid out that path.

The point of a liberal education, really, is to figure out whether there is any reason for doing anything, other than the pursuit of money or power. If there isn’t, then West Point and business school make perfectly good sense. For some reason, we aren’t quite comfortable with that. Maybe what we really need to do is figure out how much we need to leaven vocational training and business indoctrination with more traditional learning, and why we would consider that a good idea. Why do we feel obliged to pretend to be transmitting a liberal education to people who mostly don’t want it, and whose employers actively oppose it? Hypocrisy is the homage that vice pays to virtue, according to La Rochefoucauld; the pretense of liberal education is the homage that vocational training pays to the Life of the Mind. The Life of the Mind, in turn, enables us to figure out why vocational training is not enough.

Jane Grey

Lunch Ladies at the Killing Fields

July 29, 2010

Our local public radio station did a piece this morning about a public school in Chicago that, because of the students’ failure to achieve standardized testing goals, is being subjected to a process known as “turnaround.” Turnaround involves firing all of the school staff, allowing them to reapply for their jobs (with the understanding that re-hiring is going to be very rare), and starting from scratch for the next school year.

This all sounded at least arguably promising to me at first. Okay, can the principal. Can the teachers, and everybody in between. They have, apparently, failed at their job. Time to try somebody new.

But this procedure is being applied, not only to the professional educators whose job it is to help the students meet their testing goals, but also to security officers, custodians, and lunch ladies.

Asked why this draconian procedure is being applied to people who have no official responsibility for student test results, a Chicago Public Schools bureaucrat responded that it was necessary to change the entire “culture” of a failing school, from top to bottom, in order to get better results. Too many people left over from the previous culture meant the school would end up right back where it started.

What, I wondered, do janitors and lunch ladies have to do with a school’s “culture”, much less with test results? I spent much of today trying to figure out what this reminded me of. It kept nibbling at the edges of my mind like a mosquito. On my way home from the office, it hit me. That’s how utopians think. B.F. Skinner, in Walden Two, says someplace (or one of his characters does, anyway) that the flaw at the base of his colony is that its founders were not raised in Walden Two.

A lot of the early socialists and communists said the same sort of thing. Some of them even acknowledged that, for that reason, they would not be able to create the “New Socialist Man” [sic] in a single generation, but would have to keep working toward him [sic] by a long series of approximations. Pol Pot and his buddies found a faster way—wipe out as many as possible of those who had been raised in traditional Cambodian culture or any of its westernized and industrialized variants. You can read and write? You wear glasses? You wear shoes? Off to the killing fields with you.

Or, more logically, we don’t really know what makes a culture of exploitation, or a culture that causes children to fail in school. So the only shot we have at changing it is to change ALL of it. So far, amazingly, nobody is talking about burning down the school buildings. Probably that’s because firing lunch ladies and replacing them (probably with, as the radio piece pointed out, lunch ladies who had been fired from other failing schools) is cheaper than demolishing buildings and reconstructing them. Arguably, buildings have a lot more to do with the “culture” that happens in them than lunch ladies. But, clearly, these guys have not thought this through.

CynThesis

The Zapping of the American Mind

March 15, 2010

In Chicago some years ago, a controversy arose within the police department about its normally energetic and successful drive for employee contributions to the United Way. That year, many police officers decided not to make their usual generous donations. The cause of this mass defection was a United-Way-funded program making an attorney available by telephone to consult with people who have just been arrested for any misdemeanor or felony. The time frame in question is the interval between arrest (and the concomitant, mandatory recitation of the Miranda warnings in the suspect’s primary language) and the arraignment/bail hearing. And the purpose, according to the proponents of the program, was to make sure the suspect really understands that he has the right to remain silent.

Many police officers were outraged at a program which virtually guaranteed they would get no information from criminal suspects. Since that is precisely the point of the Miranda warnings–with which the police have coexisted more or less comfortably for over forty years–I find it hard to sympathize with them (although, of course, they have every right to choose not to fund such a program).

But as an English teacher, I am deeply concerned by the necessity of such a program. Face it, by the time an offender is old enough to be tried as an adult (even in Illinois, which sometimes allows such arrangements for 12-year-olds), he has probably heard the Miranda warnings recited at least fifteen hundred times on prime-time cop shows and in movies, quite aside from any occasions on which he may have heard them “live” in the course of personal encounters with the law. If he still needs a lawyer to tell him that “you have the right to remain silent” means “shut up until your lawyer gets there,” that says something very disturbing about the way the average American processes information.

This problem arises in plenty of places other than police stations, and is not confined to poorly-educated people with scanty knowledge of standard English. In one of my college classes, an intelligent, literate student who speaks standard English with no accent whatever barely escaped disaster on my one-hour midterm exam. The instructions indicated clearly that questions 1 and 2 were 15 minutes each, and question 3 was 30 minutes. The student told me afterward that she read the instructions carefully (which I always remind them to do), including the time limits–and then spent more than 50 minutes on the first question.

In my other incarnation, as an attorney, I once had a client who promised to make a substantial payment on fees he already owed me, if I would represent him in a pending hearing. After the hearing, I asked him when I could expect the payment. He said something like “I won’t be paying that.” I asked him whether he recalled making the promise–he did–and then I asked him what he had meant by it. “I don’t know what I meant,” he replied. Now, that may have been only his rather ungraceful way of avoiding admitting that he had made that promise solely for the purpose of inducing me to represent him when he really needed it, and had never had any intention of paying me. Which, however disturbing it may be, is a problem in ethics, rather than in processing information.

But I have seen too many parallel cases to believe that. The problem–which probably has some Greek-root neurological name known only to Oliver “The Man Who Mistook His Wife for a Hat” Sacks–is not a perceptual disability, but the inability to allow information to influence behavior, even in the most crucial situations (like that of the recent arrestee in the first example.) Socrates, who held that all evil results from ignorance, would be dumbfounded.

Or is it inability? Is it perhaps a habit, long-hidden from consciousness and therefore almost impossible to break, a habit of resisting the impact of information on our behavior? Is it, perhaps, a necessary but overused defense mechanism arising out of a culture in which we are bombarded with constant demands that we stop, go, don’t smoke, see, hear, visit, buy, above all buybuybuy? Is ignorance the last refuge of the free mind? That would certainly explain the fact that most Americans–even highly educated intelligent people in intellectual occupations–will not admit to having learned anything in high school. And indeed, most of the first two years of college in this country (unlike the rest of the world), for all but the smallest elite, consist of a hasty review of what the students were taught in high school, precisely because they either did not learn it or felt obliged to forget it immediately upon receiving their diplomas (in rather the same way the Pythagoreans postulated that the soul, passing from its previous incarnation into rebirth, was required to forget everything it had learned in its previous lives).

There is similar resistance to allowing oneself to be influenced by religious objurgations and political speeches. (Indeed, the willingness to actually pay attention to “preaching” and change one’s behavior as a result is often regarded as proof positive of having joined a “cult.”) Jurors regularly ignore judicial instructions (though studies indicate that may really be a problem of comprehension), and many of them also ignore evidence.

Unfortunately, of all of the sources of information in our universe, advertisers actually seem to have done the best job of circumventing customer resistance, apparently by casting their message as much as possible in terms other than informational. They try to provide either a non-cognitive esthetic experience which leaves the customer with a good feeling about the product, or a non-cognitive bonding experience which–especially among young male customers–builds loyalty. Information is not only irrelevant to those kinds of messages, it actually gets in their way.

Another source of such resistance may be the American legal system, with its proliferation of unenforced and often unenforceable laws. “It’s none of anybody’s business how fast I want to drive!” one respondent was quoted as saying in a newspaper article on people who drive 55 mph in the left lane (Chicago Tribune, Section 2, p. 1, 5/1/95). “It is…judgmental to decide how fast another driver should or should not travel. If and when I am stopped for speeding, I have no quarrel with receiving a ticket….However, I don’t appreciate another citizen justifying traveling just the speed limit in the passing lane in order to keep my speed in check.” Another respondent said “If I choose to risk a ticket by traveling at a more efficient rate of speed, the only people who need be concerned are myself and the local state trooper. To those who mistakenly believe that they are in danger simply because I am going faster than the arbitrarily-set speed limit, all they need to do is move over. To those who feel it is their job to keep me within the limits of the law, butt out!”

If, for instance, Chicago’s ban on downtown street parking (in effect for the last fifteen years) had been enforced, there would be no need for the various physical barriers erected outside the federal building after the Oklahoma bombing to prevent car bombings. Similarly, there have been numerous cases of a legislator proposing a criminal statute, only to find out (usually from his embarrassed research staff) that the conduct it would penalize was already forbidden by another law currently on the books but long forgotten. The NC-17 movie rating (not a law, but a voluntary regulation of the film industry) essentially means “R–but we really mean it this time!” If they had really meant it last time, it would be unnecessary. We try over and over again to command changes in behavior, and the only result is the piling of one ineffective prohibition on top of another (something the behaviorally savvy Jewish tradition specifically forbids, by the way. If you are going to eat bacon, you don’t have to have the pig ritually slaughtered.)

“Preaching,” “scolding,” and “nagging” are the words we use for any kind of discourse intended to change our behavior when we don’t want to change it. But ultimately, all information gets treated like “nagging” by most people most of the time.

And, as noted earlier in the case of the Delinquent Defendant (the Cashless Client?), this resistance to information affects not only how we deal with what we hear, but also what we say and how (if at all) it relates to what we mean. If I will not change my behavior because of what I hear or read, I also won’t change it because of what I say, nor will I expect you to pay any serious attention to what I say. (In the words of the old song, “How could you believe me when I told you that I loved you, when you know I’ve been a liar all my life?”) Probably the most outrageous example of this phenomenon in recent legal history was a case in Juvenile Court in Cook County, Illinois about twenty years ago. The state’s child welfare agency sent two neglected children who were in its wardship to an out-of-state foster home, and then made virtually no attempt to oversee their care. They kept filing reports, though–based on absolutely no information–that the children were doing well. A couple of years later, one of the children died as a result of abuse by the foster family, and the other was hospitalized with severe injuries. The office of the Public Guardian sued the child welfare agency for gross negligence in failing to check on the children regularly and report accurately. The child welfare agency responded by challenging the right of the Public Guardian’s office to bring the case, on the basis of a conflict of interest, because the Guardian’s office had believed the reports! (The Court didn’t buy the argument, fortunately.) We seem to have accepted all too readily the oriental maxim “Fool me once, shame on you; fool me twice, shame on me.” Anybody fool enough to believe anybody about anything (even once) deserves to be deceived, exploited, and railroaded. The deceivers are merely doing business as usual and cannot be held responsible for the consequences of the behavior of others who choose to believe them.

I have spent a fair amount of time in class explaining the legal consequences of various kinds of mendacity in the media, and my students have no trouble grasping that what Rush Limbaugh says about Hillary Clinton is probably libellous and what Ronco says about the Vegematic is probably fraudulent. But most of them have real trouble grasping why it matters. “Of course people lie on television,” they tell me. “That’s what television is for.” Which is a close relative of the old joke about how you can tell when (name your favorite crooked politician) is lying–“His lips are moving.” The medium is the utter absence of any reliable message. Orwell’s prediction of Newspeak–the total corruption of language to make it a vehicle of political control–turns out to have overshot the mark. In our efforts to avoid Newspeak, we have turned American English into Nospeak–a vehicle of nothing at all. To choose to believe any communication, and modify our behavior in accord with it, is a tremendous and terrifying leap of faith, which most of us make once or twice in a lifetime, at most.

What, if anything, can a teacher do to breach the barrier between perception and behavior? How do we get across to students hardened against “nagging” the antiquated notion that letters have sounds, words–and combinations of words–have meanings, and it really matters whether you say “uninterested” or “disinterested,” “lie” or “lay”? Does repetition do it? (Probably not, or they’d come out of high school knowing a lot more than they do.) Is there a way to slip under the barrier, using the techniques of advertising? This is precisely what “Sesame Street” has done, with pretty good results. Can an individual teacher, without the high-powered special effects and resources of a national television show, do nearly as well? Or can a more subtle esthetic approach quietly dissolve the barrier, without the student even realizing it? I have known this to happen, often under the influence of poetry (either reading/hearing it or writing it.) “To every door,” says the Talmud, “there are many keys, but the greatest key of all is the ax.” Or do we just keep throwing out bits of information and hope that some of them stick? The barrier is almost never completely impermeable (that way lies autism), but the things that penetrate it are likely to be an oddly-assorted and not especially useful conglomeration. Merely throwing out as much high-quality information as possible in hope of raising the quality of the total mix is too haphazard to be satisfying to most teachers (or for that matter, preachers, writers, poets, and politicians.)

Most commentators who predict the demise of literature, or of a particular literature or language, have done so largely in hope of getting credit for single-handedly reviving it. The current generation of English teachers and professional “naggers” has no such high-flown expectations. Most of us would be happy if high-school graduates would remember their sixth-grade grammar long enough to complete a sentence whose subjects and verbs come out even. We pity the high school teacher whose job seems to amount to painting the Mona Lisa on sand. We try to talk faster, so as to keep our instructions within the limits of our students’ current attention span, but we are not yet capable of being the “One-Minute Teacher,” and it’s probably just as well. Long ago, an unnamed talmudic wise guy asked the scholar Hillel to tell him the whole law “while I stand on one foot.” Hillel actually had an answer: “What is hateful to you, do not do to your neighbor. That is the whole law; everything else is commentary.” But he could not resist adding, “Go and study it.” Presumably sitting down.

Jane Grey

“And Such Small Portions…”

October 3, 2009

Obama wants to lengthen the school year and the school day. Probably the school week is the next target. Like the educational experts who have been suggesting all this extra time for the last twenty years, he has several different rationales. First, of course, the original school year was set up for an agrarian society in which the kids had to be home during the summer to tend to the crops. That’s quite true, and makes the classic June-through-August vacation an anachronism. Moreover, the data are pretty clear that such a long time out between school terms really does cause most kids to lose some of the learning they had accomplished the previous spring by the time they come back in the fall. So the longer school year has some solid facts behind it.

But, as the AP article points out (http://news.yahoo.com/s/ap/20090927/ap_on_re_us/us_mor), a longer school year doesn’t have to mean a longer school day: “Kids in the U.S. spend more hours in school (1,146 instructional hours per year) than do kids in the Asian countries that persistently outscore the U.S. on math and science tests — Singapore (903), Taiwan (1,050), Japan (1,005) and Hong Kong (1,013). That is despite the fact that Taiwan, Japan and Hong Kong have longer school years (190 to 201 days) than does the U.S. (180 days).” And AP isn’t even looking at European nations that start kids in school later (in Denmark and Sweden, as late as age 7) or run shorter school days than ours (in France, 8:30 AM to 2:30 PM with a 2-hour lunch break until age 11), yet still have better literacy and graduation rates than we do. Whatever it is that American schools are doing in the classroom, it is not at all clear that more of it would be helpful to the students. It sounds too much like the irate restaurant patron who complains that the food is terrible and the portions are too small.
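
To make the per-day arithmetic explicit, here is a quick back-of-the-envelope sketch in Python (a rough illustration only: the U.S. figures come straight from the AP quote above, while the individual day counts assigned to Japan, Taiwan, and Hong Kong are my own assumptions drawn from the 190-to-201-day range the article gives, not numbers the article breaks out):

# Rough hours-per-school-day comparison, using the AP figures quoted above.
# The U.S. numbers (1,146 hours over 180 days) are from the quote; the other
# day counts are assumed values within the article's 190-201 day range.
systems = {
    "U.S.": (1146, 180),
    "Japan": (1005, 201),      # day count assumed
    "Taiwan": (1050, 195),     # day count assumed
    "Hong Kong": (1013, 190),  # day count assumed
}
for name, (hours, days) in systems.items():
    print(f"{name}: {hours / days:.1f} instructional hours per school day")

However you slice the assumed day counts, the U.S. comes out around 6.4 hours of instruction per school day and the others closer to 5: more days, shorter days, which is the opposite of lengthening the school day.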

In fact, anyone who has worked with adult literacy programs can verify that it shouldn’t and usually doesn’t take anywhere near 12 years–even with current summer vacation and school day schedules–to teach what most high school graduates come out knowing. Note also that summer vacation has already suffered considerable abbreviation in many school districts. The whole idea of Labor Day as the last long weekend of summer (meaning, presumably, children’s summer vacation) now makes no sense at all. Almost all public schools start up around the middle of August these days, and many don’t shut down for the summer until nearly the end of June. (Which is a good argument for moving Memorial Day into late June and Labor Day into early August, but I digress.)
Compressing summer vacation even further–that is, lengthening the school year–may make sense, but lengthening the school day really doesn’t.

In fact, the justifications for a longer school day are entirely different from, and a lot less plausible than, those for a longer school year. Obama (and most of the other people one hears declaiming on the subject) is mostly concerned, not about education, but about safety. “Those hours from 3 o’clock to 7 o’clock are times of high anxiety for parents,” [Secretary of Education Arne] Duncan sa[ys]. “They want their children safe. Families are working one and two and three jobs now to make ends meet and to keep food on the table.”

Yes, those safety concerns are valid. But what do they have to do with school? Why should professionally-educated adults who should be home grading today’s papers and preparing tomorrow’s lessons have to function as baby-sitters to keep Johnny from getting shot? That’s the job of recreation directors and supervisors, or at worst, of cops. And even more to the point, why should kids have to be on task and programmed for 8 hours a day, just because their parents are? At that point, the arguments against child labor start to fade into insignificance. When do the kids get to just “chill with their friends”? Or have we already accepted the premise that an unsupervised, unprogrammed child is a child at risk of crime, sex, drugs, or obesity, and that the only way to save our children from these dread fates is to subject them to the same scheduling that has already shredded the emotional and physical health of their parents and destroyed the family life that used to keep the children safe?

But, now that we have decided that all more or less able-bodied adults must spend at least 40 hours a week being paid to work for somebody else, we have also decided that anybody who looks after children has to be paid. For more recent data, see:
http://www.google.com/hostednews/ap/article/ALeqM5intayNnUex0u2p2aPfl95SZitE9QD9B15QLO0 and
http://news.bbc.co.uk/2/hi/uk_news/8277378.stm

And, no, none of this has anything to do with feminism. It is mainly connected to stagnant wages and rising fixed living costs such as housing, health care, transportation, and education. Most stay-at-home mothers don’t view themselves as having chosen to stay home. See: http://www.chicagotribune.com/news/chi-census-momsoct02,0,3742466.story for why most stay-at-home mothers are younger, less educated, and have more children than the rest of the female population, so that their prospective earnings are lower and their child care costs would be higher.

I mention these stories because they have all hit the Web in the last 24 hours. But they do a lot to prove my point that the main reason we really want our kids to spend more time in school is that it’s the cheapest way to free parents to put in 40+ hours a week earning money. Once cryonics has been perfected, we can keep our kids in a deep freeze until a couple of years before they are old enough to work. We can spend the intervening time teaching them what today’s high school graduates know, and go on from there. O brave new world, that hath such people in it!

CynThesis

Things the Bible Would Have Said if the Author Had a Better Quote Book

September 8, 2009

Warning: this is yet another rant from Jane Grey on people who cite the Bible without bothering to read it.  If you’re not in the mood, go buy some popcorn.

That Other Blog Over There just attributed “hate the sin but love the sinner” to Jesus.  The Other Blogger Over There is usually much more biblically literate than that.  A swift resort to Google tells us that nobody knows who really said it first, but everyone who bothered to check it out reports that it is not to be found anywhere in the Bible.  Which is consistent with my own research.

“God helps those who help themselves,” OTOH, is definitely Ben Franklin.  “To thine own self be true” is definitely Shakespeare.  “With malice toward none, with charity for all” is definitely Lincoln. All of them have, at one time or another, been attributed to the Bible.

The Bible, similarly, says absolutely nothing about abortion, and nothing directly about same-sex marriage.  And everything it says about homosexuality, it says in paragraphs adjacent to pronouncements about adultery, for which it recommends essentially the same punishments (except for the Sodom and Gomorrah story, which can be read several different ways, and which Jews and Christians in fact do read very differently.  The traditional Jewish reading of the story sees the Sin of Sodom as powerful people doing it to powerless people, rather than men doing it with men.)

The finer points of modern textual criticism enable us to determine that, even if all that stuff about wives submitting to their husbands is in the Bible, it wasn’t really Saint Paul who said it, but some cheap knockoff, which is kind of nice.  And, while ignoring Revelation may be easy for us Jews, we don’t get off that easily from looking at Daniel, which was in fact one of the sources of Revelation.  (Arguably, Revelation is a cheap knockoff of Daniel, in fact.)

But then, one of my dearest friends, of blessed memory, once talked a Jehovah’s Witness missionary off his doorstep by quoting scripture at him in English and Hebrew until the poor guy gave up.  Let’s hear it for a little learning (not, BTW, a little knowledge.  See Pope’s “Essay on Criticism.”  Not the Bible.)

Jane Grey

More About Health Care, or Grist for the Ill

August 17, 2009

First of all, some senator, whose name now escapes me, says the “death panels” are a bad thing because doctors shouldn’t be doing end-of-life counseling anyway; that’s the job of Jesus Christ and your minister.  Well, aside from the fact that there ARE no “death panels,” and that many Americans are not Christian and therefore do not look to Jesus Christ for anything connected to the end of their lives, he actually has a point.  We SHOULD be making these decisions in conjunction with our imam, rabbi, high priestess, or pastor, if at all possible.  These guys may not know exactly what kind of life-extending treatment is available, but they certainly know their way through an ethical conundrum. That’s what they DO.  My father, of blessed memory, wrote a living will with the help of his pastor, who witnessed the document.  I relied on it during Dad’s final illness.  Despite the intrinsic sadness of the situation, I was enormously glad to have the document in front of me while dealing with the hospital.  Dad was, admittedly, much better than most people at advance planning in all areas of his life, which made life a whole lot easier for me and my brother.  But note that he didn’t ask his doctor about this stuff; he asked his pastor. And the pastor, relying on the “no heroic measures” language of the pre-Vatican II Catholic church, advised no resuscitation and no artificial ventilation, more than twenty years ago, long before it was a hot topic in political circles.

I think ALL religious organizations should be educating their clergy (and laity, for that matter) about their particular views on end-of-life care, and encouraging people to consult their clergy about these issues.  If they’re not good for that, whatthefrack ARE they good for?

Jane Grey