Archive for the ‘community’ Category

Old Age Should Burn and Rave at Close of Day

October 7, 2011

Back when I was an English teacher, one of the best writing tips I gave my students was to write the last paragraph of an essay first, and the first paragraph last. Remember Benjamin Button (recently played by Brad Pitt)? The guy who was born old, grew younger every year, and finally faded into infancy and unborn-ness? Maybe that's how most of us live. The teenage brain has no sense of the long term. With most of his three-score and ten years still ahead, the teenager lives as if there were no tomorrow. Developing a sense of the future, and then the ability to plan for it, is the project of young adulthood. Some of us do it better than others. But by the time we reach the post-householder age (as the Hindus define it), there really are very few tomorrows left, yet we live as if our future were completely fixed and defined.

Admittedly, the post-householder age is not what it used to be. There's a lot more of it. When German Chancellor Bismarck, back in the 1880s, introduced government-funded old-age pensions, he set them up to begin at age 65—because by that age, the average German citizen was dead. Today, Social Security coverage would have to begin at 80 to accomplish the same goals. And most of us are still in pretty good health until shortly before death. The average American spends half of his total lifetime medical expenditure in the last six months of life. For the 14.5 or so years before that, most of us are in pretty good shape.

So here we are, we older Americans, with 14.5 years of able-bodied life ahead of us, free of workplace obligations, educated by experience to know which way the wind is blowing without the aid of a meteorologist, and, often, more economically secure than we have ever been before. “The last of life for which the first was made,” as Robert Browning presciently called it. We are the natural revolutionary cadre. We can no longer leave it to the college kids, who are overworked and economically terrorized, desperate to build a future they cannot imagine. We have the security. We have the education and experience. Above all, we have the time.

What we don't have is a romance of revolution. The Arab Spring is rooted in societies where the median age is 30 or under—pretty much like "the '60s" in the US and Europe. In 1966, Time magazine named the youth of the '60s its "Man of the Year." We still think of revolution as the task of youth. That's a luxury our country as a whole can no longer afford. The median age of the American population is now close to 40. And everybody under 60 or thereabouts is expected to be either working for pay, or trying to find work. We geezers and crones are the only people allowed to do anything else useful with our time, and even provided with the necessities of life while we do it.

My brief perusal of the coverage of Occupy Wall Street in New York and its parallel protests in other cities tells me the media see the protesters as “students” and “youth.” I’m a bit skeptical of this depiction. Back in The Day, I spent a fair amount of time in protests and rallies myself. I was at the time an English teacher, respectably married, and generally went to such events wearing skirt, blouse, and jacket, hose and shoes. Most of the people I hung out with were similarly employed and attired. But we never showed up in the coverage. Had there been one single long-haired scraggly hippie among us, he was, invariably, the one who would turn up on the evening news. So if the media want to define this round of protests as completely youth-oriented, mere facts won’t stop them. And if the Raging Grannies and the Gray Panthers and our other age-mates happen to be turning out in respectable numbers, we will probably be operating under cover of media-generated ignorance for at least the first year or two, and that may be just as well. Invisibility is a useful tool and an excellent weapon. Let’s hold off on public Dodder-Ins for a while yet.

In the meantime, I have just managed to get some assistance with taking care of Mr. Wired, so I will be able to spend more time practicing law and getting to Shabbat services. This is going to be an interesting year. Peace and light to you all.

Red Emma

Original Sin and the Market Economy

June 20, 2011

Many years ago I used to work with a Jesuit, who told me once that after his first month of hearing confessions, he no longer believed in original sin. "Nothing original about it," he told me. "Just the same thing over and over." We know, from centuries of observation, that the market economy is basic to human nature. Put a bunch of people on a desert island, and within a generation, if not less, they will be buying and selling in complete obedience to the law of supply and demand. Every experiment in non-market economies that post-industrial humanity has tried—from the communes of Oneida and Amana to the Soviet Union—has dissolved into its fundamental market essence. The only partial exception is the Israeli kibbutzim, and they merely replace the individual, as a player in the market economy, with the group.

It has become fashionable to conclude from this factual situation that the market economy is not only natural to the human personality and society, but a Good Thing, as Sellar and Yeatman (authors of 1066 and All That) would say. In fact, that is a wholly separate question.

Virtually every religious tradition recognizes that human nature is flawed. Many of the things that are natural to the human being are Bad Things. The depth and reparability of the flaw may be defined differently in different theologies, but even the most optimistic—that of Rousseau, for instance—cannot escape the reality that this fundamentally good, free being has somehow managed to produce a society everywhere that puts people in chains. Even those who have defined our world as the best possible do not necessarily believe it is good. So why do we so optimistically list the market economy among the good things humanity has invented (like indoor plumbing and the smallpox vaccine) rather than the obviously bad things (like war and torture) or even the dubious achievements (like the beeper, the boom box, and the singing commercial)?

I suspect the answer is that the people most interested in encouraging the acceptance of the market economy as a Good Thing are those who profit from it the most—who can also, by definition, afford the biggest PR budget. They can even afford to disguise a large portion of that budget as subsidies to academic and managerial research.

From the point of view of, you should pardon the expression, the Judeo-Christian tradition, the undiluted market economy is nothing short of an abomination. One of the central premises of that tradition is the ultimate importance of every human being, made in the divine image and likeness. The market economy takes the position that everything is either a commodity, with a market value, or worthless. The market value of a person is defined by what s/he owns or by what s/he can produce for other people. Those people who own nothing and can produce nothing (because they are too young, too old, too sick, or lacking any marketable skills) are worthless. The undiluted market economy, as adumbrated by Ayn Rand, for instance, has no room for them. The minute such an economy feels any strain of scarcity—in a war, for instance—it will dispose of them, or at any rate do nothing to keep them alive. A pure market economy whose members feel affluent at the moment may continue to support these “useless mouths”, either from force of habit or sentimentality or most likely from the diluting influence of some non-market theology, whether it be what we normally recognize as religious, or one of the more secular theologies of socialism or communism. But we define that generosity as a luxury, a failure in our otherwise clear-headed realism.

However, our society (like almost every society on the face of the earth) still likes to think of itself as being based on non-market religious or quasi-religious values. Like just about every other society, except some of the socialist- and communist-related ones, it is not willing to recognize its religious tradition as being anti-market, even in part. It is not willing to recognize even the possibility of conflict between its supposed moral foundations and its economic system, much less to try to explicate a right relationship between the two.

That's a relatively new problem. In the Middle Ages, all the major Christian theologians had something to say about the right conduct of economic affairs. So, by the way, did the eminent Muslim theologians of the same period. So did the rabbis who compiled the Talmud, and their successors who commented on it until fairly recently. We now consider that kind of discussion fundamentally illegitimate, or at any rate pointless. Marx and Lenin are largely responsible for this problem. "Scientific socialism" means that economic systems no longer require moral underpinnings, any more than the law of gravity does. What the followers of Marx and Lenin seem to have missed is that the scientific laws of economics (whatever they may be) can be misused by bad people just as the law of gravity was misused by the people who dropped Jan Masaryk out the window. And science has no argument to offer against this misuse.

In fact, whether Marx and Lenin were conscious of it or not, the "scientific socialism" they formulated was a theology, a statement of how things ought to be rather than how they are. By refusing to recognize that reality, they undermined the legitimacy of their own ideas, and of any moral critique of the natural behavior of human beings and human societies.

We need to take up that hallowed task again, from the point of view of the tradition with which I am most familiar, the Jewish tradition. Since our society still pays lip service to the "Old Testament" as part and parcel of the Christian tradition to which it also pays lip service, much of this analysis is at least consistent with the official values of the larger society.

So let’s begin with Genesis I, 27: “in the image of God He created them…” The creation narrative has always been interpreted in the Jewish tradition to point to the infinite significance of every human life. One of the reasons God began the creation of the human race with a single couple, we are told, is to emphasize that saving a single life is equivalent to saving the whole world. The rabbis tell us that we are all descended from Adam so that we cannot tell each other “my ancestors were greater than yours.”

From there, we can look at the Book of Deuteronomy, with its extensive social legislation. It makes two main social policy points: 1) no property is permanent. Slaves are to be liberated every seven years. The Jubilee, every fifty years, requires the return of land to its original owner and the forgiveness of debts. What we call ownership is more of a long-term lease from the real Owner of everything—God. And 2) several classes of people are under special divine protection: the widow, the orphan, the stranger, the poor, and the Levite. What they have in common is that they do not—cannot—own land, which, in an agrarian society, is the only link to the means of production. Lacking that link, they are given instead certain divinely-legislated rights: in the case of the Levites, to tithes and sacrifices; in the case of the widow, the orphan, the stranger, and the poor, to tithes, gleaning, and the leftovers of the olive and grape harvests, as well as the right to be paid before sundown on the day their work is performed. Arguably, this was the first affirmative action legislation. The author recognizes that it is not natural for human beings or human societies to grant these rights to their least fortunate, most "worthless" members. So the book drives its legislation home with dire threats of the penalties for disobedience—military defeat, exile, drought, famine, plague, and disorder.

This theme recurs among most of the major prophets—Isaiah, Jeremiah, Micah, Amos. They inveigh against “doing what comes naturally”, whether the lawless behavior in question is sexual, culinary, juridical, or economic. The whole point of being the Jewish people is not “doing what comes naturally” in any of these areas. That is our side of the Covenant. The Other side is that, if we violate our obligations, we will suffer exile and destruction—but eventually (after we clean up our act) we will be forgiven and restored.

The Jewish tradition has its own way of talking about "doing what comes naturally." We call it the Yetzer ha-Ra, the evil impulse. Some Christian theologians equate it with original sin. But the rabbis are a lot more pragmatic. "If it were not for the evil impulse," we are told, "no one would ever get married, produce children, do business, or build a house." In short, "doing what comes naturally" is essential to individual and collective human existence. "Natural" economic behavior—the market economy—is one of the essential things the evil impulse enables us to do. But the evil impulse has to be carefully controlled, by the Yetzer ha-Tov, the good impulse, and by the divine commandments. Including, of course, the commandments protecting poor and powerless people.

In short, the impulses that power the market economy are no worse than sexual libido, but they are also no better. The market economy does an efficient job of creating and distributing wealth among the people who can afford to participate in it. It does a terrible job of providing for people who can’t afford to participate. Which is fine, as long as we have some other way to provide for such people. As long as we do not make claims for the market economy beyond its real competence.

Today's discussion of welfare revolves around the words "work" and "responsibility." They are the shibboleths of both sides. Nowhere in the rhetoric of either side do we hear words like "the widow, the orphan, and the stranger," still less any intimation that people in these categories have a divinely-legislated right to our support. Nowhere do we see any recognition, on either side, that some people—the very young, the very old, the disabled, the unskilled—cannot reasonably be expected to work or bear responsibility for their own support, and nonetheless have a right to live and therefore a right to our support. We have allowed the market to dictate not only our economy but our morality. We would be better off deriving our morality from Newtonian physics (which tells us that what goes around comes around), the kindergarten code (take turns, clean up your own mess, don't hit), the game of checkers (as enunciated by the Baal Shem Tov—make one move at a time, always move ahead rather than backward, but when you get to the final rank, you can make any move you want), or even the sign the Chicago Transit Authority used to post at the back door of buses: "Wait for light, then push." The market economy is driven by the flaws in our nature. To make a livable society, we must place that economy under the limits set by our better natures and the commandments.

CynThesis

A Limited Defense of Affirmative Action

May 29, 2011

I am a beneficiary of affirmative action. These days, so they say, I should be ashamed to admit it. It implies, after all, that I was not otherwise qualified for some benefit I obtained only because of being some kind of “minority.”

I have actually benefited from affirmative action on two different counts–as a woman, and as a Hispanic. Every now and then that gives me a slight edge on the competition. That doesn’t bother me particularly. I’ve been discriminated against as a woman more times than I can remember (or, probably, than I have ever known) beginning at least with my first permanent job, which I obtained only by giving the right answer to the employment agency’s question about what method of birth control I used. (For those too young to remember that era, the right answer was not “none of your damn business.” It was “the Pill.”) On another job, I was sexually harassed before there was even a word for it, much less a cause of action. So I figure any benefit I get from the double x chromosome is just a matter of restitution.

I have also been discriminated against, I’m pretty sure, for being Jewish. This, of course, gets me no affirmative action points, but that kind of makes up for the fact that I do get points for being a Hispanic (both my parents were born in Cuba, and my family is essentially bicultural) even though I have never been discriminated against for that fact. (As a matter of fact, since I am a natural blonde and speak English without an accent, nobody knows I am Hispanic unless I choose to tell them, and I normally do that only where I will get extra points for it. Which is generally in jobs where my ability to speak Spanish really is a plus.) And most recently, I have probably been discriminated against for my age, which is illegal, but for which I get no affirmative action points. So I will take those points where I can get them, without embarrassment and without feeling that my competence is in any way in question.

I went to a good college and made Dean's List my last two years. I scored in the 98th percentile on my LSATs. But when I applied to law school, I was admitted to a school in which 45% of my class was female, in the mid-'70s, and rejected by another school which had a far lower percentage of female students in that year. The evidence seems clear: I was almost certainly admitted to the former because of my gender, and rejected by the latter for the same reason. My objective qualifications were equally irrelevant to both schools. Probably all those qualifications got me at the second school was a rejection further along in the process than those of some of my less-qualified sisters (and my totally unqualified brothers).

Realistically, of course, nobody ever challenged my academic competence, or that of any other woman I know who has been accepted into any academic program under an affirmative action program. Even the most neanderthal of male supremacists will grant that women on the average do better in school, except in mathematics and the hard sciences, than men. The reason women have historically been discriminated against in academic admissions is that we are not expected to be able to do much of anything useful with our knowledge and academic credentials after we get them.

So the affirmative action issue really only gets raised, where women are concerned, when one of us is promoted to a position of power, beyond the glass ceiling. Then the innuendoes fly–quotas, sleeping with the boss, the supervisor is a leg man, somebody’s sister, somebody’s daughter, somebody’s wife. Most of us, however, would still rather live with the humiliation of possibly having been promoted because of our gender than with the equally potent and much less remunerative humiliation of not having been promoted for the same reason.

Stephen L. Carter’s misgivings

Which is why I have trouble with people like Stephen L. Carter. His Reflections of an Affirmative Action Baby is a thoughtful and well-written book with a good sense of the complexities of inter-ethnic relations in the United States of the 1990s. But I have a few problems with its basic premises. Don’t expect the Establishment to make special standards for you, he tells young African Americans. It’s humiliating that we should think we need that. Meet their standards, beat their standards, and demand to be accepted on their terms. For Blacks and Hispanics, who are popularly expected to be less competent in academic achievement, it may actually be a source of humiliation to be admitted to a respectable school under an affirmative action program because of their ethnicity. However, most of the “affirmative action babies” I know would say that it is no more humiliating than being rejected because of that same ethnicity, and pays a lot better.

Carter’s advice takes the Establishment’s claims of devotion to meritocratic standards at face value. Which gives a lot more credit than it deserves to an Establishment that has never really believed in those standards, and has espoused them only when doing so would serve the purpose of keeping a particular group of outsiders outside.

The reason Carter has not seen this hypocrisy is that he is looking at the experience of only one group of outsiders. If he were to consider that of three others–women, Asians, and Jews–whose ability to meet meritocratic standards has rarely been questioned by anybody, he would discover that the Establishment has never had any difficulty excluding them, or severely limiting their upward mobility, on some other grounds.

The merit system: now you see it, now you don’t

For instance, in the 1930s, Harvard Medical School discovered that, if academic qualifications were to be the only criterion for admission, its entire entering class would be Jewish. Indeed, they would have had to double the size of the entering class to get in more than a few token gentiles. So they suddenly discovered that there was more to being a physician than "mere" academic excellence. Arbitrarily, they set a quota of 22% for Jewish applicants, a quota which remained in effect until the '60s, when, like the Jewish quotas in many other educational institutions, it was replaced with a larger and slightly less transparent quota on students from large cities, especially New York City, under the rubric of "geographical distribution." Those quotas still exist today in many schools.

The experience of women is in some ways even more blatant. When my classmates and I graduated from college in the early '60s, we frequently looked for public-sector jobs before and between stints of graduate school. We took the civil service exams, scored at or near the top, and were repeatedly beaten out for the actual jobs by men who had scored a good deal lower even before their veterans' preference points were added.

When I was at college, in the late '50s and early '60s, it was a truism, repeated to us regularly by faculty and admissions honchos, that men scored higher than women on the math section of the SAT, but women scored higher than men on the verbal section. It didn't, of course, get us much. There were fewer places available for women at good colleges (or any other colleges, actually) than for men, and less scholarship money available for us. So nobody thought much about it. But twenty years later, when the various controversies about the biases of the SAT arose, I was startled to hear everybody, on all sides of the dispute, saying that women scored lower than men on both sections of the SAT. Even the American Association of University Women, in its otherwise beautifully researched study of discrimination against women in education, could only conjecture about what happened, by the end of high school, to the clear lead in reading and verbal skills that girls have over boys in elementary school. What had happened–a couple of very well-hidden and quickly forgotten news stories revealed–was that in the middle '60s, ETS changed the verbal section of the SAT, substituting scientific essays for one or two of the fiction selections in the reading comprehension test. Female scores promptly dropped to their "proper" place–visibly below those of their male classmates–and have stayed there ever since.

Asians are the most recent victims of similar policies. Several West Coast schools, most notably the University of California at Berkeley, have experimented with ceilings on the number of Asian students within the last 10 years. A university, the administration proclaims, has the right to put “diversity” above “mere” academic excellence.

In short, the history of other groups of outsiders suggests strongly that if an entire generation of African American young people followed Carter’s advice to meet meritocratic standards and beat them, the Establishment would have no trouble finding some other pretext to exclude all but the most presentable tokens among them from the precincts and perquisites of power–either by changing those standards, or suddenly discovering the greater importance of some other factor.

That does not, of course, invalidate Carter’s advice. It does make one wish Carter were a little more careful about truth in advertising, however. I tend to prefer Malcolm X’s more honest approach, when he advised his followers to read anything they could get their hands on and get all the education they could, even if all it got them was the proud position of best-educated person in the unemployment line.

Was there ever a merit system?

Before the phrase “affirmative action” ever found its way into our vocabulary, the reality of affirmative action was already as American as apple pie. After all, what else is veterans’ preference, if not an affirmative action program for (in the post-World War II era in which it was born) men? What else is seniority, if not an affirmative action program for older workers? I have never known a veteran, or an experienced union man, who was in the least ashamed to have benefited by those affirmative action programs.

Nor should they be. Before the rise of the meritocratic mythology of the ’70s, any American old enough to have held a job at all knew that nobody gets a job solely by virtue of being the most qualified candidate for it. In an economy which has never even aspired to full employment, most available jobs have several well-qualified candidates on hand. Most employment discrimination does not involve hiring an unqualified person in preference to a qualified one, but rather choosing between more-or-less equally qualified candidates on the basis of factors unrelated to the job.

The Jewish Establishment’s position

Many established Jewish community organizations, like many other high-minded, principled opponents of affirmative action, really believe that they are espousing a pure meritocracy as against a system of arbitrary choice. To take that position, they have to presume that, before the 1964 Civil Rights Act, all male Jews had the jobs they were entitled to, by reason of their meritocratic qualifications. They also have to presume that all Jews are male, white, Anglo, and middle-class and have nothing whatever to gain from affirmative action. They have to, in fact, ignore the experience of considerably more than 53% of the Jewish community. They even have to advocate giving back to the same academic and professional Establishment that subjected Jewish males to explicit, exclusionary quotas until the early '60s the power to do it again.

Two cheers for affirmative action

Most supporters of affirmative action see it as a lesser evil. But, unlike its opponents, they recognize the realistic alternative as a greater evil. Affirmative action does not replace a pure meritocracy with a system of choices among qualified candidates made according to standards unrelated to job or scholastic requirements. It substitutes one set of arbitrary choices for another.

The alternative to affirmative action in real life is the divinely-ordained and legally-protected right of the employer or supervisor to hire people who remind him [sic] of his best friend, or people who fit his stereotyped image of the "proper" telephone operator or waitress or whatever. We know that most people who get jobs get them for reasons only distantly related to their ability to perform. In fact, the most serious downside of affirmative action, so far as I can tell, is that it denies future generations a really useful index of professional excellence. When I meet a doctor, or a lawyer, or a CPA, who is female or non-white (or better still, both) and who got his or her professional credential before 1970, I know I am dealing with a superlatively qualified professional, because only the best women and non-whites were able to survive the discriminatory professional screening processes in those days. For professional women and non-whites with more recent qualifications, alas, I have to take my chances, just as I would with a white male of any age.

So we sincerely hope that the people into whose hands we put our lives, our fortunes, and our sacred honor are in fact qualified to do their jobs. But as a practical matter, we know that we are at least as much at risk from incompetents who were hired or promoted for being the boss’s brother, or being tall, or not being Hispanic, or having an officious-sounding British accent, as from those hired or promoted for being female, Black, or Hispanic–quite possibly more, since the latter are usually watched more closely. In fact, these days I am beginning to suspect that American-born doctors can no longer be presumed to be as competent as doctors with foreign accents, since the latter are subjected to much tougher screening standards.

Well, maybe two and a half

We may see ourselves as winners or losers, and we may attribute our situation to other people or to our own deserts. Human beings generally have never had any trouble taking credit for their own good fortune or blaming others for their misfortunes. More recently, "new age" thinking has led many of us to take the rap for our own misfortunes, often in truly preposterous ways ("How have I created this reality?" the cancer patient asks.) But it is difficult for any of us to admit that our good fortune may be the result of some totally unearned "break" from the outside world–being white, for instance, or male. That is the real threat of affirmative action–that it requires us to consider the possibility that (even if, as is likely, we aren't as well off as we would like to be) we haven't "earned" even the few goodies we have. For those of us raised in the Jewish tradition, which teaches us that the Land promised to us by the Holy One is ours only on loan, and that we were not chosen to receive it because of any particular merit on our part, that shouldn't be too much of a leap. It should make us more willing to grant similar unearned goodies to other people. "Use every man after his desert," says Hamlet, "and who should 'scape whipping?" Or unemployment, as the case may be. Even us, the few, the proud, the overqualified.

Red Emma

Forethoughts

May 29, 2011

Recommended Reading

I have a client who now resides in a nursing home and is in the early-to-middle phases of dementia. She is also a sci-fi fan, so whenever I clean out my bookshelves, I take the proceeds to her. I am discovering that, while that improves the quality of my life, it doesn't necessarily change hers all that much. Because one of the few so-far-unheralded upsides of dementia, at least in its early phases, is that you get what I have always wanted—multiple opportunities to read the same book for the first time.

Among the books I have especially wanted multiple shots at in this way are John Brunner's line of speculative novels: Stand on Zanzibar (1968), Jagged Orbit (1970), The Sheep Look Up (1972), and The Shockwave Rider (1975). And I spent a fair amount of time wishing there were somebody around right now who writes that kind of stuff, preferably in batches rather than an occasional one-off like Orson Scott Card's Empire and Hidden Empire (okay, that makes them a two-off, I guess). I think I've found one—John Barnes, author of Mother of Storms, Directive 51, and The Man Who Pulled Down the Sky. Unlike Brunner and Card, he does dabble in the Irwin Allen school of writing (one damn disaster after another), but in the process he takes a serious look at the trajectories of current social, technological, economic, and political phenomena. Consider this a recommendation.

CynThesis
***************************************************************
The Unknowing God

For a period that lapped over into my college years, the existentialists told us that the human race is engaged in a frantic effort to become god. As I think about it these days, I am increasingly convinced that many of us already are god, and we are failing to notice it (and falling down on the job) to a dangerous extent. Me, for instance. Most of my days I spend working, on the phone, on the computer, at the office, in court, at home running around finding things (and of course losing things and not realizing it till later), shopping, and so on. If in the middle of all this, I sit down and call the Wired Cat, and she comes over to me, sits down at my feet, and reaches out her front paw to pat my leg, to which I respond by reaching down to rub her head between her ears and down to her neck, for her this is a religious experience. Her divinity has taken time out from managing the universe to communicate with, relate to, and pleasure her. Sometimes, like most divinities, I do things she really doesn't like, such as taking her to the vet. She seems to accept this as good for her in some way that I understand and she doesn't. She's lucky enough to have a divinity who doesn't do any of the awful things to her that one hears about on Animal Planet (Mr. Wired is an Animal Cops junkie and a hard-core groupie of Anne-Marie Lucas.) But if I did, she'd probably accept that too, as most domestic animals seem to. The ones who have been too utterly traumatized retreat into the animal counterpart of atheism—the feral life. (Atheism is not actually the right word—I am not the first to wonder if there is a word for somebody who believes in the Holy One but just doesn't like H* very much.)

And of course, to our children, and to most of the children we come into extended contact with (as teachers, for instance, and maybe as pediatric health professionals), we are also god. (Note the lower-case initial, used—as Grace Slick explained when she named her kid “god”—so we won’t get stuck-up about it.) So far as the kids can tell, we (especially parents but adults in general to a considerable extent) run the universe, and occasionally take time out from doing that to interact with the kids, for better and for worse.

The Bible actually plays with this idea. For instance, there are two or three references to judges as gods. (One suspects some of the human authors of these passages spent some time on the bench themselves—certainly ordinary human judges have always tended to see themselves as some kind of deity.) Moses is told that he is going to be "in the place of G-d" to Pharaoh, and that his smoother-talking brother Aaron will be his "prophet."

And there is a story about a rabbi (Hasidic, I think) who, upon being told that somebody he knew was an atheist, said something like “Well, that’s good. It means that if he sees somebody who is poor or in trouble, he won’t just say ‘G-d will help him,’ he’ll get up and actually do something for the guy.” Even professionally religious people may have a kind thought for people who, not believing in a divinity, feel obliged to fill in for H*.

Which, if you accept the hard-core deterministic schema of the behavior of all non-human entities, means that human beings and their actions are the only preserve of free will in the universe, and thus also the only rational place for the divine to operate, by inspiration and impulse. Many rational religious people have trouble believing that the Holy One has ever made the sun stand still or water run uphill, but will accept a divine push toward extraordinarily decent human behavior—in other words, that we are not exactly in the hands of G-d; sometimes we are the hands of G-d.

Jane Grey
*********************************************************************
War is the End, part II

Does anybody else remember the study that told us we could have won the hearts and minds of the Vietnamese people by giving $10,000.00 to every man, woman, and child in that country, and still have spent less than the $686 billion we actually spent on the war? (Another sourcing problem, obviously.) Anyway, Cecil Adams, of "The Straight Dope," has heard from a history scholar who says the North could have bought and freed all the slaves in the then-US for something like $72 billion in present-day dollars, which was also considerably less than the overall cost of the Civil War, especially if you reckon costs and damages on both sides, which of course all ultimately came out of US GNP. This once more tells us that wars are almost never "about" their official causes and purposes, which could almost always be achieved a lot more cheaply, easily, and with less violence. War itself, or some so-far-unknown concomitant of war, makes it an irreplaceable element of human polity.

Red Emma

*********************************************************************
Life Among the Condonauts

I just opened a mysterious envelope from a fellow resident of our condominium building, to discover that, as a member of the condo association, the Wired Household is being sued by another member of the association and by our really heroically estimable janitor, for the alleged misconduct of the erstwhile chair of the association, our upstairs neighbor. This is a peculiarity of condo law—in order to obtain a remedy for some misbehavior by condo association officers, you have to sue the association, even if you are a member of it. Which means that you are, in a sense, suing yourself. You are certainly costing yourself money. All the costs of defending the suit come out of the pockets of the residents. We could even wind up paying the costs of the other side. This damn thing has got to be mediated, ASAP.

I am the only attorney I know who lives in a condo (for 31 years now) and has never served on the board. I really want to keep it that way. Lawyers are easy marks for pleas of communal obligation. But condo boards are a time sink. I just sent a frantic email to the plaintiffs asking them to please consider mediation. Yikes!

CynThesis

A New Look at Child Labor

May 12, 2011

I googled "child labor" recently, and all I could find was stuff on how much of it there still is in the world, and why it is so bad. Nobody seems to be looking at, or even for, an upside. Okay, maybe this is kind of like looking for the upside of the Third Reich (the Volkswagen?) or the reign of Caligula (no bright ideas at all here). But I think there are actually a few good things to be said about child labor, at least within proper limits.

Depending on what you mean by "labor." If what you mean by "labor" is doing something that will wear you out and use you up within 20 years or less, then no matter what age you start doing it at, it's a bad idea. Like coal mining, for instance. It was bad when 9-year-old kids were pulling coal carts in 19th-century England, and it's just about as bad today when 45-year-old men die of Black Lung after 20 years of it, in Kentucky. The use of child labor instead of adult labor has all kinds of nasty side effects, such as lowering the general wage rate (under the odd misimpression that it's okay to pay lower rates for the same work when a smaller person does it) and increasing the unemployment rate among adults. And working employees too hard and too long to allow them any kind of personal life or education is bad, whether you do it to kids or adults. Paying them so little that they have to supplement their wages with the only kind of "moonlighting" they have the time and energy for, namely prostitution—whether you do it to women or children—is as immoral as it gets. For further information, read Dickens.

In short, I’m not sure there is any way in which the bad side of child labor for the child is any worse than the bad side of adult labor for the adult worker. Since adults make the laws, and since one of the bad sides of child labor for adult workers is lowering wages and increasing unemployment, that didn’t really matter much once the groundswell against child labor started to grow. Progressivism and New Deal trade unionism both leaned strongly in the direction of getting people other than prime-working-age adult white males out of the workforce, using whatever rationale happened to be handy at the moment. Which was often good for families, good for adult workers, and good for The Economy.

But societies that have banned child labor (not to be confused with societies that have actually eliminated it) have created problems of their own. The most notable is that, in such societies, children are an economic liability to their parents, and may suffer abuse or neglect from them as a result. In places like China, if you can’t sell your child’s labor, you may end up selling the child instead. In places where nobody’s buying, you may simply abandon the child, either in some exposed place or in some “orphanage.” Either way, the child may die young or never develop its full mental and physical potential.

But as long as poverty persists among families, banning child labor is unlikely to eliminate the practice completely. Child labor persists in the US in fast food joints, on farms, and most notably in criminal enterprises, where the fact that a juvenile will get no more than a nominal punishment for conduct that could put an adult away for a long time makes "shorties" really desirable employees for look-out and courier duty.

Oddly enough, most families affluent enough not to need to put their children into the legal, semi-legal, or illegal workforce, tend not to expect much labor from them at home either. My mother, who was #5 of 8 children, once told me that her mother told her, “Once your oldest daughter is 8 years old, it doesn’t matter how many more you have; one of the older ones will always be able to take care of the younger ones.” Both because such large families are rare today, and because middle-class Americans disapprove of anybody under 12 doing any kind of child care or major domestic chores, this doesn’t happen any more. Child development “experts” generally believe that children should be expected to help out around the house and clean up after themselves, and should not get their allowance as “wages” for these tasks, but they get listened to only slightly more on this subject than on the topic of corporal punishment, which isn’t much.

Okay, that's pretty much the adult side of the issue. What about the kids? The advantage of writing about children, of course, is that even if you've never raised one, you and every other person on the planet have been one. (Original sin consists of having been born with parents, which is why Adam and Eve escaped it.) Do y'all remember the time in your early teens and the years just before that when you really really really wanted to do something real and significant and useful and necessary? There are long stages of child development in which the child's play consists of nothing but imitating (to the best of her knowledge and ability) the adult's work. Sometimes that knowledge and ability can be pretty impressive. The computer skills of people we usually regard as "kids" can be downright amazing, and sometimes even remunerative. The Wired Daughter, between ages 15 and 18, got herself a job in a social service agency working with runaway youth, doing all kinds of statistical correlation and record-keeping, much more skillfully and assiduously than most adults I have known doing the same kind of work. Because it was a non-profit, nobody worried much about child labor laws, least of all our daughter, who was having the time of her life. Once she turned 18, she turned (temporarily, thank heaven) into a slacker. But not letting her do the work of her choice before that would have been a real injustice to her. When my nephew was the same age, he worked in a local restaurant until well after the official closing time, and found it both enjoyable and liberating. When I was the same age, I was learning to sew, and type, and cook, and write. All of us, of course, were also going to school and doing pretty well at it. None of us were dependent on earnings from such work. Which gets rid of most of the downside of child labor. I think that's just a stage of development kids go through, with or without compensation, and it's a good thing for all of us that they do.

On the other hand…

As more and more "middle-class" families in the US find themselves sliding out of the bourgeoisie, the question of child labor in such families will become more and more difficult. Most middle-class and even working-class families today do not expect their children to contribute to the household income, even by paying rent when they are working full-time and living with their parents. Most middle-class parents are really uncomfortable sharing the financial realities of their lives with their children (often, even after the said "children" have long since reached adulthood.) The whole point of being "middle-class" in this culture's families is that the parents never have to admit to their children that they can't "make it" in this economy, or even seriously discuss what it would mean not to "make it." No doubt it's comfortable for a child to believe that the parents will always be able to "manage," just as it's comfortable for the child to believe that Daddy can beat up any other guy on the block. Until recently, the majority of American kids had no reason to disbelieve either proposition. Now, child development "experts" are taking on these issues, with varying degrees of success. It would probably help them, and the parents they advise, and the children who do or don't benefit from that advice, if we could start talking more explicitly about what children can do to help their families in a bad economy, and why letting them do it isn't unthinkable.

Red Emma

The Blockhead’s Market

May 1, 2011

“No man but a blockhead ever wrote, except for money.” Samuel Johnson

Arianna Huffington is being sued by some of her former unpaid bloggers. Jonathan Tasini and the other members of his class action against her complain that they created the value of the Huffington Post with their unpaid writing, and she then sold it to AOL for $315M. The bloggers, of course, got none of that money. The plaintiffs want a cut, at least $105M.

Surviving in Third-World America

April 12, 2011

Do you ever get tired of hearing that the U.S. is the only western industrialized country that (doesn’t have handgun control/doesn’t have a national health care program/has an infant mortality rate over __%/imprisons more than __% of its citizens/pick one)? After hearing so many of the pronouncements indicating that we trail the industrial West in good stuff and lead them in bad stuff, are you starting to wonder whether the U.S. really is a western industrialized nation any more? Is it possible that we’ve become, or are at least well on the way to becoming, a Third World country? After all, we are no longer the world’s wealthiest nation, nor its healthiest, nor its best educated. Now that the Soviet Union is no longer marking the boundary of the First World, maybe we are. And how long will it be before we mark that boundary from the wrong side?

I'm willing to leave the geopolitical and macroeconomic implications of all that to the politicians. What concerns me is what concerns just about any ordinary person–how to make it from day to day in a Third World, or nearly Third World, country. Obviously, the best way to research this question is to ask people who've done it, more or less successfully, all their lives–the ordinary, would-be middle-class people from various Third World countries. Or at least to learn as much as possible about them.

So, based on what we know about real life from Third Worlders, here are some basic suggestions:

* In unity there is strength. Extend your family as far as you can. Begin with real relatives, by blood or marriage, and then quasi-relatives (exes and steps and their families) and then what anthropologists call “fictive kin”–godparents and foster siblings and so on. Cultivate these relationships and use them for the benefit of all concerned.

* One of the most important ways to do this, of course, is to share living space, especially if somebody in the family has a large, fully-paid-for house. This gets everybody economies of scale in housing, utilities, and food. It also puts people who have both jobs and small children within easy reach of potential baby-sitters with neither.

* If you can't extend your family, you can at least create one. Get married. Form close friendships. Join cooperatives of all kinds. Join the church/synagogue/mosque/coven/whatever of your choice. Making it will be hard enough in the company of others. Alone, you're probably a dead duck.

The only possible exception to this rule is children. Third Worlders typically have them–lots of them, if possible–for retirement insurance. But Third Worlders can generally get away with expending fewer resources up front on their kids than American child labor and compulsory education laws would allow. Give this one some thought.

* Stay out of the official dollar economy as much as possible. The IRS, of course, frowns on “off the books” income and untaxed barter. But even they have not figured out how to tax you on the value your do-it-yourself activities add to your assets. The official money economy in Third World countries is rigged to underpay the non-rich to the point of starvation while extracting from them in prices and taxes more than they can possibly afford. The only way to survive in such an economy is to stay out of it, both for production and for consumption (including credit–borrow from family, borrow from friends, borrow from your business colleagues, and then let all of them borrow from you, but stay out of the official credit market if you possibly can. Likewise, don’t lend in that market–that is, keep your savings out of banks.)

* Play the lottery–but not very much. It is true, of course, that your chances of winning are slightly less than your chances of getting struck by lightning. But they are also only slightly less than your chances of attaining the American Dream in any of the official legal jobs likely to be open to you. Buying a ticket nearly doubles your chances. It’s hard to beat odds like that. But buying more than a couple of tickets a week is a bad investment of money you should be using elsewhere. By the way, if the prior history of American lotteries is any indication, this batch will be around only another fifteen years or so, so take advantage of them while you can.

* Use public amenities creatively, while there still are any. Their days too are numbered, but while they last, public schools, public libraries, public parks, public hospitals, and similar amenities are usually perks of living in a particular locality. Therefore, given your limited stock of housing dollars, you are usually better off spending them on cramped, shabby accommodations in an affluent town or neighborhood than on a commodious, well-appointed place in the slums. Besides, your well-off neighbors are more likely to have jobs for you–both long-term and free-lance–than slum-dwellers are. And they are usually canny shoppers, so the assortment of merchandise available to you in local stores will be higher quality at lower prices. You and your family will have a much better chance of making good business contacts too. In short, unless you have ambitions in local politics, it is better to be the poorest person in a rich neighborhood than the richest person in a poor neighborhood.

* Education will still pay off, but will be a lot harder to get, and won’t necessarily bring your income above the poverty line. Nevertheless, get as much of it as you can, and try to keep as much of it as possible in four-year colleges, which produce slightly more respectable credentials than community colleges. You may want to consider going outside the U.S., at least for your bachelor’s degree–it could be cheaper.

* Don't plan to retire. You will probably never get a private pension, and the value of your Social Security benefits will almost certainly diminish rapidly. So be prepared to look for the odd jobs you are still physically capable of doing, most notably childcare and other home help for employed family members.

* Stay healthy. If you can't stay healthy, at least try to stay out of the official health-care system, which you probably can't afford, and which probably can't do much for you anyway. Better you should spend your health care dollars on (a) studying self-care; (b) alternative practitioners recommended by people you trust who have not yet died of their own health problems; or (c) if you must use "official" practitioners, use the lowest professional level available to you–that is, better a Physician's Assistant than a physician; better a Nurse Practitioner than a PA; better a Registered Nurse than an NP; better a Licensed Practical Nurse than an RN. The lower down on the professional scale you go, the more personal attention you are likely to get. Whenever possible, stick with practitioners you pay out of your own pocket–they're cheaper, they are accountable directly to you rather than to some insurance company, and they still realize you have the option of not coming back next time if they screw up this time.

* Stay morally connected. Be active in religious, neighborhood, civic, and volunteer organizations. They will remind you–when it is very easy to forget–that there is more to life than survival, and that, even if the big corporations that control the few remaining permanent full-time secure jobs consider you less than the dust beneath their big wheels, there are plenty of people around you to whom you are not merely valuable but essential.

* Similarly, if you have some sort of artistic or intellectual talent and can’t get the official purveyors of culture to take notice of it, don’t let that stop you from putting it to work in blogs, local newsletters, murals, amateur theatricals or whatever, which are probably the only art your friends can afford. Who knows–someday it may get noticed by the official critics. But even if it doesn’t, you have given and received pleasure.

* You may have to do a lot more groveling than you are used to. It is possible to be marketably obsequious and still keep your self-respect, simply by maintaining your objectivity behind your mask (see W.E.B. Du Bois). We Americans have long believed that people who have attained wealth and prominence must be more deserving than the rest of us. As long as ordinary people had a reasonably decent chance of achieving some wealth and prominence of their own, that was a harmless delusion. Now, however, it is dysfunctional and can even be deadly. The only way to survive psychologically and morally in a Third World United States is to be absolutely certain that, as a human being and a citizen, you have the same ultimate value as any other human being and citizen. Do not allow yourself to become part of any institution that undermines that conviction, unless it pays you relatively well. And withdraw your attention and your allegiance from the artifacts of commercial culture that undermine your sense of your own value as a human being. Nobody, after all, is paying you to watch television, so your family loses nothing if you stop watching.

* Do your homework–speculative fiction is rich in models for the world we are moving into, from the novels of John Brunner (The Sheep Look Up, Stand on Zanzibar, The Shockwave Rider) to Philip Jose Farmer's "Riders of the Purple Wage" to Robert Heinlein's future histories. Not to mention, of course, Orwell's 1984 and Aldous Huxley's Brave New World. See also Strieber and Kunetka's Warday and Nature's End. These are just my particular favorites–there are lots more where they came from. If there is one thing we have learned in the past fifty years, it is that if the warped mind of a speculative fiction writer can imagine a shape for a future dystopia, the grasping hands of a political or economic establishment can implement it.

* Furthermore, there are plenty of ways to learn more about how today’s real-life Third-Worlders are managing. Among the goodies available in public libraries are magazines and newspapers from such places, many of them in English (which is, after all, one of the official languages of India, the Philippines, and many African countries.) Much of the fiction of modern India, the West Indies, and Africa was written in English, and much of it is richly informative.

* And note, by the way, that used books are probably one of the cheapest forms of recreation available. The only things cheaper are the public-domain books available for free on your computer or iPhone.

* Learn to like rice and beans. Together they make up the complete protein necessary for good nutrition, as well as containing lots of fiber. With a little celery, onion, and garlic, they can provide most of your nutritional needs for literally pennies a day. They’re probably healthier than whatever you’re eating now.

* Don’t drink the water. Not unfiltered, anyway, and not bottled—that’s just a waste of money and of valuable natural water imported from many places that need it badly themselves, like Florida. Pick up a used scouting handbook and find out all the cheap and quick methods to purify questionable drinking water. Note that, if you live in the country, the air may smell better, but your drinking water may already be dangerously contaminated with pesticide and chemical fertilizer runoff. Urban problems will be different, but just as serious.

I am not suggesting that the Third-Worldization of the United States is a good thing, or only trivially harmful. On the contrary, for most ordinary people, it can mean perpetually living on the edge of catastrophe and occasionally slipping over it. But it is time we started getting prepared for it, while we still can.

Red Emma

Getting Real About Democracy

February 17, 2011

In 1959, Vice President Nixon and Soviet Premier Khrushchev met in a model American kitchen at a Moscow exhibition and talked about democracy. The actual specifics of the discussion were rather more subtle than what the public, on both sides of the Iron Curtain, ultimately got out of it. What the US audience heard was that democracy is better than communism because the average capitalistic American can afford a refrigerator and the average communist Russian can’t. But one fact never came up, even in the subtleties of the real conversation: an ideology supposedly based on dialectical materialism considered certain kinds of public ideals more important than material comforts, while a culture supposedly founded on the ideals of freedom and equality valued its kitchen appliances more than either. (Nor, oddly, did anyone mention that, for a large part of the year in a large part of the USSR, people didn’t really need refrigerators; they just needed an unheated space for storing perishables. But I digress.)

Is freedom valuable only because and to the extent that it produces material prosperity? Some philosophers decry this outlook as a merely “instrumental” view of something valuable in its own right; Aristotle returns to that distinction, between instrumental goods and goods valued for their own sake, surprisingly often. Indeed, freedom is not the only ideal we value for its economic advantages. Max Weber attributed the same advantage to particular kinds of Protestant religiosity. So do many on the Religious Right today. Are freedom, and virtue, good because they are good for business? Only because they are good for business?

This is no mere abstraction. The more prosperous parts of the Middle East are doing some serious soul-searching on the subject as we speak. And so, closer to home, are the citizens of Chicago. It’s hard to find, or even to coin, a term for the form of government which has ruled Chicago (with a brief interregnum) for 50 years, but it sure as hell isn’t a democracy or even a republic. Maybe a political monopoly.

The closest parallel I can think of is an altercation I had many years ago with our newspaper. We were signed up for delivery, and our delivery service was outrageously spotty. I kept calling the paper’s circulation department to complain, with no results. The circulation people told me they had contracted delivery out to some private service. So couldn’t you contract it out to some other service, I asked, baffled. Well, no, they couldn’t, because there was no other service. And, they pointed out, no other service was likely to spring up, because the current service had a monopoly on delivering both Chicago papers. That’s a pretty good analogy to Chicago politics over most of the last half-century. The Machine has a monopoly on government. And no alternatives are likely to come into existence, because there is no niche for them to occupy. Now Daley II has announced his retirement, and next week we have to vote on his successor. We all seem to be really awkward about this. After all, no Chicagoan under the age of 40 has ever had the experience of a serious multi-candidate mayoral election.

There’s more to it than that, in both Chicago and the Middle East. From the outside, business and other political entities find a monopoly (or dictatorship, or kleptocracy) easier to deal with than a democracy. You know where to send the bribes. (I’m only partially joking here. Chicago took longer to get television cable service than just about any other large city in the US because that phase of development happened during the “interregnum” mentioned above, when Jane Byrne, Harold Washington, and Eugene Sawyer held the office of mayor, and the political scene ran from free-form to chaotic. As a result, the cable companies couldn’t figure out whom to bribe in order to get the concession.) You know where to apply the pressure. As a result, sometimes the citizens of a monopolistic polity really are more materially prosperous than those of a poor but honest democracy. Once the Daley dynasty was back in charge, Chicagoans had no trouble getting cable.

Back in my English teacher days, I often used my students as guinea pigs in informal sociological studies. Since I taught in both private and community colleges, and often at satellite campuses in suburbs and obscure neighborhoods, I actually had a pretty good demographic spread of subjects. For several years, I asked all of them: If you had to choose between (a) a benevolent dictatorship in which you were guaranteed a job, a good income, a nice home with all the modern conveniences, a car, health care, and education for your children, but no right to vote or choose your leaders or have any part in deciding on the laws that would govern you, and (b) a democracy in which you would have the right to vote, choose your leaders, advocate for laws and causes you believed in, with some chance of affecting the system, but a really bad economy, with poorly-paid jobs, so that most people could not afford a decent home, a car, education for their children, or health care—AND IN NO CASE COULD YOU HAVE BOTH DEMOCRACY AND A GOOD ECONOMY—which would you choose? I had to belabor the point that they could not choose “both,” on pain of a failing grade, because so many of them almost automatically assumed that democracy always produced prosperity that I could not otherwise get them to accept the premise, even for the sake of argument.

The ethnic differences were interesting. African-Americans almost always chose the poor but honest democracy. Hispanics almost always chose the benevolent dictatorship. Immigrants, from almost any country, tended to favor the benevolent dictatorship model. I could not discern any difference between men and women, or older and younger students. Everybody else pretty much split right down the middle.

If I were setting up the same survey today, I might formulate it a bit differently. Instead of a benevolent dictatorship, I might posit a state run like a corporation, in which the ordinary non-shareholding citizens got good jobs, good benefits, the right to live in a well-run company town, but no part in choosing the Board of Directors or the CEO, and no right to buy shares (kind of like George Pullman’s initial vision) versus the aforementioned poor but honest democracy. So okay, my fellow Alexandrians, which would you choose? And why?

Red Emma

The Politics of Politics

February 16, 2011

You’ve probably noticed the phenomenon yourself. Any discussion can be completely derailed, any subject can be avoided. All you have to do is say “Well, that’s just politics.” End of discussion. On to the weather and organized sports. Amazingly enough, even elected representatives can blacken one another’s reputations simply by accusing each other of “playing politics” with some important issue. Politics is a dirty word among Americans. Calling someone a politician borders on libel.

It was not always thus. Aristotle said politics is the main thing that distinguishes human beings from lower animals. (Which tells you how little Aristotle knew about cats, for instance. But I digress.) Politics, after all, is the way people make collective decisions, usually about our various visions of The Good, or about distributing scarce resources, without resorting to violence. In most other cultures, politics (a/k/a public service) is still an honored profession. In Central Europe, post-communist politics has achieved a new birth of respectability. What makes American attitudes about politics different?

Politics has been defined as the “manipulation of power,” and as “war by other means.” Usually, when we talk about “playing politics”, we are referring to something else, to what we call “party politics” and James Madison would have called “faction”–putting the success of one’s own group ahead of the merits of the issue in question. It is this sense of the word which we usually have in mind when we talk about certain things being “above politics”–for instance, that “politics stops at the water’s edge,” i.e. that foreign and military policy are “above politics.” Similarly, we appoint government functionaries through civil service, and appoint federal judges for life, to keep them “above politics”–that is, not beholden to or under the control of any particular “faction.”

But, like Madison, we tend to think “faction” is a bad thing because we see it as based on nothing but personal or group advantage. “Viva Yo” (“Long live me”), as the Spanish put it. If a faction takes an ideological position of any substance at all, we assume that position is somehow conducive to the personal advantage of faction members, or they wouldn’t be adopting it. There is some basis for this, of course. Very few people who do any serious thinking about public policy issues arrive at positions that are likely to work against their personal advantage and survival. Most of us figure that what’s good for me is also good for just about everybody else, everybody who matters, anyway. But the real purpose of politics is not merely to allow factions to compete for advantage, but to allow divergent visions of The Good to compete for public support and power.

The other aspect of politics which most disturbs ordinary Americans is the necessity of compromise, splitting the difference, making sure everybody leaves the table still a bit hungry. To decide any issue this way, we think, is to start by presuming it can’t be very important. If it were, we would fight to the last drop of blood. Once a question transcends politics in this sense, war cannot be very far away. Once slavery stopped being a normal part of life, like breathing air, and became a moral issue for both sides, politics failed and war became inevitable.

Which puts an entirely different slant on placing anything “above politics.” That which is above politics is also beyond civil dispute. If “politics stops at the water’s edge,” then foreign and military policy lie outside the operation of democracy. Somebody–who may or may not have been popularly elected–decides what that policy should be, and our elected representatives then buckle down to supporting and implementing it. Even if circumstances change, so that a workable policy becomes unworkable, or a morally neutral policy becomes an abomination, the people and their representatives must continue to implement it to the bitter end. Any attempt to call a halt, for instance by cutting financial support, would be “playing politics” with national security, or so the supporters of the status quo insist.

Similarly, to say that education, or the environment, or other matters of public policy, are “outside politics” is to say either that we are prepared to “go to the mattresses” for them, or that we have unanimous agreement on The Good in those areas. No doubt there have been periods in our history when the latter was true. But, more often than not, this is simply wishful thinking among partisans of one or another vision who desperately want everybody to stop all this arguing and let them get on with their work. Merely wishing, however passionately, will not make it so.

We have to accept the fact that most communities and nations–and particularly ours–are host to numerous factions competing both for material advantage and for their visions of The Good. If we downplay the political realm as a place to play out this competition, we do not thereby eliminate competition. We merely force it to happen in other arenas and by other means. The most common alternatives are violence and money. If you cannot get a hearing for your vision of The Good within the political forum, you can always assassinate one of the more legitimate contenders, or buy off his supporters. Both of these alternatives to politics are popular in Third World countries, and both have achieved some currency even in the U.S. and industrialized Europe. The political realm, because its participants can so easily (and often deservedly) be accused of using public funds and facilities for personal advantage, has a hard time protecting itself against infringement by money or violence, and an even harder time distinguishing, in practice and in theory, between personal advantage and ideology.

In countries where, as here, the political realm still exists in a more or less healthy condition, it needs a few things to ensure its preservation:
(1) better mechanisms for drawing more people into political dispute, especially people whose opinions are not normally solicited or listened to;
(2) a clear message that dispute is legitimate, and nothing is “above politics,” including ongoing military conflict, national security, and data and principles agreed on by scientifically-educated people; and
(3) mechanisms for public education about issues currently under public dispute, in structures accessible to any interested citizen, and encouragement of a strong ethos requiring those who take part in public debate to educate themselves first. What the “public square” did in a rather rudimentary but thoroughly personalized way in ancient Athens or revolutionary Philadelphia, the Internet is equipped to do in a somewhat shallower but far broader way. For the first time ever, we are technologically equipped to exercise democracy in cities larger than the Aristotelian fifty thousand families.

The questions that so far have been adjudged to “transcend politics” are all, of course, “controversial,” which is what we call any topic when we don’t want to discuss it. What the word actually means is that people disagree about it, and feel strongly about their opinions on all sides, but cannot imagine allowing their minds to be changed by rational argument.

So far, the U.S. has managed to form and preserve a relatively healthy political forum by keeping the really hot “controversial” topics out of it, or allowing discussion within the political realm only by properly licensed “special interest groups.” Such groups are likely to explore an issue more thoroughly and extensively, but they are not necessarily more knowledgeable than the average person on the street. On the contrary, they may just be better organized and more enthusiastic in spreading ignorance and misinformation (and sometimes even disinformation). Which would be okay if all sides had an equal chance to be heard. But that kind of opportunity depends on all kinds of often unpredictable variables. Money helps a lot. Enough of it can guarantee a hearing. Being perceived as controlling a lot of votes or a lot of publicity is the next best thing. Absent these advantages, the best an interest group can do is try to get a lot of money or a lot of votes, and then parlay them into access. Merely having strong, well-researched, carefully-thought-out, well-expressed opinions will not do the job. Maybe we need a more open political realm where it would.

Part of our problem is not merely that we distrust politicians (although, heaven knows, we do!) but that we distrust the political art, even (perhaps especially) when practiced by sincere advocates who are not pursuing their own material advantage. “Rhetoric”, which originally meant the art of persuasion, is now a synonym for the barnyard epithet. Most of us resent anyone who merely states a position without prefacing it modestly with “It’s only my opinion, but…” Anybody who has the nerve to try to change other people’s opinions–except, of course, in the mode of commercial advertising–is somehow infringing on our right to believe whatever we want. The converted are now the only people it is acceptable to preach to. Indeed, most advocacy activity these days is specifically directed only toward inactive sympathizers, and its purpose is not to change their opinions, but to persuade them to act on the opinions they already hold. The only non-sympathizers who can legitimately be confronted with one’s opinion are legislators and other public officials. The purpose of such confrontation is still not to change their opinions, but to change their official actions. We don’t really expect politicians to have opinions of their own, but only to weigh the vote-power behind the opinions of their constituents and act accordingly.

The blogosphere itself, the virtual ground on which we here confront one another, is one of the political arenas with the most potential for civil discourse among widely divergent constituencies. It can easily break down into either a commercial forum for sale to the biggest advertiser or a batch of mutually inaudible echo chambers for the narrowest possible ideologies. But the fact that nobody is paying us to be here, and that we have so far managed to refrain from both real and symbolic threats against each other, is a good augury. This may be the ground on which the American polity revitalizes itself, and we—with all our flaws, crotchets, and idiosyncrasies—may be among those who can make it happen.

CynThesis

Kids Having Kids, Grannies Raising Kids; or Leapfrog Parenting in Our Future?

February 7, 2011

The River City Syndrome

“Friends, we got trouble
Right here in River City,
And that starts with T, and that rhymes with P
And that stands for….Pregnancy?”

Everybody talks about teen pregnancy, but nobody can figure out what to do about it. Newt Gingrich had it figured out 17 years ago or so—take the babies away from their mothers and raise them in orphanages. Then he looked at the price tag. Modern standards for what we now call group homes would turn his plan into a bigger entitlement program than Social Security or Medicare. Forget that.

They have it figured out in continental Europe. Teens there actually have more sex than American teens. But they are diligent about contraception, and have no problem resorting to abortion as a back-up if necessary. So their teen pregnancy rate is much lower than ours. This is not to be confused with their out-of-wedlock pregnancy rate, which is really high in Scandinavia, but not among teenagers. Middle-class American girls operate pretty much the same way.

They had it figured out in the 1950s in the US. I remember that system very well. It was the reason I didn’t go to the local public high school. The year before I would have started there, half the girls in the graduating class were pregnant. Most of them got married, very quietly, and then lied about the date. The young men involved all got the satisfaction of having done the honorable thing. The girls got the wedding ring. The babies got their legitimacy. There may have been a couple of girls whose partners did not do the honorable thing, so instead they took a six-month vacation with an aunt in some other state. Most of the girls in question hadn’t planned on college anyway.

The Maternity Dress with the Blue Collar

So far as I know, almost nobody operates that way any more. Blue-collar girls, regardless of race, creed, or color, just stay home (and stay in school as long as it isn’t too much trouble) and have the baby. What has made the difference? Two things, as nearly as I can tell. One is that nobody approves of “shotgun weddings” any more. Even the Catholic Church is reluctant to perform marriages where the bride is pregnant. The statistics on such marriages are discouraging. Both abuse and divorce are much more likely than in the general population of married couples. So the young man in question is under absolutely no pressure to marry the girl. It is no longer considered “the honorable thing.”

The second thing, counter-intuitively enough, is Roe v. Wade. Yes, I know blue-collar girls are very unlikely even to consider abortion. (This is not necessarily because of parental pressure. Indeed, sometimes it is despite parental pressure. When I worked at juvenile court, I once represented a girl whose father had thrown her out of the house for refusing to get an abortion.) But the fact that, in spite of the legality and availability of abortion, they don’t get one, marks them as “good girls,” in their own eyes and those of their peers, in spite of having gotten pregnant. It gives them some moral leverage they would not otherwise have. The Catholic Church recognizes, with a surprising degree of rationality, that anything that makes unmarried pregnancy more difficult makes abortion more likely. So Catholic schools go out of their way to make life easy for pregnant students. Public schools do too, though for different reasons—they just really want to keep the girls in school as long as possible. See http://www.city-journal.org/2011/21_1_teen-pregnancy.html. A pregnant teen who finishes high school is in a much more solid situation than one who drops out. Many of the bad things that happen to single mothers and their children are less likely to happen when the mother finishes high school, or better still, goes on to college, at least for a year or two.

All in the Family Way

Most of the pregnant teens who manage this do so only with the help of major parental (mostly maternal) support. If mother and daughter can remain on good terms for the duration (which is not always easy for either one), the baby will have the benefit of two adults caring for her, and often, of two incomes supporting her, just like the child of a properly married couple. I know of no source for statistics on the prevalence of split-ups between mother and daughter in this situation, compared with the stats on divorce after a shotgun marriage, but my guess is that such split-ups are somewhat less frequent.

According to AARP, one in every twelve children in the US is being raised in a household with one or more grandparents. These statistics do not distinguish between households in which the child’s mother is also residing and caring for the child, and households in which the mother is for some reason absent (death, incarceration, drug addiction, general flakiness, military service, or single-minded pursuit of education and career goals). Nor do they provide any information on the increasing number of children being raised by their great-grandparents. But they do suggest a solution to some of the problems besetting the modern family.

The Murphy Brown Syndrome

In blue-collar families, pregnancy happens “too early”, all too often. By “too early,” we mean before socioeconomic maturity, often before finishing school, or even instead of finishing school. In white-collar families (regardless of race, by the way—professionally-educated African-American women have the lowest birth rate in the country), pregnancy often happens “too late.” By “too late”, we mean after socioeconomic maturity, after finishing one’s education and getting established in a career, and after the height of female fertility in the late teens and early twenties. Often, we mean after the precipitous decline of female fertility in the mid- or late thirties. In which case, “too late” may mean not at all. But even if it doesn’t, it often means having children who will be starting college just as the parents would otherwise be starting to think about retirement.

New Supporters of Early Marriage

Early marriage by choice rather than because of an unplanned pregnancy is occasionally discussed among religious groups that frown upon premarital sex (see http://www.christianitytoday.com/ct/2009/august/16.22.html?start=1), and presumed among others, such as the Amish, who discourage post-high school education anyway, as well as among some immigrant groups. For the rest of us, it seems to further complicate what is already the most complicated period of most people’s lives, from age 12 through 25.

Alternative #1: Leapfrog Parenting

But there are a couple of alternatives worth considering. The obvious one, already discussed above, is for women to bear their children early, raise them with the assistance of their mothers, complete their education, start their careers, and then marry. This could even be organized so that grandmother, having finished raising her daughter’s children, would be able to retire just as the daughter is ready to start raising her daughter’s children. Think of it as “leapfrog parenting.” Biologically, we are told, the best age for women to conceive is from 18 to 25. Socioeconomically, the best age for a person, male or female, to raise children is from 35 to 55. The numbers point to one ideal conclusion: bear your own children at 18, and start raising your daughter’s children at 36; if your daughter also bears her children at 18, they will arrive just as you reach the socioeconomically optimal years.

Make Room for Daddy

What place does this scheme leave for the fathers of all these children? I am tempted to say, whatever place the particular man in question wants, since that seems to be what happens anyway. Not being forced (sometimes at gunpoint) to do “the honorable thing” is probably an improvement in our ideas about family life. Not knowing quite what to do when one’s girlfriend gets pregnant definitely isn’t. Fortunately, the country is rife these days with all kinds of projects and programs for, and studies of, teenage fathers. Lots of us are looking for answers to this question, and with any luck, we may find one.